Please help! All of my data grid loads on my app are extremely slow. I am on the personal plan.
My table is only 4 rows long.
Here is my client side:

```python
import datetime
import anvil.server

s = datetime.datetime.now()
self.repeating_panel_1.items = anvil.server.call('get_load_board')
print("Elapsed time creating rows: " + str(datetime.datetime.now() - s))
```
Here is my server side:

```python
import datetime
from anvil.tables import app_tables

s = datetime.datetime.now()
results = app_tables.data.search()
print("Elapsed time retrieving data: " + str(datetime.datetime.now() - s))
```
Here is the result:

```
Elapsed time retrieving data: 0:00:00.106685
Elapsed time creating rows: 0:00:11.605000
```
This issue is definitely on the data grid side.
I can’t share a link to my application, but I can work on a simple version if needed. However, I notice this only occurs with two of my data grids (I have 5 separate data grid loads).
One of the fastest loads is my largest table ~200 rows.
The differentiator between my fast and slow tables is the amount of “setup” I have for each repeating panel. I am adding a lot of labels for linked columns. I am also adding some hovering using the Anvil Extras library, and some buttons to edit/add comments.
Is there a better way to create repeating panels that isn’t so slow?
If, in setting up each row, you’re following links from that row to other rows, then you’re incurring new database reads, one on each link.
Links generally resolve faster on the Server than on the Client. (The Server’s closer to the data.) You might save time by doing all of the link-following, for the entire set of rows, on the Server, and returning only the finished result to the Client as a single list.
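For illustration, a minimal sketch of that server-side approach might look like this (the `description` column and the linked `customer` column are made-up placeholders; substitute your own schema):

```python
import anvil.server
from anvil.tables import app_tables

@anvil.server.callable
def get_load_board():
    # Follow every link on the server, where reads are cheap, and hand the
    # client a finished list of plain dictionaries instead of live Row objects.
    items = []
    for row in app_tables.data.search():
        items.append({
            'description': row['description'],         # ordinary column
            'customer_name': row['customer']['name'],  # link resolved server-side
        })
    return items
```

The client then sets `self.repeating_panel_1.items` to the returned list, with no further database reads per row.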
Alternatively, a different data layout or visual layout might save some time.
I have links within each row (several). I thought this could be a potential issue; however, my largest table, ~200 rows (I'll call it table B, and the slow/small table **table A**), has several links as well. However, table B does not act on those links for each row until it is clicked on. Table A does act on the links for each row at initialization. I’ll try @p.colbert’s suggestions and report back!
note: retrieving data is server-side / creating rows is client-side

```
Elapsed time retrieving data: 0:00:01.427745
Elapsed time creating rows: 0:00:02.291000
```

versus the previous:

```
Elapsed time retrieving data: 0:00:00.106685
Elapsed time creating rows: 0:00:11.605000
```
So overall I am saving ~8 seconds, which is huge, but ~3.7 seconds still seems pretty large for the small amount of data I am querying. I worry that when my tables get even larger, this could cause some issues.
Could location be a factor? I know I am over here on the west coast of the U.S., and my servers (on the personal plan) are probably in England? (more of a question)
I’m in Missouri, a little closer to the UK than you, but not so much.
I have tried to call an HTTP endpoint and it takes:

- 1.5 seconds the first time I call it after editing the app (compilation + caching + loading + execution + round trip)
- 0.9 seconds with Keep server running disabled (loading from cache + execution + round trip)
- 0.35 seconds with Keep server running enabled (execution + round trip)
I have a dedicated plan and I can keep the server running, but my dedicated server is still in the UK (I think).
I assume that the times for a call from a form to a server function and from any client to an HTTP endpoint are identical, but you can optimize an HTTP endpoint by creating a leaner app that contains only the endpoint code. The leaner app can share the tables with the main app, so the endpoints will load faster without needing to load all the server modules and all the code required by the forms. I have not tried this type of optimization because (I assume that) I don’t need it, since I always keep the server running.
Another factor could be the personal plan vs. the dedicated plan, but I haven’t tested that aspect.
You can also create server-only apps with conventional callable functions and set those as dependencies of your client-side app. The dependency doesn’t load the code of the ‘parent’ app.
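To illustrate, the lean endpoint app would contain little more than something like this (the endpoint path, table, and column are placeholders, not from the posts above):

```python
import anvil.server
from anvil.tables import app_tables

@anvil.server.http_endpoint('/load-board')
def load_board(**params):
    # This lean app shares its tables with the main app, so it can serve
    # the same data without loading the main app's server modules.
    return [{'description': r['description']} for r in app_tables.data.search()]
```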
Moving the serialization (converting from an iterator to dictionaries) to the server decreased the time for my smaller tables, because it was faster to resolve all the links on the server side. However, my larger table, ~200 rows, is now taking much longer. (It did not have as many links, but was just larger in rows.) I believe this is because it was faster to resolve the rows in chunks vs. consuming all the data at once. That means once my smaller tables reach a decent size (larger than 200 rows), they will slow down significantly.
There’s a middle ground between iterators and a complete list of dictionaries. It’s more work, but it would give you complete control over how much data you return and how many server calls you make.
You’d write a client-side iterator to give to the repeating panel. That iterator would call your server function to fetch the right amount of data. That server function would return a list of dictionaries, and would resolve any linking fields into dictionary elements.
Owen has an excellent example of doing this in the ORM library: Anvil ORM Library
The amount of work this would be might not be worth it for what you’re doing, but it’s another option.
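As a rough, untested sketch of that shape (`get_load_board_page` is a hypothetical server function returning one page of link-resolved dictionaries):

```python
import anvil.server

class LoadBoardItems:
    """Client-side iterator for a repeating panel. Fetches one page of
    ready-to-display dictionaries per server call."""
    def __init__(self, page_size=50):
        self.page_size = page_size

    def __iter__(self):
        page = 0
        while True:
            # Hypothetical server function: returns a list of dicts with all
            # linked columns already resolved, or [] when there's nothing left.
            batch = anvil.server.call('get_load_board_page', page, self.page_size)
            for item in batch:
                yield item
            if len(batch) < self.page_size:
                return
            page += 1

# In the form:
#   self.repeating_panel_1.items = LoadBoardItems()
```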
Here’s an idea: it would involve more round trips, but only one more per ‘chunk’.
Since, even on the client side, the row iterator objects are really only links to the table until the data is needed, what if you passed ‘chunks’ (say, lists of 50 row objects) back to the server for serialization, return, and display on the client side? Wait until that was done, then request the next ‘chunk’ from the server by passing it more row objects.
Wrap it all up in a generator on the client side, and you could have something that at least has a chance of showing some data while waiting for the rest.
I’m not sure if this would work this way; I haven’t tested it, it’s just an idea. YMMV.
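Untested, but the client-side generator might look roughly like this (`serialize_rows` is a hypothetical server function that turns a list of Row objects into plain dictionaries, resolving their links server-side):

```python
import anvil.server

def chunked_items(lazy_rows, chunk_size=50):
    """Batch up lazy Row objects and send each batch back to the server
    to be serialized; one extra round trip per chunk, as described above."""
    chunk = []
    for row in lazy_rows:
        chunk.append(row)
        if len(chunk) == chunk_size:
            for item in anvil.server.call('serialize_rows', chunk):
                yield item
            chunk = []
    if chunk:  # don't forget the final partial chunk
        for item in anvil.server.call('serialize_rows', chunk):
            yield item

# Usage sketch:
#   lazy_rows = anvil.server.call('get_load_board')  # returns a SearchIterator
#   self.repeating_panel_1.items = chunked_items(lazy_rows)
```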
You can first load the rows for the first few pages, so the data grid is displayed quickly. Then use a timer to load the rest after half a second. The rest of the rows will be available before the user has time to click on the pager.
Here is the description of the timer approach, in the “Delay loading the bulk of the data” paragraph: [Wiki] Performance optimization
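A minimal sketch of that pattern, assuming a Timer component named `timer_1` on the form and two hypothetical server functions (one for the first rows, one for the full set):

```python
from ._anvil_designer import LoadBoardTemplate  # designer-generated base (name is made up)
import anvil.server

class LoadBoard(LoadBoardTemplate):
    def __init__(self, **properties):
        self.init_components(**properties)
        # Show the first few pages' worth of rows right away...
        self.repeating_panel_1.items = anvil.server.call('get_first_rows', 40)
        # ...then let the timer fetch everything half a second later.
        self.timer_1.interval = 0.5

    def timer_1_tick(self, **event_args):
        self.timer_1.interval = 0  # we only need one tick
        # Swap in the full dataset before the user has time to page.
        self.repeating_panel_1.items = anvil.server.call('get_load_board')
```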