Very Slow Data Grid Load

What I’m trying to do:

Please help! All of my data grid loads on my app are extremely slow. I am on the personal plan.

My table is only 4 rows long.

Here is my client side:

    import datetime  # at the top of the form module

    s = datetime.datetime.now()
    self.repeating_panel_1.items = anvil.server.call('get_load_board')
    print("Elapsed time creating rows " + str(datetime.datetime.now() - s))

Here is my server side:

    import datetime  # at the top of the server module

    s = datetime.datetime.now()
    results = app_tables.data.search()
    print("Elapsed time retrieving data " + str(datetime.datetime.now() - s))

Here is the result:

Elapsed time retrieving data 0:00:00.106685
Elapsed time creating rows 0:00:11.605000

This issue is definitely on the data grid side.

I can’t share a link to my application, but I can work on a simplified version if needed. However, I notice this only occurs with two of my data grids (I have 5 separate data grid loads).

One of the fastest loads is my largest table, ~200 rows.

The differentiator between my fast and slow tables is the amount of “setup” I have for each repeating panel. I am adding a lot of labels for linked columns. I am also adding some hover effects using the Anvil Extras library and some buttons to edit/add comments.

Is there a better way to create repeating panels that isn’t so slow?

Just a guess…

If, in setting up each row, you’re following links from that row to other rows, then you’re incurring new database reads, one for each link.

Links generally resolve faster on the Server than on the Client. (The Server’s closer to the data.) You might save time by doing all of the link-following, for the entire set of rows, on the Server, and returning only the finished result to the Client as a single list.
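
For example, here’s a minimal sketch of that idea, reusing the table and function names from the post above (the ‘customer’ and ‘status’ linked columns, and their ‘name’ and ‘label’ fields, are hypothetical):

    import anvil.server
    from anvil.tables import app_tables

    @anvil.server.callable
    def get_load_board():
        # Follow every link on the Server and return plain dicts, so building
        # the rows on the Client triggers no further database reads.
        board = []
        for row in app_tables.data.search():
            board.append({
                "title": row["title"],  # hypothetical plain column
                "customer_name": row["customer"]["name"] if row["customer"] else None,  # hypothetical link
                "status_label": row["status"]["label"] if row["status"] else None,  # hypothetical link
            })
        return board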

Alternatively, a different data layout or visual layout might save some time.

1 Like

What does anvil.server.call('get_load_board') return?

If it returns row objects and the rows contain links to other rows or simple objects or media, they may trigger more roundtrips.

You are both correct.

I have links within each row (several). I thought this could be a potential issue; however, my largest table, ~200 rows (I’ll call it table B, and the slow/small table **table A**), has several links as well. The difference is that table B does not act on those links for each row until it is clicked on, whereas table A acts on the links for each row at initialization. I’ll try @p.colbert’s suggestions and report back!

Thank you for the insight!

Here are a few tips for improving performance: [Wiki] Performance optimization

1 Like

Thank you for all the tips!

I combined your suggestions:

I followed @stefano.menci’s suggestion to use the Anvil Extras serialisation library to expand all my links into dictionaries, on the server side as suggested by @p.colbert.

Here was the timing improvement:

note: retrieving data is server side / creating rows is client side

Elapsed time retrieving data 0:00:01.427745
Elapsed time creating rows 0:00:02.291000

versus the previous:

Elapsed time retrieving data 0:00:00.106685
Elapsed time creating rows 0:00:11.605000

So overall I am saving ~8 seconds, which is huge, but ~3.7 seconds still seems pretty long for the small amount of data I am querying. I worry that when my tables get even larger this could cause some issues.

Could location be a factor? I know I am over here on the west coast of the U.S., and my servers (on the personal plan) are probably in England? (more of a question)

Either way, thank you both very much!

1 Like

I’m in Missouri, a little closer to the UK than you, but not so much.

I have tried to call an HTTP endpoint and it takes:

  • 1.5 seconds the first time I call it after editing the app (compilation + caching + loading + execution + roundtrip)
  • 0.9 seconds with Keep server running disabled (loading from cache + execution + roundtrip)
  • 0.35 seconds with Keep server running enabled (execution + roundtrip)

I have a dedicated plan and I can keep the server running, but my dedicated server is still in the UK (I think).

I assume that the time for a call from a form to a server function and for a call from any client to an HTTP endpoint are identical, but you can optimize an HTTP endpoint by creating a leaner app that contains only the endpoint code. The leaner app can share its tables with the main app, so the endpoints load faster without the need to load all the server modules and all the code required by the forms. I have not tried this type of optimization because (I assume that) I don’t need it, since I always keep the server running.
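
As a rough sketch of what that leaner app might contain (the /load_board path and the column names are hypothetical; the endpoint-only app shares its data tables with the main app):

    import anvil.server
    from anvil.tables import app_tables

    # This is the only server module in the lean app, so there is little to load.
    @anvil.server.http_endpoint("/load_board")
    def load_board(**params):
        # Return plain, JSON-serialisable values resolved on the server.
        return [{"name": r["name"], "status": r["status"]}
                for r in app_tables.data.search()]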

Another factor could be the personal plan vs. the dedicated plan, but I haven’t tested that aspect.

2 Likes

Thank you very much. I’ll have to look into that optimization. (Trying to squeeze a dime out of a penny)
:sweat_smile:

You can also create server-only apps with conventional callable functions and set those as dependencies in your client-side app. The dependency doesn’t load the code of the ‘parent’ app.

2 Likes

Moving serialization (converting from an iterator to dictionaries) to the server decreased the time for my smaller tables, because it was faster to resolve all the links on the server side. However, my larger table, ~200 rows, is now taking much longer (it did not have as many links, but had more rows). I believe this is because it was faster to resolve the rows in chunks vs. consuming all the data at once. That means once my smaller tables reach a decent size (larger than 200 rows), they will slow down significantly.

Looks like I’ll have to keep whacking at it.

There’s a middle ground between iterators and a complete list of dictionaries. It’s more work, but it would give you complete control over how much data you return and how many server calls you make.

You’d write a client-side iterator to give to the repeating panel. That iterator would call your server function to fetch the right amount of data. That server function would return a list of dictionaries, and would resolve any linked fields into dictionary elements.
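
Here is a minimal sketch of that pattern (not taken from the ORM library; get_rows_chunk is a hypothetical server function that returns a list of dictionaries with the links already resolved):

    import anvil.server

    class PagedRows:
        """Client-side iterable that fetches rows from the server in chunks."""

        def __init__(self, chunk_size=50):
            self.chunk_size = chunk_size

        def __iter__(self):
            start = 0
            while True:
                chunk = anvil.server.call('get_rows_chunk', start, self.chunk_size)
                if not chunk:
                    break
                for item in chunk:
                    yield item
                start += self.chunk_size

    # In the form code:
    # self.repeating_panel_1.items = PagedRows()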

Owen has an excellent example of doing this in the ORM library: Anvil ORM Library

The amount of work this would be might not be worth it for what you’re doing, but it’s another option.

2 Likes

Here’s an idea. It would involve more round trips, but only one more per ‘chunk’.

Since even on the client side the row iterator objects are really only links to the table until the data is needed, what if you sent ‘chunks’ of row objects (say, a list of 50) back to the server to be expanded into plain data, then returned and displayed on the client side? Wait until that is done, then request the next ‘chunk’ from the server by passing it more row objects.
Wrap it all up in a generator on the client side, and you could have something that at least has a chance of showing some data while waiting for the rest.
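
Something like this rough client-side sketch, assuming a hypothetical server function expand_rows that turns a list of row objects into a list of dictionaries with the links resolved:

    import anvil.server

    def chunked_rows(lazy_rows, chunk_size=50):
        """Yield expanded rows, sending row objects back to the server one chunk at a time."""
        chunk = []
        for row in lazy_rows:
            chunk.append(row)
            if len(chunk) == chunk_size:
                for item in anvil.server.call('expand_rows', chunk):
                    yield item
                chunk = []
        if chunk:  # whatever is left over
            for item in anvil.server.call('expand_rows', chunk):
                yield item

    # In the form code:
    # lazy_rows = anvil.server.call('get_load_board')  # a lazy search iterator
    # self.repeating_panel_1.items = chunked_rows(lazy_rows)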

I’m not sure if this would work this way, I haven’t tested it; it’s just an idea. YMMV.

1 Like

You can load the rows for the first few pages first, so the data grid is displayed quickly. Then use a timer to load the rest after half a second. The rest of the rows will be available before the user has time to click on the pager.

Here is the description of the timer approach, in the “Delay loading the bulk of the data” paragraph: [Wiki] Performance optimization
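
A minimal sketch of that approach, assuming a Timer component named timer_1 on the form and a hypothetical get_load_board(start, limit) server function that returns a list of dictionaries:

    from ._anvil_designer import Form1Template  # generated by the Anvil designer
    import anvil.server

    class Form1(Form1Template):
        def __init__(self, **properties):
            self.init_components(**properties)
            # Load just enough rows to fill the first page or two.
            self.repeating_panel_1.items = anvil.server.call('get_load_board', 0, 20)
            self.timer_1.interval = 0.5  # fire the tick event half a second later

        def timer_1_tick(self, **event_args):
            self.timer_1.interval = 0  # stop the timer after one tick
            # Fetch the remaining rows and append them behind those already shown.
            rest = anvil.server.call('get_load_board', 20, None)
            self.repeating_panel_1.items = list(self.repeating_panel_1.items) + rest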

1 Like

First of all: I love this community. Thank you for the great ideas. I am going to try a few.

@stefano.menci I actually started to try and implement that idea this morning. Your optimization post is very insightful. I’ll report back my results!

2 Likes

I agree, this community is awesome!

1 Like