Optimize search iterator slicing

When I slice a search iterator, the higher the start index of the slice, the slower the search.

I would like slicing to be optimized so it skips the unused rows efficiently (for example, by using OFFSET in the underlying SQL query).
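To illustrate what I mean, here is a minimal sketch of the kind of query I'd hope for under the hood. It assumes a direct psycopg2 connection and a made-up items table, so it's just an illustration, not how Anvil actually works:

```python
import psycopg2

# Hypothetical connection details and table, for illustration only.
conn = psycopg2.connect("dbname=mydb user=me")

def fetch_page(page, page_size=20):
    """Fetch one page with LIMIT/OFFSET, so the skipped rows
    are discarded inside Postgres rather than in app code."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT id, name FROM items ORDER BY id LIMIT %s OFFSET %s",
            (page_size, page * page_size),
        )
        return cur.fetchall()
```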


After lots of googling and going down the rabbit hole of API pagination, I am not sure this is possible with Postgres (which is what Anvil uses), or even possible at all.

Offset-based pagination still makes the database read and discard every skipped row on each query, so the cost grows with the offset.

Cursor-based (keyset) pagination is more efficient, but you can't jump to an arbitrary offset - you can only move forward from the last row seen, which is exactly how a search iterator works.
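A rough sketch of the difference, again assuming psycopg2 and a made-up items table:

```python
import psycopg2

conn = psycopg2.connect("dbname=mydb user=me")  # hypothetical connection

def fetch_next_page(last_id, page_size=20):
    """Keyset (cursor-based) pagination: Postgres can seek the index
    straight to id > last_id, so the cost does not grow with the page
    number - but you can only walk forward, never jump to page 500."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT id, name FROM items WHERE id > %s ORDER BY id LIMIT %s",
            (last_id, page_size),
        )
        return cur.fetchall()
```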

Slicing a search iterator is slow because it fetches all the skipped rows and discards them. If the row skipping happens inside Postgres it takes x time; if it happens in Python on the server side it takes roughly 10x; if it happens on the client side it takes roughly 1,000x.
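To make the 1,000x case concrete, here is roughly what the slow path looks like in client code (the server function name is made up; the point is where the discarding happens, not the exact API):

```python
# Client-side (form) code: slicing here means every skipped row
# still has to travel from the server to the browser first.
import anvil.server

rows = anvil.server.call('get_search')  # hypothetical call returning a search iterator
page = rows[10000:10020]  # rows 0-9999 are fetched over the wire, then thrown away
```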

It doesn't really matter how this is implemented, but I don't want to have to modify my table structure just to page through it in a DataGrid.

The problem here is that a DataGrid with a search iterator is fast at showing the first few pages of a large search, but it becomes unusable if the user clicks the last page button and the app sends thousands of rows to the client just to show the final page.

I have tried to create my own iterator, but I wasn't able to get it to work nicely with the DataGrid, hence this request. I wasn't after saving the x time in Postgres or the 10x time on the server; I wanted to save the 1,000x time on the client.
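For what it's worth, here is a sketch of the workaround I was attempting (table and function names are made up): do the slice in a server function and send only one page to the client, which saves the 1,000x client cost while still paying the 10x server cost:

```python
import itertools
import anvil.server
from anvil.tables import app_tables

@anvil.server.callable
def get_page(start, page_size=20):
    """Slice the search iterator on the server and return a plain
    list, so only page_size rows cross the wire to the client.
    The skipped rows are still fetched and discarded server-side."""
    results = app_tables.items.search()
    return list(itertools.islice(results, start, start + page_size))
```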


That is a good point.

I haven't tried these approaches myself, since I don't use direct SQL, but I wonder whether experimenting with them against an external Postgres database would show a performance improvement over what you currently have.

I may do that at some point, as I find myself in the same situation as you, except that I disable the "last page" button (I use custom buttons for pagination on my data grids now).
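Roughly what I mean by custom buttons, as a sketch (the form and component names are hypothetical, and it assumes a get_page server function like the one sketched above):

```python
from ._anvil_designer import ItemsFormTemplate  # hypothetical designer template
import anvil.server

class ItemsForm(ItemsFormTemplate):
    def __init__(self, **properties):
        self.init_components(**properties)
        self.page = 0
        self.page_size = 20
        self.load_page()

    def load_page(self):
        # Ask the server for just the current page; the client
        # never fetches rows it isn't going to display.
        self.repeating_panel_1.items = anvil.server.call(
            'get_page', self.page * self.page_size, self.page_size)

    def next_button_click(self, **event_args):
        self.page += 1
        self.load_page()

    def prev_button_click(self, **event_args):
        self.page = max(0, self.page - 1)
        self.load_page()
```

There is deliberately no "last page" button here, which sidesteps the pathological case entirely.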
