[Accelerated Tables] Advantages of using Search Iterator vs List of Rows

What I’m trying to do:
Most of my data display forms use search iterators to provide the information for the data grid or flow panel with cards. If I update an item on the server, the cached row automatically gets updated (thanks to Accelerated Tables). But if I add a new row, the search iterator doesn’t know about it, so I have to return a whole new iterator and then either repopulate the entire form (data grid or flow panel) or work out what is different, manually add the new item, and update the reference to the new search iterator.

The question is: if I can’t add to a search iterator, what is the benefit of using the search iterator instead of a list / dict, which I can add to? That way I wouldn’t have to repopulate everything. Lazy loading comes to mind, but what else? Does the caching mechanism care whether the row is in an iterator or not?

Is there something under the hood (or soon to be under the hood) which would be reason enough to keep using the iterator? Or will a list give me the same functionality?
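For context, here is a minimal sketch of the list-based approach I’m weighing up. The `tasks` table, the `get_tasks` / `add_task` server functions and the `TasksForm` form are all hypothetical names, not from a real app:

```python
# --- Server module (table and function names are just examples) ---
import anvil.server
from anvil.tables import app_tables

@anvil.server.callable
def get_tasks():
    # Materialise the search into a plain list before sending it to the client
    return list(app_tables.tasks.search())

@anvil.server.callable
def add_task(title):
    # Return the new row so the client can append it to its existing list
    return app_tables.tasks.add_row(title=title)


# --- Client form ---
from ._anvil_designer import TasksFormTemplate  # auto-generated designer import
import anvil.server

class TasksForm(TasksFormTemplate):
    def __init__(self, **properties):
        self.init_components(**properties)
        self.items = anvil.server.call('get_tasks')
        self.repeating_panel_1.items = self.items

    def add_button_click(self, **event_args):
        new_row = anvil.server.call('add_task', self.new_title_box.text)
        self.items.append(new_row)
        # Re-assigning items refreshes the panel without re-running the search
        self.repeating_panel_1.items = self.items
```

With a search iterator as the items source, that `add_button_click` step would instead mean re-running the search or rebuilding the panel by hand.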


Bump

I’m interested in this too

The repeating panel doesn’t know whether you are using a search iterator or a list.

A search iterator fetches and caches blocks of (I think) 100 rows, which means:

  1. if you work with fewer than 100 rows, it’s identical to a list - same on client or server side
  2. if you work with more than 100 rows but rarely touch more than the first 100, it’s better than a list, because it won’t waste time fetching rows you don’t need - same on client or server side
  3. if you work with more than 100 rows and often go past the first 100, it’s worse than a list, because it needs one round trip every 100 rows - on the server side that’s a request to the database server (which happens anyway if you convert the iterator to a list), on the client side it’s a very expensive round trip

Any caching or updating of the fields of each row happens because the row object manages it; it has nothing to do with whether the row came from an iterator or a list.
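To make the difference concrete, here is a small server-side sketch with a hypothetical `tasks` table and `created` column. The row objects behave the same either way; only the fetching pattern changes:

```python
from itertools import islice
from anvil.tables import app_tables

# Keep the iterator: rows arrive lazily, roughly one round trip per page,
# so touching only the first few rows is cheap.
lazy_rows = app_tables.tasks.search()
first_ten = list(islice(lazy_rows, 10))

# Materialise into a list: every page is fetched up front, but afterwards
# you can append, sort and slice freely with no further round trips.
all_rows = list(app_tables.tasks.search())
all_rows.sort(key=lambda r: r['created'], reverse=True)
```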


I never pass search iterators to the client, I always pass lists. In the few cases where the search may return thousands of rows or more, I manage the pagination myself and make sure I always fetch the correct “slice” of rows from the database and pass it as a list to the client.
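Something along these lines (the `tasks` table, `created` column and page size are just placeholders). Slicing the search result fetches only the rows in that slice, and the client only ever sees a plain list:

```python
import anvil.server
import anvil.tables as tables
from anvil.tables import app_tables

PAGE_SIZE = 50

@anvil.server.callable
def get_task_page(page):
    """Return one page of rows as a plain list, never the iterator itself."""
    results = app_tables.tasks.search(tables.order_by("created", ascending=False))
    start = page * PAGE_SIZE
    return list(results[start:start + PAGE_SIZE])
```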

I usually don’t even pass row objects. I convert them to dictionaries, pass the list of dictionaries to the client, convert them to objects there, and have their class add calculated fields and other behaviors that help avoid the unplanned round trips that happen when a row object fetches uncached fields. Accelerated Tables allow better planning of those round trips, so in simple cases I use row objects instead of dictionaries.
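Roughly like this, as a sketch; the `Task` class, its fields and the `is_overdue` property are just examples of the pattern:

```python
# --- Server module ---
import anvil.server
from anvil.tables import app_tables

@anvil.server.callable
def get_tasks_as_dicts():
    # dict(row) copies the row's column values into a plain dict, so no row
    # objects (and no lazy per-field fetches) ever cross to the client.
    return [dict(row) for row in app_tables.tasks.search()]


# --- Client module ---
import datetime

class Task:
    def __init__(self, data):
        self.__dict__.update(data)

    @property
    def is_overdue(self):
        # Calculated field added on the client: no extra round trip needed
        return not self.done and self.due_date < datetime.date.today()

# In the form:
#   tasks = [Task(d) for d in anvil.server.call('get_tasks_as_dicts')]
#   self.repeating_panel_1.items = tasks
```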

A couple of additions:

  1. You can specify the number of rows to fetch in one round trip using q.page_size (see the sketch after these points).

  2. There can be cases when dealing with more than 100 rows where the search iterator is still better. Think about, let’s say, 1000 rows. Fetching all 1000 rows as a list would take a long time up front, which means keeping the user waiting that whole time.

On the other hand, while the search iterator will take more time overall, the user will not have to keep staring at a blank screen. Depending on your case, the next 100 rows may even have already loaded by the time the user has gone through the previous 100.

Basically, the search iterator will break down the loading into multiple parts.
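For point 1 above, a small sketch with a hypothetical `tasks` table; q.page_size controls how many rows come back per round trip as you iterate:

```python
from anvil.tables import app_tables
import anvil.tables.query as q

# Ask for 250 rows per round trip instead of the default, so iterating
# through ~1000 rows only needs a handful of fetches.
rows = app_tables.tasks.search(q.page_size(250))

for row in rows:
    print(row['title'])
```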
