The search iterator returned by app_tables.mytable.search
works lazily, that is, it fetches what it needs as late as possible.
I don’t know the details, but it will very likely access the database, get a batch of rows, keep them in a cache and return them one by one, then access the database again to get the next batch when it runs out of rows.
Accessing the database every time you need a row would be too slow.
Accessing the database and getting all the rows in one shot could run out of memory, and it’s often wasteful, because you frequently need only the first few rows of a search.
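I don’t know how the real iterator is implemented, but here is a rough sketch of the idea of a lazy, batched iterator; FAKE_DB, fetch_batch and BATCH_SIZE are made up just for the example:

# Fake "database" of 250 rows, purely for illustration
FAKE_DB = [{"id": i} for i in range(250)]
BATCH_SIZE = 100  # hypothetical batch size

def fetch_batch(offset, limit):
    # Stands in for one database round trip returning up to `limit` rows
    return FAKE_DB[offset:offset + limit]

def lazy_search():
    offset = 0
    while True:
        batch = fetch_batch(offset, BATCH_SIZE)  # one round trip per batch
        if not batch:
            return                               # no rows left
        for row in batch:
            yield row                            # hand out rows one at a time
        offset += len(batch)

# Only the first batch is ever fetched, because we stop after 5 rows
for i, row in enumerate(lazy_search()):
    if i == 5:
        break
    print(row)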
You can create an iterator that never stops returning values:
def all_positive_integers():
    n = 0
    while True:
        n += 1
        yield n

# this will never end
for x in all_positive_integers():
    print(x)

# this will run out of memory
l = list(all_positive_integers())
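If you only need the first few values from an iterator like that, you can take them lazily, for example with itertools.islice from the standard library, and the generator is never asked for more:

import itertools

# Take only the first 5 values; the generator stops being consumed after that
first_five = list(itertools.islice(all_positive_integers(), 5))
print(first_five)  # [1, 2, 3, 4, 5]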
So, to address your specific case, the search
will very likely get the first 100-ish rows that satisfy your filter and sorting, then slice them and keep the first 5. Even if your search matches a million rows, this will work and be fast, because only about 100 rows go from the database to the interpreter, and only the 5 rows reach the client.
Well, looking at your code, it seems you are executing it on the client, so I don’t know the details of that implementation, that is, whether all 100 rows are passed to the client or not.
I usually do the searches in server functions and return to the client only what the client needs. That way I’m sure the traffic is kept to a minimum and there are no vulnerabilities, because the form has no access to the database: it only has access to server functions, which access the database and filter the data returned to the form.
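As a sketch of that pattern (mytable comes from your code, while the function name, the column names and the row limit are placeholders to adapt to your app):

# --- In a Server Module ---
import anvil.server
from anvil.tables import app_tables

@anvil.server.callable
def get_latest_items(limit=5):
    # The search runs on the server; only a small, plain-data summary
    # of the first `limit` rows is sent back to the client.
    result = []
    for row in app_tables.mytable.search():
        result.append({'name': row['name'], 'date': row['date']})
        if len(result) == limit:
            break
    return result

# --- In the Form (client) code ---
import anvil.server

items = anvil.server.call('get_latest_items', 5)
for item in items:
    print(item['name'], item['date'])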