What I’m trying to do:
Transform data from the backend and serve it to the client.
What I’ve tried and what’s not working:
Querying it in a server-side function, transforming it, and returning it to the client.
Issue:
I sometimes get a timeout error because the 30-second limit has passed.
Even with a lot of messages, the processing itself works fine. The issue is mainly transferring the result to the front end within the time limit.
I'm looking to see how others have worked around issues like this. I'm thinking one alternative is generating JSON files and letting the front end read those. Another alternative is saving the JSON payload to a table and letting the frontend query that table.
Am I missing another alternative? I’m building some data dashboards so I probably do have more data than expected.
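For concreteness, the second alternative I have in mind would look roughly like this (just a sketch; the 'cached_json' table and its 'name'/'payload' columns are placeholders, not something I've built yet):

import json

import anvil.server
from anvil.tables import app_tables


@anvil.server.callable
def save_payload(name, payload):
    # Assumes a data table 'cached_json' with a text 'name' column and a
    # text 'payload' column holding the serialized JSON
    row = app_tables.cached_json.get(name=name) or app_tables.cached_json.add_row(name=name)
    row['payload'] = json.dumps(payload)

The front end (or a client-readable view of that table) could then fetch and json.loads the payload on its own schedule instead of waiting on one big server call.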
Efficiency questions are pretty much impossible to answer without seeing the code. Since you're getting a server timeout, just the code involved on the server side should be enough for people to give some advice.
Hey Jay,
I was thinking of this more as a discussion around what people have found or done.
As an example,
I have code that works fine when I write it, push it, and test it. But if I hop onto a video meeting, it fails with the timeout error.
I'm using this approach so I only have to do the transformations once and can cache the results: Simple Caching approach
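(For context, global_funcs in the code below is just a small module of cache helpers, roughly this shape; this is a paraphrase of the idea, not the linked post's actual code:)

# global_funcs.py (rough paraphrase of the simple caching idea)
_cache = {}

def cache_get(key):
    # Return the cached value for this key, or None if it hasn't been built yet
    return _cache.get(key)

def cache_set(key, value):
    # Remember the value for later calls handled by this server process
    _cache[key] = value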
And this is an example of a function that fails during meetings. It's just more data than expected.
import json

import anvil.server
from anvil.tables import app_tables

import global_funcs  # the caching helpers (cache_get / cache_set) from the post linked above


@anvil.server.callable
def single_search_server_get_data(force_rebuild=False):
    # Return the cached payload if we have one and aren't forcing a rebuild
    c_data = global_funcs.cache_get('single_search_server_get_data')
    if c_data and not force_rebuild:
        return json.loads(c_data)

    # Rebuild: pull every row and strip the keys the client doesn't need
    overview_data = app_tables.imported_view_overview.search()
    keys_to_remove = [
        'example1',
        'example2',
    ]
    out = []
    for item in overview_data:
        datum = dict(item)
        for key in keys_to_remove:
            del datum[key]
        out.append(datum)

    # Cache the serialized payload for next time, then return it
    global_funcs.cache_set('single_search_server_get_data', json.dumps(out))
    return out
I'm using this with an Anvil Extras pivot component, GitHub - anvilistas/anvil-extras
This is probably one of the biggest ones I'm working with. The JSON is an array of objects. The array has 5,691 objects, and each object has 30 keys. Some of the values are small numbers, but others are names of organizations (which can be around 70 characters).
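As a rough back-of-the-envelope estimate (assuming an average of around 40 bytes per serialized key/value pair, which is just a guess), that's on the order of 5,691 × 30 × 40 ≈ 6.8 MB of JSON per call.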
The data will be updated via an uplink connection.
And while I can look at optimizing the functions and the data, my understanding is that time to transfer is a factor, not just time to run the function. If someone's on a slow enough connection, they'll still have issues with this approach.
Is it necessary to transfer all the data back at once? Paging mechanisms are typically used to manage this sort of thing, at the cost of the client needing to go back to the server for future pages. I don’t know how the data is used by the client, but if you’re transferring it all I’m assuming you’re using all of it, not just using aggregate values.
If you use a Simple Object column rather than text for your cached data, you'll eliminate the need to convert to and from JSON, and open up some possibilities for returning only portions of the data to support paging.
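For example, something along these lines (only a sketch; the 'cached_payloads' table, its 'key' and 'payload' columns, and the function name are placeholders):

import anvil.server
from anvil.tables import app_tables


@anvil.server.callable
def get_overview_page(page, page_size=500):
    # Assumes a data table 'cached_payloads' with a text 'key' column and a
    # Simple Object 'payload' column holding the full list of dicts
    row = app_tables.cached_payloads.get(key='single_search_overview')
    data = row['payload']  # already a Python list, no json.loads needed
    start = page * page_size
    return {
        'total': len(data),
        'items': data[start:start + page_size],
    }

The client asks for page 0, 1, 2, … as it needs them, and each response should stay well under the payload size that's currently timing out.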
For this, I do need all the data for the pivot table.
But, after reading your comment, I’m thinking I can chunk the data into N batches and store them. Then from the client, request the batches, and rebuild the full data.
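Roughly something like this on the server side, maybe (a sketch only; 'cached_chunks' and its columns are placeholder names):

from anvil.tables import app_tables

CHUNK_SIZE = 500  # items per chunk; an arbitrary number to tune


def store_chunks(out):
    # Assumes a data table 'cached_chunks' with a text 'name' column,
    # a number 'index' column and a Simple Object 'payload' column
    for row in list(app_tables.cached_chunks.search(name='single_search_overview')):
        row.delete()
    for i in range(0, len(out), CHUNK_SIZE):
        app_tables.cached_chunks.add_row(
            name='single_search_overview',
            index=i // CHUNK_SIZE,
            payload=out[i:i + CHUNK_SIZE],
        )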
You could create a server function that returns one chunk at a time and, in the client, load the chunks with a timer. That way you can update a label or a progress bar to tell the user what's going on, while the interface stays responsive and the user can still click a button, change page, or interrupt the process. Something like this:
# e.g. in the form's __init__:
self.chunks = []
self.n_chunks = anvil.server.call('get_number_of_chunks')
self.current_chunk = 0
self.timer_1.interval = 0.1

def timer_1_tick(self, **event_args):
    if self.current_chunk == self.n_chunks:
        # All chunks loaded: stop the timer (and build the full dataset here)
        self.timer_1.interval = 0
    else:
        # Load the next chunk and show the progress
        self.progress_label.text = f'Loading chunk {self.current_chunk}'
        self.chunks.append(anvil.server.call('get_chunk', self.current_chunk))
        self.current_chunk += 1
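The two server functions called above aren't shown; assuming the data has already been split into a table like the 'cached_chunks' sketch earlier (number 'index' column, Simple Object 'payload' column), they might look roughly like this:

import anvil.server
from anvil.tables import app_tables


@anvil.server.callable
def get_number_of_chunks():
    return len(app_tables.cached_chunks.search(name='single_search_overview'))


@anvil.server.callable
def get_chunk(index):
    row = app_tables.cached_chunks.get(name='single_search_overview', index=index)
    return row['payload']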
oh, I like that.
Thank you!