I’m now fairly consistently getting an error saying “RuntimeUnavailableError: Remote server not available (downlink_call). Please contact support@anvil.works.” when I try to access the production version of my Anvil app.
Are others having this same issue? I tried emailing Anvil’s support, but haven’t received any response. My server code imports a number of libraries and uses the ‘Full Python 3’ build. I also tried a custom set of Python libraries with the Beta ‘Python 3.10’ build, but had no better success with that.
From the designer I get the following error when I try to ‘Run’ the app:
"anvil.server.RuntimeUnavailableError: Could not launch server runtime - image build failed. at Form1, line 18"
Thanks in advance for any ideas on how to get this working again.
I currently only have one app, so I don’t have a good test of that. I’m assuming that other Anvil users have apps that are currently functioning? If something got corrupted in my app setup in Anvil, is there a way to force it to be totally refreshed on Anvil’s system? I think that the error is happening before any of my code even runs.
I can create a new app and it seems to work correctly, but cloning my existing app I get the same error.
A little more detail: the error I get depends on which version of Python I select for the server modules. When I choose ‘Full Python 3’, I now get a message saying the ‘Undefined’ server runtime is not activated on my account. But switching to the Python 3.10 Beta with custom packages, I consistently get the error I originally described.
Do you have any other ideas of things that I might try to get back to a point that this app works again, short of starting over from scratch?
Despite the misleading error message, after more troubleshooting I think I’ve found that the error has to do with retrieving data from a data table that has really big columns. I was storing a large JSON-formatted string in a text field, and once there were a few hundred records, I was unable to sort a search() result or loop through the rows. Converting that column to a ‘media’ type helped, but there still seems to be a big performance hit when looping through search() results. I’ve even resorted to doing a CSV export of the table (which is relatively fast) and reading from that rather than using the search() results directly. Surely there is a better way?
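For what it’s worth, the CSV workaround above can be kept cheap by streaming the export and only decoding the big JSON column when a row is actually used. A minimal plain-Python sketch (the `name`/`payload` column names are made up for illustration; an in-memory string stands in for the exported file):

```python
import csv
import io
import json

# Hypothetical CSV export of the data table: a small "name" column
# plus a "payload" column holding the large JSON string.
csv_export = io.StringIO(
    "name,payload\n"
    'rec1,"{""value"": 1}"\n'
    'rec2,"{""value"": 2}"\n'
)

def iter_rows(csv_file):
    """Stream rows from the export, decoding the JSON column per row."""
    for row in csv.DictReader(csv_file):
        row["payload"] = json.loads(row["payload"])
        yield row

rows = list(iter_rows(csv_export))
```

Because `iter_rows` is a generator, a real export file never has to be loaded into memory all at once.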
This sounds to me like a bad design - no offence; I have no idea what you’re doing or why, but from what you’ve written it seems like you’re storing huge data in each record and then, instead of using the power of the SQL database (leave the searching to the db), you’re pulling the records down and iterating over them yourself. If the database does have to search (i.e. SQL), performance can degrade a lot when the records hold too much data.

If that’s the problem, you should be able to get rid of the performance issues by redesigning the db a bit: move the big data out of the record (store it in another table) and keep only the keys that matter for searching in the main table. That makes searches easy and quick. So don’t try to search inside the JSON, and don’t put huge data in the table you search on; instead add some small extra fields to that table that can be searched easily, and move the big data to another table linked by an id. That way the DB never has to touch the big data during a search, but you can still easily retrieve the JSON from the results.
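The split described above might look like this. This is a sketch using sqlite3 as a stand-in for Anvil’s Postgres-backed data tables, with made-up table and column names (`records`, `payloads`, `category`):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
# A narrow, indexed table holding only the fields you search on,
# plus a separate table for the big JSON payloads, linked by id.
conn.executescript("""
CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT, category TEXT);
CREATE TABLE payloads (record_id INTEGER REFERENCES records(id), body TEXT);
CREATE INDEX idx_records_category ON records(category);
""")

big_json = json.dumps({"data": list(range(1000))})
cur = conn.execute(
    "INSERT INTO records (name, category) VALUES (?, ?)", ("rec1", "a")
)
conn.execute(
    "INSERT INTO payloads (record_id, body) VALUES (?, ?)",
    (cur.lastrowid, big_json),
)

# The search touches only the narrow table; the payload is fetched
# for matching rows only, so big data never slows the search itself.
hits = conn.execute(
    "SELECT id, name FROM records WHERE category = ?", ("a",)
).fetchall()
for rec_id, name in hits:
    body = conn.execute(
        "SELECT body FROM payloads WHERE record_id = ?", (rec_id,)
    ).fetchone()[0]
    payload = json.loads(body)
```

In Anvil terms the equivalent would be two data tables with a link column, with `search()` run only against the narrow one.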
Hint:
If you use db views to retrieve data, and stored procedures / PL/pgSQL to add it, you can hide this completely from the Python code and keep the logic of how the two tables store the search fields and data chunks inside the database. So you could basically leave the Python code as it is and add some extra code in the DB to actually fill the records, and to retrieve them as if they were one table.
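To illustrate the view half of that hint: a view can join the two tables back together so reading code still sees a single “table”. Again a sketch with sqlite3 standing in for Postgres (sqlite has no stored procedures, so only the read side is shown; in Postgres the insert logic would go in a PL/pgSQL function), with hypothetical names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE payloads (record_id INTEGER, body TEXT);
-- The view joins the split tables back into one, so calling code
-- can keep selecting from a single relation.
CREATE VIEW records_full AS
    SELECT r.id, r.name, p.body
    FROM records r JOIN payloads p ON p.record_id = r.id;
""")
conn.execute("INSERT INTO records (name) VALUES ('rec1')")
conn.execute("INSERT INTO payloads (record_id, body) VALUES (1, '{}')")

# Reading code is unaware of the two-table layout underneath.
rows = conn.execute("SELECT name, body FROM records_full").fetchall()
```

Note that Anvil’s built-in Data Tables service doesn’t expose views directly; this approach assumes you have direct SQL access to the underlying database (e.g. on a plan with database access) or an external database.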