How do you upload your ML model to Anvil's servers?

Uplink lets you host Python code on a server somewhere and call it from your Anvil app; the server can really be anything with an internet connection.
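As a rough idea of the shape of an Uplink script, here is a minimal sketch. The Uplink key comes from your app's settings; the model name and the `predict` function are hypothetical placeholders, not anything from the original post:

```python
# Minimal Uplink sketch. YOUR-UPLINK-KEY, the model, and predict() are placeholders.
import anvil.server

anvil.server.connect("YOUR-UPLINK-KEY")  # key from your app's Uplink settings

# model = load_model("model.pkl")  # hypothetical: load your model once at startup

@anvil.server.callable
def predict(features):
    # return model.predict([features]).tolist()  # hypothetical model call
    return features  # placeholder echo so the sketch runs as-is

anvil.server.wait_forever()  # keep the script alive so the app can call it
```

Your Anvil app can then call `predict` like any server function.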

Here is a tutorial on using Uplink with a Jupyter Notebook; it works much the same with plain .py files.

Do you have an idea of the inference time on a laptop-type machine? The main thing to consider is that if you're serving lots of inferences from server code, that's hard on the web server, and performance will be poor.
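If you haven't measured it yet, a quick stdlib-only sketch like this gives a rough per-call latency; `predict` and `sample` here stand in for your own model call:

```python
# Rough per-call latency check (predict and sample are hypothetical placeholders).
import time

def time_inference(predict, sample, runs=20):
    predict(sample)  # warm-up call so first-call overhead doesn't skew the numbers
    start = time.perf_counter()
    for _ in range(runs):
        predict(sample)
    return (time.perf_counter() - start) / runs  # mean seconds per call
```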

If it's not too long, you still have the option of loading the model into the server's filesystem, though that requires an Individual plan. The problem with the filesystem is that your app doesn't always run on the same machine, which means you can't be sure your file will still be there. You can work around it, but it's pretty annoying.
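One common shape for that workaround is to check for the file on each machine and re-fetch it if it's missing. This is just a sketch under assumptions: `MODEL_URL` and `MODEL_PATH` are hypothetical names for wherever you host the weights and where you cache them:

```python
# Workaround sketch: re-fetch the model if this machine's disk doesn't have it.
# MODEL_URL and MODEL_PATH are hypothetical, for illustration only.
import os
import urllib.request

MODEL_URL = "https://example.com/model.pkl"  # wherever you host the weights
MODEL_PATH = "/tmp/model.pkl"                # local cache on the server

def ensure_model():
    if not os.path.exists(MODEL_PATH):       # a fresh machine won't have the file
        urllib.request.urlretrieve(MODEL_URL, MODEL_PATH)
    return MODEL_PATH
```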

If you are working on OSS or a public demo, DigitalOcean has a Droplets for Demos program where you could host your model with Uplink, then use Anvil as the UI for it.
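Wherever the Uplink script ends up running, the Anvil side stays the same: client code calls the Uplink function just like a server function. Assuming the hypothetical `predict` from the sketch above:

```python
# In your Anvil app's client code: call the Uplink function like a server function.
import anvil.server

result = anvil.server.call('predict', [1.0, 2.0, 3.0])  # 'predict' from the Uplink sketch
```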