How to upload your ML model to Anvil servers?

I want to upload my model to the Anvil servers so that the app can still function once the Jupyter notebook instance is closed. Any idea how to upload a Keras model to the servers so that you can load the model and predict on the server itself?

Hello and welcome,

Here is a post that might help in general.

ML libraries usually let you “pickle” or save models to disk, so you could save the model in an Anvil Data Table and read it back into memory when you want to make predictions.
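A minimal sketch of that idea, assuming a Data Table named `models` with a text column `name` and a Media column `file` (all hypothetical names):

```python
import pickle
import anvil
from anvil.tables import app_tables

def save_model(model):
    # Serialize the trained model and wrap the bytes as an Anvil Media object.
    blob = anvil.BlobMedia("application/octet-stream",
                           pickle.dumps(model), name="model.pkl")
    app_tables.models.add_row(name="my_model", file=blob)

def load_model():
    # Fetch the stored bytes and unpickle them back into a model object.
    row = app_tables.models.get(name="my_model")
    return pickle.loads(row["file"].get_bytes())
```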

Unfortunately, I cannot see Keras on the list of libraries that are installed on Anvil servers (I believe that people on an Individual plan or higher can request libraries to be installed by contacting Anvil support).

Perhaps other people will have a direct solution but this is what comes to my mind.


Here is a demo I made with a scikit-learn model. I uploaded the pickled model into a row in a Data Table, but whether this works will mostly depend on the size of your model. Mine was pretty small (800 KB).
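As a rough, self-contained sketch of what the server side of such a demo could look like (the table and column names are assumptions, not the demo’s actual code):

```python
import pickle
import anvil.server
from anvil.tables import app_tables

_model = None  # cache the unpickled model between calls on the same server

@anvil.server.callable
def predict(features):
    global _model
    if _model is None:
        # Hypothetical table 'models' with a Media column 'file'
        row = app_tables.models.get(name="my_model")
        _model = pickle.loads(row["file"].get_bytes())
    # scikit-learn estimators expect a 2-D array of samples
    return _model.predict([features]).tolist()
```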

Another thing to consider is the CPU requirement for inference. If the model is pretty beefy, it would be a good idea to execute the inference on a remote server connected through an Uplink.

Hey, my model is about 30 MB and the Data Table has a limit of 8 MB. Could you explain more about the remote server? Are there any services that could provide me with a server so that I can host my model there and connect it via Uplink?

Thanks for the answer.

Uplink allows you to host some Python code on a server somewhere. It could really be anything with an internet connection.

Here is a tutorial on how to use Uplink with a Jupyter Notebook, but it will work similarly with .py files.
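As a minimal sketch of an Uplink script, assuming a Keras model saved as `model.h5` on the remote machine (the file name and the Uplink key placeholder are hypothetical):

```python
import anvil.server
import numpy as np
from tensorflow import keras

# Load the model once at startup so each call only pays for inference.
model = keras.models.load_model("model.h5")

@anvil.server.callable
def predict(features):
    # Accept a plain list from the app; assume a flat feature vector.
    x = np.asarray(features, dtype="float32").reshape(1, -1)
    return model.predict(x).tolist()

anvil.server.connect("YOUR-UPLINK-KEY")  # key from your app's Uplink dialog
anvil.server.wait_forever()
```

Your Anvil app can then call `anvil.server.call('predict', features)` exactly as if the function lived on Anvil’s own servers.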

Do you have an idea what the inference time is on a laptop-class machine? The main thing to consider here is that serving lots of inferences this way is not kind to the web server, and performance will suffer.

If it’s not too long, you still have the option of loading it to the filesystem. This requires an Individual plan, though. The problem with loading it to the filesystem is that your app doesn’t always load from the same machine, which means you can’t be sure your file will always be there. You can work around it, but it’s pretty annoying.
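One hedged sketch of that workaround, assuming the model is hosted at some URL you control (the URL and the cache path here are hypothetical):

```python
import os
import urllib.request

MODEL_PATH = "/tmp/model.h5"                # hypothetical local cache path
MODEL_URL = "https://example.com/model.h5"  # hypothetical hosting location

def ensure_model_file():
    # The server filesystem isn't guaranteed to persist between machines,
    # so re-download the model whenever it's missing.
    if not os.path.exists(MODEL_PATH):
        urllib.request.urlretrieve(MODEL_URL, MODEL_PATH)
    return MODEL_PATH
```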

If you are working on OSS or a public demo, DigitalOcean has a Droplets for Demos program where you could host your model and connect it with Uplink, then use Anvil as the UI for it.