How do you upload your ML model to Anvil's servers?

Here is a demo I made with a scikit-learn model. I uploaded the pickled model into a row in a Data Table, but whether this approach works will mostly depend on the size of your model. Mine was pretty small (about 800 KB).
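Here is a minimal sketch of that storage step. The pickling part is plain standard library; the Anvil-specific upload lines are shown as comments because they only run inside an Anvil server module, and the table name `models` and column `model_file` are assumptions, not names from the demo. A small dict stands in for a trained estimator so the sketch stays self-contained.

```python
import pickle

# Stand-in for a trained scikit-learn estimator (assumption:
# any picklable model works the same way).
model = {"coef": [0.5, -1.2], "intercept": 0.3}

# Serialize the model to bytes, as you would before storing it.
payload = pickle.dumps(model)
print(f"pickled size: {len(payload)} bytes")

# In an Anvil server module you would wrap the bytes in a Media
# object and store it in a row (hypothetical table/column names):
#
#   import anvil
#   from anvil.tables import app_tables
#
#   media = anvil.BlobMedia("application/octet-stream", payload,
#                           name="model.pkl")
#   app_tables.models.add_row(name="my-model", model_file=media)

# Loading it back for inference is just the reverse:
restored = pickle.loads(payload)
assert restored == model
```

Checking `len(payload)` before uploading is worthwhile, since row size limits are what make this approach suitable mainly for small models.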

Another thing to consider is the CPU requirements for inference. If the model is pretty beefy, it would be a good idea to run inference on a remote server connected through an Uplink.