How to deploy an ML model on the Anvil server itself

Hi @nikhilragha,

The other option would be to run your Uplink code somewhere persistent, like an Amazon EC2 instance. This is what Hannah did for her Star Wars Ship Classifier:

> You can run this notebook anywhere, as long as it has an internet connection. So that I don’t have to leave my laptop on all the time, I actually now have this notebook running on an Amazon EC2 instance to drive the app.

That way you can close your Colab notebook without interrupting the Uplink connection for your app.
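
For concreteness, here's a minimal sketch of what such a persistent Uplink script might look like. The model file, key, and function name are placeholders for illustration, and I'm assuming a scikit-learn model saved with joblib; adapt it to however you've serialised yours:

```python
import anvil.server
import joblib  # assumption: a scikit-learn model saved with joblib

# Connect using your app's Uplink key (from the Anvil editor's Uplink settings)
anvil.server.connect("YOUR_UPLINK_KEY")

# Load the trained model once at startup so each call stays fast
model = joblib.load("model.joblib")  # placeholder filename

@anvil.server.callable
def predict(features):
    # Your Anvil client can call this with anvil.server.call('predict', features)
    return model.predict([features])[0]

# Keep the process alive so the Uplink connection stays open
anvil.server.wait_forever()
```

On the EC2 instance you'd want the script to survive your SSH session ending, e.g. by running it under `nohup python uplink_script.py &`, or inside tmux, or as a systemd service.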
