How do I deploy a machine learning model built with Turi Create on Anvil's server (i.e. in the web app I built itself), so that I don't need to run a Jupyter notebook every time I want to test the feature?
Hi @nikhilragha,
I assume you want to run inference in a web app?
You basically have two choices:
- Infer on Server
In this case you send the raw data (e.g. an image) to a server, where your model computes a result which is sent back to the client.
Of course you could do that on an Anvil server; however, if you rely on GPU processing, you should probably write to Anvil Support and ask if they rent out GPU server space. You will also probably want persistent server calls, otherwise the model will be reloaded (from Data Tables) on each server call.
Any Python script can act as your server using Uplink. For testing out different ML models, I often use Anvil Uplink on Google Colab, which essentially gives me a GPU-powered Anvil server that is just one Python call away (see the sketch after this list).
- Infer on Client
For example, with https://www.tensorflow.org/js you can run ML models right in the browser and thus eliminate the latency. You should be able to implement this in Anvil as well, although I haven't tested it yet. You could even wrap it in a custom component so it can be reused by the community.
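To illustrate the Uplink route: here is a minimal sketch of a script you could run in Colab, assuming a Turi Create image classifier saved under the hypothetical path "my_model" and a made-up server function name `classify_image`; the Uplink key is a placeholder you'd copy from your app's settings:

```python
import anvil.server
import anvil.media
import turicreate as tc

# Placeholder key -- copy the real one from your Anvil app's Uplink settings.
anvil.server.connect("YOUR-UPLINK-KEY")

# Load the model once at startup, so it is not reloaded on every call.
model = tc.load_model("my_model")  # hypothetical saved-model path

@anvil.server.callable
def classify_image(file):
    # 'file' is an Anvil Media object, e.g. an image uploaded via a FileLoader.
    with anvil.media.TempFile(file) as filename:
        image = tc.Image(filename)
    # Hypothetical image classifier: predict on a one-row SFrame.
    predictions = model.predict(tc.SFrame({"image": [image]}))
    return str(predictions[0])

# Keep the script running so the web app can call classify_image.
anvil.server.wait_forever()
```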
But all of this, of course, depends on the nature of your ML model and your use case.
I haven't worked with Turi Create yet, so I can't be of help there…
Cheers,
Mark
Hi @mark.breuss
Actually, Turi Create is a module developed by Apple, which runs only on Windows 10 (with WSL), macOS, and Linux.
Hence I have been using Google Colab on my PC to work with Turi Create and build my machine learning model.
I used Uplink and then created a web app using Anvil, but the problem is:
if I close my Colab notebook, then the web app created using Anvil stops working.
Hence I need my model to be present on Anvil's server itself, so that my web app works anytime, anywhere.
But in Anvil, Turi Create is not supported on a free account, and hence I can't upload my model to Anvil's server.
So is there any way, using Anvil, to make my model always available even if my notebook is closed?
I'll check out TensorFlow.js, thanks Mark.
I see. Without using Uplink, I guess upgrading to the Individual Plan is the easiest way to get your model running.
Mark
Hi @nikhilragha,
The other option would be to run your Uplink code somewhere persistent, like an Amazon EC2 instance. This is what Hannah did for her Star Wars Ship Classifier:
> You can run this notebook anywhere, as long as it has an internet connection. So that I don’t have to leave my laptop on all the time, I actually now have this notebook running on an Amazon EC2 instance to drive the app.
That way you can close your Colab notebook without interrupting the Uplink connection for your app.
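Wherever the Uplink script ends up running (Colab or EC2), the app itself doesn't change: the client just calls the server function by name. Here is a minimal sketch of the app side, assuming a hypothetical form with a FileLoader named `file_loader_1`, a Label named `result_label`, and the `classify_image` function from the earlier sketch:

```python
from ._anvil_designer import Form1Template
from anvil import *
import anvil.server

class Form1(Form1Template):
    def __init__(self, **properties):
        self.init_components(**properties)

    def file_loader_1_change(self, file, **event_args):
        # Send the uploaded image to wherever the Uplink script is running
        # and display the prediction it returns.
        self.result_label.text = anvil.server.call("classify_image", file)
```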