What I’m trying to do:
Create an Anvil web page that allows users to upload pictures to the server and then get the pictures back with labels of the detected objects.
What I’ve tried and what’s not working:
I was able to train my model online using Colab and test it with a test image, which was saved into the detected folder in Colab. However, I can’t figure out how to pass the images uploaded from the Anvil app directly to detect.py, since it is run as a command, or how to retrieve the saved result from the detected folder. With Uplink I was able to communicate with Colab, but not to invoke detect.py and retrieve the result.
Any help is appreciated…
I guess I should start with some fundamental questions: Can I run my trained object detection model on the Anvil server? If yes, are there any references/documentation/tips I can use? If not, any idea what server I can use and how to connect it with Anvil?
Thank you…
I’m certain that you can accomplish this with Anvil.
However, please show your Uplink code snippets formatted nicely here on the forum. You seem to be pretty close to an answer, but there are too many unknowns for others to help effectively.
Please do your best to be very clear so that people don’t have to do background research and guesswork in order to help you. You should show us what you have tried when using Uplink, and where the documentation and related posts were unclear.
Please search the forum for related terms, for example: Jupyter, Colab, machine learning, image classifier, etc. I’m pretty sure there are examples, even official tutorials, that could help.
It seems to me that you just need to know how to pass objects back and forth between Colab and Anvil using Uplink, so the docs may help as well.
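For illustration, a minimal sketch of that round trip might look like this (the Uplink key is a placeholder and `echo_image` is a made-up function name):

```python
# --- Colab side: expose a function to the Anvil app via Uplink ---
import anvil.server

anvil.server.connect("YOUR-UPLINK-KEY")  # placeholder: copy the real key from the Anvil IDE

@anvil.server.callable
def echo_image(image_media):
    # The uploaded file arrives as an Anvil Media object.
    data = image_media.get_bytes()
    # ...process the bytes here...
    # Wrap the result back up as Media so the app can display it.
    return anvil.BlobMedia("image/jpeg", data, name="result.jpg")

anvil.server.wait_forever()  # keep the notebook connected

# --- Anvil client side (e.g. in a FileLoader's change event) ---
# def file_loader_1_change(self, file, **event_args):
#     result = anvil.server.call('echo_image', file)
#     self.image_1.source = result
```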
I’m certain that the issue is quite simple, but you have to do a bit more work to communicate it to us so we can help more directly. We’re pretty happy to help when we can.
Sorry for being very brief. I have the following Colab notebook (Google Colab) for a YOLOv5 object detector. After running the code (and entering the Google Drive API credentials), you can upload an image to detect the objects inside it; the processed image will be saved in a folder called detect.
What I am trying to do is:
A- To see whether it is at all possible to avoid Colab and work directly with the Anvil server (i.e. import the trained model file into the Anvil server and use it from there). If that is not possible, then:
B- To find a way to run the “!python detect.py” command on images uploaded from the Anvil app to Colab, and then return the processed image (the image with labels) and the list of detected labels to the Anvil app (see the sketch below).
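For what it’s worth, one way to wire up option B is to wrap the detect.py call in an Uplink function, since “!python detect.py” is just a shell command. This is only a sketch: the weights path, folder names, and flags assume the standard YOLOv5 repo layout, so adjust them to your notebook.

```python
# Colab side: save the uploaded image, run YOLOv5's detect.py on it,
# and return the annotated image to the Anvil app.
import subprocess
import anvil.server
import anvil.media

anvil.server.connect("YOUR-UPLINK-KEY")  # placeholder

@anvil.server.callable
def detect_objects(image_media):
    # Write the uploaded Anvil Media object to a local file.
    in_path = "/content/upload.jpg"
    with open(in_path, "wb") as f:
        f.write(image_media.get_bytes())

    # Equivalent of "!python detect.py ..." as a subprocess call.
    # --exist-ok keeps results in the same folder on every run.
    subprocess.run(
        ["python", "detect.py",
         "--weights", "runs/train/exp/weights/best.pt",  # assumption: your trained weights
         "--source", in_path,
         "--project", "runs/detect", "--name", "anvil", "--exist-ok"],
        cwd="/content/yolov5", check=True)

    # detect.py writes the labelled image under <project>/<name>/.
    out_path = "/content/yolov5/runs/detect/anvil/upload.jpg"
    return anvil.media.from_file(out_path, "image/jpeg")

anvil.server.wait_forever()
```

If you also need the list of detected labels, detect.py’s --save-txt flag writes one .txt file of class indices per image, which you could parse and return alongside the Media object.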
I have a paid plan, but I have issues with the code itself: when I run it in Colab, I get anvil.server.UplinkDisconnectedError: uplink disconnected, so I am not even sure if the code itself is correct.
Hi @adhamfaisal
One (slightly clunky) option is to separate the work into three steps:
- First, an Anvil form publishes the image file to a designated Google Drive folder (say ‘image queue’)
- A Colab server continually processes this image queue from Google Drive (in your case, running YOLO-based detection; a sketch of this and the next step is below)
- A dialog within the Anvil frontend form then queries the backend Colab server, which returns the processed results when available
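To make steps 2 and 3 concrete, here is a rough sketch of the Colab side. The folder names are hypothetical, and `run_detection` stands in for whatever YOLO/Mask R-CNN call you use:

```python
# Colab side: poll a Google Drive folder for queued images, process them,
# and hand results back to the Anvil app on request.
import os
import time
import anvil.server
import anvil.media
from google.colab import drive

drive.mount('/content/drive')
QUEUE_DIR = '/content/drive/MyDrive/image_queue'  # hypothetical queue folder
DONE_DIR = '/content/drive/MyDrive/processed'     # hypothetical results folder

anvil.server.connect("YOUR-UPLINK-KEY")  # placeholder

@anvil.server.callable
def get_result(filename):
    # Step 3: the Anvil form polls this until the processed image appears.
    out_path = os.path.join(DONE_DIR, filename)
    if os.path.exists(out_path):
        return anvil.media.from_file(out_path, "image/jpeg")
    return None  # not ready yet

# Step 2: continually consume the queue. Uplink handles incoming calls
# on background threads, so this loop can run in the main thread.
while True:
    for name in os.listdir(QUEUE_DIR):
        src = os.path.join(QUEUE_DIR, name)
        run_detection(src, os.path.join(DONE_DIR, name))  # hypothetical helper: your YOLO call
        os.remove(src)
    time.sleep(5)
```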
I published an app called DejaVu to the community that demonstrates the last two bits, albeit with Mask R-CNN rather than YOLO. See details at: DejaVu Instructions - Google Docs