Starting a new / background thread in an HTTP endpoint

How do I get an HTTP endpoint to return a response after 1 second while a 10-second process keeps running in a background thread?

I tried with threading without success (see details below). I thought about making the worker a second HTTP endpoint and calling it with an asynchronous request, but requests-futures is not available, and it doesn’t sound kosher anyway.

Here is the description of my use case and some details with my experiments.

I need to create an HTTP endpoint that responds to something like this (I wouldn’t put the parameters in the URL, but that’s another story):

  • requests.get('[...]/lockfile/filename/username')

And does something like:

  • Check in the database whether filename is already locked by another user, and…
  • If the file is already locked, respond ERROR: file locked by <username>
  • If the file is not locked:
    • Start a slow background thread that creates thumbnails, etc.
    • Respond OK immediately, before the background thread has finished
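Outside of Anvil, the lock-then-spawn pattern above can be sketched with the standard library alone. This is illustrative, not the app’s actual code: the `locks` dictionary stands in for the database table, and `make_thumbnails` for the slow job (both names are mine):

```python
import threading
import time

# Stand-in for the database lock table (hypothetical; the real app uses a DB).
locks = {}
locks_guard = threading.Lock()

def make_thumbnails(filename):
    """Stand-in for the slow (~10 s) background job."""
    time.sleep(0.1)  # pretend this is slow work

def lock_file(filename, username):
    """Check the lock, then start the slow work in the background."""
    with locks_guard:
        owner = locks.get(filename)
        if owner is not None and owner != username:
            return f"ERROR: file locked by {owner}"
        locks[filename] = username
    # Respond immediately; the worker keeps running after we return.
    worker = threading.Thread(target=make_thumbnails, args=(filename,))
    worker.start()
    return "OK"
```

In Anvil this logic would live inside an `@anvil.server.http_endpoint` function; whether the worker actually survives the return is exactly the problem described below.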

I have played a little with background threads in Anvil, but I was disappointed by two things:

  • The background threads have no access to the database
  • The background threads are killed as soon as the main thread returns (even if they are not daemon)

Here is the test app: https://anvil.works/ide#clone:IPYCU6ABQVG3OZMO=ZPYQWV3T373T3YDVZ2SXXVEJ

Here is how to test it:

import requests
requests.get('<app-url>/test/1/2/print')
requests.get('<app-url>/test/2/1/print')
requests.get('<app-url>/test/1/2/db')
requests.get('<app-url>/test/2/1/db')

After running this test, the Log table shows only the rows created on the main thread, not the ones created on the worker thread. The app log shows rows printed by both the main and worker threads, but it stops showing the worker thread’s output once the worker outlives the main thread.

By default, in Anvil, the process that runs your server code is killed as soon as it has finished serving a request. This is for two reasons:

  1. To prevent you from accidentally “leaking” memory, threads, etc. that get “left behind” but keep consuming resources

  2. So we don’t have to keep your code running between requests, consuming CPU and memory. (E.g. if you loaded a ton of data into RAM and we left your server code running, that’s RAM we can’t use for anyone else!)

However, if you are on a Business plan or above, you can change this. If you open the runtime options (at the top right of a server module), you will see a checkbox marked “Keep server running”:

[screenshot: the “Keep server running” checkbox in the runtime options]

This…does what it says on the tin! Your server process will keep running between server calls. This means that if you have a costly setup operation (e.g. you need to load a machine-learning model from disk), you only pay that cost once, rather than on every call. It also means that background threads will continue running after your server function returns.
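The pay-the-cost-once idea can be sketched with a module-level cache. This is plain illustrative Python, not an Anvil API; `load_model` is a hypothetical expensive loader:

```python
import threading

_model = None
_model_lock = threading.Lock()

def load_model():
    """Hypothetical expensive setup (e.g. reading a model from disk)."""
    return {"weights": [1, 2, 3]}

def get_model():
    """Load once; later calls in the same long-lived process reuse the cache."""
    global _model
    with _model_lock:              # calls may arrive on concurrent threads
        if _model is None:
            _model = load_model()  # paid only on the first call
    return _model
```

With “Keep server running” off, the process dies after each request and the cache is rebuilt every time; with it on, the module-level `_model` survives between calls.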

Each call runs on a separate thread, so your code will need to be thread-safe if you’re serving multiple clients concurrently!

Your server process will be killed and restarted when you update your app (after it’s finished any calls in progress) - so if you create a runaway thread by mistake, it will be terminated as soon as you fix your mistake.


Note: There can still be more than one process serving your app at any given time, for reasons of volume or redundancy. Do not rely on global variables being shared between all requests!

ADMIN EDIT: I have moved my conversation with @stefano.menci to private messaging because it requires more details of his account configuration than I want to share on an open forum! -M