Right, forget what I said earlier; that would apply to processes, not threads/tasks. As far as I know, a dedicated server runs only one Anvil server, that is, only one process, so all the threads for all the tasks of all the apps run on the same core.
Tasks are threads, and they all run in the same process as the main server, so 4 background tasks will all run on the same core, and you will only use 25% of your 4 available cores.
If you want to use multiple cores, you can’t use tasks, threads or async; you need multiple Python processes.
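To make the threads-vs-processes point concrete, here is a minimal sketch (not Anvil-specific, everything here is illustrative) that launches independent Python processes with the standard library; the OS can schedule each one on a different core, which threads in a single CPython process can’t achieve for CPU-bound work:

```python
import subprocess
import sys

def run_workers(n):
    # Launch n independent Python interpreters; the OS is free to
    # schedule each one on a different core, unlike threads inside
    # a single CPython process.
    cmd = [sys.executable, "-c", "print(sum(i * i for i in range(10)))"]
    procs = [subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
             for _ in range(n)]
    # Collect each worker's output once it finishes
    return [int(p.communicate()[0]) for p in procs]
```

The same idea works with `multiprocessing`; plain `subprocess` is shown here because it mirrors what running separate uplink scripts looks like.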
As @p.colbert mentioned, you could have multiple uplink processes running in parallel, either in the same computer or in different computers.
I’ve never tried running other processes on my dedicated server. I have other VMs for my uplinks, because they need local resources. In your case, since you don’t need local resources, you could run those processes on the same machine the Anvil server runs on (that’s the very point of this question).
Perhaps you can figure out a way to add your scripts to the repository. Those scripts can’t be in the server_code directory, otherwise they would be imported every time the app starts. Perhaps you could put them in a table, have a server module write them to a file in /tmp, and run a process with that file.
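A rough sketch of that idea, using only the standard library (the data-table lookup is Anvil-specific and only hinted at in a comment; the table and column names are hypothetical):

```python
import os
import subprocess
import sys
import tempfile

def launch_script(source):
    # In a real server module, `source` would come from your data table,
    # e.g. app_tables.scripts.get(name="worker_1")["code"]  (hypothetical schema).
    fd, path = tempfile.mkstemp(suffix=".py", dir="/tmp")
    with os.fdopen(fd, "w") as f:
        f.write(source)
    # Run the script as an independent process and hand back the handle
    return subprocess.Popen([sys.executable, path])
```

Keeping the `Popen` handle (or at least the PID) around is what lets you check on the process later.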
The app’s server code would spawn 3 new processes, each running your script that connects with an uplink key. A scheduled task would check that they are still up, perhaps by trying to call one of their callable functions.
This is dangerous. Anvil has a refined way to ensure that no threads get stuck and you don’t run out of resources, and if something goes wrong, it kills the server process and restarts from scratch. But if you screw up and leave hundreds of your own processes hanging, there is no one doing the housekeeping for you. So… you are playing with fire.