Anvil Uplink Questions

I’m working on the design for an MVP that I’m excited to put together. Uplink is key to it, but I have some questions.

  1. I’ve read that Uplink is multi-threaded. Is this implemented in a way where I could expect to make n concurrent calls for n logical processors on the Uplink machine? Maybe I’m totally off on threading in Python; I’m mostly going off my own experience running multiple kernels on my local machine.

  2. I’ve read in the post below that Uplink load balancing isn’t implemented, but that post is from a while back. Is it still TBD?
    Multiple Uplinks To Same App

Thanks!

To answer #1:
Uplink is multi-threaded in this sense: each incoming request starts its own Python thread. This allows requests to come in and be accepted in a timely fashion, no matter how long each one actually takes to finish.

The flip side: threads (and the functions they call) should not share writeable variables in the usual way. This includes any global state, such as database connections. Anything shared should be made thread-safe before it is used.
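
As a minimal sketch of what that means in practice (the Uplink key, function name, and counter here are placeholders of my own, not anything from your app): because every incoming call runs in its own thread, any shared writeable state needs a lock.

```python
import threading
import anvil.server

anvil.server.connect("YOUR-UPLINK-KEY")  # placeholder key

# Shared, writeable state: every incoming call runs in its own thread,
# so mutation must be guarded by a lock.
_counter_lock = threading.Lock()
_call_count = 0

@anvil.server.callable
def count_call():
    global _call_count
    with _counter_lock:  # only one thread updates the counter at a time
        _call_count += 1
        return _call_count

anvil.server.wait_forever()
```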

Thank you, that answers number 1. My question was really about how much I should expect to scale an individual Uplink for a given volume of calls. But I suppose I shouldn’t think about it so naively.

For me, the main factors were:

  1. What is Anvil’s time limit on individual calls?
  2. How long does an individual function call take?
  3. How likely is it that another call will come in during this time?

In my application, some of my calls were so compute-intensive (on the Uplink side) that I had to build an in-house “pipeline” to handle them. The initial Uplink function simply stuffed the compute job into the pipeline and acknowledged receipt by returning right away (i.e., within the time limit). Its caller created a database row to receive the result. At the other end of the pipeline, a second Uplink program delivered the result to that database row.

This architecture allows load-balancing. If one CPU can’t handle the compute load, then I can widen that part of the pipeline, allowing two or more jobs to calculate in parallel. So far, we haven’t had the need.
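
As a rough illustration (not my actual code; the table name `jobs`, its columns, and the compute function are made-up stand-ins), the whole pattern can be condensed into one Uplink script: the callable returns immediately, and one or more worker threads drain the queue and write results back to the row the caller created.

```python
import queue
import threading

import anvil.server
from anvil.tables import app_tables

anvil.server.connect("YOUR-UPLINK-KEY")  # placeholder key

_jobs = queue.Queue()

@anvil.server.callable
def submit_job(job_id, payload):
    # Accept the job and return right away, well inside the call time limit;
    # the heavy lifting happens on a worker thread.
    _jobs.put((job_id, payload))
    return "accepted"

def _expensive_compute(payload):
    ...  # stand-in for the long-running calculation

def _worker():
    while True:
        job_id, payload = _jobs.get()
        result = _expensive_compute(payload)
        # Deliver the result to the database row the caller created for this job
        # (assumes a Data Table named 'jobs' with job_id / status / result columns).
        row = app_tables.jobs.get(job_id=job_id)
        row["status"] = "done"
        row["result"] = result
        _jobs.task_done()

# "Widening the pipeline" is just starting more worker threads
# (or running extra Uplink worker processes on other machines).
for _ in range(2):
    threading.Thread(target=_worker, daemon=True).start()

anvil.server.wait_forever()
```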

But you should measure your case. Very few Anvil developers actually need to go as far as I did.


Thank you very much for the detailed answer. I’ve got enough to go on now.