Scaling App Packages

I have other apps that act as libraries for my main app. They're pretty lightweight; I just separated them to reuse for other apps in the future. But it's kind of slow loading them, so I was thinking I could uplink the libraries from local machines, or even client/customer machines. I read some stuff in other posts about this, but since uplink uses anvil.server.call to access things through the connection, what about classes and methods? Do we decorate classes with @anvil.server.callable, or is that for functions only?

I don't have the Business plan with the persistent server, just the Professional. So does my main app call the app package from the other server, which has to spin up, or does the server package get copied over to the main app to use?

I have a 32-core / 64-thread machine. What's the best way to scale? Launch many instances from PyCharm, or separate Python environments with Notepad++ on the same machine?

You could Uplink code that uses the libraries, but only the @anvil.server.callable functions would be available.

Bear in mind that "slow loading" could have many different causes, and shared libraries are often not one of them. This could be a case of premature optimization. Be sure to measure where the time's going, and find the actual bottlenecks, before trying to find a fix.

I’ve never seen them used on classes. To the best of my knowledge, it’s functions only.
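If you need library classes behind an Uplink, a common pattern is to keep the class as plain Python and expose thin @anvil.server.callable wrappers around the methods you need. A minimal sketch, with made-up names (ReportBuilder, build_report):

import anvil.server

# The library class stays a plain Python class; no decorator on it.
class ReportBuilder:
    def __init__(self, title):
        self.title = title

    def build(self):
        return "Report: " + self.title

# Only this thin wrapper is registered with Anvil.
@anvil.server.callable
def build_report(title):
    return ReportBuilder(title).build()

The app then calls anvil.server.call('build_report', 'Q3 sales') and never needs to see the class itself.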

I use Uplink for one of my compute-intensive tasks, in just this manner.

You probably don’t want to give these machines your Server-permission-level Uplink Key. With that, they can do all sorts of mischief, including destroy all your data. The door’s wide open for anyone who hacks up their local copy of your Python Uplink program.

They might use your Client-permission-level Uplink Key instead. It lets their Uplink programs call anything the browser can call. But they can’t provide any Uplink functions for others to call with that key.
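In practice, the only difference in the Uplink script is which key you paste into anvil.server.connect. A sketch, with a placeholder key:

import anvil.server

# Paste the *client* Uplink key here, not the server key.
anvil.server.connect("client_UPLINK_KEY_PLACEHOLDER")

# A client-key program can call server functions, just as the browser can...
print(anvil.server.call("run_app_package"))

# ...but it cannot register @anvil.server.callable functions of its own
# for others to call with that key.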

“Best” is always in the eye of the beholder, usually because of unstated constraints or priorities. But you could certainly spin up multiple Uplink programs on such a machine. I have. (How you do that, whether with PyCharm or batch files or anything else, is a matter of preference.)

They can even offer the exact same @anvil.server.callable functions. Anvil’s servers will try to distribute the calls among the programs, in a form of load-balancing.
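For example, the same worker script (the key and names below are placeholders) can be started on each machine or in each environment, and Anvil will pick one of the connected copies for each call:

# uplink_worker.py: run this same script on every worker.
import anvil.server

anvil.server.connect("SERVER_UPLINK_KEY_PLACEHOLDER")

@anvil.server.callable
def crunch_numbers(data):
    # Whichever connected worker Anvil picks for this call does the work.
    return sum(data)

anvil.server.wait_forever()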

If you’re concerned about single points of failure, you might use a small group of such machines, each running one or more Uplink programs. If one PC fails (or has to reboot), the others can fill in for it, in the meantime.


I will add some details to what Phil just said.

Some of the functions running on my uplinks can run for hours and use resources that cannot be shared, like Excel, CAD or CAM macros designed to run alone and to use one temporary file with one hard-wired name.

My solution was to use the big box with tons of cores and memory to run multiple virtual machines, each with two cores, then to add the name of the machine to the name of the decorated callable function, something like "function1_vm2", and then to use some logic to make sure that the same machine doesn't use the same resource twice.
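A rough sketch of that naming trick (the key, function, and VM names below are placeholders, not the actual code):

# Worker script run on each VM: register the callable under a per-VM name,
# e.g. "function1_vm2", so the queue manager can target a specific machine.
import socket
import anvil.server

anvil.server.connect("SERVER_UPLINK_KEY_PLACEHOLDER")

VM_NAME = socket.gethostname()  # e.g. "vm2"

@anvil.server.callable("function1_" + VM_NAME)
def function1(job):
    # Run the Excel / CAD / CAM macro that must have the VM to itself.
    ...

anvil.server.wait_forever()

The queue manager then calls anvil.server.call('function1_' + chosen_vm, job) only when it knows that VM is free.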

In short I have my own load balancer / queue manager to decide what to execute and where, and what should wait before being executed.


Other server-callable functions that run for just a few seconds and don't have resource-sharing problems are set up like standard uplink callables: each VM has the same script, each script registers the same functions with the same names, and the server calls whichever one it feels like.

In these cases I had to make sure that the functions were thread safe, because the same script could have multiple calls to the same function running concurrently.
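For example (a minimal sketch with made-up names, where do_work stands in for the real job), any state shared between calls needs a lock, because several threads in the same script may be inside quick_job at once:

import threading
import anvil.server

_lock = threading.Lock()
_jobs_done = 0  # shared mutable state

@anvil.server.callable
def quick_job(item):
    global _jobs_done
    result = do_work(item)   # assumed thread-safe helper
    with _lock:              # guard the shared counter
        _jobs_done += 1
    return result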


Some uplink scripts interact with software that only runs on Windows and needs the user to be signed in, so my VMs cannot run as services / agents / daemons. The VMs are set up to automatically sign in and run the uplink scripts at reboot. This ensures that the uplinks keep running through automatic OS updates, power outages, etc.

When one VM is rebooting the others will take care of the business, and all runs smoothly.


OK, thanks for the feedback. I'm trying to wrap my head around this; I haven't really used uplink yet. So far I've only been able to open up a websocket connection to establish the uplink.

So here's the example: app_packag_1 runs as a separate Anvil app and is a dependency for my main app.

class app_package:
    def __init__(self):
        pass

    def method1(self):
        print("this app_package")

so then my main app imports it:

import anvil.server
from app_packag_1 import app_package  # the class from my dependency app

@anvil.server.callable
def run_app_package():
    imported_app = app_package()
    imported_app.method1()

# Let's assume I can execute it with the code below
anvil.server.call('run_app_package')

So I want to then scale up my app package to run on my local machine:

import anvil.server

# should this key come from the main program, or from app_package?
anvil.server.connect("server_123secretcode")

# I import a copy of my code via GitHub
class app_package:
    def __init__(self):
        pass

    def method1(self):
        print("this app_package")

@anvil.server.callable
def run_app_package():
    imported_app = app_package()
    imported_app.method1()

anvil.server.wait_forever()

Am I on the right track on how to do this? It seems like the uplink key should come from my main Anvil app, which is calling the same anvil.server.call('run_app_package').

So the code that actually performs this task is in the other Anvil app where I'm hosting the app_package, and also on my local computer as a copy of the same code. Do I just need to instantiate it, recreate def run_app_package(), and combine them on my local machine for it to actually run the code there too?

The code that actually performs the task has to reside on the machine that performs it, right?
If my faster local machine takes longer than 30 seconds to finish, the process gets killed right?

I also make calls to background tasks, so if my machine takes longer than 30 seconds to process, can background tasks be called via uplink too?

Er, no. Each Anvil App has its own set of Uplink keys. anvil.server.call works only within the scope of a single App's Uplink keys, so it's limited to programs that Uplink via that App's keys. It's not carte blanche to call any function in any App, even if you happen to own both those Apps.

Moreover, an Uplink program can connect to (and communicate with) only one Anvil App at a time.

If you want to communicate between Anvil Apps, there are two well-known methods:

  1. Via a database table they both share.
  2. Via HTTP endpoints (sketched below).
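For the HTTP-endpoint option, here's a minimal sketch (the path, URL, and the do_the_work helper are made up): one App exposes an endpoint in a Server Module, and the other App calls it.

# Server Module of the App that owns the functionality:
import anvil.server

@anvil.server.http_endpoint("/run-package")
def run_package_endpoint(**params):
    payload = (anvil.server.request.body_json or {}).get("payload")
    return {"result": do_the_work(payload)}  # do_the_work is an assumed helper

And the caller in the other App:

import anvil.http

resp = anvil.http.request(
    "https://YOUR-APP.anvil.app/_/api/run-package",  # placeholder app URL
    method="POST",
    json=True,
    data={"payload": 42},
)
print(resp["result"])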

Yes.

No, but the call times out.

I have a similar situation. My calcs often take longer than 30 seconds. So I set up a “pipeline” on my PC. The caller creates a database table row to receive the result, and passes the row ID along with the calculation job. My “listener” accepts the job and returns immediately (so that it can’t time out). It passes the job to the calc engine. When that finishes, it posts the result to the database row. The caller can check the database row, periodically, to see when the job is done.
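A stripped-down sketch of such a pipeline (the table, column, key, and function names below are made up for illustration). On the App side:

import anvil.server
from anvil.tables import app_tables

@anvil.server.callable
def submit_job(params):
    # One row per job; the listener will fill in the result later.
    row = app_tables.jobs.add_row(status="pending", result=None)
    anvil.server.call("accept_job", row.get_id(), params)  # returns at once
    return row.get_id()

@anvil.server.callable
def check_job(row_id):
    row = app_tables.jobs.get_by_id(row_id)
    return row["status"], row["result"]

And the listener on the PC:

import threading
import anvil.server
from anvil.tables import app_tables

anvil.server.connect("SERVER_UPLINK_KEY_PLACEHOLDER")

def _run(row_id, params):
    result = long_calculation(params)  # assumed local calc engine
    app_tables.jobs.get_by_id(row_id).update(status="done", result=result)

@anvil.server.callable
def accept_job(row_id, params):
    # Hand the job to a worker thread and return before the call can time out.
    threading.Thread(target=_run, args=(row_id, params), daemon=True).start()

anvil.server.wait_forever()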

Not directly. But if you call a function that resides on an Anvil server, that function can create the background task for you.
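Something like this in a Server Module (names are illustrative):

import anvil.server

@anvil.server.background_task
def crunch_in_background(data):
    # The long-running work happens here, on the Anvil server.
    ...

@anvil.server.callable
def start_crunch(data):
    # Callable from the browser or from an Uplink; launches the task and
    # returns its id straight away.
    task = anvil.server.launch_background_task("crunch_in_background", data)
    return task.get_id()

From the Uplink, you'd then just call anvil.server.call('start_crunch', data).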

When the uplink function is called for a quick job, I let it finish the job and quickly return to the caller.

When the uplink function is going to work for a long time, I let it spin up a new thread or process, or even a new script that connects with its own uplink connection, and return control to the caller immediately.
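Roughly, the split looks like this (a sketch with made-up names, where run_job stands in for whatever actually does the work):

import threading
import anvil.server

@anvil.server.callable
def do_job(job, quick=True):
    if quick:
        return run_job(job)  # finishes in seconds, so just return the result
    # Long job: hand it to a worker thread and return right away,
    # so this call never comes close to the timeout.
    threading.Thread(target=run_job, args=(job,), daemon=True).start()
    return "started"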

So it’s difficult to identify one single pattern.

In general, whatever you can do in a server-callable function in a server module, you can do in an uplink. In fact, I use this very feature to debug server modules: an app that normally runs and calls server functions has those functions running on the server, but as soon as I start an uplink that registers server-callable functions with the same names, the next call from the app that is already running will run on my computer, where it can be debugged step by step in PyCharm.
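For example (the function name and key are placeholders), a local script like this intercepts the calls while it is connected:

# debug_uplink.py: run this locally in PyCharm while the app is running.
import anvil.server

anvil.server.connect("SERVER_UPLINK_KEY_PLACEHOLDER")

@anvil.server.callable
def calculate_quote(order):
    # Same name as the Server Module function: while this script is
    # connected, anvil.server.call("calculate_quote", ...) runs here,
    # so you can set breakpoints and step through it.
    ...

anvil.server.wait_forever()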

You can manage this by playing with some environment settings.

EDIT
But going back to the subject of this post, I have never used uplink for solving scalability problems.

For me uplink is a way to do stuff that I can’t do on the server, because the server can’t access my local network drives, my CAD, CAM or Excel macros and other local resources.

Have you already had some problems caused by bad scalability?
What type of problems?
You mentioned a very generic "it's kind of slow loading".
Do you know what’s slow and how slow it is?

As @p.colbert said, measuring is the first thing to do, before premature optimization.


Well, I made a video demoing how slow it is. I guess I was exaggerating about it being 1-2 min; at least it feels like that when it takes 10 to 20 seconds to load. I'm spoiled…