Connecting uplink code to multiple apps

I have some uplink functions that access an internal database and make the content available to an Anvil app. So far, so good.

However, I now have a second app which needs access to some of those functions and I’m struggling to connect to both. Whichever uplink key I connect to first succeeds, but the second always fails.
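
Roughly the pattern that fails, with placeholder keys:

import anvil.server

anvil.server.connect("<app 1 uplink key>")  # whichever comes first succeeds
anvil.server.connect("<app 2 uplink key>")  # the second always fails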

Is this actually possible or do I need to be slightly less DRY on the uplink side?

Do the functions have the same name?
Sorry, misread.

I’ve not tried connecting to two apps from one server script - I’ll give it a go and see what I find when I get back later today.

I’m guessing the anvil.server object is a singleton. It would probably need Anvil Central to say whether you could instantiate multiple objects.

Just poking around in the uplink source code and it looks like you only get one connection. ☹️

I’ll just have to repeat things on the uplink side

Is there no way to wrap it all into a class and instantiate that several times?

Quite possibly. I shall have a play and report back…

I’ve ended up shifting the functions into a separate library and then having a service for each connected Anvil app (currently two of those). Both services import the library and wrap the calls they use in @anvil.server.callable decorators.

Not ideal, but not too disgusting either!
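
In outline, it looks something like this (file and function names are illustrative):

# shared_lib.py - the plain, app-agnostic functions (pip-installable)
def get_records():
    # stand-in for the real internal-database query
    return ["example record"]

# service_app1.py - one uplink service per connected Anvil app
import anvil.server
import shared_lib

@anvil.server.callable
def get_records():
    return shared_lib.get_records()

anvil.server.connect("<app 1 uplink key>")
anvil.server.wait_forever()

# service_app2.py is identical apart from its key and which calls it wraps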

Do you fancy creating a “Show n Tell” on how to do that?

I think others (me included) would find that interesting.

I have found a different solution for functions/requests that need to be used by multiple apps and also need a lot of RAM (to stay snappy). On a machine external to Anvil (be it your home workstation or a virtual machine in the cloud), set up two files and have them both run simultaneously:

1) API.py - this application keeps whatever you need to "share" in memory

from flask import Flask, request, jsonify

app = Flask(__name__)

# Load bunches of data into memory once, at startup
stuff = {"example": "data"}  # stand-in for whatever large dataset you share

@app.route("/somename")
def function():
    arg = request.args.get('argname')  # optional query parameter
    # ...use arg to select/filter from the in-memory data...
    return jsonify(stuff)

if __name__ == "__main__":
    # use some arbitrary port not used for a common purpose
    app.run(host='127.0.0.1', port=5001)

2) AnvilStuff.py - this script will communicate with your app

from requests import get
import anvil.server

@anvil.server.callable
def serverfunction(arg):
    # Forward the request to the local API process and return its JSON reply
    resp = get('http://127.0.0.1:5001/somename?argname=' + arg).json()
    return resp

anvil.server.connect("<this app's uplink key>")
anvil.server.wait_forever()

Hope this helps.

Yep, that’s a very similar solution - move the shared functions to a separate file in your case (in mine, a separate pip-installable library) and then wrap them with @anvil.server.callable decorators for each Anvil app that needs them.

The only way I made it work is by using the multiprocessing library with two queues per app, one for commands and one for responses. You can extend it with parameters if needed.

import multiprocessing
import time

def run(anvil_key, queue_in, queue_out):
    print('starting')
    # Import and connect inside the child process, so each process
    # gets its own independent uplink connection
    import anvil.server
    anvil.server.connect(anvil_key)
    while True:
        # Get the next command from the queue, forward it to the app,
        # and put the result on the response queue
        command = queue_in.get()
        res = anvil.server.call(command)
        queue_out.put(res)

# Guard so the module is safe to re-import when processes are
# spawned rather than forked (required on Windows)
if __name__ == "__main__":
    q1_in = multiprocessing.Queue()
    q1_out = multiprocessing.Queue()
    app1 = multiprocessing.Process(target=run, args=('app1_key', q1_in, q1_out))
    app1.start()

    q2_in = multiprocessing.Queue()
    q2_out = multiprocessing.Queue()
    app2 = multiprocessing.Process(target=run, args=('app2_key', q2_in, q2_out))
    app2.start()

    while True:
        q1_in.put('test')
        res = q1_out.get()
        print('app1 res', res)
        q2_in.put('test')
        res = q2_out.get()
        print('app2 res', res)
        time.sleep(1)

I hope it helps.

Have you tried this both on Linux and Windows?

If I remember correctly, multiprocessing has a humongous overhead on Windows.

I have used it for an app that I develop on my Windows laptop and deploy to Linux for production. The production app is snappy, but on Windows it is unusable.

I can’t speak to this specific use-case, but I’ve done a bit of multiprocessing on Windows and it’s been fine. Absolutely no doubt it’s better on Linux (I’m way over my skis here, but this link talks about how Linux forks processes while Windows has to spawn entirely new ones). But in the right use-cases multiprocessing still delivers enormous benefit on a Windows machine - I’m betting that for many (though probably not all) use cases the difference is more incremental than fundamental.
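
If you want to feel the difference for yourself, here is a minimal sketch: forcing the "spawn" start method on Linux reproduces the Windows behaviour, where each child is a fresh interpreter rather than a cheap fork of the parent.

import multiprocessing as mp
import time

def work():
    print("hello from child")

if __name__ == "__main__":
    # Linux defaults to "fork" (a cheap copy of the parent process);
    # Windows only supports "spawn" (a fresh interpreter that re-imports
    # this module, which is why the __main__ guard is mandatory there).
    mp.set_start_method("spawn")
    start = time.time()
    p = mp.Process(target=work)
    p.start()
    p.join()
    print(f"spawn took {time.time() - start:.2f}s")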