I have some uplink functions that access an internal database and make the content available to an Anvil app. So far, so good.
However, I now have a second app which needs access to some of those functions, and I’m struggling to connect to both. Whichever uplink key I connect with first succeeds, but the second always fails.
Is this actually possible, or do I need to be slightly less DRY on the uplink side?
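For context, the failing pattern looks something like this (a minimal sketch; the keys are placeholders):

import anvil.server

anvil.server.connect("APP_1_UPLINK_KEY")  # whichever connection happens first succeeds
anvil.server.connect("APP_2_UPLINK_KEY")  # the second always fails, as described above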
I’ve ended up shifting the functions into a separate library and then running a service for each connected Anvil app (currently two of those). Both services import the library and wrap the calls they use with anvil.server decorators.
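A minimal sketch of that layout, with hypothetical names (shared_lib stands in for the separate library, get_records for one of its functions):

# service_app1.py - one of these services per connected Anvil app
import anvil.server
import shared_lib  # hypothetical: the library holding the real database logic

@anvil.server.callable
def get_records(query):
    # thin wrapper: the shared library does the actual work
    return shared_lib.get_records(query)

anvil.server.connect("APP_1_UPLINK_KEY")  # placeholder for this app's uplink key
anvil.server.wait_forever()

A second file, service_app2.py, would be identical apart from its uplink key and the subset of functions it wraps.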
I have found a different solution for functions/requests that need to be used by multiple apps and also need a lot of RAM (to be snappy). On a machine external to Anvil (be it your home workstation or a virtual machine in the cloud), set up two files and have them both run simultaneously:
1) API.py - this application will keep whatever you need to "share"
from flask import Flask, request, jsonify

app = Flask(__name__)

stuff = {}  # ~Load bunches of data into memory~ (populate this once at startup)

@app.route("/somename")
def function():
    if 'argname' in request.args:
        arg = request.args['argname']
        return jsonify(stuff.get(arg))  # serve just the piece the caller asked for
    return jsonify(stuff)

if __name__ == "__main__":
    app.run(host='127.0.0.1', port=5001)  # use some arbitrary port not used for a common purpose
2) AnvilStuff.py - this script will communicate with your app
from requests import get
import anvil.server

@anvil.server.callable
def serverfunction(arg):
    # forward the call to the local API process and return its JSON response
    resp = get('http://127.0.0.1:5001/somename?argname=' + arg).json()
    return resp

anvil.server.connect("YOUR_UPLINK_KEY")  # placeholder: this app's uplink key
anvil.server.wait_forever()
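From inside the Anvil app, the function is then called like any other server function, e.g. (with a hypothetical argument):

result = anvil.server.call('serverfunction', 'some_value')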
Yep, that’s a very similar solution - move the shared functions to a separate file in your case (in mine, a separate pip-installable library) and then wrap them with @anvil.server.callable decorators for each Anvil app that needs them.
If I remember correctly, multiprocessing has a humongous overhead on Windows.
I have used it for an app that I develop on my Windows laptop and deploy to Linux for production. The production app is snappy, but on Windows it is unusable.
I can’t speak to this specific use case, but I’ve done a bit of multiprocessing on Windows and it’s been fine. There’s absolutely no doubt it’s better on Linux (I’m way over my skis here, but this link talks about how Linux forks processes while Windows has to spawn entirely new processes). But in the right use cases multiprocessing still delivers enormous benefit on a Windows machine - I’m betting that for many (though probably not all) use cases the difference is more incremental than fundamental.
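To make the fork-versus-spawn point concrete, here is a minimal sketch (not from the thread): on Windows, multiprocessing launches a fresh interpreter per worker and re-imports the module, which is why the __main__ guard is mandatory there; Linux has historically defaulted to the much cheaper fork.

import multiprocessing as mp

def work(n):
    return n * n

if __name__ == "__main__":
    # "spawn" (the Windows default) starts a fresh interpreter per worker and
    # re-imports this module; "fork" (the long-standing Linux default) clones
    # the already-initialized parent process, which is far cheaper to start.
    with mp.Pool(4) as pool:
        print(pool.map(work, range(8)))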