What I’m trying to do:
My App has DevTest data, in its Default Database, accessible from a couple of Environments. I want to copy some “bootstrap” data from that database to its other database named Public, accessible from an environment named Published.
When Server code runs, it connects to only one database. So I’m writing an Uplink program that connects to each Environment in turn, via its unique server uplink key. When I’m done with each connection, I close it. Code is successfully connecting to both Environments:
```
*** Remote Interpreter Reinitialized ***
[Dbg]>>>
Connecting to wss://anvil.works/uplink
Anvil websocket open
Connected to "Main Development" as SERVER
Anvil websocket closed (code 1006, reason=Going away)
Connecting to wss://anvil.works/uplink
Anvil websocket open
Connected to "Published" as SERVER
```
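For context, the connect/disconnect pattern in that log is wrapped in an in-house context manager, `anvil_uplink_to` (from `common.anvil_uplink_mgr`), which is essentially a thin wrapper around `anvil.server.connect()` and `anvil.server.disconnect()`. A minimal sketch of that idea — the name-to-key lookup table here is an assumption, with placeholder keys standing in for the real ones:

```python
import contextlib

# Assumption: a simple Environment-name -> uplink-key lookup; the real
# keys live in the in-house common.anvil_uplink_mgr module.
UPLINK_KEYS = {
    "Main Development": "server_AAAA",  # placeholder, not a real key
    "Published": "server_BBBB",         # placeholder, not a real key
}

@contextlib.contextmanager
def anvil_uplink_to(env_name):
    """Connect to one Environment's server uplink; disconnect on exit."""
    import anvil.server  # deferred import, so the sketch loads without anvil installed
    anvil.server.connect(UPLINK_KEYS[env_name])
    try:
        yield
    finally:
        anvil.server.disconnect()  # close this Environment's connection
```

Each `with anvil_uplink_to(...)` block then owns exactly one connection, which is what produces the connect/close/connect sequence in the log above.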
Reading the Default Database works just fine. Trying to read from the Public database (i.e., to see how much data I’ve already transferred) fails, with the message
```
anvil.tables._errors.TableError: 'This table cannot be written or searched by this app'
```
But as you can see in the Environment’s table-permission settings, “Published” does let table Members be searched by Server code.
I suspected that I needed to flush some kind of cache before connecting to the new Environment. With a bit of digging, I think I may have found the culprit: `app_tables.cache`, a.k.a. `AppTables.cache`.
This cache can be reset, while the Uplink program is disconnected, by setting `anvil.tables.AppTables.cache = None`, as in:
```python
# web-service imports
import anvil.server
import anvil.tables as tables
from anvil.tables import app_tables

# in-house packages
from common.anvil_uplink_mgr import anvil_uplink_to

# constants
DEFAULTS_USER_ID = *private*

def main():
    with anvil_uplink_to('Main Development'):
        # fetch user record
        defaults_user = dict(
            app_tables.members.search(email=DEFAULTS_USER_ID)[0])

    anvil.tables.AppTables.cache = None  # req'd for search() below to succeed

    with anvil_uplink_to('Published'):
        # does user already exist?
        search_iterator = app_tables.members.search(email=DEFAULTS_USER_ID)
```
However, it is not clear that this undocumented hack is at all safe or wise.
Is there a better way to reset the globals of `anvil-uplink` between connections?
Edit 1:
For this particular task, I can delegate some of the 2nd connection’s work (the database stuff) to Server-Module code. That should resolve my immediate problem. However, in many cases, code for database “surgery” should not be left in Server code, for various reasons.
Edit 2:
I suggest that, upon a successful `anvil.server.disconnect()`, the `anvil-uplink` package should “clear out” any stale data that could adversely impact use of the next connection. Alternatively, it might wait until the next successful `anvil.server.connect()`.
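Until something like that lands in the package, the suggested clean-up can be approximated in user code by folding the reset into the connection manager itself, so every `with` block both disconnects and drops the stale table metadata. A sketch — the wrapper name is mine, and `AppTables.cache` is the same undocumented attribute as above, so this is no more officially supported than the original workaround:

```python
import contextlib

@contextlib.contextmanager
def fresh_uplink(uplink_key):
    """Connect to an uplink; on exit, disconnect AND drop stale table metadata."""
    import anvil.server  # deferred imports, so the sketch loads without anvil installed
    import anvil.tables
    anvil.server.connect(uplink_key)
    try:
        yield
    finally:
        anvil.server.disconnect()
        # Same undocumented reset as in the workaround above: force the
        # next connection to re-fetch its own table metadata.
        anvil.tables.AppTables.cache = None
```

This at least keeps the hack in one place, instead of scattered between `with` blocks in every transfer script.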