In Uplink, is it safe to reset anvil.tables.AppTables.cache between connections? Is there a better way?

What I’m trying to do:
My App has DevTest data in its Default Database, accessible from a couple of Environments. I want to copy some "bootstrap" data from that database to the app's other database, named Public, which is accessible from an environment named Published.

When Server code runs, it connects to only one database. So I'm writing an Uplink program that connects to each Environment in turn, via its unique server uplink key. When I'm done with each connection, I close it. The code successfully connects to both Environments:
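(For reference, anvil_uplink_to is a thin in-house context manager around anvil.server.connect() and anvil.server.disconnect(). The sketch below shows roughly what it amounts to; UPLINK_KEYS is a stand-in for however the real module stores its private keys.)

# Rough sketch of common.anvil_uplink_mgr.anvil_uplink_to; a guess at its
# shape, not the actual module. UPLINK_KEYS is a placeholder.
from contextlib import contextmanager

import anvil.server

UPLINK_KEYS = {
    'Main Development': '<server uplink key>',
    'Published': '<server uplink key>',
}

@contextmanager
def anvil_uplink_to(environment_name):
    """Connect to one environment's uplink; always disconnect on exit."""
    anvil.server.connect(UPLINK_KEYS[environment_name])
    try:
        yield
    finally:
        anvil.server.disconnect()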

*** Remote Interpreter Reinitialized ***

[Dbg]>>>

Connecting to wss://anvil.works/uplink
Anvil websocket open
Connected to "Main Development" as SERVER
Anvil websocket closed (code 1006, reason=Going away)
Connecting to wss://anvil.works/uplink
Anvil websocket open
Connected to "Published" as SERVER

Reading the Default Database works just fine. Trying to read from the Public database (i.e., to see how much data I've already transferred) fails with the message:

anvil.tables._errors.TableError: 'This table cannot be written or searched by this app'

But as you can see here, Environment Published does let table Members be searched by Server code:
[screenshot: table permissions for Members in the Published environment, with search access enabled for Server code]

I suspected that I needed to flush some kind of cache before connecting to the new Environment.

With a bit of digging, I think I may have found the culprit: app_tables.cache, a.k.a. AppTables.cache.

This cache can be reset, while the Uplink program is disconnected, by setting

anvil.tables.AppTables.cache = None

as in

# web-service imports
import anvil.server
import anvil.tables as tables
from anvil.tables import app_tables

# in-house packages
from common.anvil_uplink_mgr import anvil_uplink_to

# constants
DEFAULTS_USER_ID = *private*

def main():
    with anvil_uplink_to('Main Development'):
        # fetch user record
        defaults_user = dict(
            app_tables.members.search(email=DEFAULTS_USER_ID)[0])

    anvil.tables.AppTables.cache = None  # req'd for search() below to succeed

    with anvil_uplink_to('Published'):
        # does user already exist?
        search_iterator = app_tables.members.search(email=DEFAULTS_USER_ID)

However, it is not clear that this undocumented hack is at all safe or wise.

Is there a better way to reset the globals of anvil-uplink between connections?

Edit 1:
For this particular task, I can delegate some of the 2nd connection’s work (the database stuff) to Server-Module code. That should resolve my immediate problem. However, in many cases, code for database “surgery” should not be left in Server code, for various reasons.
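For the record, that delegation could look something like the sketch below: a callable in a Server Module of the Published environment does the write, and the uplink script's second connection only calls it. The function name and fields here are made up for illustration.

# In a Server Module of the Published environment (hypothetical name):
import anvil.server
from anvil.tables import app_tables

@anvil.server.callable
def upsert_bootstrap_member(member_fields):
    """Create or update the bootstrap user in this environment's database."""
    row = app_tables.members.get(email=member_fields['email'])
    if row is None:
        app_tables.members.add_row(**member_fields)
    else:
        row.update(**member_fields)

# The uplink script's second connection then reduces to:
#     with anvil_uplink_to('Published'):
#         anvil.server.call('upsert_bootstrap_member', defaults_user)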

Edit 2:
I suggest that, upon a successful anvil.server.disconnect(), the anvil-uplink package should "clear out" any stale data that could adversely affect use of the next connection. Alternatively, it might wait until the next successful anvil.server.connect().
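Until something like that exists in the library, the same idea can be folded into the wrapper itself, i.e. clear the table cache as part of disconnecting. This is just the hack from above moved into the context manager, still relying on the undocumented AppTables.cache attribute:

# Variant of the anvil_uplink_to sketch that forgets the previous app's
# table list on disconnect, so the next connection starts clean. Treat it
# as a stopgap, since AppTables.cache is undocumented.
from contextlib import contextmanager

import anvil.server
import anvil.tables

@contextmanager
def anvil_uplink_to(environment_name):
    anvil.server.connect(UPLINK_KEYS[environment_name])  # keys as in the earlier sketch
    try:
        yield
    finally:
        anvil.server.disconnect()
        anvil.tables.AppTables.cache = None  # drop the stale table list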

I just looked into something similar the other day and read through how AppTables.cache gets populated.
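Roughly, it amounts to this (paraphrased rather than pasted verbatim; _fetch_table_list is a stand-in name for whatever actually asks the connected app for its table list):

# Paraphrase of the relevant behaviour in anvil.tables, not the verbatim
# library source.
def _fetch_table_list():
    # Stand-in for the real call that asks the connected app for its tables.
    ...

class AppTables:
    cache = None

    def __getattr__(self, name):
        if AppTables.cache is None:
            # The table list is fetched once from the connected app and then
            # reused for the life of the process, unless cache is reset to None.
            AppTables.cache = _fetch_table_list()
        try:
            return AppTables.cache[name]
        except KeyError:
            raise AttributeError("No such app table: '%s'" % name)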

So to 'refresh' it, I would think what you are doing should work, since it gathers the information again if cache is None.

That’s exactly the library code that led me to try it.

It works, in a limited sense. But how many other variables in the library just happen to refer to the original cache object, out of convenience? Those other variables, if they exist, don’t get updated when we assign None to this one. Who knows what trouble might ensue with such inconsistencies?

Of course, our friends at anvil.works do know, and so they’d be the best ones to fix it (or tell us “don’t do that!”).

The "principle of least surprise" says that, whether it's the Uplink program's 101st connection or its 1st, database access should work as documented in all cases. If that's impractical, well, then it's impractical; but then the documentation should eliminate that surprise.


I completely agree with this :point_up:, and it should probably be a feature request, since it's pretty close to a specific, fully fleshed-out idea.

I did check as well, and you are absolutely right: nothing in the uplink appears to even attempt to clear anything other than the connection object during connect() / disconnect().

Yeah, I’ll do that now. Thanks for the nudge!