How much data can be sent through uplink

Hi!

I am creating a booking site and am using an Uplink project to integrate an existing MS Access database.

At the moment, each user calls a server function when the app loads; the user can also change the week, like a calendar. The server function runs in the Uplink project. It queries the database and returns a JSON object containing the “free” appointments.

Should I be worried about “high traffic” on the uplink? Currently my anvil account has a Personal subscription.

The payload works out to 609 bytes, and I have provided an example of the data below.

{"2022-07-18": {"09:00": false, "10:00": false, "11:00": false, "12:00": false, "13:00": false, "14:00": false, "15:00": false, "16:00": false, "17:00": false}, "2022-07-19": {"09:00": false, "10:00": false, "11:00": false, "12:00": false, "13:00": false, "14:00": false, "15:00": false, "16:00": false, "17:00": false}, "2022-07-20": {"09:00": false, "10:00": false, "11:00": false, "12:00": false, "13:00": false, "14:00": false, "15:00": false, "16:00": false, "17:00": false}, "2022-07-21": {"09:00": false, "10:00": false, "11:00": false, "12:00": false, "13:00": false, "14:00": false, "15:00": false, "16:00": false, "17:00": false}, "2022-07-22": {"09:00": true, "10:00": true, "11:00": true, "12:00": true, "13:00": true, "14:00": true, "15:00": true, "16:00": true, "17:00": true}}
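For reference, building that payload on the uplink side is just a nested dict comprehension. The sketch below is a hypothetical stand-in, not the poster's actual code: the function name, the `booked_lookup` argument, and the stubbed booked-slot data are all assumptions (in the real app the booked slots would come from the Access query).

```python
# Hypothetical sketch of building the {date: {slot: is_free}} structure
# shown above; the booked-slot lookup is stubbed instead of querying Access.
SLOTS = [f"{h:02d}:00" for h in range(9, 18)]  # 09:00 .. 17:00

def free_appointments(dates, booked_lookup):
    """Map each date to {slot: is_free}, given a per-date set of booked slots."""
    return {
        d: {s: s not in booked_lookup.get(d, set()) for s in SLOTS}
        for d in dates
    }

payload = free_appointments(
    ["2022-07-18", "2022-07-22"],
    {"2022-07-18": {"09:00"}},  # 09:00 is already booked on the 18th
)
```

Serialized with `json.dumps`, that gives exactly the shape of the example above.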

Uplink has no timeout or data size limits.

I use it to upload tables with thousands of rows in one call. I could add the rows one by one directly to the table, but that would be slow: too many calls. So I split larger tables into chunks of 1000 rows and call a server function that adds those 1000 rows to the database. In this case the limit is the 30 seconds the server call has to process the list of 1000 rows.
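The chunking itself is a few lines of Python. This is a minimal sketch of the pattern described above; the `send` callable stands in for the real `anvil.server.call("add_rows", chunk)`, whose server-side function name is an assumption.

```python
# Minimal sketch of the chunked-upload pattern: split the rows into
# batches of at most `size` and hand each batch to a sender callable
# (in real code: anvil.server.call("add_rows", chunk) -- assumed name).
def chunks(rows, size=1000):
    """Yield successive slices of at most `size` rows."""
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

def upload_in_chunks(rows, send, size=1000):
    for chunk in chunks(rows, size):
        send(chunk)

sent = []
upload_in_chunks(list(range(2500)), sent.append, size=1000)
print([len(c) for c in sent])  # [1000, 1000, 500]
```

Each `send(chunk)` is one server call, so a 2500-row table costs three calls instead of 2500.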


Great! I guess that I don’t need to worry :slight_smile:

I just want to share that I just found a library called cachetools.
That library made the user experience sooo good!

Within my Uplink project I installed cachetools (pip install cachetools), imported it (from cachetools import cached, TTLCache) and decorated the server function.
E.g.

from cachetools.keys import hashkey  # in addition to cached and TTLCache

cache = TTLCache(maxsize=100, ttl=300)  # entries expire after 5 minutes

# @anvil.server.callable must be the outermost decorator, so that the
# *cached* wrapper is what gets registered; the key function converts the
# (unhashable) list argument to a tuple before the cache hashes it.
@anvil.server.callable
@cached(cache, key=lambda dates_array, room: hashkey(tuple(dates_array), room))
def Prepare_free_appointments(dates_array, room):
    return Get_free_appointments(tuple(dates_array), room)

If you’re concerned about traffic volume, then you should be aware of a related problem, that is likely to turn up in high-volume circumstances.

Sooner or later, you’re going to have two independent calls come in at nearly the same instant, so that one starts before the other finishes. (In this case, anvil-uplink will run each call on its own Python thread.)

If the two calls share any changeable data, such as a variable, a file, or a database connection, then they may interfere with each other. Such a problem may crop up once a month – or an hour. It can be virtually impossible to reproduce on command.

If that’s not acceptable, then you’ll want to code up the response to that call very carefully, i.e., to prevent data conflicts (if possible), or to resolve them (e.g., queue up the calls).
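Since anvil-uplink runs each call on its own Python thread, the simplest protection is a `threading.Lock` around the shared state. This is a generic sketch of the idea, not code from the thread; the shared counter stands in for whatever mutable resource (variable, file, database connection) the calls share.

```python
# Sketch: serialize the critical section that touches shared state, so two
# simultaneous uplink calls (each on its own thread) cannot interleave a
# read-modify-write and lose an update.
import threading

_lock = threading.Lock()
shared = {"count": 0}   # stand-in for any shared mutable resource

def handle_call():
    with _lock:                      # only one thread at a time in here
        current = shared["count"]    # read ...
        shared["count"] = current + 1  # ... modify ... write

threads = [
    threading.Thread(target=lambda: [handle_call() for _ in range(1000)])
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(shared["count"])  # 8000, every run; unlocked, updates can be lost
```

The same lock would guard, say, a shared pyodbc connection to the Access database, since those connections are generally not safe to use from multiple threads at once.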

That’s interesting, thanks for your input!

Regarding queueing up the server calls: is it possible to store session-specific information within Anvil?

Yes, absolutely! There are several ways to do it, depending on the “lifetime” of the information.

For information that lasts as long as the browser session, see Sessions and Cookies.

For longer-lived information, you can use Anvil’s own database tables. See Storing Data in Data Tables.

A small caution: session != user. The same user could be logged in from several browser tabs at once, each producing a distinct session. Conversely, in a single browser tab, one user may log out, and another log in afterwards. Both users would be using the same session.

Thank you so much :slight_smile: I will start with looking at sessions and cookies!

To summarize this thread.

  • @stefano.menci => Uplink has no data size limit (the 30-second timeout for server-initiated calls is clarified below)
  • For a better user experience, and to save a bit of resources on the Uplink computer, use the cachetools library.
  • @p.colbert => To prevent two independent function calls from interfering with each other, queue up the Uplink function calls.

Last I checked my calls to Uplink code, a few years back, mine timed out after 30-60 seconds. It may have changed since then, but I doubt that there are no limits at all.


Yes, uplink functions called from server code will timeout after 30 seconds.

I forgot to mention that I use uplink in 3 ways:

  1. My large uploads are initiated by the task scheduler (Windows’ version of cron). They do use an uplink connection, so they have access to Data Tables and callable server functions, but they do not use wait_forever(). These functions have no time limit.

  2. Other long-running uplink functions are called by background tasks running on the server. For example, when the user clicks a button or an HTTP endpoint is called, the handler adds a row to a queue table, starts a background task and returns immediately. The background task calls the uplink function and waits for it to be done; some functions last hours. While the uplink does its job, it also updates the status on its row in the queue table, so a timer on a form can poll every 3 seconds to check the progress.

  3. Normal uplink functions are called and respond quickly.

(I can’t think of any uplink function of the 3rd type that I am currently using.)
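Pattern 2 above can be sketched in pure Python. This is a simulation, not Anvil code: the list of dicts stands in for the queue Data Table, `run_worker` stands in for the uplink function driven by the background task, and `poll_status` is what the form's timer would call. All names are assumptions.

```python
# Pure-Python simulation of the queue-table pattern: the handler enqueues a
# row and returns immediately; the long-running worker updates the row's
# status so a form timer can poll it every few seconds.
import itertools

_ids = itertools.count(1)
queue_table = []  # stand-in for the queue Data Table

def enqueue_job(payload):
    """What the button / HTTP-endpoint handler does before returning."""
    row = {"id": next(_ids), "payload": payload,
           "status": "queued", "result": None}
    queue_table.append(row)
    return row["id"]

def run_worker():
    """What the uplink function does: process queued rows, updating status."""
    for row in queue_table:
        if row["status"] == "queued":
            row["status"] = "running"
            row["result"] = row["payload"].upper()  # stand-in for hours of work
            row["status"] = "done"

def poll_status(job_id):
    """What the form's 3-second timer calls to show progress."""
    return next(r["status"] for r in queue_table if r["id"] == job_id)

job = enqueue_job("import spreadsheet")
print(poll_status(job))  # queued
run_worker()
print(poll_status(job))  # done
```

In the real app the status field could also carry a percentage or a message, and the worker could update it several times per row.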


Thanks for the clarification, @stefano.menci. That will help my future designs!

For @tobias.carlbom: for any Uplink code called [in]directly by Client code, if there are any shared variables, I agree, the called code should probably return as quickly as possible. This minimizes the chance of simultaneous calls.

This may mean handing the “job” off to some other object (e.g., an Anvil background task, or a Python queue), so that each job can proceed in sequence, avoiding conflict, with the result being picked up later.
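The Python-queue version of that hand-off can be sketched in a few lines; this is a generic illustration, not code from the thread. A single worker thread drains the queue, so jobs run strictly in sequence and never touch shared resources concurrently, while each uplink call just enqueues and returns.

```python
# Sketch: hand jobs off to one worker thread via a queue, so they proceed
# in sequence; the caller enqueues and returns quickly, the result is
# picked up later from the shared results dict.
import queue
import threading

jobs = queue.Queue()
results = {}

def worker():
    while True:
        job_id, fn = jobs.get()
        results[job_id] = fn()  # only this one thread ever runs the jobs
        jobs.task_done()

threading.Thread(target=worker, daemon=True).start()

# Each incoming uplink call would just enqueue its work and return:
jobs.put((1, lambda: 2 + 2))
jobs.put((2, lambda: "ok"))
jobs.join()  # wait until the worker has drained the queue
print(results)  # {1: 4, 2: 'ok'}
```

An Anvil background task plays the same role server-side; the queue version keeps everything inside the uplink process.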

One possible new twist on this design pattern is using the new .disconnect() feature: have a Python script that runs continuously, then connects to Anvil for an update every once in a while, either gathering jobs to do or just sending the final results of something. :robot:
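The shape of that connect/sync/disconnect loop can be shown with a stand-in object. This is a runnable simulation only: `FakeUplink` mimics the three calls the real script would make on `anvil.server` (`connect`, `call`, `disconnect`), and the uplink key and `report_results` function name are placeholders.

```python
# Simulation of the periodic connect/sync/disconnect pattern. In the real
# script, `server` would be anvil.server and this would run inside a loop
# with a sleep between iterations.
class FakeUplink:
    """Stand-in for anvil.server so the loop's shape can be demonstrated."""
    def __init__(self):
        self.log = []

    def connect(self, key):
        self.log.append("connect")

    def call(self, fn_name, *args):
        self.log.append(f"call:{fn_name}")

    def disconnect(self):
        self.log.append("disconnect")

server = FakeUplink()

def sync_once(results):
    """Connect, push results, disconnect; local work continues in between."""
    server.connect("UPLINK_KEY")           # placeholder key
    server.call("report_results", results) # assumed server-function name
    server.disconnect()

sync_once({"done": 3})
print(server.log)  # ['connect', 'call:report_results', 'disconnect']
```

Between `sync_once` calls the script holds no uplink connection at all, which is the point of the pattern.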
