How many times does this client code call the server?

def get_server_state(self):
    with anvil.server.no_loading_indicator:
        _state = anvil.server.call_s('get_state')
        self.hand = _state['hand']
        self.selected_dice = _state['selected_dice']
        self.active_player_index = _state['active_player_index']
        self.active_player = _state['active_player']
        self.scores = _state['player_scores']

The server code retrieves a single row from the database:

@anvil.server.callable
def get_state():
  return app_tables.gamestate.get(id=1)

I know, from a previous question, I should add that all the values in the row are either numbers, text, or simple objects (list or dictionary). Does the client call the server once, or six times? (If the answer is six, are there ways to make it only once?)

Thanks, as always!
Al

I believe that, by default, number and string columns are loaded eagerly and included when the row object is first sent to the client. In contrast, simple object columns are loaded lazily, meaning they will trigger an additional round trip when accessed.

You can control which columns are loaded immediately using fetch_only.
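For example, a sketch of what that might look like for the `get_state` function above (untested, and it only runs inside an Anvil app; the column names are the ones from the original question):

```python
import anvil.tables.query as q

@anvil.server.callable
def get_state():
    # fetch_only asks Anvil to load just the named columns eagerly,
    # so they arrive with the row and no later access triggers
    # an extra round trip.
    return app_tables.gamestate.get(
        q.fetch_only('hand', 'selected_dice', 'active_player_index',
                     'active_player', 'player_scores'),
        id=1,
    )
```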

(You probably wouldn’t be wondering about this if this feature request had been implemented :wink:)

With accelerated tables, simple object columns are also fetched by default.

Without accelerated tables, both linked rows and simple object columns are lazily loaded.

In the following, id is a number field and data is a simple object column.

@anvil.server.callable
def fetch():
    return app_tables.testing.get(id=1)

# Client-side code timing the call and the column accesses:
start = time.time()
row = anvil.server.call('fetch')
end = time.time()
print(f"fetch {end-start}")
start = end
print(row['id'])
end = time.time()
print(f"id {end-start}")
print(row['data'])
end = time.time()
print(f"data {end-start}")

With accelerated tables:

fetch 0.7669999599456787
id 0.0
data 0.0

Without accelerated tables:

fetch 0.5420000553131104
id 0.001000165939331055
data 0.1100001335144043

This means you might still want to use fetch_only if there are other simple object columns that could be hundreds of kilobytes and aren’t needed on the client, though that’s a bit outside the scope of your original question.

Thank you both. Your responses were really interesting, to me; I didn’t know some of this stuff!

Out of curiosity, I tried seeing how long my server calls were taking. The call to get_server_state (above) seems to take between 0.8 and 1.4 seconds. It returns a single row whose fields are four small simple objects (three lists, one dictionary), a number, and a string. They’re all short; the simple objects have lengths of 3 or 4.

That seems longer than Jay’s numbers. I’m wondering whether that’s a function of my code, or of factors outside my control. I’m also wondering how often I should be polling the server, given these times (or whether I should learn about another way to push changes between users).

Regardless, thank you both again. Your responses were valuable, to me.

The two main factors are location and the Persistent Server setting.

Anvil server calls are always a round trip to London. If you are in the UK it will be faster than if you are in India.

Some plans offer the option to keep the server running, saving the spin-up time on every call.

I’m guessing Jay has the keep-the-server-running option enabled, and he’s either on the US East coast or in Europe.

That makes sense. I am in the US West coast. Perhaps I should just move to London :joy:

US East coast, but I don’t have a plan to keep the server running. Those times were in a fresh app with the basic server image, so it didn’t need to be built before launching.

There was a thread at one point where people posted their latency times, and they’d vary a fair amount depending on location and network traffic. A lot of it is out of your control. I find most of the delay is the network traffic time.

I think there’s a difference between server calls, which spin up a new server instance (without persistent server or if one isn’t already running), and lazily fetching column data. It’s possible that data fetching is handled through a persistent socket connection, similar to the one the IDE uses while editing an app, which wouldn’t require starting a new server instance.

So the distance from London will still affect the latency, while the necessity to spin up a new instance will not.

That’s been my experience as well. Server calls are the more expensive operation, because of the spin up.

Using the info y’all gave me I’ve continued working with my code (turning on accelerated tables, trying to reduce server calls). It’s a little faster (thank you), but there are still bottlenecks.

In one method, my code calls the server three times to set values. Seems like that would slow things. Are there any ways to speed that up, e.g., send all the data to the server in one call? (I’m talking about small amounts of simple data. It gets sent to the server, and then other users get the info every time a timer ticks and calls an update() method.)

P.S. I’m not sure whether this should be in a separate question.

That’s one popular strategy, and it works.

Another strategy: from what I read here, if you update a database table row directly from the client, that step usually bypasses any server-side Python code entirely, eliminating it as a middle-man and avoiding the overhead of starting the Python runtime.

The tradeoff is reduced security: since server-side code doesn’t run, it doesn’t get a chance to sanity-check the update before it happens.

If you update three columns of a row object individually, you’ll end up with three separate round trips. These are a bit faster than calling the server three times directly, but they are still three round trips:

row['column1'] = value1
row['column2'] = value2
row['column3'] = value3

You can avoid this by using the update method, which combines all changes into a single round trip:

row.update(column1=value1, column2=value2, column3=value3)

Similarly, if you have code like this with three separate server calls:

anvil.server.call('update1', value1)
anvil.server.call('update2', value2)
anvil.server.call('update3', value3)

It’s more efficient to refactor it to a single call:

anvil.server.call('update_all', value1, value2, value3)

Or, for more flexibility and to make future changes easier:

values = {
  'table1': {
    'column1': value1,
    'column2': value2,
  },
  'table2': {
    'column3': value3,
    'column4': value4,
  },
}
anvil.server.call('update_all', values)
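Server-side, the handler for that single call just loops over the dict. Here’s a rough sketch of the shape, using a plain dict as a stand-in for app_tables (in a real Anvil app you’d look up each row and call row.update(**columns)):

```python
# Plain-dict stand-in for the database, purely to show the handler's shape;
# in Anvil, each inner dict would be a row fetched from app_tables.
tables = {
    'table1': {'column1': 0, 'column2': 0},
    'table2': {'column3': 0, 'column4': 0},
}

def update_all(values):
    # One call from the client applies every column change at once,
    # instead of one round trip per column or per table.
    for table_name, columns in values.items():
        tables[table_name].update(columns)

update_all({
    'table1': {'column1': 10, 'column2': 20},
    'table2': {'column3': 30, 'column4': 40},
})
```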

I usually consider two things:

  1. Has this topic already been answered, or am I looking for a different angle? In your case, my second post, along with @jshaffstall’s clarification about accelerated tables, did address your question. But…
  2. Will future readers find the new post adds value, or is it just noise? In this case, the follow-up posts add useful discussion, including your additional question, so I don’t think there’s any issue with keeping it all in the same thread.

Combining server calls, as @stefano.menci suggested, made a noticeable difference! In addition, closely examining my code, I found a method that ultimately updated the same database fields twice; I fixed that.

Thank you all, as always.

FWIW, here’s some code we put in our startup module when we really want to know what server calls are occurring. It’s monkey patching, super gross, and should NOT be used in production, but it can be really helpful for troubleshooting performance issues related to server calls (that you may or may not expect).

from datetime import datetime
import anvil.server 

original_callable = anvil.server.call

def new_callable(fn, *args, **kwargs):
    start = datetime.now()
    print(f"Calling {fn} with: args: {args}, kwargs: {kwargs}")
    value = original_callable(fn, *args, **kwargs)
    str_value = str(value)
    if len(str_value) > 100:
        str_value = f"{str_value[0:100]}... ({len(str_value)} total chars)"
    print(f"    --> {fn} took {datetime.now() - start}: returned: {str_value}")
    return value

anvil.server.call = new_callable

# If at any point you want to dynamically "turn off" this level 
# of detail, you can simply reset anvil.server.call:

anvil.server.call = original_callable

OP here. Everything’s working fine, thanks in part to the helpful information in this thread.

Out of curiosity, if I were to refactor my app so all the data (a simple one-row table) was instead something like a single JSON string, would upload and download speeds change, transferring a single cell vs. a whole row? I’m thinking they probably would not, because factors like the actual calls to and from the server are what take the time, but I wanted to check.

I’m also wondering whether Anvil includes any other ways we can send a simple string between two users besides clients sending and receiving database cells or rows. I haven’t seen anything in Anvil’s documentation leading me to think this is a feature … but I’m often wrong :slight_smile:

I think your instinct is right, although I don’t have any hard evidence on that. For a row and its columns to be transferred to the client, they have to be serialized, likely to JSON or something similar, for the transfer. So having a single JSON field should be much the same.

As always, though, timing the different approaches is the only way to know for sure.
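One quick way to sanity-check that intuition is to serialize a state dict like the one from the start of this thread and look at the byte count (dummy values below; Anvil’s actual wire format may differ from plain JSON):

```python
import json

# Roughly the game state from the original question, with dummy values
state = {
    'hand': [3, 1, 4],
    'selected_dice': [0, 2],
    'active_player_index': 1,
    'active_player': 'Al',
    'player_scores': {'Al': 150, 'Bea': 200},
}

payload = json.dumps(state)
# The whole state serializes to on the order of 100-200 bytes --
# trivial next to a round trip measured in hundreds of milliseconds,
# whether it travels as one JSON cell or as a row of columns.
print(len(payload))
```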

If you search on the forum for Firebase integrations, some folks have wrapped the Firebase Javascript library for Anvil. That would allow more direct user to user transfer. Vanilla Anvil is limited to going through the server or data tables, unless they’ve snuck a new feature in there lately.
