Hi everyone, I’m just starting to work with Anvil and within a few minutes I’m already stuck with a big doubt, so I’d like to ask you for some best-practice advice:
Should I set up all the server-side logic with individual functions, each operating on one table using the native Anvil syntax app_tables.TABLES.add_row? In a 5-table example this means having a set of CRUD functions for each of the 5 tables.
OR
Is it better to create generic CRUD functions and pass the table to work on dynamically? For example:
@anvil.server.callable
def create_record(table_name, fields):
    table = getattr(app_tables, table_name)
    new_record = table.add_row(**fields)
    return new_record

@anvil.server.callable
def read_records(table_name, **search_args):
    table = getattr(app_tables, table_name)
    records = table.search(**search_args)
    return list(records)

@anvil.server.callable
def update_record(table_name, record_id, updated_fields):
    table = getattr(app_tables, table_name)
    record = table.get_by_id(record_id)
    if record:
        record.update(**updated_fields)
        return record
    else:
        return None

@anvil.server.callable
def update_record_by_search(table_name, search_fields, updated_fields):
    table = getattr(app_tables, table_name)
    record = table.get(**search_fields)
    if record:
        record.update(**updated_fields)
        return record
    else:
        return None

@anvil.server.callable
def delete_record(table_name, record_id):
    table = getattr(app_tables, table_name)
    record = table.get_by_id(record_id)
    if record:
        record.delete()
        return True
    else:
        return False
Using the app_tables[table_name] subscript is only a syntax difference; the behavior is the same as using getattr to pass the table name dynamically, am I wrong? BTW, are Accelerated Tables still in beta after 2 years?
No, you’re not wrong, it is just a syntax difference (though there might be a performance difference with Accelerated Tables, I don’t know). But subscripting is stylistically much nicer than getattr, IMHO.
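To make the "just syntax" point concrete, here is a tiny self-contained sketch; the _Tables class below is a hypothetical stand-in for app_tables, not Anvil's real object, but it shows how attribute access and subscripting can resolve to the very same table object:

```python
class _Tables:
    """Hypothetical stand-in for app_tables, supporting both access styles."""
    def __init__(self, **tables):
        self.__dict__.update(tables)   # enables getattr(tables, name) / tables.name
        self._by_name = tables         # enables tables[name]

    def __getitem__(self, name):
        return self._by_name[name]

tables = _Tables(users=[], orders=[])

# Both spellings resolve to the identical object:
print(getattr(tables, "users") is tables["users"])  # True
```

Whichever spelling you pick, the rest of the CRUD code is unchanged.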
Also, Anvil traditionally keeps things in beta for a long time – it was the same with the new IDE – so I wouldn’t get hung up on that. There haven’t been any big bug reports against it in a while, and definitely no major ones, so I think it’s stable.
I have accelerated tables on in all my apps since day 1.
There were a few hiccups on week 1, they were solved and I forgot it’s labeled beta.
I don’t see anything wrong with your approach, but I would add that working with row objects on the client side may trigger unintentional round trips, because some columns may be loaded lazily. What is lazy with and without accelerated tables is different, and it can be controlled if you are using accelerated tables.
I personally like to (1) get only what I need, that is for example avoid fetching large simple object columns if I don’t need them, and (2) convert all to dictionaries and send those to the client. So I know I only have one round trip. But for simple apps your approach is flexible and works just fine.
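As a sketch of point (2), assuming the rows behave like mappings (real Anvil rows do support dict(row)), a hypothetical helper could convert search results to plain dicts before they leave the server:

```python
def rows_to_dicts(rows, fields=None):
    """Convert row-like mappings to plain dicts; optionally keep only some
    fields (e.g. skip a large simple object column the client doesn't need)."""
    if fields is None:
        return [dict(r) for r in rows]
    return [{f: r[f] for f in fields} for r in rows]

# Hypothetical use inside an Anvil server function:
#   @anvil.server.callable
#   def read_records(table_name, **search_args):
#       table = getattr(app_tables, table_name)
#       return rows_to_dicts(table.search(**search_args), fields=["name", "email"])

print(rows_to_dicts([{"name": "Mario", "blob": "x" * 10}], fields=["name"]))
# [{'name': 'Mario'}]
```

Since the return value is a list of dicts, the client gets everything in one round trip and nothing is loaded lazily afterwards.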
Maybe I’ve missed a step, but the logic I’m writing is intended as server-side logic. I will not load data directly from the client side. My assumption was to create database functions that are as reusable as possible, run them server side, and pass the results to the client. Am I wrong with my sample function setup, given this goal?
For updates I usually send the row itself from the client back to the server (if I sent the row to the client in the first place, which I like to do), and otherwise I send the server the ID of the row to update and fetch that row with get_by_id.
OK, maybe it’s a nonsense question, but you can pass the row “object” to the client and send it back to the server to use directly with the app table methods. What if you pass a dictionary to the client instead? When you get it back you can’t use the row object directly, but need to search for the row matching the dictionary?
Your example returns row objects to the client. Row objects contain some of the column values, but will lazily load the others, possibly triggering unexpected round trips.
For example, you could even do return records instead of return list(records). This would be a more elegant and Anvillic way of working, because Anvil’s magic is able to transfer an iterator from the server to the client. That iterator comes with a few rows (I think 100 by default) and will lazily load more rows if required. The rows are row objects, not dictionaries. Those row objects come with some field values, but not all of them, and will lazily load the missing ones on demand.
Anvil does its best to guess what should be included in the first round trip and what should be lazily loaded, so your implementation will do the job in simple cases. But in more complex cases, for example when your return list(records) would return thousands of rows and you only need the first 10, or when a row of your table includes a column that contains 100KB of text and you are not using it, you can use some optional arguments available when using the accelerated tables and convert to lists and dicts before returning to the client.
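For instance, under the assumption that only the first few rows and one column are needed, a hypothetical helper could trim the result on the server before returning it. The q.fetch_only qualifier mentioned in the comment is part of Anvil's Accelerated Tables query API, but the table and column names here are made up:

```python
def first_n_as_dicts(rows, n, fields):
    """Keep only the first n row-like mappings and only the named fields,
    so a single round trip carries exactly what the client needs."""
    out = []
    for r in rows:
        if len(out) >= n:
            break
        out.append({f: r[f] for f in fields})
    return out

# With Accelerated Tables you could also stop a big column being read at all:
#   from anvil.tables import query as q
#   rows = app_tables.articles.search(q.fetch_only("title"))
#   return first_n_as_dicts(rows, 10, ["title"])

fake_rows = [{"title": f"t{i}", "body": "x" * 1000} for i in range(1000)]
print(first_n_as_dicts(fake_rows, 2, ["title"]))  # [{'title': 't0'}, {'title': 't1'}]
```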
So, yeah, the answer is “it depends”.
You can start with this, and optimize only when needed, if needed. Optimizing too early is often bad practice, but just keep in mind that you may need to change your strategy in the future.
Thanks, very clear answer. I asked these dictionary-related questions because earlier in this topic we talked about returning dicts to the client.
So a small recap:
When possible, return row objects to the client.
When possible, avoid returning a list of row objects.
This way Anvil takes care of loading as few results as possible, which increases speed.
Also, this way (returning row objects) you can work on them client side and send them back to the server to use directly with the table methods.
All right?
And if I’d like to return a single record, can I let Anvil do the job by returning the same kind of row object (with the search filtered to ensure only one record is fetched)? Does Anvil convert a one-row iterator to a single row in the same kind of object?
The database is already updated, and col2 is already set to 20. There is no need to call a function to update it.
This happens because your create_record function returns a row object, not a dict, and the row object is smart enough to (1) only load small values immediately and large values lazily, and (2) to write back to the database immediately after a value has been changed.
This magic is great, but it does trigger one round trip each time, so, if done many times inside the same function call, it may affect performance.
Another way is to have your function return dict(new_record), so you have a dict on the client side and no magic round trips happen under the hood. If you do this you need to take care of the updates yourself, maybe with a function similar to what you have shown, but you have better control over how many round trips your app does.
If you are starting now, you can stick with the Anvil magic: leave row['col1'] = 30 doing its magic. If you get to the point where things get too slow, you can rethink it.
Hi, is there any security issue with exposing the row object directly to the client, allowing the client code to update its values directly?
Example Server

@anvil.server.callable
def read_single_record(table_name, search_fields):
    table = app_tables[table_name]
    record = table.get(**search_fields)
    return record
Example Client
record = anvil.server.call('read_single_record', 'users', {'name': 'Mario'})
if record:
    record['email'] = 'new_email@example.com'
Does this approach have security issues, or can it safely be used?
It would save me a lot of work generating dicts to return to the client and updating records from dicts passed back by the client.
Yes, this is a big security issue. The problem is that if your client code can update the row value, so can any Javascript a hacker might inject into your client code. In general, you should treat the client as untrustworthy, and do updates on the server.
The one possible exception to this is client-writeable views. Those views have already been limited to data the current user should be able to affect. It might still not be a good idea to allow the client to update them, if there are rules about how they get updated (e.g. a field must always be between 1 and 100), since a hacker could ignore those rules. But with the right table design client-writeable views might be a good option for client updating.
If allowing the client to change any value in that row (with the exception of the row ID) is a security issue, then this is a security issue.
It is as safe as your initial functions, where the server does what the client asks. Whether you do it with one line of code on the client side, or with one line on the client plus a function on the server, the result is the same.
If you want to keep using row objects, you could use client writable views, where you can decide what’s visible and editable on the client side.
If you want to keep server side functions, then you could improve their security by adding checks on users and permissions before executing any other code.
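As a sketch of such a check (pure Python here so it is self-contained; in a real app the user would come from anvil.users.get_user() and anvil.server.PermissionDenied would be a natural exception to raise; all names below are hypothetical):

```python
class PermissionDenied(Exception):
    pass

def require_role(user, allowed_roles):
    """Refuse the call unless the user record carries an allowed role.
    Call this before any table code runs."""
    if user is None or user.get("role") not in allowed_roles:
        raise PermissionDenied("You are not allowed to do that")
    return user

# Hypothetical use at the top of a server function:
#   @anvil.server.callable
#   def delete_record(table_name, record_id):
#       require_role(anvil.users.get_user(), {"admin"})
#       ...  # only then touch the table

admin = require_role({"role": "admin"}, {"admin"})
print(admin["role"])  # admin
```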
As I mentioned earlier I never pass row objects to the client. I create classes, the client works with objects, then uses the objects methods to tell them to save themselves. The classes know what to call on the server side to save themselves.
EDIT
I need to add that, when I say “it is as safe as your initial functions”, I mean keeping things as they are. Your original functions can become safe by adding good server-side permission checks, while row['col1'] = value1 is not going to be safe if the permission checks are on the client side.
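The class pattern described above could look roughly like this sketch. All names are hypothetical, and the transport is injected so the example runs on its own, whereas a real Anvil app would call anvil.server.call inside save():

```python
class Person:
    """Hypothetical client-side model object. In a real Anvil app, save()
    would do anvil.server.call('save_person', self._data); here the
    transport function is injected so the sketch is self-contained."""
    def __init__(self, data, save_fn):
        self._data = dict(data)
        self._save_fn = save_fn

    def __getitem__(self, key):
        return self._data[key]

    def __setitem__(self, key, value):
        self._data[key] = value   # edits stay local until save() is called

    def save(self):
        # One explicit, controlled round trip to the server
        self._save_fn(dict(self._data))

server_db = {}  # stands in for the server-side table
p = Person({"name": "Mario", "email": "old@example.com"}, save_fn=server_db.update)
p["email"] = "new@example.com"
p.save()
print(server_db["email"])  # new@example.com
```

The point is that no hidden round trips happen while the client edits the object; the only round trip is the explicit save(), where the server can run its own permission and validation checks.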
OK, that’s right. Can I ask whether the Business plan’s per-row permission setup can be used with row objects? That security setting solves the problem of the trusted owner, but there is no other way to check data validity when passing row objects to the client side, right?
I’m a bit confused, because the Anvil row object seems wonderful but completely lacks the possibility to execute validation before data is written to the database! Why? Is it really true that there is no option to set up and call a validation function in the row object’s update logic (the one handled by Anvil’s magic)?
And finally you write:
As I mentioned earlier I never pass row objects to the client. I create classes, the client works with objects, then uses the objects methods to tell them to save themselves. The classes know what to call on the server side to save themselves.
Can you write an example of a complete script (class and functions) that you use server side and client side? That would help me better understand how to write a good Anvil application.
Other no-code or low-code platforms expose a million settings, require you to spend months learning them, and decide what you can do.
Anvil is an only-code platform. There are settings, but the most important thing is the code you write. You decide whether to add your checks and validation, and how to make them. You do need to learn a few settings and a few magic behaviors, like database rows and iterators, or repeating panels, but the learning curve is much shorter (in my experience), and once you’ve learned those, the sky is the limit.
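As one concrete (and entirely hypothetical) shape for such validation: a server function can check the fields before ever touching the table, so the row object's write-back magic is never reached with bad data. The rules below are examples, not Anvil APIs:

```python
class ValidationError(Exception):
    pass

def validate_person(fields):
    """Reject bad data before it reaches the database (example rules only)."""
    if not fields.get("name"):
        raise ValidationError("name is required")
    score = fields.get("score")
    if score is not None and not 1 <= score <= 100:
        raise ValidationError("score must be between 1 and 100")
    return fields

# Hypothetical server function wrapping the earlier generic update:
#   @anvil.server.callable
#   def update_person(record_id, updated_fields):
#       validate_person(updated_fields)
#       ...  # only then perform the table update

print(validate_person({"name": "Mario", "score": 50}))
# {'name': 'Mario', 'score': 50}
```

Because the check runs on the server, a hacker cannot skip it the way they could skip client-side checks.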
Yeah, I will, but right now I have a day job.
Here is a quick description of how I do validation.
In this old post I describe some of my best practices. Emphasis on “my”!
Anvil is an only-code platform → I completely agree with you. BTW, the most powerful feature (the row object) is completely useless without an integrated security system. Is there really no way to change how Anvil handles the row object’s direct updates / loads / searches etc.?