At the moment I've managed to delete all the rows in the table with `app_tables.data.delete_all_rows()`, and I've also managed to delete a single row matching a value in one column.
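Something along these lines, where the lookup column `name` is just a placeholder:

```python
from anvil.tables import app_tables

# Wipe the whole table in one call:
app_tables.data.delete_all_rows()

# Delete a single row matched on one column
# ('name' is a placeholder column name):
row = app_tables.data.get(name="some value")
if row is not None:
    row.delete()
```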
That's not my understanding of how Anvil's `in_transaction` blocks work. As I understand it, the only thing the `in_transaction` block really does is open and close a transaction for you as your function is entered and exited, so that if the function throws an error at any point the entire transaction is rolled back (and you don't end up with partially committed changes). It also ensures that multiple concurrent database calls that might affect the same rows don't overlap. However, it does NOT combine the database calls - so, for example, multiple inserts are still executed one after another, not as a bulk insert.

Long story short - I wouldn't expect the `in_transaction` block to affect performance.
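A minimal sketch of that rollback behaviour (the table and its `status` column are placeholders):

```python
import anvil.server
import anvil.tables
from anvil.tables import app_tables

@anvil.server.callable
@anvil.tables.in_transaction
def mark_all(new_status):
    # Still one database call per row - in_transaction does NOT
    # batch these. But if any iteration raises, every change made
    # inside this function is rolled back together.
    for row in app_tables.data.search():
        row['status'] = new_status
```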
You are probably 100% correct; I was just hoping somebody had thrown in a performance optimization, since it would be possible to boil the process of 'iteratively do x to this row' down to something at least similar to an `executemany(x, rows)` call, handing the PostgreSQL cursor a batch of row IDs. It should be trivial to check whether the same thing is being done to all (or even a significant fraction) of the rows in a table.
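Anvil doesn't expose the underlying cursor, but for illustration, here's roughly what that would look like in raw psycopg2 (the table, columns, and connection string are all made up):

```python
import psycopg2

conn = psycopg2.connect("dbname=example")  # hypothetical connection

with conn, conn.cursor() as cur:
    row_ids = [1, 2, 3]

    # executemany: one parameterised statement and a whole batch
    # of parameter tuples, instead of one call per row.
    cur.executemany(
        "UPDATE data SET status = %s WHERE id = %s",
        [("done", rid) for rid in row_ids],
    )

    # Alternatively, a single statement the database can optimise
    # itself (psycopg2 adapts a Python list to a SQL array):
    cur.execute(
        "UPDATE data SET status = %s WHERE id = ANY(%s)",
        ("done", row_ids),
    )
```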
I think we should make a feature request thread just to collect easy-to-implement data-table optimizations like this. There are quite a few occasions where people want to apply the same thing across a whole table, and it is literally wasting everyone's time and Anvil's cloud compute cycles to do it row by row, when modern SQL database architectures were built specifically to optimize exactly these kinds of operations automatically, in C.