anvil.tables.TableError: Internal database error: ERROR: canceling statement due to user request

An app deployed long ago, which has been running flawlessly for months, has started giving me this error far too often this month:
anvil.tables.TableError: Internal database error: ERROR: canceling statement due to user request

What I’m trying to do:
The line giving the error simply adds a row to the log_rows table.
This table is usually empty: it is filled during task execution; at the end, the log is dumped to a text file, zipped, archived as a media object in another table, and then the log_rows table is emptied.

In detail, the offending code line is very simple:

        app_tables.log_rows.add_row(execution_log_uuid=task_uuid,
                                    timestamp=now,
                                    level=level,
                                    text=msg_str)

For the Anvil-team, if you want to check your logs, the SESSION-ID is N7PGDX3SDPCZPZ7FS77D3R3UY6SOLNQ5

What I’ve tried and what’s not working:
This comes on top of the other DB error I got last week: syntax error at or near "-".

I’ve contacted support about this problem and was told it is a temporary issue and I should just retry.

So, OK, I’ll retry, but this is becoming annoying and a little worrying.
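If I end up automating the retry, a small backoff wrapper around the add_row could look like the sketch below. This is only a generic helper under the assumption that the error is transient; the commented usage names (add_log_row, the exception class to catch) are placeholders, not code from my app.

```python
import time

def retry(times=3, base_delay=0.5, exceptions=(Exception,)):
    """Retry a function up to `times` times, doubling the delay between
    attempts; re-raise the last error if every attempt fails."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            for attempt in range(times):
                try:
                    return fn(*args, **kwargs)
                except exceptions:
                    if attempt == times - 1:
                        raise
                    time.sleep(base_delay * 2 ** attempt)
        return wrapper
    return decorator

# Hypothetical usage inside an Anvil server module (placeholder names):
# @retry(times=3, base_delay=0.5, exceptions=(anvil.tables.TableError,))
# def add_log_row(**kwargs):
#     app_tables.log_rows.add_row(**kwargs)
```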

What’s happening on DB management in Anvil-land?

Bye

This happened again tonight. Session ID CZ3HVG2XGGQSZKLBJPIOXDVZVKWGQATZ

Session X2HZCJSHB6ACXVHUPFRIDKFJDCDE4K6W - started manually this morning (usually it’s a background task starting at 03.00 UTC) completed successfully.

Hello @anvil I really need your help to sort this out.
This is happening way too often since mid-March.
Before mid-March, I had never experienced this specific error:

anvil.tables.TableError: Internal database error: ERROR: canceling statement due to user request

Here’s the full list of logs available to me filtered by “Errors only”:

Date - Session ID - Error
17/02/2022 - LGS74INYPUYY53AWEAXOC7VLUPD2A7C7 - anvil.server.UplinkDisconnectedError: Uplink disconnected
16/03/2022 - C3YJUWPFSOCWPR4NM26S7KRDENYRAU4G - anvil.server.UplinkDisconnectedError: Uplink disconnected
16/03/2022 - NSOMZMPRYS3IHNU3IOJKIRTRJ5W3H5XX - anvil.server.UplinkDisconnectedError: Uplink disconnected
24/03/2022 - VQPRBSD2SPJCXXSWJJ27NDQKEVR2MPGO - anvil.tables.TableError: Internal database error: ERROR: canceling statement due to user request
04/04/2022 - CXZIBQGOXELU7QQXOCWU5PK2QUHYRNCB - anvil.tables.TableError: Internal database error: ERROR: canceling statement due to user request
11/04/2022 - N7PGDX3SDPCZPZ7FS77D3R3UY6SOLNQ5 - anvil.tables.TableError: Internal database error: ERROR: canceling statement due to user request
12/04/2022 - CZ3HVG2XGGQSZKLBJPIOXDVZVKWGQATZ - anvil.tables.TableError: Internal database error: ERROR: canceling statement due to user request
17/04/2022 - VIP3VCT6ZOPJPNBQXUNBDMNZE6S2AM4N - anvil.tables.TableError: Internal database error: ERROR: canceling statement due to user request
19/04/2022 - KVQ5PX3ADFVBNOC2IX25YDJ7PEHTAOYU - anvil.tables.TableError: Internal database error: ERROR: canceling statement due to user request
19/04/2022 - S76WCYCN37DLRE3RH7BEDB4TI6IVZTK6 - anvil.tables.TableError: Internal database error: ERROR: canceling statement due to user request

You can see this has been a constant, frequent error since March 24th.
This is a background task failing (named sentinella, in case you want to check logs).
It starts at 03.00 am.
Usually when I restart it in the morning after a failure, it completes. But not today: the second failure on 19/04 is this morning’s run.

From Anvil’s session-error log I see the error is given by the code line in my first post.
That is a statement inside the function:

def log_db(level, messaggio):
    now = datetime.now()
    msg_str = str(messaggio)
    if logging_levels[level] <= logging_levels[logging_level]:
        app_tables.log_rows.add_row(execution_log_uuid=task_uuid,
                                    timestamp=now,
                                    level=level,
                                    text=msg_str)
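A defensive variation I’m considering (not in the current code) is to keep a failed log write from aborting the whole task, by falling back to another sink such as print. A minimal sketch, with generic callables standing in for the real Anvil table write:

```python
def safe_log(write, fallback, message):
    """Try the primary log sink; on any failure, send the message to the
    fallback sink (e.g. print) instead of letting the task die.
    Returns True if the primary sink succeeded."""
    try:
        write(message)
        return True
    except Exception:
        fallback(message)
        return False
```

In log_db this would mean wrapping the add_row call, so that when the table write fails the message at least reaches the app log via print.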

Looking at my application log I see it was inside this loop when the log_db statement failed:

def get_eids_documenti_archiviati():
    log_db('info', 'get_eids_documenti_archiviati - enter')
    log_db('info', '  getting documents in database for campaign "{campagna}"'.format(campagna=campagna))
    campagna_row = app_tables.configurazione.get(campagna=campagna)
    archiviati = app_tables.documenti.search(q.all_of(campagna=campagna_row, envelope_id=q.not_('')))
    archived_ids = {}
    for doc_archiviato in archiviati:
        log_db('info', '    getting archived IDs for envelope {eid}'.format(eid=doc_archiviato['envelope_id'])) # --> THIS IS THE STATEMENT GENERATING THE ERROR
        archived_ids[doc_archiviato['envelope_id']] = {
            'id_pdv_documento': doc_archiviato['id_pdv_documento'],
            'id_pdv_audittrail': doc_archiviato['id_pdv_audittrail']
        }

    log_db('debug', repr(archived_ids))
    log_db('info', 'get_eids_documenti_archiviati - exit')
    return archived_ids

Indeed, my application log looks like:

...
2022-04-19 07:01:10.319712+00:00 - info -     getting archived IDs for envelope 91625d8a-4b70-4d20-b9dc-bb84ecd65eb0
2022-04-19 07:01:10.330127+00:00 - info -     getting archived IDs for envelope e8404bdb-5225-4895-bd0a-c3dc775bd361
2022-04-19 07:01:10.342542+00:00 - info -     getting archived IDs for envelope a4f88ff4-f1c6-4f3b-b986-064a2a2dcc43
2022-04-19 07:01:10.354813+00:00 - info -     getting archived IDs for envelope ff1ec906-dec3-43b4-ace5-1c5dffa177ef
2022-04-19 07:01:10.369307+00:00 - info -     getting archived IDs for envelope 0a93c551-aac4-4939-b22d-20cf4e421fda
2022-04-19 07:01:11.075294+00:00 - info -     getting archived IDs for envelope 3aa26672-1259-4693-bffa-356e27231bc4
...

As already said, the log_rows table is usually empty: it is filled during task execution; at the end the log is dumped to a text file, zipped, archived as a media object in another table, and then the log_rows table is emptied.

I can’t figure out:

  1. why this shouldn’t work
  2. why it suddenly stopped working. Did something happen on the DB side?
  3. whether I could do this some other way, given that logging to temporary files doesn’t guarantee the log will still be there when I need it. Maybe I could simply print things out, but logging to a table lets me give admin users access to the logs at the application level.

Again, sorry for addressing @anvil directly; I don’t know whether this belongs more on the forums or in the support channel.
Let me know if I should send an email to support for this.

Thanks and BR

Hi,

Are you using the transaction decorator (@anvil.tables.in_transaction) on your function? Perhaps the lack of this decorator is causing the issue.

See the related docs for more background information.

Regards,
Jeroen

Hi @j.vanrenen and thanks for answering.
Since I am writing to a log table, I think each write should be committed individually; otherwise there’s a risk the log won’t contain the last rows, which are the most important ones from a troubleshooting point of view.
Isn’t that true?

Edit:
I’ve just modified the code to build a string inside the for loop and, at the end of the loop, log that to the db.

def get_eids_documenti_archiviati():
    log_db('info', 'get_eids_documenti_archiviati - enter')
    log_db('info', '  getting documents in database for campaign "{campagna}"'.format(campagna=campagna))
    campagna_row = app_tables.configurazione.get(campagna=campagna)
    archiviati = app_tables.documenti.search(q.all_of(campagna=campagna_row, envelope_id=q.not_('')))
    archived_ids = {}
    log_string = ''
    for doc_archiviato in archiviati:
        log_string += '\n{timestamp}: getting archived IDs for envelope {eid}'.format(timestamp=datetime.now(), eid=doc_archiviato['envelope_id'])
        #log_db('info', '    getting archived IDs for envelope {eid}'.format(eid=doc_archiviato['envelope_id']))
        archived_ids[doc_archiviato['envelope_id']] = {
            'id_pdv_documento': doc_archiviato['id_pdv_documento'],
            'id_pdv_audittrail': doc_archiviato['id_pdv_audittrail']
        }
    log_db('info', log_string)
    log_db('debug', repr(archived_ids))
    log_db('info', 'get_eids_documenti_archiviati - exit')
    return archived_ids

While I think this solves the problem, I feel I’ve lost something in my logging capabilities.
This way, if there’s a problem, I can no longer know which ID was the last one processed before it occurred.
Yes, I could set up exception handling inside the for loop to log to the db as a last action (and will do so soon), but now I have to manage by hand something that “just worked” before.
Can I live with that? Sure!
But to me this is a step backward, not forward.
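The exception handling I mention could look like this accumulate-and-flush sketch, where the buffered lines are written out in a finally block so that the last envelope ID reached before a failure is still recorded. The callables here are generic stand-ins for the real processing and for log_db:

```python
def process_with_flush(items, process, flush):
    """Buffer one log line per item; flush the buffer even if processing
    raises, so the last item reached before the error is still logged."""
    lines = []
    try:
        for item in items:
            lines.append('getting archived IDs for envelope {0}'.format(item))
            process(item)
    finally:
        if lines:
            flush('\n'.join(lines))
```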

Just so you know, @anvil.tables.in_transaction will re-try the transaction up to 5 times before it fails.

If that still isn’t good enough, you can use transactions in a context manager like

with anvil.tables.Transaction() as txn:
  ...

This will raise a TransactionConflict error if there is one, which means you can put it in a loop with a try / except block and retry as many times as you wish, to make sure your data gets written.
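For example, the retry loop could look like this sketch. Since anvil.tables is only available inside an Anvil app, a stand-in TransactionConflict class is used here so it runs anywhere; in real code the body would open `with anvil.tables.Transaction() as txn:` and you would catch anvil.tables.TransactionConflict instead:

```python
class TransactionConflict(Exception):
    """Stand-in for anvil.tables.TransactionConflict."""

def run_with_conflict_retry(body, max_attempts=5):
    """Run `body` (which would contain the transaction) until it succeeds,
    retrying only on TransactionConflict, up to `max_attempts` times."""
    for attempt in range(max_attempts):
        try:
            return body()
        except TransactionConflict:
            if attempt == max_attempts - 1:
                raise
```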

Hi @ianb
thanks again for the information and yes, I read your link.
Let me try to explain better what I meant; maybe I’m missing something, and this could be instructive for me.
I don’t see how the error can be related to the transaction management and how transactions can help me.
I have no multi user access to the table, it’s just a bunch of quick add_row on log_db table.
I understand this:

  1. without a transaction (i.e. “normal” table access) each statement is a separate transaction, so for every add_row Anvil takes care of the overhead of opening and closing a transaction
  2. with a transaction, it depends:
    2.a if the decorator is set on the log_db function declaration, nothing changes with respect to 1): a transaction is opened before each add_row and closed just after
    2.b if the transaction is opened inside the get_eids_documenti_archiviati function, before the for doc_archiviato in archiviati: loop, then all the add_row calls happen in a single transaction.

As far as I understand it, with 2.b) I get a considerable speed increase because there is just one transaction, but all the add_row statements will succeed or fail together because (as the docs say) “This ensures your Data Table operations are carried out as a group”.

So, if for some reason I get an exception (not only a db-related one) in that for loop, either I handle it with some try / except, or I lose all the add_rows.

Hi @anvil bump on this.
Tonight the scheduled task failed again, with the same (sadly) recurring pattern:

anvil.tables.TableError: Internal database error: ERROR: canceling statement due to user request
<running on the server>
called from /downlink/anvil/_server.py, line 42
called from server_module, line 670
called from server_module, line 219

Session ID: NAZEDJBFIPADV6X6BJ7VYNJPIAUFB2YZ

While the failing statement is still an add_row, this time it’s not the add_row on the log table (the fixes described earlier in this thread are in place) but:

            app_tables.esaw_files_buffer.add_row(envelope_id=documento['envelope_id'],
                                                 document_id=documento['audit_trail_id'],
                                                 file_name=file_name,
                                                 file_bytes=file_bytes,
                                                 file_size=file_size)

This archives a PDF document in a buffer table.
Unlike the rows added to the log table, this is not executed consecutively at a fast pace: say once every 2 seconds, for at most around 10 iterations (it depends on the number of docs to add to the buffer).

I’m not writing now to get advice on how to solve this particular problem; I need to understand the context:

  • (to @anvil) no user (on my side) canceled the statement. What kind of error is this?
  • (to @anvil) since March, I have been constantly fighting with the database (this post, plus the other post where a transaction fails even though the operation runs single-user), while in more than a year with Anvil (and with this very same application in particular) I never had such problems. Have you made significant changes on the DB side? What are they?
  • (to @anvil) would you prefer I send these questions to support@anvil.works rather than ask here on the forums?
  • (to the community): am I alone? Is database interaction working as well as before for everyone else?

Thanks

I have only run into this when hammering the server right up to the established limits for quite a while.
For example, trying to avoid a timeout by catching the timeout error and immediately resubmitting, with no delay, hundreds or thousands of times.

I’m not saying you’re doing that; it’s just the only time I’ve had this error. I only mention it because it could maybe be related to some kind of database protection mechanism? :man_shrugging:


Failed again tonight.
Same error, on the same line:

anvil.tables.TableError: Internal database error: ERROR: canceling statement due to user request
<running on the server>
called from /downlink/anvil/_server.py, line 42
called from server_module, line 670
called from server_module, line 219

Session ID: 3SO2AOSD4MRS24ZQGM6EQCQDPFDTUT5U

The failing code line is the same as yesterday. This time it’s a loop over 24 PDFs I’m trying to archive in the document buffer table.

I don’t think I’m hitting the DB too quickly; here are the logs from the loop:

2022-05-05 03:02:17.166627+00:00 - info -   saving document in Anvil buffer database
2022-05-05 03:02:17.787746+00:00 - info -   saving audit trail in Anvil buffer database
2022-05-05 03:02:25.031218+00:00 - info -   saving document in Anvil buffer database
2022-05-05 03:02:26.014853+00:00 - info -   saving audit trail in Anvil buffer database
2022-05-05 03:02:32.976062+00:00 - info -   saving document in Anvil buffer database
2022-05-05 03:02:33.464905+00:00 - info -   saving audit trail in Anvil buffer database
2022-05-05 03:02:39.497643+00:00 - info -   saving document in Anvil buffer database
2022-05-05 03:02:40.150628+00:00 - info -   saving audit trail in Anvil buffer database
2022-05-05 03:02:46.008466+00:00 - info -   saving document in Anvil buffer database
2022-05-05 03:02:46.855679+00:00 - info -   saving audit trail in Anvil buffer database
2022-05-05 03:02:53.286981+00:00 - info -   saving document in Anvil buffer database
2022-05-05 03:02:53.853669+00:00 - info -   saving audit trail in Anvil buffer database
2022-05-05 03:02:59.365530+00:00 - info -   saving document in Anvil buffer database
2022-05-05 03:03:00.069294+00:00 - info -   saving audit trail in Anvil buffer database
2022-05-05 03:03:05.745416+00:00 - info -   saving document in Anvil buffer database
2022-05-05 03:03:06.742689+00:00 - info -   saving audit trail in Anvil buffer database
2022-05-05 03:03:13.098978+00:00 - info -   saving document in Anvil buffer database
2022-05-05 03:03:13.747621+00:00 - info -   saving audit trail in Anvil buffer database
2022-05-05 03:03:20.175714+00:00 - info -   saving document in Anvil buffer database
2022-05-05 03:03:20.906129+00:00 - info -   saving audit trail in Anvil buffer database
2022-05-05 03:03:28.356071+00:00 - info -   saving document in Anvil buffer database
2022-05-05 03:03:29.592448+00:00 - info -   saving audit trail in Anvil buffer database

The document buffer is limited to 1000 rows so as not to exceed DB usage limits.

I feel hopeless and tired of retrying until it completes.

Hi,

I’m also getting the same error when trying to copy data from a big MySQL database into an Anvil database in a background task.

" anvil.tables.TableError: Internal database error: ERROR: canceling statement due to user request

<running on the server>"

What might be the problem? Information on this seems very limited; this is the only thread I could find about this error.

Welcome to the Anvil Forum, @LM7!

What workarounds have you tried so far? There are several listed earlier in this thread.

I’ve now gotten this error two days in a row. It is from a background task that updates a row of a datatable:

anvil.tables.TableError: Internal database error: ERROR: canceling statement due to user request
<running on the server>
called from /downlink/anvil/tables/v2/_row.py, line 467
called from /downlink/anvil/tables/v2/_row.py, line 351
called from update, line 245
called from update, line 150

Line 245 in my code is an update of a datatable row t: t[mdict[m]] = totalx
This background task has run flawlessly for months.
I will try adding the transaction decorator to see if it helps.
But like the OP, I am puzzled, because no multi-user changes are possible to this database.
It is, I suppose, possible that two different columns of this row are being changed by two different lines of code and that one has not completed before the other line is hit. Could this be the cause?
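If that turns out to be the cause, batching both column changes into a single update call would remove the window between the two writes (Anvil rows provide an update() method that accepts keyword arguments, per the Data Tables docs). A dict-based sketch of the batching idea; all names below are invented for illustration:

```python
def build_changes(metric_map, metric, total):
    """Collect one pending column change in a plain dict, so changes can
    be accumulated and applied to the row in a single call later."""
    return {metric_map[metric]: total}

def apply_changes(row, *change_dicts):
    """Merge all pending changes and apply them in one update.
    `row` is any mutable mapping; on a real Anvil row you would call
    row.update(**merged) instead."""
    merged = {}
    for changes in change_dicts:
        merged.update(changes)
    row.update(merged)
    return row
```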
