`anvil.server.InternalError: Internal server error: b5e90f0b9b13`

I’m still re-working my code related to this thread. Based on that thread, I’ve shifted to writing a Media Object (a file) to a Data Table, then passing that Data Table Row back to the client code via Task State.
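
For context, the moving parts around this look roughly like the sketch below. The function and task names (start_log_entry_task, fetch_log_entries) are illustrative placeholders, not my actual names:

  # Server module (sketch only; names are placeholders)
  import anvil.server

  @anvil.server.callable
  def start_log_entry_task():
    # launch_background_task returns a Task object the client can poll
    return anvil.server.launch_background_task('fetch_log_entries')

  # Client form: keep the Task handle, then poll its state from a Timer tick
  def start_click(self, **event_args):
    self.log_entry_task = anvil.server.call('start_log_entry_task')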

It had been working up until today, when I got the Internal Server Error in the subject line. The error has been intermittent, but it has happened 3 or 4 times today, so I thought I’d report it.

The Exception Message

anvil.server.InternalError: Internal server error: b5e90f0b9b13
at A_Main_Navigation, line 208
called from A_Main_Navigation, line 208
called from app/anvil_extras/utils/_timed.py:41 

Related Code

The line in question (208) is the get_bytes().decode() call in the code block below:

      le_t_data = self.log_entry_task.get_state()
      if 'log_entry_db_row' in le_t_data and not components[1].visible:
        # Pull the Media object out of the Data Table Row passed back via task state
        internal_file = le_t_data['log_entry_db_row']['alternate_file']
        # Line 208: download the file bytes and parse the JSON payload
        le_list = json.loads(internal_file.get_bytes().decode())
        GlobalCache.global_dict['log_entries'] = le_list
        GlobalCache.global_dict['log_entries_load_time'] = datetime.now(timezone.utc)
        components[1].visible = True

Thoughts on the Server Error?

I guess my primary question is: is this Internal Server Error just bad luck?

In the Background Task, I am writing to the Data Table as follows. log_entries is a list of dicts parsed from JSON data retrieved via an API call: about 4 MB of data, ~15,000 records.

  # Dump the records to a temp file, then wrap it in a Media object
  f_name = f'/tmp/{filename}.json'
  with open(f_name, 'w', encoding="utf-8") as f:
    json.dump(log_entries, f)

  T.check("File write is done.")
  new_row.update(alternate_file=anvil.media.from_file(f_name, mime_type='application/json'))

  anvil.server.task_state['log_entry_status'] = 'file_written'
  anvil.server.task_state['file_record_count'] = len(log_entries)
  # try passing the Data Table Row back to the client via task state
  anvil.server.task_state['log_entry_db_row'] = new_row
  return

I found that dumping the records to the file system was faster than using a BytesIO buffer, hence the temp file.
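
For reference, a sketch of one in-memory alternative, shown only for comparison (anvil.BlobMedia is the standard Anvil API for building a Media object directly from bytes):

  import json
  import anvil

  # Build the Media object directly from bytes instead of going via /tmp
  payload = json.dumps(log_entries).encode('utf-8')
  media = anvil.BlobMedia('application/json', payload, name=f'{filename}.json')
  new_row.update(alternate_file=media)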

Is it possible that new_row.update hasn’t actually completed its write before the function returns?
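
One way to sanity-check that (just a diagnostic idea, not something I’ve verified is needed) would be to read the Media back from the row before the task returns:

  # Read the Media back so the task fails loudly if the write didn't stick
  written = new_row['alternate_file']
  T.check(f"Read-back size: {len(written.get_bytes())} bytes")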

The log_entry_db_row key is not added to the task state until the last line of the Background Task, so the client code should not try to access that db row until it’s available.

Although this error does not happen on every run, it has occurred multiple times. Any thoughts on how to avoid the exception?

Thanks.

Generally, internal errors are problems the Anvil staff has to look at. A minimal clone link that shows the problem so they can reproduce it easily helps a lot.

Hi @mcmasty,

Thanks for posting this, and I’m sorry that the error message wasn’t more helpful. Looking at our logs, it seems that the reference to the Media object in the table row is somehow invalidated by passing the row through the task_state object. To help debug this, please can you try re-loading the row by ID to see if that helps? Assuming your table is called log_entry_db, your code would become something like:

...
if 'log_entry_db_row' in le_t_data and not components[1].visible:
    # Re-fetch the row by ID so the Media reference is freshly resolved
    row_id = le_t_data['log_entry_db_row'].get_id()
    reloaded_row = app_tables.log_entry_db.get_by_id(row_id)
    internal_file = reloaded_row['alternate_file']
    le_list = json.loads(internal_file.get_bytes().decode())
...

Does that make a difference? Also, can you please try with Accelerated Tables turned on, and with it turned off?

Thanks!

Thanks @daviesian

FWIW, the error hasn’t occurred today.

I’ll turn on Accelerated Tables right now and re-work the client to reload the row.

I’ll let you know if I see anything strange.