How do I catch errors when starting background tasks that are not registered?
This does not catch the error; the error only shows up in the app log:
try:
    anvil.server.launch_background_task('non_existing_task_name')
except anvil.server.NoServerFunctionError:
    print('Oops')
I am trying to launch a background task on all my uplink machines, following the logic described here.
I would like to catch and manage the error when an uplink machine is missing. As it is, everything works well because the calls to the missing uplink machines are ignored as desired, but the log gets cluttered with errors.
You can use the get_error() method on the resulting background task object to trap the error itself (you probably already know that), but the pesky app logs keep on fillin’.
I think one problem here is that there’s no way to suppress the app logs if required, coupled with there being no way to empty them (programmatically or otherwise) when they get huge without contacting support.
edit - er, actually, I’m seeing something slightly odder than that, one sec whilst I put together an example…
Thanks.
I’m OK with the log showing errors raised by the background tasks.
I would expect tasks that can’t even start because they don’t exist to show the error in the log entry for the call that launches the background task, not in an entry for the background task itself.
This is fun 
I can catch the exception, but I have to delay everything.
# Server code
import time

import anvil.server

@anvil.server.background_task
def backtask():
    print("In the backtask")

@anvil.server.callable
def test3():
    try:
        # 'backtask3' is deliberately not registered, so this launch should fail
        task_object = anvil.server.launch_background_task('backtask3')
        time.sleep(1)
        print("test3 - tid", task_object.get_error())  # <-- raises the exception
        return task_object
    except Exception as e:
        print("test3 server exception", e)
        return None
Accessing one of the failed task object’s methods raises the exception, but it looks like the object is not ready for use (i.e. not populated) immediately upon failure, which is why the sleep is needed.
I kind of get why it might do that, but should it? Is there a better way to check for an exception, or should we not be checking for exceptions on this?
Here’s a test app:
https://anvil.works/build#clone:2XFZHCHZFEUNZVTN=KGXYCYZ3BPRG4KPGI2BHQGDV
The task object is ready as soon as the task is launched, but all it knows is that there is a task that is about to start.
Eventually the server will take care of running the task and reporting any error, whether about the missing task or raised by an existing task. Only at that point does checking for errors make sense.
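So rather than a fixed sleep, one option is to poll the task object until it has stopped running and only then ask it for an error. A minimal sketch (the helper name and timeout are mine; it assumes is_running() returns False once the launch has failed or the task has finished, and that get_error() re-raises the task’s exception, as in the example above):
import time

import anvil.server

def launch_and_check(task_name, timeout=5):
    # Hypothetical helper: poll the task object instead of sleeping for a fixed interval.
    task = anvil.server.launch_background_task(task_name)
    deadline = time.time() + timeout
    while task.is_running() and time.time() < deadline:
        time.sleep(0.1)
    if not task.is_running():
        task.get_error()   # re-raises the task's exception, if any
    return task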
Yeah, so I guess catching an exception doesn’t make sense in this context.
This begs a new question, which I’ll split off into its own thread…
I found a very simple solution to the problem of background tasks defined on uplink machines that may be down: don’t use them!
Instead, define the background task on the server and have it call the uplink function.
Background tasks defined on the server are always available, so there is no risk of failure when launching them.
The uplink functions may be missing, but the failure can be easily managed when calling them from the background task.
Here is my background task defined on the server:
import datetime

import anvil.server
import anvil.tables as tables

@anvil.server.background_task
def start_next_uplink_machine():
    for machine in tables.app_tables.uplinkmachines.search():
        row = tables.app_tables.queue.get(uplink_machine_name=machine['machine_name'], status='Processing')
        if not row:
            # each uplink machine registers its own 'start_uplink_process_<name>' callable
            function_name = f"start_uplink_process_{machine['machine_name']}"
            print(f'{datetime.datetime.now().strftime("%H:%M:%S.%f")[:-4]} Calling {function_name}')
            try:
                result = anvil.server.call(function_name)
            except Exception as e:
                print(f'{datetime.datetime.now().strftime("%H:%M:%S.%f")[:-4]} Crash: "{e}"')
            else:
                print(f'{datetime.datetime.now().strftime("%H:%M:%S.%f")[:-4]} Success: "{result}"')
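Launching it is then an ordinary launch that cannot fail with NoServerFunctionError, because the task is registered on the server (a trivial sketch):
# e.g. from a client form or another server function
task = anvil.server.launch_background_task('start_next_uplink_machine')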
The uplink function that used to be a background task is this:
import anvil.server

# COMPUTER_NAME identifies this uplink machine; it is defined elsewhere in the uplink script
@anvil.server.callable(f'start_uplink_process_{COMPUTER_NAME}')
def start_uplink_process():
    executed_rows = ['OK']
    row = get_next_request_in_queue()
    while row:
        executed_rows.append(do_something_with(row))
        row = get_next_request_in_queue()
    return '\n'.join(executed_rows)
And the uplink function that gets the next row from the queue is this:
from datetime import datetime, timezone

import anvil.tables as tables

@tables.in_transaction
def get_next_request_in_queue():
    try:
        if tables.app_tables.queue.get(status='Processing',
                                       uplink_machine_name=COMPUTER_NAME):
            return  # this machine is already processing one task
        row = tables.app_tables.queue.search(tables.order_by('priority'),
                                             tables.order_by('request_time', ascending=False),
                                             status='Pending')[0]
    except IndexError:
        return  # there are no pending tasks
    row.update(status='Processing',
               uplink_machine_name=COMPUTER_NAME,
               started_on=datetime.now(timezone.utc))
    return row  # return the row to process
If called from a server-side background task, can an Uplink RPC still time out?
If so, then you may still need an Uplink-side background task.
But at least the server-side background task can query the Uplinked program first, to see if it’s “awake”.
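For example (a sketch only; the per-machine ping callable is my assumption, not something from the thread), each uplink script could register a trivial callable and the server-side task could probe it before doing any real work:
import anvil.server

def uplink_is_awake(machine_name):
    # Hypothetical check: each uplink also registers a 'ping_<machine_name>' callable.
    try:
        anvil.server.call(f'ping_{machine_name}')
        return True
    except anvil.server.NoServerFunctionError:
        return False   # no uplink is connected for this machine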
My assumption is that Anvil does some magic to survive any RPC or connection problem and keep things working, and that if there is a timeout, it’s because they want to enforce it, not because of the underlying technology.
I already deployed this to production. I tested it so far with 25-ish minute tasks without problem. Very likely in the next few days there will be some real world tasks running for hours, and I will let you know if I’m having problems.
I haven’t seen any timeout errors, but I have seen connection errors that interrupt the background task if the background task is trying to interact with the Anvil server during the network errors.
To avoid this problem I created my own @in_transaction2 decorator that keeps retrying forever instead of only 8 times:
import random
import time

import anvil.tables

def in_transaction2(f):
    def new_f(*args, **kwargs):
        n = 0
        while True:
            try:
                with anvil.tables.Transaction():
                    return f(*args, **kwargs)
            except anvil.tables.TransactionConflict:
                n += 1
                print(f'TransactionConflict - trying again {n}')
                # exponential backoff with jitter, capped at 2**8 steps
                time.sleep(random.random() * 2 ** min(n, 8) * 0.05)
    try:
        reregister = f._anvil_reregister
    except AttributeError:
        pass
    else:
        reregister(new_f)
    new_f.__name__ = f.__name__
    return new_f
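It is used like the stock decorator, for example on the queue function shown earlier (a sketch; the body is unchanged):
@in_transaction2   # in place of the stock in_transaction decorator
def get_next_request_in_queue():
    ...  # same body as shown above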
I also created a function that keeps retrying other operations that don’t need a transaction:
def try_forever(callable, *args, raise_exceptions=None, **kargs):
    n = 0
    while True:
        try:
            return callable(*args, **kargs)
        except raise_exceptions or ():
            raise
        except Exception as e:
            n += 1
            print(f'Trying again {n} - {e.__class__.__name__}: {e}')
            time.sleep(random.random() * 2 ** min(n, 8) * 0.05)
Examples:
# don't fail to start if there is a network problem
try_forever(anvil.server.connect, 'xxx')

# don't fail to send an email
try_forever(anvil.email.send, to='abc@def.ghi', subject='hello', html='hello')

# sending email and doing other operations
def f():
    recipients = app_tables.people.get(type='recipients')
    anvil.email.send(to=recipients, subject='hello', html='hello')
try_forever(f)

# don't fail to update if there is no need for a transaction, or if
# we are already inside a transaction - here the TransactionConflict
# exception is not managed and it will trickle up to the caller function
# decorated with @in_transaction2
def f():
    row1.update(col1=val1)
    row2.update(col2=val2)
    row3.update(col3=val3)
try_forever(f, raise_exceptions=(anvil.tables.TransactionConflict,))