Backend blues: notes on making HTTPS calls to Flask vs AWS Lambda

I have been migrating some backend stuff from this stack:

Ubuntu -> Python -> Flask -> uWSGI -> NGINX

to this stack:

AWS Lambda (python) -> AWS API Gateway

I want the API calls to go directly from the browser to keep things as snappy as possible. This migration has been a bit frustrating, so I wanted to capture some of the pitfalls for the next person. The two big items are encoding/decoding and CORS.

BACKEND: FLASK
Here is how my Flask app is set up to enable CORS and jsonification of responses:

from flask import Flask, jsonify, request
from flask_cors import CORS

app = Flask(__name__)
CORS(app)

@app.route("/")
def home():
    resp = {'status': 'hello world'}
    return jsonify(resp)

if __name__ == "__main__":
    app.run(port=5000, debug=True)

BACKEND: AWS Lambda
The CORS headers have to be inserted manually into the response from AWS. The routing is also a bit different: you have to extract the path from the event so your Lambda knows which handler to use, and set up a separate invoke URL for each path, all tied back to the same Lambda. By using one Lambda for all the different paths on the endpoint, you enable container re-use with fewer cold starts:

import json

def lambda_handler(event, context):
    path = event['path']  # which route was called, e.g. '/hello'
    resp = {
        'statusCode': 200,
        'headers': {
            # CORS headers must be set by hand; API Gateway expects
            # header values to be strings, not booleans
            "Access-Control-Allow-Origin": "*",
            "Access-Control-Allow-Credentials": "true"
            },
        'body': json.dumps({'event': 'hello world'})
        }
    return resp
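
The prose above mentions dispatching on event['path'], but the snippet doesn't show it. Here is a minimal sketch of what that routing might look like: the paths and the make_response helper are hypothetical, not from the original app.

```python
import json

def make_response(body, status=200):
    # Build an API Gateway proxy-integration response with CORS headers.
    return {
        'statusCode': status,
        'headers': {
            'Access-Control-Allow-Origin': '*',
            'Access-Control-Allow-Credentials': 'true',
        },
        'body': json.dumps(body),
    }

def lambda_handler(event, context):
    # One Lambda serves several API Gateway paths; dispatch on event['path'].
    path = event['path']
    if path == '/hello':
        return make_response({'event': 'hello world'})
    elif path == '/status':
        return make_response({'status': 'ok'})
    return make_response({'error': 'not found'}, status=404)
```

Keeping every path in one Lambda means a warm container can serve any route, which is where the cold-start savings come from.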

FRONTEND: Anvil
Within Anvil, you use the http.request library to make requests to the backend from the browser. Typically you would pass the json=True argument to decode the response and parse it as JSON, but for some reason I have not been able to get this to work properly with Lambda/API Gateway: an Anvil Media object (binary) is returned instead. The method below makes an API call that can be used with either Flask or AWS Lambda:

from anvil import *
import json

# api_url is defined elsewhere in the module
def api_call(endpoint, req_data, method='POST', aws_lambda=True):
  # make an API call at the specified endpoint with req_data
  resp = http.request(
    url='{}/{}'.format(api_url, endpoint),
    method=method,
    data=req_data,
    json=True)
  if aws_lambda:
    # API Gateway responses come back as a Media object, so decode by hand
    resp = json.loads(resp.get_bytes().decode())
  return resp
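
One possible refinement, sketched here as an assumption rather than taken from the post: instead of an aws_lambda flag, detect a Media-style object by its get_bytes method and decode only when needed, so the same helper works against Flask and Lambda without the caller knowing which backend is in play. FakeMedia is a stand-in for Anvil's Media object, for illustration only.

```python
import json

def normalise_response(resp):
    # An Anvil Media object exposes get_bytes(); a response that
    # json=True already decoded is a plain dict or list.
    if hasattr(resp, 'get_bytes'):
        return json.loads(resp.get_bytes().decode())
    return resp

class FakeMedia:
    # Stand-in for an Anvil Media object, for illustration only.
    def __init__(self, data):
        self._data = data

    def get_bytes(self):
        return self._data
```

With this, api_call could simply return normalise_response(resp) and drop the aws_lambda parameter.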

One final note: the code above always uses a POST method. With API testing utilities like Postman, you can include a JSON body as data in a GET request. With Anvil, GET requests cannot include this data, so I tend to default to using POST for everything.


Thanks for the notes, I hope I will never use them! 🙂

Question: why are you going through the trouble of setting up another REST server instead of staying in the Anvil ecosystem?

I usually go the other way: I create Anvil apps to use as REST servers for old PHP apps, or as dependencies for other Anvil apps.

Perhaps you are using Anvil for the front end and other technologies for the backend?

I am immensely grateful for Anvil. In general, anything that can be done in Anvil, should be done in Anvil. One of the exceptions is when you need something particularly heavy-duty on the backend (Amazon’s DynamoDB in this instance).

My apps and REST APIs are mostly for internal use.

I have not hit any wall with Anvil yet. I am afraid that sooner or later app_tables will feel a little limited for some of the more advanced features in my long-term plans.

But I also hope that by the time I start working on those advanced features Anvil will have grown and moved the wall further away, so I will not hit it.

Thanks for your post again. I hope I will never use it, but it’s good to know what you did, just in case.


Understood. I would like to see some of the examples where you used Anvil as a backend to replace PHP; I think you are a better Anvil developer than I am.

In my case I needed to use a stand-alone database because it is storing tens of millions of records. One of the great things about Anvil is that it has "escape hatches": you can keep growing with the product, and if one day you need a heavier-duty persistent store, you just make an API call to a MySQL, DynamoDB, or MongoDB instance sitting someplace.

I’m like @navigate in that I use external databases (MariaDB Galera and CockroachDB clusters) to update and query huge amounts of data. I’ve never really used the built-in data tables in anger, as my inserts can hit well over 100,000 per minute. Not always, but it can.

All of my back end is now Python, but a lot of it runs on my own servers so that I can control resource availability. Data inserts don’t even really need to be connected to Anvil via the Uplink, and I have my own consistent data-manipulation API which takes care of whatever DB I am using.

That detachment has allowed me to change db technologies without any impact on the front end whatsoever.


One example is a shipping cost calculator for OpenCart.

OpenCart out of the box doesn’t manage inventory distributed across multiple warehouses and other requirements that we have.

I looked into creating a plugin for OpenCart, but I quickly decided that all the PHP I wanted to write was limited to calling a few Anvil endpoints. The Anvil endpoint receives the content of the cart and the shipping address, finds the closest facility, tries different packaging configurations and shipping methods, and returns a list of the shipping methods that make sense. The requests to Google Maps for the distances, to UPS and FedEx for 3-4 different shipping methods, and the calculation of other options take a few seconds, and I’m OK with that. Other APIs take care of checking inventory availability and allocating the cart items, first temporarily, then permanently. An Uplink script that runs nightly uploads about 1000 items from our local inventory to an Anvil table (the inventory is still based on the decrepit Lotus Notes, soon to be replaced by a few Anvil apps).

I decided to use Anvil because Python is easier than PHP, because I didn’t need to mess with the existing database or set up a new one, and because the Uplink makes it easy to synchronize our local database with the table in the cloud.

Another example of a REST server is an app that stores parameters shared by internal tools, VBA Excel macros, CAD plugins, etc.
The admin interface is an Excel file with thousands of parameters and a button that uploads them to an Anvil table. All the other Excel macros use two endpoints to get the parameters whenever they need them.

I used Anvil because I went from nothing to a working REST server in 15 minutes. Then I spent a few more hours adding some bells and whistles like authentication, usage statistics, etc.
