Day 12 of the Anvil Advent Calendar

Build a web app every day until Christmas, with nothing but Python!

Make an AI Write Your Christmas Cards

Writing holiday greetings is tricky – you have to strike a careful balance between schmaltzy and sincere. What if you could get a computer to help you?

Last year’s big AI news was the release of OpenAI’s GPT-3, a neural network that generates scarily good written text (and even code!). Unfortunately, OpenAI keep GPT-3 under lock and key, and there’s still a long queue for an API key. But, luckily, there’s an open-source version! This summer EleutherAI released GPT-J, an open-source GPT-3-like network with similar size and performance to the smallest of the GPT-3 family. If you have 12GB+ of RAM, you can download it and evaluate it yourself – but even if you don’t, you can use the Hugging Face Inference API to run it for you!
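Just to give an idea, here's a minimal sketch of what running GPT-J locally with the transformers library might look like (assuming you've installed transformers and torch, and have the memory to spare – in this post, though, we'll let the hosted Inference API do the heavy lifting):

from transformers import AutoTokenizer, AutoModelForCausalLM

# Download the GPT-J weights (several GB) and load them into memory
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-j-6B")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/gpt-j-6B")

# Continue a seed text with up to 60 newly generated tokens
inputs = tokenizer("Dear Louise, Merry Christmas!", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=60, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))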

We’ve built an app to generate Christmas card messages on demand! Here it is:

This is a live app. Try it!

Asking the model nicely

Models like GPT-3 and GPT-J are “few-shot learners” – instead of fine-tuning them to produce a certain type of text, you can just ask for what you want! These models start with a “seed text”, and continue from there. For example, I fed GPT-J the text “This is an essay about trains. Trains are an important mode of transport, both in cities and” – and here’s what it produced:

This is an essay about trains. Trains are an important mode of transport, both in cities and in the countryside. And they can also be an important mode of transportation between cities and between cities and between countries. Trains, then, are not just vehicles as we usually think of them. They are as much part of the landscape as any other kind of machinery. And, if you look at them closely, you can see what the world would be like without them

If we want to produce Christmas greetings, then, we can provide prompt text to indicate what we want. For example, if we load it up with some cheesy Christmas messages, GPT-J will follow suit:

Dear Louise, Merry Christmas! I hope these holidays bring you joy, comfort and happiness. Best wishes.

Dear Gerald, Sending my warmest wishes to all of your family on this Christmas. Best wishes.

Dear Mark, I hope you are having a jolly Christmas full of warmth and love.

Dear Jude, A happy Christmas to you and yours, and here’s wishing you all a happy and healthy New Year.

Dear Mark, Hope you have a safe and happy Christmas and New Year.

Dear Mark, HAPPY CHRISTMAS! Hope you and your family enjoy these…

Now, the model doesn’t know where to stop, but in a rough and ready fashion we can chop it off at “Best wishes…” or “Dear…”.
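Here’s a rough sketch of that truncation in Python (the Scheduled Task later in this post uses an even simpler version that only splits on “Best wishes”):

def truncate_greeting(continuation):
    # `continuation` is the model's output with the prompt already stripped off.
    # Chop it at the next sign-off, or at the start of the next letter.
    return continuation.split("Best wishes")[0].split("Dear")[0].strip()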

(Note: It required a bit of playing around to come up with a prompt that worked consistently – I encourage you to experiment!)

Using the Hugging Face API

First, let’s make a server-side function that queries the API, based on the API docs. We’ll need to sign up to get an API key – we’ve stored ours in the App Secrets:

import anvil.secrets
import requests

# The Hugging Face Inference API endpoint for GPT-J
API_URL = "https://api-inference.huggingface.co/models/EleutherAI/gpt-j-6B"
# Our Hugging Face API key is stored in the App Secrets as 'api_key'
headers = {"Authorization": f"Bearer {anvil.secrets.get_secret('api_key')}"}

def query(prefix):
    payload = {
        "inputs": prefix,
        "parameters": {"max_new_tokens": 60},
        "options": {"wait_for_model": True, "use_cache": False},
    }
    response = requests.post(API_URL, headers=headers, json=payload)
    if response.status_code != 200:
        raise Exception(response.text)
    # The API returns a list of generations; we only asked for one
    return response.json()[0]["generated_text"]

We call this from a Server Console:
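For example, we can try the essay prompt from earlier (the generated text will be different every time):

query("This is an essay about trains. Trains are an important mode of transport, both in cities and")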

It worked!

Scaling up

OK, let’s make this scalable. We don’t want to hit the API every time, so let’s make a Scheduled Task to fetch a new batch of messages every day, and store them in a Data Table:

import anvil.server
import anvil.tables.query as q
from anvil.tables import app_tables
from datetime import datetime, timedelta

# A few example greetings to prime the model, ending mid-letter so it carries on from "Dear Jude, "
PROMPT = "Dear Louise, Merry Christmas! I hope these holidays bring you joy, comfort and happiness. Best wishes.\n\nDear Gerald, Sending my warmest wishes to all of your family on this Christmas. Best wishes.\n\nDear Mark, I hope you are having a jolly Christmas full of warmth and love.\n\nDear Jude, "

@anvil.server.background_task
def generate_greetings():
    for i in range(50):
        text = query(PROMPT)
        # Strip off the prompt, then chop the output at the next sign-off
        greeting = text[len(PROMPT):].split("Best wishes")[0]

        app_tables.greetings.add_row(greeting=greeting, added=datetime.now())

    # Cycle out old greetings
    for row in app_tables.greetings.search(added=q.less_than(datetime.now() - timedelta(days=30))):
        row.delete()

We can schedule that task to run every few days (depending on how much Hugging Face API quota we have left):
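If we want to test it right away, we can also launch the task by hand from a Server Console:

anvil.server.launch_background_task('generate_greetings')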

Now, we just need to build a front end, and pick a random greeting each time! Here’s our server function:

import random

import anvil.server
from anvil.tables import app_tables

@anvil.server.callable
def get_greeting():
    # Pick one of the pre-generated greetings at random
    all_greetings = list(app_tables.greetings.search())
    return random.choice(all_greetings)['greeting']

Put a pretty face on it

Finally, we can build a user interface that calls this function! Building the UI takes about 75 seconds:
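If you’d rather see it in code, here’s a rough sketch of what the Form might look like – the component names (greeting_label and new_greeting_button) are just the ones we’ve assumed here:

from ._anvil_designer import Form1Template
import anvil.server

class Form1(Form1Template):
    def __init__(self, **properties):
        self.init_components(**properties)
        # Show a greeting as soon as the Form opens
        self.greeting_label.text = anvil.server.call('get_greeting')

    def new_greeting_button_click(self, **event_args):
        # Fetch another random greeting from the server
        self.greeting_label.text = anvil.server.call('get_greeting')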

See the full app

See the source code for yourself:


Give the Gift of Python
