Using Hugging Face Transformers on the Anvil Server

Hi,

Apologies for the simple question. I am new to Anvil.

I would like to use the Hugging Face Transformers library on the Anvil server so that I can use their pre-trained RoBERTa models to do sentiment analysis.

I could configure a Jupyter Notebook and use an uplink to Anvil.
The code would be something like …

# Install the dependencies inside the Jupyter Notebook
!pip install torch torchvision torchaudio
!pip install transformers
!pip install anvil-uplink

import anvil.server
anvil.server.connect('<ANVIL TOKEN>')
# To get the ANVIL_TOKEN:
# In the Anvil editor, open the Uplink settings and click Enable,
# then copy the connection string and paste it in as the Anvil token

from transformers import pipeline

# download model & use default pipeline
sent_pipeline = pipeline("sentiment-analysis")

# run the model
tinput = "I love Anvil"
sent_pipeline(tinput)
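
To let the Anvil app actually call into the notebook over the Uplink, I would presumably also register a callable, something like this (the function name classify_text is just a placeholder):

@anvil.server.callable
def classify_text(text):
    # Run the pipeline and return the top result, e.g. {'label': 'POSITIVE', 'score': ...}
    return sent_pipeline(text)[0]

# In a plain script (rather than a notebook) I'd also keep the connection open with:
# anvil.server.wait_forever()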

I would like to avoid the overhead of using the Jupyter Notebook and just have the Anvil server run the code.

What I have tried so far is:
In the settings, I have selected “Python 3.10 Beta”, and for the base packages I tried both “Machine Learning” and “Data Science”.
Under “Add package” I tried to add “transformers”, but it wasn’t available in the autocomplete.

Questions

  1. What settings do I need to use in Anvil so that the Anvil server can recognise the transformers library?
  2. I want to load the model only once, when the form is initialised. Where do I put this code so that it only runs once?

Thanks
Chris

Welcome to the Forum!

Regarding #2, have a look at Persistent Server Modules, and see if they’re a good fit.
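
If that route fits, a rough sketch of a server module could look like this (assuming Persistent Server Modules are switched on for the module, and that transformers is available in your server environment; the function name analyse_sentiment is just an example):

import anvil.server
from transformers import pipeline

# Built once when the persistent module first loads, then reused across calls
sent_pipeline = pipeline("sentiment-analysis")

@anvil.server.callable
def analyse_sentiment(text):
    # Returns something like {'label': 'POSITIVE', 'score': 0.99}
    return sent_pipeline(text)[0]

Because the module stays alive between calls, the model is only downloaded and loaded once rather than on every anvil.server.call.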

Thanks for the suggestion!

Hi @chris6 and welcome to the forum!

The transformers package shows up for me. Can you make sure you’ve typed in the entire word?

Hi brooke, nice to meet you.

It worked - thanks for your help!

Regards,
Chris