Building a GPT-powered Assistant for Internal Tools using Anvil

Hi all,

I’ve been experimenting with combining Anvil and OpenAI’s ChatGPT API to streamline some internal processes, and I wanted to share one of the projects I’ve built — not just to show what’s possible, but also to get some feedback from the community.

The idea was to build a small assistant that helps internal teams interpret open-ended feedback (think: survey responses, ad-hoc reports, product suggestions). The assistant parses text inputs and returns structured summaries, potential issues raised, and in some cases, even possible action items — all generated by GPT.

Why Anvil?
I chose Anvil because it allows for a rapid prototyping loop. The interface was up and running in under an hour, and integrating the OpenAI API was straightforward through the server modules. It also made it easy to keep sensitive API keys out of the browser, which is a nice security touch.
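
For example, the core server call ended up about as simple as this (a simplified sketch; the secret name, model, and prompt are placeholders rather than my exact setup):

```python
import anvil.server
import anvil.secrets
import anvil.http

@anvil.server.callable
def summarise_feedback(text):
    """Send one chunk of feedback to the Chat Completions endpoint and return the reply."""
    api_key = anvil.secrets.get_secret("openai_api_key")  # kept server-side, never in the browser
    response = anvil.http.request(
        "https://api.openai.com/v1/chat/completions",
        method="POST",
        headers={"Authorization": "Bearer " + api_key},
        data={
            "model": "gpt-4o-mini",  # placeholder model name
            "messages": [
                {"role": "system",
                 "content": "Summarise the feedback, list issues raised, and suggest action items."},
                {"role": "user", "content": text},
            ],
        },
        json=True,  # serialise the request body and parse the JSON response
    )
    return response["choices"][0]["message"]["content"]
```

Because the call runs in a server module, the key never reaches the browser.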

How it works:

  • The main app interface is built in Anvil, where users can paste free-text feedback or upload a CSV.
  • The processing logic lives in a server module, where inputs are chunked and sent to the GPT API (via HTTP).
  • For heavier processing (e.g. daily digests), I use an Uplink script that runs from a cron job and posts results back to a Data Table the app reads from (rough sketch after this list).
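
The Uplink digest script is roughly this shape (again a sketch; the table and column names are placeholders):

```python
# daily_digest.py - run from cron; connects to the app over Uplink
import datetime
import anvil.server
from anvil.tables import app_tables

anvil.server.connect("YOUR-UPLINK-KEY")  # Uplink key from the app's settings

def run_digest():
    since = datetime.datetime.now() - datetime.timedelta(days=1)
    # Pull feedback rows added in the last day (placeholder table/column names)
    recent = [r for r in app_tables.feedback.search() if r["created"] > since]
    combined = "\n".join(r["text"] for r in recent)
    summary = anvil.server.call("summarise_feedback", combined)
    app_tables.digests.add_row(created=datetime.datetime.now(), summary=summary)

if __name__ == "__main__":
    run_digest()
```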

Some things I’m still exploring:

  • How best to handle timeouts or retries when the GPT API rate-limits (see the retry sketch after this list).
  • Whether I should be using background tasks instead of Uplink for the async jobs.
  • If it makes sense to wrap parts of the GPT logic into a class-like structure, especially since the API “prompts” are increasingly complex.
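
For the first point, the direction I'm leaning is a small retry wrapper with exponential backoff, something like this (untested sketch):

```python
import time
import anvil.http

def call_with_retries(make_request, max_attempts=5):
    """Retry a GPT call with exponential backoff when the API rate-limits (HTTP 429)."""
    for attempt in range(max_attempts):
        try:
            return make_request()
        except anvil.http.HttpError as e:
            if e.status != 429 or attempt == max_attempts - 1:
                raise  # not a rate limit, or out of attempts
            time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
```

Though the backoff eats into the server call timeout, which is probably another argument for moving this into a background task.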

Would love to hear how others are managing structured external APIs like this in Anvil. Also, has anyone experimented with putting business logic in Uplink code versus model classes in the main app?

Happy to share more technical details if anyone’s interested — and also open to criticism if I’m doing something that breaks best practices.


Sounds fascinating! Do you happen to have any (glances around furtively)…screenshots?

That’s really cool! I wouldn’t overthink the refactoring at this point since you’re just starting to get some use out of it. One of the great things about Anvil is the flexibility of how you can design and architect apps.

For example, I did what you did for my first “AI” app and eventually started experimenting with frameworks like Langchain, which have some logic for dealing with rate limits, etc.

I’m currently exploring developing MCP servers (basically FastAPI under the hood) which will wrap any AI stuff into a microservice. I like the idea of keeping my Anvil apps clean and safe from rapidly evolving AI frameworks, but we’ll see how it goes.
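
As a rough illustration of the wrapping idea (plain FastAPI here rather than an actual MCP server, and the endpoint and names are made up):

```python
# Minimal FastAPI microservice that keeps the AI dependency out of the Anvil app.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SummariseRequest(BaseModel):
    text: str

def run_model(text):
    # Stand-in for whatever AI framework does the real work
    return text[:200]

@app.post("/summarise")
def summarise(req: SummariseRequest):
    # The Anvil app only ever talks to this stable HTTP interface,
    # so swapping AI frameworks doesn't touch the app itself.
    return {"summary": run_model(req.text)}
```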

Share a gif or video of it in action!


I can recommend long-running background tasks for running a number of async workers. Each background task runs on a single CPU, so once you have a dedicated machine you can eventually run several of them in parallel to spread the load (rough sketch below).
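
Roughly this shape, with a Data Table acting as the job queue (sketch only; table and column names are made up, and a real version needs to claim jobs so two workers don't grab the same one):

```python
import time
import anvil.server
from anvil.tables import app_tables

@anvil.server.background_task
def feedback_worker():
    # Long-running worker: poll a Data Table "queue" and process pending jobs
    while True:
        for job in app_tables.jobs.search(status="pending"):
            job["result"] = anvil.server.call("summarise_feedback", job["text"])
            job["status"] = "done"
        time.sleep(10)  # poll interval

@anvil.server.callable
def start_workers(n=2):
    # Each task gets its own CPU, so launching several spreads the load
    for _ in range(n):
        anvil.server.launch_background_task("feedback_worker")
```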


Yeah, that makes a lot of sense. I’ve been thinking along similar lines — keeping the AI parts modular feels like a safer long-term move, especially with how fast everything’s changing.

I haven’t played with Langchain much yet, but it’s definitely on my list. Curious to hear how the MCP route works out for you!

I’ll try to put together a quick video soon.

Thanks — that’s super helpful. I’ve mostly stuck with Uplink so far just out of habit, but you’re right, background tasks feel like a better fit for scaling. I hadn’t really thought about running multiple workers in parallel like that — definitely something I’ll explore as the load grows.

A client of mine and I ran into some minor memory leaks when working with Langchain. In a continually running async context, those eventually added up to crash the task. They might have been fixed by now, but if you try it out, look up how to track memory usage in a Python program to make sure everything’s being freed.

Is this the same “MCP”?

That’s good to know — I hadn’t considered memory issues in that context. Thanks for the heads-up. I’ll definitely keep an eye on memory usage if I start playing with Langchain, especially in anything long-running. If you ever write up your findings or have tips on tracking that kind of leak, I’d love to read them.

If you look at the memory used by the Python process with psutil, the basic idea is to take that measurement at periodic intervals and watch whether it keeps climbing.
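
One way to do it (a sketch; run it in a separate thread, or bolt the measurement onto your existing loop):

```python
import time
import psutil

def log_memory(interval_seconds=60):
    """Print this process's resident memory at regular intervals to spot slow leaks."""
    proc = psutil.Process()  # the current Python process
    while True:
        rss_mb = proc.memory_info().rss / (1024 * 1024)
        print(f"RSS: {rss_mb:.1f} MB")
        time.sleep(interval_seconds)
```

If that number only ever climbs across many iterations of your loop, something isn't being freed.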