Is there a tool to convert classic linked postgres tables to Anvil style, and back? Or REST API scaffolding? Or something else?

It’s a general question and I’m thinking out loud. Maybe it’s not complete :wink:

What I’m trying to do:
Take classic linked tables from a postgres DB, for instance for a carpooling service:

  • users
  • cars
  • users-cars (m2m join table)

and convert them to Anvil meta-tables (app_storage_tables with ..."type": "liveObject", "backend": "anvil.tables.Row"... and app_storage_data filled with those tables’ data).
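For illustration, the end state I’m after looks roughly like this in Anvil code (a sketch only; table and column names are made up, and the m2m join table becomes a multiple-link column):

```python
# A sketch of the target structure (names are illustrative): in Anvil,
# the m2m join table becomes a "link to multiple rows" column.
from anvil.tables import app_tables

alice = app_tables.users.add_row(name="Alice")
prius = app_tables.cars.add_row(model="Prius")
tesla = app_tables.cars.add_row(model="Model 3")

# Instead of inserting into users_cars, link the rows directly:
alice["cars"] = [prius, tesla]
```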

What I’ve tried and what’s not working:
The only thing I found is what @Tony.Nguyen is doing in Upload CSV file into data tables, which is pretty clever. But I can’t simply jump from postgres to Excel, and I have no idea how to implement the missing join function in a table or spreadsheet.

An alternative would be to use psycopg2, but opening the postgres server to the world is risky (a risk that could perhaps be removed with an uplink server), and writing a SQL function for each CRUD operation feels like implementing a REST API from scratch.
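To make that uplink idea concrete, here is a minimal sketch (the key, schema and function names are assumptions):

```python
# Hypothetical sketch: an uplink script running on the same machine as
# postgres, so the DB itself is never exposed to the internet.
import anvil.server
import psycopg2

anvil.server.connect("SERVER_UPLINK_KEY")  # placeholder key

@anvil.server.callable
def get_user_cars(user_id):
    # postgres stays bound to localhost; only this function is reachable.
    with psycopg2.connect(dbname="carpool", host="localhost") as conn:
        with conn.cursor() as cur:
            cur.execute(
                """SELECT c.* FROM cars c
                   JOIN users_cars uc ON uc.car_id = c.id
                   WHERE uc.user_id = %s""",
                (user_id,),
            )
            return cur.fetchall()

anvil.server.wait_forever()
```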

This last subject (REST API) raises the question of scaffolding.
Is there a nice way (i.e. one click on a big red button) to scaffold Python functions from an OpenAPI (Swagger) schema?


I think this is the best way to do it if you want to keep two systems. Whatever machine your postgresql DB is running on can just run an anvil uplink script that can only be accessed through a server module that checks credentials before allowing access.
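Something like this, as a rough sketch (the permission check and the uplink function name, reused from the sketch above, are illustrative):

```python
# Rough sketch of a gatekeeping server module: the client never calls
# the uplink directly; this callable checks the logged-in user first.
import anvil.server
import anvil.users

@anvil.server.callable
def user_cars(user_id):
    # Check credentials before touching the DB at all.
    if anvil.users.get_user() is None:
        raise Exception("You must be logged in")
    # Forward to the function registered by the uplink script.
    return anvil.server.call("get_user_cars", user_id)
```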

I am guessing you would have to write anvil data table code for CRUD anyway, so again, if you want to keep two systems and have the power of full postgres, that is the easiest route imo.

If on the other hand…

You want to follow this idea of going to anvil and never looking back,

I just created this last week; it lets you load your data from CSV files while picking the Anvil column types you would like the data converted to:

It is possible to create linked columns with Anvil’s data tables, but others here use that feature much more than I do…

Wow, this is good stuff @ianbuywise, thank you!
You’re right, I have to choose a route.

I’d be minded to write a migration (ETL) function. Use psycopg2 to connect to your existing db, do the extraction, Python for any transformations, and then Anvil’s data tables library for the load.

You can run that via uplink or a server module depending on access to your existing db.
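A minimal sketch of what that ETL could look like, assuming the carpooling schema above and Anvil tables with a multiple-link column (all table and column names illustrative, and the columns must already exist in the Anvil tables):

```python
# ETL sketch: extract from postgres with psycopg2, load into Anvil data
# tables, keeping a map from postgres ids to Anvil rows so the m2m join
# table can be rebuilt as link columns.
import psycopg2
from anvil.tables import app_tables

def migrate():
    conn = psycopg2.connect(dbname="carpool", host="localhost")
    cur = conn.cursor()

    # Extract + load plain tables, remembering pg id -> Anvil row.
    cur.execute("SELECT id, name FROM users")
    user_rows = {pg_id: app_tables.users.add_row(name=name)
                 for pg_id, name in cur.fetchall()}

    cur.execute("SELECT id, model FROM cars")
    car_rows = {pg_id: app_tables.cars.add_row(model=model)
                for pg_id, model in cur.fetchall()}

    # Rebuild the m2m join table as a link-multiple column on users.
    cur.execute("SELECT user_id, car_id FROM users_cars")
    for user_id, car_id in cur.fetchall():
        row = user_rows[user_id]
        row["cars"] = (row["cars"] or []) + [car_rows[car_id]]

    conn.close()
```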

(I’ve done this sort of migration on several projects and it works well)


Not only do I agree, but I would go as far as saying @owen.campbell’s solution is the “correct” answer if you want to move to Anvil and not use postgres anymore.

Even just doing an ETL migration and deciding it’s not what you want will still get you 70-90% of the way to figuring out how to do what you want, so it will not be a wasted effort either way.


Thank you @owen.campbell, thank you guys.
I might come back with more technical questions if you don’t mind :slight_smile:

Vlookup is what I use to perform the join function in Excel.

All my data is now in CSV, which means it can be moved around easily.

Your use case might need a different solution, then.

In Excel 365 there is the new Xlookup, which is a little easier to use and more powerful than Vlookup. There are also other new functions like Sort or Filter.

For real join queries (as opposed to clunky, slow functions) you can have a look at Power Query.
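For comparison, here is the same kind of join done over the CSV files in Python with pandas (file and column names are assumptions):

```python
# Sketch: the m2m join done over CSV files with pandas instead of
# VLOOKUP/Power Query. File and column names are assumptions.
import pandas as pd

users = pd.read_csv("users.csv")            # id, name
cars = pd.read_csv("cars.csv")              # id, model
users_cars = pd.read_csv("users_cars.csv")  # user_id, car_id

joined = (users_cars
          .merge(users, left_on="user_id", right_on="id")
          .merge(cars, left_on="car_id", right_on="id",
                 suffixes=("_user", "_car")))
print(joined[["name", "model"]])
```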


Thank you @Tony.Nguyen and @stefano.menci

Anvil is great for quickly developing UI and data together (RAD), but I wonder how it behaves under load (more clients, more server computation, more data R/W, …), since the jar serves the client, the server and the database in the same breath (I know I can split off an external database, but is that more efficient?).

At my newbie level, in the current setup, I can’t tell where the bottleneck could be or how to prevent it (with caching and throttling, for instance), so I tend to think of splitting services to observe the load on each one. But I might be wrong if, for instance, a REST implementation is too heavy on the server.

The Skulpt process for the server files is a black box to me. Is there a way to get the generated JS files so I can serve them as production commits?

The other point is about the database structure itself: how do I keep a transferable database that can be used by other services, i.e. a postgres database with named tables and FKs?
Once the DB goes the Anvil way (app_storage_tables, app_storage_data) and evolves there, is there a way to get it back to named tables? And serve them from Anvil? With on-the-fly transformation from Anvil to “static postgres”?

Anvil is an ocean of possibilities; I’m planning my little boat trips on it, looking at potential reefs, currents and the weather forecast :slight_smile:

Here is my answer: I treat my CSV files as a kind of NoSQL database.
@stefano.menci thanks for the info

Thank you @Tony.Nguyen, I didn’t know about Http-kit; it’s good to know.
So does that mean the open-source server as a fail-safe net is a no-go?
You guys seem to have walked the same path, so I’m listening.
Indeed, CSV might be the way to go full Anvil while keeping postgres. I’ll try a very basic two-way pipeline.
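For the Anvil-to-CSV half of that pipeline, search results have a built-in to_csv(); a rough sketch (table name assumed):

```python
# Sketch of the Anvil-to-CSV direction (table name assumed).
import anvil.server
from anvil.tables import app_tables

@anvil.server.callable
def export_users_csv():
    # Search results have a built-in CSV export; returns a Media object.
    return app_tables.users.search().to_csv()

# From an uplink script on the postgres machine, something like:
#   import anvil.media
#   anvil.media.write_to_file(anvil.server.call("export_users_csv"), "users.csv")
```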

EDIT:
I can’t try it with a basic plan.
I don’t know why it claims I use environments, since I only added http.client.
EDIT 2: anvil.http does the job for now.