AttributeError: No such app table: 'data'

I am trying to upload data via the Uplink from a Jupyter Notebook to a Data Table in my Anvil app, and I get this error:

AttributeError: No such app table: 'data'

The code in the Jupyter Notebook:

import pandas as pd
from anvil.tables import app_tables
# (the Uplink connection via anvil.server.connect is assumed to have been made earlier in the notebook)

def import_csv_data(file):
    with open(file, "r", encoding="utf8") as f:
        df = pd.read_csv(f, sep=";")
        for d in df.to_dict(orient="records"):
            # d is now a dict of {columnname -> value} for this row
            # We use Python's **kwargs syntax to pass the whole dict as
            # keyword arguments
            app_tables.data.add_row(**d)

import_csv_data("Data.csv")

The name of the table is the same.

Can anyone help?

I see a couple of past threads troubleshooting a similar error. Not sure if they will help you, but they're worth a look if you haven't already seen them:
https://anvil.works/forum/search?expanded=true&q=attributeerror%20no%20such%20app%20table%20uplink


Just two things I would check to start:
I am unsure whether Data (the name of the table) and app_tables.data are equivalent names, given the capitalization difference.

You might also try re-running your notebook from the top. I am unsure how/if certain changes to the Data Tables service are 'registered' when you create the Uplink connection. For example, if you created the table 'Data' after you ran the notebook to create the Uplink connection, the connection might not know about such a major change to the app.
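
If it helps, a quick check from a fresh notebook cell can show which spelling the Uplink actually sees; this is just an illustration, the key is a placeholder and the two candidate names are guesses:

import anvil.server
from anvil.tables import app_tables

anvil.server.connect("YOUR-UPLINK-KEY")  # placeholder; use your app's real Uplink key

# Tables are looked up as attributes of app_tables, so try both spellings
for name in ("data", "Data"):
    try:
        getattr(app_tables, name)
        print(f"app_tables.{name} is reachable")
    except AttributeError as error:
        print(f"app_tables.{name} -> {error}")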


Thank you.

Actually, running the same Notebook some hours later worked.

The only surprising thing now is that the order of the columns in the table is very different from the order in the uploaded CSV.


That may be an artifact of how the object is serialized when it goes through the Anvil Uplink, like here:

That thread is from last year though, and I am unsure whether the serialization of the object passed to .add_row() has changed since then.

If you would like to preserve the order, you might either create a portable class of your own devising, or use a workaround where the columns are auto-created from some other serializable yet ordered object, like a list of tuples containing key/value pairs.

This object could be passed to the server, and a server function could insert the first row, preserving the column order and, more importantly, passing the data types correctly to the "auto-create columns" feature of Anvil Data Tables.

In the notebook:

import pandas as pd
import anvil.server
from anvil.tables import app_tables

def import_csv_data(file):
    with open(file, "r", encoding="utf8") as f:
        df = pd.read_csv(f, sep=";")
        for i, d in enumerate(df.to_dict(orient="records")):
            #  You could also do "if not i" but that's clever
            if i == 0:
                # Send the first row as an ordered list of (column, value)
                # pairs so the server can create the columns in CSV order
                the_first_d_in_df_to_dict = [(k, v) for k, v in d.items()]
                anvil.server.call('add_one_row_to_a_data_table',
                                  the_first_d_in_df_to_dict,
                                  'data')
                continue
            #  ..and nobody likes clever when clarity will do.

            # d is now a dict of {columnname -> value} for this row
            # We use Python's **kwargs syntax to pass the whole dict as
            # keyword arguments
            app_tables.data.add_row(**d)

In a server Module:

import anvil.server
from anvil.tables import app_tables

@anvil.server.callable
def add_one_row_to_a_data_table(first_row_list, table_name):
  getattr(app_tables, table_name).add_row(**dict(first_row_list))
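
For the portable-class route mentioned above, a minimal sketch might look something like this; the class name OrderedRow and its helper method are my own illustration rather than anything from the original post, and the same class definition would need to be available on both the notebook and the server side:

import anvil.server

@anvil.server.portable_class
class OrderedRow:
    """Wraps one CSV row as an ordered list of [column, value] pairs."""

    def __init__(self, row_dict):
        # A list reliably keeps its order across serialization
        self.pairs = [[k, v] for k, v in row_dict.items()]

    def __serialize__(self, global_data):
        return self.pairs

    def __deserialize__(self, data, global_data):
        self.pairs = data

    def as_kwargs(self):
        # Rebuild a dict in the original column order (Python 3.7+ dicts
        # preserve insertion order)
        return dict(self.pairs)

A server function could then call getattr(app_tables, table_name).add_row(**row.as_kwargs()) with the same intent as add_one_row_to_a_data_table above.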

*You might have to re-run the Anvil Uplink connection in your notebook again (as before) to get access to a newly registered Server Module function.

Edit:
Oh, also: you will need to delete your data table and start again with a completely blank one if you want Anvil to create the columns from scratch again.

Edit2:
Apparently anvil-labs has a serialization module as well that will do what you want:

…but if you are just trying to insert some data from a Jupyter Notebook, this might be a bit of overkill.
