That may be an artifact of how the object is serialized over the Anvil Uplink, like here:
This is from last year though, and I am unsure whether the serialization of the object passed to the .add_row() method has changed since.
If you would like to preserve the order, you might either create a portable class of your own, or use a workaround where the columns are auto-created from some other serializable yet ordered object, such as a list of (key, value) tuples.
This object could be passed to the server, where a server function creates the first row of the insert, preserving the order and, more importantly, passing the data types correctly to the "auto-create columns" feature of Anvil Data Tables.
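As a minimal sketch of why a list of (key, value) tuples is a safe carrier for column order (independent of Anvil itself): a list keeps a well-defined element order even after a round trip through a serializer like JSON, whereas a plain dict's ordering depends on what the transport layer does with it. The column names here are just made-up examples.

```python
import json

# A row expressed as an ordered list of (column, value) pairs.
row = [("name", "Ada"), ("year", 1815), ("score", 9.5)]

# Round-trip through JSON: lists keep their element order.
# (JSON turns tuples into lists, so we convert back.)
restored = [tuple(pair) for pair in json.loads(json.dumps(row))]

assert restored == row
assert [k for k, _ in restored] == ["name", "year", "score"]
```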
In the notebook:

```python
import anvil.server
import pandas as pd
from anvil.tables import app_tables


def import_csv_data(file):
    with open(file, "r", encoding="utf8") as f:
        df = pd.read_csv(f, sep=";")
    for i, d in enumerate(df.to_dict(orient="records")):
        # You could also do "if not i" but that's clever,
        # ..and nobody likes clever when clarity will do.
        if i == 0:
            # Send the first row as an ordered list of (key, value)
            # tuples, so the server creates the columns in CSV order.
            the_first_d_in_df_to_dict = [(k, v) for k, v in d.items()]
            anvil.server.call('add_one_row_to_a_data_table',
                              the_first_d_in_df_to_dict,
                              'data')
            continue
        # d is now a dict of {columnname -> value} for this row.
        # We use Python's **kwargs syntax to pass the whole dict as
        # keyword arguments.
        app_tables.data.add_row(**d)
```
In a server Module:

```python
import anvil.server
from anvil.tables import app_tables


@anvil.server.callable
def add_one_row_to_a_data_table(first_row_list, table_name):
    # Rebuild the row dict from the ordered list of (key, value) tuples;
    # dicts preserve insertion order, so columns are created in CSV order.
    getattr(app_tables, table_name).add_row(**{k: v for k, v in first_row_list})
```
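As a quick sanity check of the dict comprehension in that server function: building a dict from a list of tuples preserves insertion order in Python 3.7+, so the columns end up in the same order they had in the CSV. The sample data here is made up for illustration.

```python
# Same shape as the payload sent from the notebook.
first_row_list = [("name", "Ada"), ("year", 1815)]

# The comprehension used in the server function above.
rebuilt = {k: v for k, v in first_row_list}

# Insertion order of the tuples is preserved in the dict.
assert list(rebuilt.keys()) == ["name", "year"]
assert rebuilt["year"] == 1815
```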
*You might have to re-run the Anvil Uplink connection in your notebook (like before) to get access to a newly registered server function.*
Edit:
Oh, also: you will need to delete your data table and start again with a completely blank one if you want Anvil to create the columns from scratch.
Edit2:
Apparently anvil-labs has a serialization module as well that will do what you want:
…but if you are just trying to insert some data from a Jupyter Notebook, this might be a bit of overkill.