Here’s some code that might be useful:
import pandas as pd
import anvil.server
import anvil.media
from anvil.tables import app_tables

@anvil.server.callable
def load_csv_with_pandas(file_name):
    file_row = app_tables.files.get(name=file_name)
    file = file_row['file']
    # these are the column headers for your data table - change as appropriate
    col_names = [f'column{i}' for i in range(18)]
    with anvil.media.TempFile(file) as csv_file:
        df = pd.read_csv(csv_file, header=None, names=col_names)
    for row in df.to_dict('records'):
        row = {k: v for k, v in row.items() if not pd.isna(v)}  # drop NaN values
        app_tables.data.add_row(**row)
    file_row.delete()  # we've uploaded the file, so delete it
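The NaN-dropping step in that loop can be tested outside Anvil with plain pandas. A minimal sketch (the sample data and column names here are just illustrative): empty cells become NaN, and the dict comprehension removes them so they don't overwrite table defaults when the row is added.

```python
import io
import pandas as pd

csv_text = "1,foo,\n2,,3.5\n"  # two rows, some cells empty
col_names = [f'column{i}' for i in range(3)]
df = pd.read_csv(io.StringIO(csv_text), header=None, names=col_names)

# same pattern as in the server function: drop NaN values per row
rows = [{k: v for k, v in rec.items() if not pd.isna(v)}
        for rec in df.to_dict('records')]
```

Each dict in `rows` then only contains the columns that actually had a value, which is what `app_tables.data.add_row(**row)` receives.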
I think using a Pandas DataFrame is best because the column types are detected automatically.
The problem with the CSV-string method you were using in the other post is that every value comes back as a string, so you have to do extra work to convert the datatypes appropriately…
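To see the difference, here's a quick sketch (nothing Anvil-specific) comparing naive string splitting with pandas' dtype inference:

```python
import io
import pandas as pd

csv_text = "1,2.5,hello\n2,3.5,world\n"

# naive string splitting: every value comes back as str
naive_rows = [line.split(',') for line in csv_text.strip().splitlines()]

# pandas infers int / float / object column types automatically
df = pd.read_csv(io.StringIO(csv_text), header=None, names=['a', 'b', 'c'])
```

With the naive approach you'd have to call `int()` / `float()` yourself before writing to a data table; pandas has already done that work.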
NB: This was specific(ish) to your table - you’ll want to change the list of column names to match your own.
If your CSV had a header row, you would need to adapt the approach slightly.
You may also have unnecessary columns in your CSV; you can drop those like this:
df = df.drop(['column5','column8'], axis=1)
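For the header-row case, a small sketch of the adapted call (the file contents here are made up): with `header=0`, pandas takes the column names from the first row of the file, so you'd skip the `names` list entirely, and dropping columns works the same way.

```python
import io
import pandas as pd

csv_with_header = "name,score,notes\nalice,10,x\nbob,12,y\n"

# header=0 tells pandas the first row holds the column names
df = pd.read_csv(io.StringIO(csv_with_header), header=0)

# drop a column you don't want in the data table
df = df.drop(['notes'], axis=1)
```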
I might also create a client module for this and make it the startup module whenever I needed to load some CSV into the database:
import anvil.server
import anvil.tables as tables
import anvil.tables.query as q
from anvil.tables import app_tables

def load_csv(file_name):
    anvil.server.call('load_csv_with_pandas', file_name)

if __name__ == '__main__':
    load_csv('MARSYS')
Here’s a clone link (I took your CSV from before and just used the first 5 rows for brevity):
https://anvil.works/build#clone:AMXLEFG7BNPZKAWF=JVVGS4RWW2VJUDAJIHTWZMCC