Form file-upload button, send that file object to a server module, read the .csv, load the contents into a table. I don't see any other way to get my data (small files) into my tables. I've tried making my file(s) assets, tried everything from a form, etc. I cannot get any examples or cloned app code to work. I'm on a Mac, paid version, using Python 3 on the server.
Thanks in advance.
I keep thinking: Qlik Sense makes this SO easy. Drag and drop a .csv or .xlsx and it creates the table, loads the data, bang - ready to use. I'm writing code to use my data, and I've tried this forum, the documentation, YouTube, everything - I cannot get anything to actually work.
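For reference, here is a minimal sketch of the client-side half of the flow described above, assuming a form with a FileLoader component named file_loader_1 and a server function called import_csv (both names are my assumptions, not something from this thread):

from ._anvil_designer import Form1Template
import anvil.server

class Form1(Form1Template):
    def __init__(self, **properties):
        self.init_components(**properties)

    def file_loader_1_change(self, file, **event_args):
        # 'file' is the Media object produced by the FileLoader component;
        # hand it straight to a (hypothetical) server function that reads the
        # CSV and loads it into a datatable
        anvil.server.call('import_csv', file)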
To get a CSV file into a datatable (where each row in the CSV becomes a row in the datatable), I use Pandas. This is one way to do it (server-side only).
Please see this post which reads a CSV media file into a Pandas dataframe:
Please see this post that shows how to add rows to the datatable based on a Pandas dataframe:
So, if your CSV is stored as a media object in a datatable, you should be able to use the first post to put it into a Pandas dataframe. The second post shows how to convert the dataframe to a list of dicts and how to use that list to populate a datatable.
I think this is what you are asking for but please clarify if I’ve got it wrong.
Thanks folks - I'll try these. Question - when I go to assets and upload a .csv file that is UTF-8 encoded, that is entirely different from storing that .csv file as a media object in a field in a table - correct?
Yes, that is a different way of storing a file. Personally, I have never stored CSVs as assets. I'm not saying that's wrong, just that I have not thought to do it. My thinking for dealing with tabular data is to make use of server-side code and datatables (including media object storage).
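For example, here is a minimal sketch of a server function that takes an uploaded Media object and stores it as a media object in a datatable. It assumes a table named files with a text column name and a media column file, which is what the load_csv_with_pandas code further down looks up:

import anvil.server
from anvil.tables import app_tables

@anvil.server.callable
def store_uploaded_csv(file_name, file):
    # Store the uploaded Media object in the 'files' table so it can be
    # looked up later by name and read into a dataframe
    app_tables.files.add_row(name=file_name, file=file)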
OK, thanks, testing now. I searched the docs, but I suspect I don't know where to look. I'm curious: in the command below, my_table.get()… where do I find the syntax for this? Is column= the name of the column that contains the media object? If so, what is the 'media_column' after that?
import pandas as pd
import anvil.server
import anvil.media
from anvil.tables import app_tables

@anvil.server.callable
def load_csv_with_pandas(file_name):
    # Look up the row in the 'files' table that holds the uploaded CSV
    file_row = app_tables.files.get(name=file_name)
    file = file_row['file']

    # These are the column headers for your datatable - change as appropriate
    col_names = [f'column{i}' for i in range(18)]

    # Write the media object to a temporary file so Pandas can read it
    with anvil.media.TempFile(file) as csv_file:
        df = pd.read_csv(csv_file, header=None, names=col_names)

    for row in df.to_dict('records'):
        row = {k: v for k, v in row.items() if not pd.isna(v)}  # drop nan values
        app_tables.data.add_row(**row)

    file_row.delete()  # we've uploaded the file so delete it
I think using a Pandas DataFrame is best because the column types are automatically detected.
The problem with the CSV string method that you were using in the other post is that all the values come through as strings, so you have to do some extra work to convert the datatypes appropriately…
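To illustrate that extra work, here is a hypothetical sketch of the CSV-string route (the column names name, quantity and price are made up for the example) - every non-text value has to be cast by hand:

import csv
import io

def rows_from_csv_string(csv_text):
    # csv.DictReader returns every value as a string,
    # so each non-text column has to be converted manually
    rows = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        rows.append({
            'name': row['name'],
            'quantity': int(row['quantity']),   # manual cast to int
            'price': float(row['price']),       # manual cast to float
        })
    return rows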
NB: This was specific(ish) to your table, though you'll want to change the list of column names to be correct. If your CSV had a header row, you would need to adapt the approach slightly (see the sketch below).
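For instance (this is my assumption about how you'd adapt it, not something from the original post), if the first line of the CSV already holds the column names, you can let Pandas pick them up and skip the col_names list entirely:

with anvil.media.TempFile(file) as csv_file:
    df = pd.read_csv(csv_file)  # default header=0: the first row becomes the column names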
You may also have unnecessary columns in your CSV, so you can drop those columns like this:
df = df.drop(['column5','column8'], axis=1)
I might also create a client module for this and make it the startup module whenever I needed to load some CSV data into the database:
import anvil.server
import anvil.tables as tables
import anvil.tables.query as q
from anvil.tables import app_tables

def load_csv(file_name):
    anvil.server.call('load_csv_with_pandas', file_name)

if __name__ == '__main__':
    load_csv('MARSYS')