In the Beta IDE (and also in the Classic IDE), when I export a Data Table to CSV, only 8 of its 10 rows are included. (There are only 10 rows total.)
The IDE is configured to show 100 rows per page, if that matters.
Hi! Not a lot of columns, about six of them. There are no Media columns in the table, but a few (3 columns) are single-row links to other Data Tables, and those links are all complete (meaning there are no orphans).
Actually, there are only 8 items, so there’s nothing wrong with the CSV download.
But I did discover the cause, which is probably a bug somewhere that I now have to find:
These 10 Data Table rows are uploaded via a loop that calls .add_row() 10 times. After that is complete, I send a status eMail with a CSV attachment to the content creator (me, in this case). That CSV attachment lists the row_id of each added row, along with whether its submission succeeded (True or False).
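For context, the loop looks roughly like this (a simplified sketch, not my actual code; the `content` table, its columns, the test data, and the eMail address are placeholders):

```python
import csv
import io
import anvil
import anvil.email
from anvil.tables import app_tables

# Placeholder test data (10 items).
items_to_upload = [{"title": f"Test item {n}", "body": f"Body {n}"} for n in range(1, 11)]

status_rows = []  # (row_id, success) pairs for the status CSV

for item in items_to_upload:
    try:
        new_row = app_tables.content.add_row(title=item["title"], body=item["body"])
        status_rows.append((new_row.get_id(), True))
    except Exception:
        status_rows.append(("", False))

# Build the CSV attachment and send the status eMail.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["row_id", "success"])
writer.writerows(status_rows)

anvil.email.send(
    to="me@example.com",
    subject="Data Table upload status",
    text="Status CSV attached.",
    attachments=[anvil.BlobMedia("text/csv", buf.getvalue().encode("utf-8"), name="status.csv")],
)
```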
Oddly, in the CSV attachment of that status eMail, rows 8, 9 and 10 all have the same row_id, which shouldn't happen (.add_row() is basically an append, so every call should produce a new row with its own row_id). I didn't expect that, so now I have to figure out why the row_ids are repeating.
I found the bug. But first, you guys are awesome for hanging with me. I apologize for the red herring.
The issue: when I add content, I generate a hash of it before calling .add_row() to check whether it's already in the Data Table, and skip the add if it is. Because I was generating 10 rows of test data for myself today, I got lazy at the 8th, 9th and 10th entries and filled in the exact same content (via cut & paste), so the code did exactly what it was supposed to and skipped (deduped) them.
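In code terms, the add step is roughly this (again a simplified sketch, not my real code; the `content` table and its `content_hash` column are stand-in names):

```python
import hashlib
from anvil.tables import app_tables

def add_content_if_new(title, body):
    """Add a row only if identical content isn't already in the Data Table.

    Returns (row_id, added): 'added' is False when the content was a duplicate
    and the .add_row() was skipped.
    """
    content_hash = hashlib.sha256((title + body).encode("utf-8")).hexdigest()

    existing = app_tables.content.get(content_hash=content_hash)
    if existing is not None:
        # Duplicate content: skip the add and hand back the existing row's id.
        return existing.get_id(), False

    new_row = app_tables.content.add_row(title=title, body=body, content_hash=content_hash)
    return new_row.get_id(), True
```

Because a skipped add hands back the existing row's id, entries 9 and 10 end up reporting the row_id that entry 8 created, which is exactly why the status CSV shows three identical row_ids.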
I just have to adjust the code to skip appending to the eMail’s status CSV file when an .add_row() is skipped.
Sorry about that.
EDIT: Actually, I won't change anything. If rows in the status CSV have duplicate row_ids (i.e. a creator is submitting duplicate content), that's worth surfacing. The creator can easily sort the CSV by row_id in Excel to spot the repeats.
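(For anyone who'd rather not open Excel, a few lines of Python over the attached status CSV surface the repeats just as well; the file and column names below match the sketch above:)

```python
import csv
from collections import Counter

# Count how often each row_id appears in the status CSV;
# anything above 1 means the same content was submitted more than once.
with open("status.csv", newline="") as f:
    counts = Counter(row["row_id"] for row in csv.DictReader(f))

for row_id, n in counts.items():
    if n > 1:
        print(f"{row_id}: submitted {n} times")
```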