app_tables.<table_name>.search(name=nm) not working as expected

I have a data table with three columns. I want the countyName column values as a list for the rows whose stateAbbr column matches a selected value, so that I can return the list to a drop-down. This line of code:

cN = [r['countyName'] for r in app_tables.counties.search(stateAbbr=stAb)]

returns an empty list. If I use search with no argument, then I get every countyName for every stateAbbr. What am I doing wrong?

Hi @cf1 and welcome to the forum,

The code looks good to me, so I think there must be a mistake somewhere else, e.g. what is stAb?
Perhaps you can share a clone link and someone will be able to help you debug the issue.

You could also try using print to debug

print(stAb)
# how long is this search iterator?
print(len(app_tables.counties.search(stateAbbr=stAb))) 

# what is the row?
for row in app_tables.counties.search(stateAbbr=stAb):
    print(row)
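
repr() is also handy here, because it shows characters that a plain print hides (stray spaces, tabs, carriage returns). A couple of extra lines you could add, assuming the same names as above:

print(repr(stAb))
# and the values actually stored in the table
for row in app_tables.counties.search():
    print(repr(row['stateAbbr']))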


stAb is the two-letter state abbreviation. I get it from a first drop-down. print(stAb) shows that the code is correct up to that point. The search iterator has a length of zero, and the for loop prints nothing (since the length is zero). How do I share a clone link?

This post shows how to make a clone

Figured it out. There is a Chr(13) (a carriage return) appended to the row values in the stateAbbr column in my data table. I uploaded that data from a CSV file from a Mac. How should I have done the upload to avoid including the Chr(13) in the values?

I doubt that the upload step was the problem, but it may have been the conversion from CSV. If we knew more about how you did that, maybe we could spot something?

I used a file loader. This is the client-side file loader code:

def file_loader_1_change(self, files, **event_args):
    # This method is called when a new file is loaded into this FileLoader
    print("loaded a file")
    for f in files:
        anvil.server.call('read_csv', f)

And this is the server code:

@anvil.server.callable
def read_csv(csv_object):
    # Get the data as bytes.
    csv_bytes = csv_object.get_bytes()
    # Convert bytes to a string.
    csv_string = str(csv_bytes, "utf-8")
    # Create a list of lines split on \n
    line_list = csv_string.split('\n')
    for line in line_list:
        # Create a list of fields from line.
        print("line=", line)
        field_list = line.split(",")
        print(field_list[0])
        app_tables.counties.add_row(
            fipsCode=field_list[0],
            countyName=field_list[1],
            stateAbbr=field_list[2],
        )

Line endings depend on the source of the file. Under Windows and DOS, lines commonly end in '\r\n'. (In other operating systems, it's '\n' or '\r'.) If you were reading this file with Python's standard text-mode file handling, these would all be converted to a standardized '\n'. But since you're rolling your own conversion here, handling the variety of line endings has become your code's responsibility.

Assuming that this CSV file was originally written under Windows, stripping off just the '\n' will still leave a trailing '\r', which becomes part of the final field:
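
For instance (made-up values, just for illustration):

csv_string = "01001,Autauga,AL\r\n01003,Baldwin,AL\r\n"
for line in csv_string.split('\n'):
    print(repr(line.split(",")))
# ['01001', 'Autauga', 'AL\r']
# ['01003', 'Baldwin', 'AL\r']
# ['']

(Note the trailing empty string as well, which comes from splitting on the final '\n'.)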

Fortunately, there’s a simple fix. Line endings count as whitespace, so they can be stripped off with Python's standard str.strip() method. E.g.:

    stateAbbr=field_list[2].strip(),
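
A small variation (not from your original code) is to strip each line once before splitting it, which also lets you skip any blank trailing line:

for line in csv_string.split('\n'):
    line = line.strip()
    if not line:
        continue  # skip blank lines, e.g. the one after the final newline
    field_list = line.split(",")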

If there’s any possibility of your CSV files containing more complex data (e.g. values with embedded commas, apostrophes, or quotes), you may want to look into Python's standard-library csv module, together with io.StringIO (to feed it your csv_string). That would let the standard library handle the awkward cases for you.
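
Here's a rough sketch of that approach, reusing the table and column names from your read_csv (and assuming the same field order in the file):

import csv
import io

import anvil.server
from anvil.tables import app_tables

@anvil.server.callable
def read_csv(csv_object):
    csv_string = csv_object.get_bytes().decode("utf-8")
    # csv.reader copes with '\r\n' line endings, quoted fields and embedded commas.
    for field_list in csv.reader(io.StringIO(csv_string)):
        if not field_list:
            continue  # skip blank lines
        app_tables.counties.add_row(
            fipsCode=field_list[0].strip(),
            countyName=field_list[1].strip(),
            stateAbbr=field_list[2].strip(),
        )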


The CSV file is from Excel on macOS. The strip() function did the trick. Thanks.