Data Tables with multiple users

Hi, I’m writing an app that lets the user analyze multiple images at once. I’m using the upload tool to upload them to a data table, but my concern is that when multiple users are using the app at the same time, their data will be commingled in the table. The data table is just a single row of images. I would also like to put the results in a separate data table. How would I create a separate data table of images and a data table of results for each user, without requiring them to log in?

I was thinking about using a random number generator to name the data table from the client code, so the name would be unique for each user. If I did this, I could delete the data table each time the code finishes.

Welcome to the Forum, @conraddonovan16!

Your idea is very close to the mark. While it is not possible to create new tables at run-time, it is very possible to use your pseudorandom number as a “tag” to identify the rows “owned” by each user.

See Views for an easy way to filter the rows. Once you create and use an appropriate filter, each user sees only their own rows.

Lastly, if you are going to use a number column as a tag, please note that it will be stored as a Python float, which is an IEEE 754 double with 53 significant bits. Your pseudorandom number should not have more bits than that.
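A quick way to see the 53-bit limit in plain Python (outside Anvil) — a tag below 2**53 round-trips through a float exactly, while a full 64-bit token can silently lose its low bits:

```python
import secrets

# IEEE 754 doubles (Python floats) carry 53 significant bits,
# so any integer below 2**53 is represented exactly.
tag = secrets.randbelow(2**53)
assert float(tag) == tag  # lossless round trip

# A 64-bit token, by contrast, can lose its low bits as a float.
big = 2**63 + 1
assert int(float(big)) != big  # the +1 is rounded away
```

So generating the tag with `secrets.randbelow(2**53)` keeps it both unpredictable and safe to store in a number column.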

It seems that Views are only available for logged-in users? I was looking for a solution that didn’t require a login. Another option I was considering was to pass all the file URLs to the server at once, so it could return the full analysis in a server variable. Then I wouldn’t have to worry about commingling, and I wouldn’t need to create a data table at all. With that approach, though, I think it would be impossible to give the client a status update (for example, “finished image 1, working on image 2”), because I can’t find any way to call a client-side function from the server side.

Instead of creating a table per user named “table_xxx”, add one column called “user” to the one and only table and put your “table_xxx” identifier in that column.

Then all the searches will filter by that identifier, whether you use a view or a normal query filtered by that column.
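As a sketch of that filter pattern in plain Python (the list of dicts stands in for the one shared Data Table; in Anvil the query would be something like `app_tables.images.search(user=tag)`, with the table and column names being assumptions here):

```python
# One shared "table": every row carries its owner's tag in a
# "user" column, and all reads filter on that column.
rows = [
    {"user": "tag_a", "file": "img1.png"},
    {"user": "tag_b", "file": "img2.png"},
    {"user": "tag_a", "file": "img3.png"},
]

def rows_for(tag):
    """Return only the rows owned by this user's tag."""
    return [r for r in rows if r["user"] == tag]

print([r["file"] for r in rows_for("tag_a")])  # ['img1.png', 'img3.png']
```

The same filter works for the results table: write each result row with the caller’s tag, and read it back with the same query.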

If you don’t want the user to sign in, but you want all the server calls of the same session to share the same set of rows, you can use the session id as your “table_xxx” user identifier. See here for details: Anvil Docs | Sessions and Cookies


No, that’s just one of the examples. You can filter on any value in any column, such as the column containing your generated random number.

The Anvil way to do that is to start a background task, and on the client use a Timer to check the status of the background task. If you decide to go that route and need help, best to start a new thread with an appropriate title.
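Here is a minimal stand-alone sketch of that pattern, using a thread in place of Anvil’s background task and a polling loop in place of the client-side Timer (in Anvil the worker would be an `@anvil.server.background_task` writing progress into `anvil.server.task_state`, and the client would read it on each Timer tick):

```python
import threading
import time

# Shared progress record; plays the role of anvil.server.task_state.
status = {"done": 0, "total": 3}

def analyze_all():
    # Stand-in for the background task: process images one by one,
    # updating shared progress after each.
    for i in range(status["total"]):
        time.sleep(0.01)          # pretend to analyze one image
        status["done"] = i + 1

worker = threading.Thread(target=analyze_all)
worker.start()

# Stand-in for the client Timer: poll the status until finished.
while status["done"] < status["total"]:
    time.sleep(0.01)
worker.join()
print("finished", status["done"], "of", status["total"])
```

The key point is that the client never blocks on one long call; it just checks the shared status on a short interval, which is exactly what gives you the “finished image 1, working on image 2” updates.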

Thanks for the fast responses and help, everyone. I think I’ll go with the random number/session ID as the solution instead of using a timer, but it’s nice to know there are options. I’m transcoding a MATLAB program into Python, and loving Anvil so far!

That’s a job for ChatGPT!

I don’t know what type of analysis you are doing, but the python running on the client is limited and may not be able to do the job.

The usual way to do jobs that need libraries unavailable on the client is to call a server function, do the work on the server, and return the result. And the usual way to handle jobs that may take longer than 30 seconds is to create a background task and use a Timer to check its status every few seconds, fetching the final result when it’s done.

Yes, I’m doing the analysis on the server side. Why is the background task/timer preferred over just waiting for the server to finish? I’m setting the status update as each image is loaded and processed, etc.

Because server calls have a time limit of 30 seconds.

If you are sure the server will finish the job before 30 seconds, then that’s the way to go. If you are analyzing many images and it may take longer than 30 seconds, then the background task is the way to go.

Thanks for the heads-up. I think I can get around this by doing a server call for each image, so that each call stays under 30 seconds. But I’ll definitely keep this in mind for the future.
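That per-image approach can be sketched like this (a hedged outline: `process_one` is a hypothetical placeholder for the analysis; in Anvil it would be an `@anvil.server.callable` and the loop below would run in client code via `anvil.server.call`):

```python
def process_one(url):
    # Stand-in for one short server call: analyze a single image
    # and return its result, keeping each round trip well under
    # the 30-second limit.
    return {"url": url, "ok": True}

urls = ["a.png", "b.png", "c.png"]
results = []
for i, url in enumerate(urls, 1):
    results.append(process_one(url))
    # Because each call returns promptly, the client can update
    # its status label between calls.
    print(f"finished image {i} of {len(urls)}")
```

A nice side effect is that the per-image status updates come for free: the client regains control between calls, so it can refresh the UI each time.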