Hi @mrh,
Certainly! This topic was featured in last week’s Anvil User Group meeting, with several community presenters sharing their approaches, and our own @s-cork talking about how we do end-to-end testing of Anvil itself. An official guide to end-to-end testing in Anvil is on our TODO list, but we’ve been a bit busy lately – so here’s a quick write-up. The simple summary is that you can mostly test Anvil like any other web application, with browser-based front-end tests and `pytest` and friends on the back end:
## Basic front-end testing
For front-end testing, the best approach is indeed to use something like Selenium. You can test an Anvil application like any other web application. If you need to identify particular components for your testing framework to pick up, you can use `anvil.js.get_dom_node()` to get a reference to the underlying HTML element object and add CSS classes or attributes that you can then search for from your testing framework. For example:
```python
from anvil.js import get_dom_node

# Then, later, in your Form's __init__ method:
get_dom_node(self.button_1).classList.add("test-role-frob")
```
Then in Selenium, you can use that class to identify the element, eg:

```python
from selenium.webdriver.common.by import By

frob_button = driver.find_element(By.CLASS_NAME, "test-role-frob")
```
## Scripting tests within the front end
A fairly common approach is to build some test sequences into your front-end code (for example, in a Module), and then trigger them from your test driver (eg Selenium). This can be easier than doing every test “through” the front-end interface (for example if you need to test the behaviour of functions in your client-side modules).
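Here’s a minimal sketch of what that can look like. Everything named here is an assumption for illustration – a hypothetical client Module called `client_tests`, with a stand-in `format_price()` function – but the general trick of exposing an entry point on the browser’s `window` object via `anvil.js` is what lets an external driver trigger your client-side tests:

```python
# client_tests.py - a client Module in your Anvil app (hypothetical name)
import anvil.js

def format_price(cents):
    # A stand-in for whatever client-side logic you want to test
    return f"${cents / 100:.2f}"

def run_all():
    """Run each check and return a list of [name, outcome] pairs."""
    results = []
    try:
        assert format_price(1234) == "$12.34"
        results.append(["format_price", "pass"])
    except Exception as e:
        results.append(["format_price", f"fail: {e!r}"])
    return results

# Expose an entry point on the browser's window object, so an external
# test driver (eg Selenium) can trigger the tests from JavaScript:
anvil.js.window.runClientTests = run_all
```

Your Selenium script can then trigger the tests and collect the results with something like `results = driver.execute_script("return window.runClientTests()")` and assert on what comes back.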
If this test code gets large, you can consider shipping it as a separate app, and using your main app as a dependency of the test app. That way, you avoid shipping test code to your users. (This isn’t a concern for tests that are committed to your repository but not part of your `client_code` and `server_code`, as that code is not served to your users.)
## Basic back-end testing
It sounds like you’re already comfortable with back-end testing using `pytest`. A common pattern here is to have your test suite running as an Uplink script, calling your server functions to test them. (A Server Uplink is privileged, so your test fixtures can set up your environment as you wish – for example, by setting up test data in your tables, or creating test users and logging them in/out with `anvil.users.force_login()` and `anvil.users.logout()`.)
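As a hedged illustration of that pattern – the `ANVIL_UPLINK_KEY` variable, the Users table columns and the `add_todo` server function below are all assumptions, not part of your app – a `pytest` setup over the Uplink might look something like this:

```python
# conftest.py - a minimal sketch of pytest fixtures over a Server Uplink.
import os

import anvil.server
import anvil.users
import pytest
from anvil.tables import app_tables

@pytest.fixture(scope="session", autouse=True)
def uplink():
    # Connect once for the whole test run, using a privileged Server Uplink key
    anvil.server.connect(os.environ["ANVIL_UPLINK_KEY"])
    yield
    anvil.server.disconnect()

@pytest.fixture
def test_user():
    # Create a throwaway user, log them in, and clean up afterwards
    user = app_tables.users.add_row(email="test@example.com", enabled=True)
    anvil.users.force_login(user)
    yield user
    anvil.users.logout()
    user.delete()

# test_todos.py
def test_add_todo(test_user):
    todo = anvil.server.call("add_todo", "Buy milk")  # add_todo is hypothetical
    assert todo["text"] == "Buy milk"
```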
It’s common practice to commit these tests to your Anvil repository (in a separate directory, eg `test/`, to keep them out of the way of your client and server code) and run them from there, so that they can be updated alongside the application.
## Testing with CI
If you want to do Continuous Integration with Anvil, you have a couple of options:
### 1. Use a test deployment
Create a deployment environment for testing, with a separate Data Tables database and a private URI. Configure your CI to run on a branch deployed to that environment (or to force-push that branch to whatever revision you want to test!). Then your test script can run client-side tests by pointing Selenium at your private URI, and server-side tests using an Uplink key pointed at that environment. (The Uplink key here will be restricted to your test deployment, and is therefore not such a privileged credential!)
### 2. Use the App Server
A restriction of option #1 above is that there is a fixed set of testing deployments, and therefore you can’t run many sets of tests in parallel. If this is a problem for you, then you can spin up an instance of your app on-demand in your CI environment by using the App Server.
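For example, your CI job might launch it with something along the lines of `anvil-app-server --app MyApp --port 3000 --uplink-key $TEST_UPLINK_KEY` – check `anvil-app-server --help` for the exact options available in your version.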
You can then point Selenium at `http://localhost:3000` (or wherever your App Server is listening), and use the Uplink to drive your server tests. (It is advisable to make the tests themselves configurable – eg with environment variables for the app origin, Uplink URL and Uplink key – so that you can also run them against your development environment while you’re working!)
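Extending the `uplink` fixture from the earlier sketch, that configuration might look something like this – the environment-variable names (`TEST_APP_ORIGIN` etc.) are made up for illustration, not Anvil conventions:

```python
# conftest.py - wiring test configuration through environment variables
import os

import anvil.server
import pytest
from selenium import webdriver

APP_ORIGIN = os.environ.get("TEST_APP_ORIGIN", "http://localhost:3000")
UPLINK_URL = os.environ.get("TEST_UPLINK_URL")  # set this for a local App Server
UPLINK_KEY = os.environ["TEST_UPLINK_KEY"]

@pytest.fixture(scope="session", autouse=True)
def uplink():
    if UPLINK_URL:
        # Point the Uplink at a self-hosted App Server
        anvil.server.connect(UPLINK_KEY, url=UPLINK_URL)
    else:
        # Default: connect to Anvil's hosted Uplink service
        anvil.server.connect(UPLINK_KEY)
    yield
    anvil.server.disconnect()

@pytest.fixture
def browser():
    # One browser per test, already pointed at the app under test
    driver = webdriver.Chrome()
    driver.get(APP_ORIGIN)
    yield driver
    driver.quit()
```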
Although this approach requires more setup, it is entirely self-contained, and thus suitable for CI configurations that might be running multiple instances of the test suite simultaneously on separate versions.