Anvil_Testing: A testing package designed for Anvil

I’ve been thinking about integrating testing into Anvil, and the headache of local + online development was driving me crazy. So I started thinking: could I just make a simple test suite in Anvil?

With a couple hours of playing around I’ve got:

  • Basic pytest-style test naming conventions and assert-based tests
  • Automatic collection and running of tests
  • Testing can be started from server console REPL (or any Anvil way)
  • Dependency app anvil_testing for running tests

Test Directory

Automatic test discovery works with a nested testing directory, so you can organize your tests. Tests are expected to live in one importable directory. There is no need to import the anvil_testing dependency inside the tests themselves; it is only required at test time.

Server Code
===========
|-- module_a
|-- package
|   |-- module_b
|-- tests
|   |-- module_a
|   |-- package
|       |-- module_b

Test Naming

  • Anything whose name starts with an underscore is ignored, i.e. private and dunder members.
  • Packages and modules can be named anything.
  • Functions are marked as tests by the prefix test_
  • Classes are marked as test classes by the prefix Test
  • Class methods are marked as tests by the prefix test_

tests/module_a

from .. import module_a

def test_positive():
    result = module_a.calculate()
    assert result > 0, f"Should always be positive {result}"

# this is not seen as a test.
def setup():
    return "something important"

tests/package/module_b

from ..package import module_b

class DummyTable:
    def __init__(self, data):
        self.data = data

    def get(self, key):
        return self.data.get(key, None)

    def test_result(self):
        # not seen as part of the test suite since the
        # class name does not start with `Test`
        return self.get('result')

class TestSuite:
    def test_a(self):
        self.setup()
        assert self.table.get('a') is None

    def test_b(self):
        assert False, 'Test not implemented'

    def setup(self):
        self.table = DummyTable({})

Running Tests

Console

Tests are then run from the server REPL by importing the app's tests package, then running it with anvil_testing.

from . import tests
import anvil_testing
results = anvil_testing.auto.run(tests, quiet=False)

Here is an example test report:

== Anvil Testing ==
Found 3 tests

  Pass: module_a::test_positive
  Pass: package/module_b::TestSuite::test_a
> Fail: package/module_b::TestSuite::test_b - Test not implemented

2/3 passed
1 failed tests

Webpage

Of course, we can run the tests in many other ways.
How about exposing them as an http_endpoint?
Just browse to your /_/api/test page and refresh whenever you want to run the tests.

import anvil.server

@anvil.server.http_endpoint('/test')
def run_tests():
    from . import tests
    import anvil_testing

    test_results = anvil_testing.auto.run(tests, quiet=False)
    return anvil.server.HttpResponse(status=200, body=test_results)

Questions

  • Do other people have a painless way to integrate testing that I’ve missed?
  • Does the additional code from tests slow down server startup?
  • How much client-side testing can be achieved from the server side?
  • Being so simple, does this actually provide enough testing power?

Clone and Code

While I love being able to clone an app (anvil_testing), it is also pretty nice to just look at the code. Wouldn’t want to slow anyone down from pointing out something dumb I’ve done:

import inspect as _inspect

FN_PREFIX = "test_"
CLS_PREFIX = "Test"


def _find_tests(parent):
    """recursivly find all functions/methods within the module that start with the test prefix"""
    found_tests = list()

    for name in dir(parent):
        if not name.startswith("_"):
            obj = getattr(parent, name)

            # Delve into modules in the same path, don't stray into imports.
            if _inspect.ismodule(obj) and obj.__name__.startswith(parent.__name__):
                found_tests.extend(_find_tests(obj))

            # Extract test methods from classes
            elif _inspect.isclass(obj) and name.startswith(CLS_PREFIX):
                found_tests.extend(_find_tests(obj))

            # grab test methods
            # since parent is a class (not an instance), obj is seen as a
            # plain function rather than a bound method
            elif (
                _inspect.isclass(parent)
                and _inspect.isfunction(obj)
                and name.startswith(FN_PREFIX)
            ):
                # create a new class instance for each test method to isolate the tests
                class_instance = parent()
                found_tests.append(getattr(class_instance, name))

            # grab test functions
            elif _inspect.isfunction(obj) and name.startswith(FN_PREFIX):
                found_tests.append(obj)

    return found_tests


def _format_test_name(fn, test_module_name="tests"):
    """Get a descriptive name of the function that explains where it lives"""
    module = fn.__module__.split(f"{test_module_name}.")[-1]
    return f"{module}.{fn.__qualname__.replace('.', ':')}"


def run(test_package, quiet=True):
    log = list()
    log.append("== Anvil Testing ==")
    found_tests = _find_tests(test_package)
    n_tests = len(found_tests)
    log.append(f"Found {n_tests} tests\n")

    passed = 0
    failed = 0

    for test in found_tests:
        test_name = _format_test_name(test, "tests")
        try:
            # Run the test
            test()
            passed += 1
            if not quiet:
                log.append(f"  Pass: {test_name}")

        # capture the assertion error from our test
        except AssertionError as e:
            log.append(f"> Fail: {test_name} - {e}")
            failed += 1

    # Summary info
    log.append(f"\n{passed}/{n_tests} passed")
    log.append(f"{failed} failed tests")

    # I found this more reliable for printing to the console.
    # Otherwise, printing as I went, the lines would stack, get out of order, etc.
    test_results = "\n".join(log)
    print(test_results)
    return test_results

Thanks for sharing! A scalable way of testing is always helpful


Update:
Created a README form.
Added an anvil_testing.helpers module that provides:

Table Verification

My thought with the table check is that you could write a dependency app which requires certain tables, then run the dependency's tests from the parent app to verify you have the tables set up correctly.

import anvil_testing
import my_dependency
anvil_testing.auto.run(my_dependency.tests.tables)

to get information on what still requires setup:

> Fail: tables::test_my_table - 
	Table: my_table
	 - column 'last_update' must be of type 'string' not 'number'
	 - column 'next_update' not found
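
For reference, here is a minimal sketch of what such a table check could look like using plain Anvil APIs, assuming a table named my_table with the columns from the report above; the actual helpers module may well implement this differently:

from anvil.tables import app_tables


def test_my_table():
    # expected column names and types; adjust to your own schema
    expected = {'last_update': 'string', 'next_update': 'string'}
    actual = {col['name']: col['type'] for col in app_tables.my_table.list_columns()}

    problems = []
    for name, col_type in expected.items():
        if name not in actual:
            problems.append(f"column '{name}' not found")
        elif actual[name] != col_type:
            problems.append(f"column '{name}' must be of type '{col_type}' not '{actual[name]}'")

    # the assertion message mirrors the report format shown above
    assert not problems, "\n\tTable: my_table\n\t - " + "\n\t - ".join(problems)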

Temporary Table Row

You can create temporary rows that are automatically deleted after the test is complete.

Raises

Check that a block raises a specific exception.

def test_temp_row():
    with helpers.temp_row(app_tables.my_table, id='TESTING') as row:
        row['api_version'] = '9'
        assert row['id'] == 'TESTING'
        assert row['api_version'] == '9'

    with helpers.raises(tables.RowDeleted):
        row.get_id()   

Helper Module Tests

Tests have been written for the helper module and they run within anvil_testing itself.
It is a slightly strange concept, having the test suite test itself.


Update:

Added the ability to test code that writes to tables and discard the changes made during the test.

from anvil_testing import helpers
import anvil.tables as tables

existing_row = table.get(text_col='existing_row')
previous_data = dict(existing_row)

with helpers.temp_writes():
    new_row = table.add_row(number_col=helpers.gen_int())
    new_row['bool_col'] = False
    existing_row['bool_col'] = not existing_row['bool_col']
    existing_row['number_col'] = helpers.gen_int()

with helpers.raises(tables.RowDeleted):
    new_row.get_id()

existing_row = table.get(text_col='existing_row')
assert existing_row['bool_col'] is previous_data['bool_col'], f"Row should not have been updated with a value, {existing_row['bool_col']=}"
assert existing_row['number_col'] == previous_data['number_col'], f"Row should not have been updated with a value, {existing_row['number_col']=}"

Also added some quick random number and string generators for use in testing tables.

from anvil_testing import helpers
helpers.gen_int(n_digits=5)
>> 17439
helpers.gen_str(n_characters=5)
>> '2aa67'
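
The generators pair nicely with temp_row so fixture values aren't hard-coded. Here is a quick sketch; my_table and its id and count columns are assumptions for illustration only:

from anvil.tables import app_tables
from anvil_testing import helpers


def test_row_roundtrip():
    row_id = helpers.gen_str(n_characters=8)
    count = helpers.gen_int(n_digits=3)

    # temp_row deletes the row again once the block exits
    with helpers.temp_row(app_tables.my_table, id=row_id) as row:
        row['count'] = count
        assert row['id'] == row_id
        assert row['count'] == count, f"{row['count']=} should equal {count=}"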

Keep it coming dude!

I’ve added a v1.0 tag to the current version. Here is the App ID if anyone wants to use it as a third party dependency: CCW3SYLSAQHLCF2A

I’ll try and keep updates on revisions documented on this thread for those crazy enough to care.

  • master branch will be the current working version
  • other branches will be feature branches
  • Major versions will not be backwards compatible
  • Minor versions will be backwards compatible


v1.01

Minor update that adds documentation on how to have a debug-only testing webpage.

After using this for a bit I’ve become happy with a slightly different testing folder structure. Here are some examples of my usage today:

From anvil_testing: [screenshot of the testing folder structure]

Another example: [screenshot of another app's testing folder structure]

I typically mark the testing package as private to keep it out of my autocomplete, which allows for a test module where I add a debug-environment-only server route to trigger the testing.
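
Roughly, that layout looks something like this (the module names here are only illustrative, not a prescription):

Server Code
===========
|-- module_a
|-- _testing              # private, keeps test code out of autocomplete
|   |-- test_page         # debug-only route module, shown below
|   |-- tests
|       |-- module_a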

My test module looks like this:

from anvil import app

"""
Expose an endpoint on the debug environment 
for running tests
"""
if 'debug' in app.environment.tags:
    import anvil.server
    test_endpoint = '/test'
    print('Tests can be run here:')
    print(f"{anvil.server.get_app_origin('debug')}{test_endpoint}")
    
    @anvil.server.route(test_endpoint)
    def run() -> anvil.server.HttpResponse:
        from . import tests
        import anvil_testing
        results = anvil_testing.auto.run(tests, quiet=False)
        return anvil.server.HttpResponse(body=results)

Giving you a testing page like this one: Test anvil_testing. This is actually running the live tests that are included in anvil_testing.

The documentation is included as the main page of anvil_testing here: anvil_testing


Great initiative! I was exactly looking for something like this. Will try to give it a shot in the upcoming weeks and provide some feedback.

Awesome! Let me know how it goes and how it can improve.