Now that I’m back at a proper keyboard instead of a phone, here’s a little more detail of how I do things…
I use a Dockerfile that’s almost identical to the one you linked to above:
```dockerfile
FROM python:3

# Add the Amazon Corretto apt repository (for the JDK the app server needs)
RUN apt-get update && apt-get install -y \
        software-properties-common \
        postgresql-client \
    && wget -O- https://apt.corretto.aws/corretto.key | apt-key add - \
    && add-apt-repository 'deb https://apt.corretto.aws stable main'

# Install Chrome; a plain dpkg -i fails on missing dependencies,
# so --fix-broken pulls them in and completes the install
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb && \
    (dpkg -i google-chrome-stable_current_amd64.deb || apt-get install -y --fix-broken) && \
    rm google-chrome-stable_current_amd64.deb

RUN apt-get update && apt-get install -y java-1.8.0-amazon-corretto-jdk ghostscript

COPY anvil_app/requirements.txt ./
RUN pip install -r requirements.txt

# Run the app server once at build time so it downloads its dependencies
# into the image rather than on first startup
RUN anvil-app-server || true

VOLUME /apps
WORKDIR /apps

RUN mkdir /anvil-data
RUN useradd anvil
RUN chown -R anvil:anvil /anvil-data
USER anvil
```
I generally have packages that need to be installed on the server, and those are defined in a requirements.txt file, so you can see a few extra lines for that.
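For illustration, a requirements.txt for this setup might look something like this (the package names below are just examples, not from my actual app; note that anvil-app-server itself has to be in there, since the Dockerfile above relies on pip having installed it):

```
anvil-app-server
pandas
psycopg2-binary
```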
I’ve also removed the ENTRYPOINT and CMD commands because I handle those using docker compose. Here’s a typical docker compose file for the anvil app server:
```yaml
version: '3'
services:
  anvil:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - "3030:3030"
    volumes:
      - ./anvil_app:/apps/<my app>
      - ./anvil_app/.anvil-data:/anvil-data
    command: anvil-app-server --data-dir /anvil-data --app /apps/<my app> --uplink-key ${ANVIL_UPLINK_KEY} --auto-migrate
    environment:
      - ANVIL_UPLINK_KEY=${ANVIL_UPLINK_KEY}
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost:3030 || exit 1"]
      interval: 20s
      timeout: 10s
      retries: 5
```
I use a .env file to hold various config options - e.g. ANVIL_UPLINK_KEY - but you could instead hard-code that into the command.
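Docker Compose reads a .env file sitting next to the compose file automatically, so it's just key=value pairs (the value here is obviously a placeholder):

```
ANVIL_UPLINK_KEY=changeme
```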
If I need convenient access to the database, I use one of the official postgres docker images: I add a 'database' service to the docker compose file and add the --database option to the app server startup command. I often also add a pgadmin container and docker compose service for that.
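As a sketch, those extra services might look like this (the service names, credentials and ports here are examples, not from my actual setup - and check the app server docs for the exact format the --database connection URL expects):

```yaml
services:
  database:
    image: postgres:15
    environment:
      - POSTGRES_USER=anvil
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      - POSTGRES_DB=anvil
    volumes:
      - ./pgdata:/var/lib/postgresql/data
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - "5050:80"
    environment:
      - PGADMIN_DEFAULT_EMAIL=admin@example.com
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_PASSWORD}
```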
For a production system, I use either GitLab CI/CD or GitHub Actions to build my docker images and store them in either the GitLab Container Registry or the GitHub Container Registry. A new build is triggered whenever a change is merged to the main branch of the app.
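For the GitHub flavour, a minimal workflow along these lines would do it (this is a sketch, not my actual pipeline - the tag name is an example):

```yaml
name: build
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
    steps:
      - uses: actions/checkout@v4
      - uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - uses: docker/build-push-action@v5
        with:
          context: .
          push: true
          tags: ghcr.io/${{ github.repository }}:latest
```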
On the production host, I have the docker compose and .env files and nothing else - the only difference being that the docker compose file has an image entry pointing to whichever registry I used, instead of the build options.
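In other words, the anvil service on the production host starts like this (registry path is an example):

```yaml
services:
  anvil:
    image: ghcr.io/<my user>/<my app>:latest
    # everything else (ports, volumes, command, healthcheck) stays the same;
    # the build: section is simply removed
```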
Finally, I add Watchtower and Portainer services to the docker compose file. Watchtower monitors the registry and automatically pulls the image and restarts the container whenever it detects a change. Portainer gives me a convenient web front end to the containers so I can start/stop them and view the logs from my browser.
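Those two services are standard off-the-shelf images; a typical way to wire them in looks roughly like this (the polling interval and ports are examples):

```yaml
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 300   # poll the registry every 5 minutes
  portainer:
    image: portainer/portainer-ce
    ports:
      - "9000:9000"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./portainer-data:/data
```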
My workflow is then to work on my app, merge into main when I’m happy with my changes and watch my production app refresh and restart automatically a few minutes later.