I've written at length about a deployment-first style of development workflow. I've found it particularly useful for side projects, where you might want to occasionally drop in and add a feature without too much hassle. Lowering the bar to a first commit in a single work session can be a huge boost to development speed.
I've tried a lot of different solutions: Rancher, minikube, custom shell scripts and Make commands, CLI libraries like flightplanJS, and even git hooks. None of them hit all the points I wanted, and I usually ended up back at manually handling some part of the deployment process. That's okay for the most part, but I wanted to create a smoother process.
Things I wanted from an ideal setup:
- Push to deploy
- Heavy integration with testing
- OSS friendly
GitHub Actions dominates the CI/CD space in a way I don't think most people expected. It should've been obvious, though: GitHub is the de facto home of open source, so adding a CI/CD offering that integrates seamlessly with your code and automatically gives you access to the entire ecosystem of tooling is a no-brainer.
My current production deployment setup relies on v2tec/watchtower to detect new Docker images pushed to Docker Hub and automatically pull and run the `latest` tag. This is integrated with `make` commands for various app tasks: I can run `make deploy`, which deploys the latest server and UI builds.
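As a sketch of the Watchtower piece: it typically runs as a container with access to the Docker socket so it can restart the containers it watches. The service definition below is illustrative only; the polling interval and the `containrrr/watchtower` image (the current home of the v2tec project) are my assumptions, not necessarily this setup's exact configuration:

```yaml
# docker-compose.yml (fragment): run Watchtower alongside the app so it
# polls the registry and restarts containers when a new image appears.
services:
  watchtower:
    image: containrrr/watchtower  # successor to the original v2tec/watchtower image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock  # lets it manage other containers
    command: --interval 300  # check for new images every 5 minutes
```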
Here are the specific Makefile targets for building the Docker images in question:

```makefile
docker-ui:
	docker build -f ./frontend/Dockerfile -t openmtg/edhgo-ui:$(BUILD_TAG) ./frontend

docker-server:
	docker build -t openmtg/edhgo-server:$(BUILD_TAG) .
```
Adding CI/CD was as easy as telling the workflow which make targets to run:

```yaml
- name: Build the Docker UI image
  run: make docker-ui
```
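Pulled together, a minimal workflow file might look like the following. The file name, job structure, and checkout step are my guesses at the shape of such a workflow, not the project's exact config:

```yaml
# .github/workflows/ci.yml (sketch): build both images on pushes to
# main and on pull requests, reusing the Makefile targets above.
name: CI
on:
  push:
    branches: [main]
  pull_request:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build the Docker UI image
        run: make docker-ui
      - name: Build the Docker server image
        run: make docker-server
```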
These are quick, effective, and fairly reliable smoke tests. If my Docker containers aren't building, my confidence in a pull request drops to about zero. I can hop in, make a change, run my tests locally, push, see the images build successfully in GitHub, and be a little more sure that they're going to deploy safely to production. Not 100%, but with this setup you can start approaching 98%.
This won't auto-deploy them, though. Next we need to call our deployment steps.
Push to Docker Hub
You either need a private repository, which costs money, or you need these to be public repositories. In my case they're public, so I get a limited number of containers to use for free.
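Pushing from CI also means authenticating first. A hedged sketch of the push steps, assuming Docker Hub credentials are stored as repository secrets named `DOCKERHUB_USERNAME` and `DOCKERHUB_TOKEN` (my naming convention, not prescribed by GitHub) and that `BUILD_TAG` is set in the environment:

```yaml
- name: Log in to Docker Hub
  run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin
- name: Push images
  run: |
    docker push openmtg/edhgo-ui:$BUILD_TAG
    docker push openmtg/edhgo-server:$BUILD_TAG
```

Passing the token via `--password-stdin` keeps it out of the shell history and the workflow log.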
We need to call our `make deploy` command for each successfully built pipeline on `main` builds, but not on `pull_request` builds.
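GitHub Actions exposes the triggering event and ref in the `github` context, so the deploy step can be gated with an `if:` condition. A minimal sketch, assuming `make deploy` is the target described earlier:

```yaml
- name: Deploy
  # Only deploy on direct pushes to main, never on pull_request runs
  if: github.event_name == 'push' && github.ref == 'refs/heads/main'
  run: make deploy
```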