I preach a deployment-first style of development. I’ve found it particularly useful for side projects where you might just want to occasionally drop in and add a feature without too much hassle. Lowering the bar to a first commit in a single work session can be a huge boost to a project’s development speed. When I have a spare few hours on a weekend, I want to create value, not debug a build pipeline just to ship a single feature.

Painful things aren’t done often. Frequent deployments will reveal pain points in your deployment process that must be rectified. You have to remove the pain & fear of deploying: you should be confident that you can run an update with little to no issues in production.

I’ve tried a lot of different solutions - Rancher, minikube, custom shell scripts and Make commands, CLI libraries like flightplanJS, and even git hooks. None of them hit all the points I really wanted, and I usually ended up back at manually handling some element of the deployment process. That’s okay for the most part, but I wanted to create a smoother process.

Things I wanted from an ideal setup:

  • Push to deploy
  • Heavy integration with testing
  • Free
  • OSS friendly

GitHub Actions

GitHub Actions dominates the CI/CD space in a way that I don’t think most people expected. It should’ve been obvious, though: GitHub is the de facto standard for open source. Adding a CI/CD element that flawlessly integrates with your code and automatically gives every repository access to an entire ecosystem of tooling is a no-brainer.

My current production deployment setup relies on containrrr/watchtower (formerly v2tec/watchtower) to detect new Docker images pushed to Docker Hub and automatically pull and run the latest tag. This is integrated with make commands for various app tasks: I can run make deploy, which deploys the latest server and UI builds.
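For reference, here is a minimal sketch of what running watchtower on the production host can look like. The poll interval and compose layout are assumptions for illustration, not my exact production config:

```yaml
# Hypothetical docker-compose service for watchtower on the production host.
# It watches running containers and re-pulls any whose tag has a newer image.
services:
  watchtower:
    image: containrrr/watchtower
    volumes:
      # watchtower needs the Docker socket to inspect and restart containers
      - /var/run/docker.sock:/var/run/docker.sock
    command: --interval 300   # poll the registry every 5 minutes (assumed value)
    restart: unless-stopped
```

With this running, pushing a new :latest image to Docker Hub is all it takes to roll the app forward.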

Here are the specific Makefile targets for building the Docker images in question.

docker-ui:
	docker build -f ./frontend/Dockerfile -t openmtg/edhgo-ui:$(BUILD_TAG) ./frontend

docker-server:
	docker build -t openmtg/edhgo-server:$(BUILD_TAG) . 

Adding CI/CD was as easy as telling the workflow what make targets to run.

    - name: Build the Docker UI image
      run: make docker-ui

These are quick, effective, and fairly reliable smoke tests. If my Docker containers aren’t building, my confidence in a pull request obviously drops to about zero. I can hop in, make a change, run my tests locally, push, see the images build successfully in GitHub, and be a little bit more sure that they’re going to deploy safely to production. Not 100%, but with this setup you can start to approach 98%.

This won’t auto-deploy them, though. Next we need to call our deployment steps.

Push to Docker Hub

You either need a private repository, which will cost money, or you’ll need to make these public repositories. In my case, they’re public repositories, which Docker Hub hosts for free - the free tier only includes a limited number of private repositories.

We need to call our make deploy command for each successfully built pipeline on main builds, but not on pull_request builds.
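A sketch of what that conditional push can look like as workflow steps. The secret names (DOCKERHUB_USERNAME, DOCKERHUB_TOKEN) are assumptions, and the step pushes with docker push directly rather than make deploy-ui, since the deploy targets depend on the interactive confirm guard, which would hang in CI:

```yaml
    - name: Log in to Docker Hub
      if: github.event_name == 'push' && github.ref == 'refs/heads/main'
      run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USERNAME }}" --password-stdin

    - name: Push the Docker UI image
      if: github.event_name == 'push' && github.ref == 'refs/heads/main'
      run: docker push openmtg/edhgo-ui:latest
```

The if condition is what keeps pull_request builds to the build-only smoke test while pushes to main go all the way to Docker Hub, where watchtower picks them up.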

Deployment-related Makefile commands

Reference

name: CI - Build Docker UI

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:

  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v3
    
    - name: Build the Docker UI image
      run: make docker-ui

name: CI - Build Docker Server

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]

jobs:

  build:

    runs-on: ubuntu-latest

    steps:
    - uses: actions/checkout@v3
    
- name: Build the Docker Server image
      run: make docker-server

.PHONY: all build persistence clean test test-api test-unit run generate migrate-prod docker build-linux docker-ui docker-server deploy deploy-ui deploy-server confirm

GOCMD=go
GOBUILD=$(GOCMD) build
GOCLEAN=$(GOCMD) clean
GOTEST=$(GOCMD) test
GOGET=$(GOCMD) get
BINARY_NAME=edhgo
BINARY_UNIX=$(BINARY_NAME)_unix
# Tag all releases as latest for watchtower detection.
BUILD_TAG=latest

all: test build
build:
	$(GOBUILD) -o $(BINARY_NAME) -v
test-api:
	$(GOTEST) -v ./server/... -race
test: test-api
test-unit:
	$(GOTEST) -v ./pkg/... -race
clean:
	$(GOCLEAN)
	rm -f $(BINARY_NAME)
	rm -f $(BINARY_UNIX)
run:
	$(GOCMD) run ./
generate:
	$(GOCMD) run github.com/99designs/gqlgen
# Migrate will run migrations at your env's EDHGO_PG_URL value.
# This is how we run prod migrations, so BE CAREFUL ABOUT RUNNING THIS COMMAND.
# ALWAYS TEST MIGRATIONS LOCALLY FIRST.
migrate-prod: confirm
	migrate -path ./persistence/migrations -database $(EDHGO_PG_URL) up
docker: docker-ui docker-server
build-linux:
	CGO_ENABLED=0 GOOS=linux GOARCH=amd64 $(GOBUILD) -o $(BINARY_UNIX) -v
docker-ui:
	docker build -f ./frontend/Dockerfile -t openmtg/edhgo-ui:$(BUILD_TAG) ./frontend
docker-server:
	docker build -t openmtg/edhgo-server:$(BUILD_TAG) .
deploy: confirm deploy-server deploy-ui
deploy-ui: confirm docker-ui
	docker push openmtg/edhgo-ui:$(BUILD_TAG)
deploy-server: confirm docker-server
	docker push openmtg/edhgo-server:$(BUILD_TAG)
persistence:
	docker-compose -f dev.docker-compose.yml up postgres redis
confirm:
	@echo -n "Are you sure? [y/N] " && read ans && [ $${ans:-N} = y ]