John Jarvis

One huge shell script

There are a lot of DevOps tools for managing both configuration and infrastructure as code. While some are extremely helpful for managing configuration for a large distributed service, not all are as helpful for deploying something simple in the cloud, or for a personal project that runs on a single virtual machine.

IaC or a configuration thingy: $job vs. personal stuff
Configuration management (Ansible, Chef, etc.)
CI pipelines for code
CI pipelines for infra
Code reviews
Changes without downtime
Docker registry
Prebuilt machine images
Cloud “appliances”, load balancers, hosted DB, etc.
CDN (CloudFlare/CloudFront)
Hosted Git
One friggin huge shell script

Over time, my approach to using tooling for my personal projects has changed. The lessons I picked up along the way resulted in what I have now: a single VM provisioned in Hetzner Cloud, where everything is configured in a single shell script and most services run in Docker containers.

Before converting to a single VM, I would do something like this any time I had an experiment or idea to play around with:

  1. Create a new AWS account.
  2. Create an S3 bucket for Terraform state.
  3. Write Terraform configuration for CloudFront, Lambda, EC2, Route53, etc.
  4. In the user-data script, configure the image from scratch to support running Docker and create systemd unit files.
  5. In the CI pipeline, copy a binary or tag a new Docker image for every push to master.
  6. Deploy the container using a public or private registry, pulling the new image on the instance with a systemd unit configured to manage the service.

For this to work, a lot of complexity was baked into the user-data script associated with the instance. It also meant keeping track of multiple AWS accounts (usually using free-tier resources) and then possibly paying money if an EC2 instance was required after the free tier ended. Often there would be a separate, ephemeral testing environment that spun up an identical instance and cloud configuration.

My current approach of a single VM eliminates the need for AWS and a lot of the complexity that comes along with it. For the configuration management part, I was confronted with the following questions:

  1. How do I deal with configuration drift if manual changes are made on the instance while it is running?
  2. Will it be possible to rebuild a single VM from scratch easily, and without much downtime?
  3. If there will be multiple containers running on the instance, does it make sense to use any container orchestration?
  4. How much should be automated in CI if I am the only person deploying changes?
  5. Should I depend on a container registry?

My approach to managing the single VM deployment

The questions above led me to the following approach for configuration management when deploying to a single VM:

The big shell script

I keep one repository named config-mgmt that has the following content:
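Roughly, the layout looks like this (only bootstrap and files/ come from the description below; the file names under files/ are illustrative):

```
config-mgmt/
├── bootstrap              # the big shell script
└── files/                 # configs rsync'd to the VM
    ├── Caddyfile
    └── myproject.service
```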

The bootstrap script, along with all files in files/, is rsync’d to the VM first. Then bootstrap is run, which does all the little shell maintenance stuff: ensuring services are enabled, installing and configuring things that run on the VM, reloading systemd, etc. The script ends up being around 500 lines of bash, with a dozen or so simple functions that are called in sequence at the end of the script.
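A skeleton of that structure might look like the following. The function names are illustrative, and the real commands are left as comments so the sketch is harmless to run as-is:

```shell
#!/usr/bin/env bash
# Hypothetical skeleton of the bootstrap script; function names are made up.
set -euo pipefail

log() { echo "==> $*"; }

config_base() {
  log "base packages"
  # apt-get install -y caddy docker.io rsync ...
}

config_caddy() {
  log "caddy"
  # install -m 0644 files/Caddyfile /etc/caddy/Caddyfile
  # systemctl enable --now caddy
}

config_myproject() {
  log "myproject"
  # install -m 0644 files/myproject.service /etc/systemd/system/
  # systemctl daemon-reload
  # systemctl enable --now myproject
}

main() {
  config_base
  config_caddy
  config_myproject
}

main "$@"
```

Adding a project then means adding one more config_ function and one more call in main.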

If I have a new project to deploy to my single VM, I simply create a new config_ function, add it to the script, and in most cases create a small bin/deploy shell script in the project’s repository that dumps and imports the Docker container to the host.
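The dump-and-import trick is docker save piped over SSH into docker load. A sketch of such a bin/deploy (the host, image, and unit names are assumptions, not the actual values; `run` only echoes each command here so the sketch is safe to execute, replace the echo with eval to deploy for real):

```shell
#!/usr/bin/env bash
# Hypothetical bin/deploy for a side project.
set -euo pipefail

HOST="vm.example.com"          # the single VM (assumed name)
IMAGE="myproject:latest"       # local image tag (assumed name)
UNIT="myproject.service"       # systemd unit on the host (assumed name)

run() { echo "+ $*"; }         # dry-run printer; use eval "$*" to execute

run "docker build -t $IMAGE ."
# Dump the image locally and import it on the host, no registry involved:
run "docker save $IMAGE | ssh $HOST docker load"
run "ssh $HOST sudo systemctl restart $UNIT"
```

Streaming the image over SSH is what makes the registry question above a non-issue.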

Using Docker and a single VM for side-projects

There is no Docker container orchestration in this setup, just a bunch of systemd unit files for starting Docker containers, with Caddy configurations to route traffic to them. I don’t run everything in Docker, however; on the VM I also run the following services:
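Each containerized project gets a small systemd unit plus a Caddy site block. A hypothetical pair, with myproject and the hostname as assumed names, might look like:

```ini
# /etc/systemd/system/myproject.service (hypothetical unit)
[Unit]
Description=myproject container
After=docker.service
Requires=docker.service

[Service]
# Remove any stale container, then run in the foreground so systemd supervises it.
ExecStartPre=-/usr/bin/docker rm -f myproject
ExecStart=/usr/bin/docker run --rm --name myproject -p 127.0.0.1:8080:80 myproject:latest
ExecStop=/usr/bin/docker stop myproject
Restart=always

[Install]
WantedBy=multi-user.target
```

```
# Caddyfile entry (hostname is an assumption); Caddy handles TLS automatically.
myproject.example.com {
    reverse_proxy 127.0.0.1:8080
}
```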

Rebuilding my single VM from scratch

This is important, as you don’t want to create a “snowflake” that will be impossible to recreate. To make this work in Terraform, I maintain a list of servers and an “active server”:

locals {
  servers       = ["lisa", "bart"]
  active_server = "lisa"
}

I can add a new server to the list to create a new one, then switch active_server to it, which points all of the service DNS entries at the new machine.
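A sketch of how the records can key off active_server, assuming the hcloud provider and a Hetzner DNS provider (resource names and attribute values here are illustrative, not the actual configuration):

```
# One server per name in the list.
resource "hcloud_server" "vm" {
  for_each    = toset(local.servers)
  name        = each.key
  image       = "debian-12"
  server_type = "cx22"
}

# Every service record points at whichever server is currently active.
resource "hetznerdns_record" "myproject" {
  zone_id = var.dns_zone_id
  name    = "myproject"
  type    = "A"
  value   = hcloud_server.vm[local.active_server].ipv4_address
}
```

Because only the locals change, switching servers is a one-line edit followed by terraform apply.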