Server CoreOS Migration
I can’t believe it’s been almost nine months since my last post. Nine months without designing another open-source board. Nine months of procrastination since I decided to block future projects on forcing myself to update my resumé and LinkedIn first. For the first few months, I was moving between places, busy with self-inflicted work at $DAYJOB, and coming to terms with the election results. For the last few months, I can’t really explain why I’ve just been frozen. Perhaps it’s the political climate. It’s terrifying how backwards we seem to be moving and how idiotic and/or self-serving a significant portion of the population is. Perhaps it’s that some of those who voted for the current administration were friends and family that I now have to keep at a distance. Regardless, it’s time for a change, I hope.
To kick off some of this change, I’ll do a post about software. It’ll be short, but it’ll at least be something. Generally speaking, I don’t consider pure software worth writing about, but perhaps I’ll change my mind.
Ubuntu Era
Upon moving this blog to GitHub Pages, my server was finally free for experimentation. If you trace back the commits, it all started as a fresh Ubuntu Server install and an exercise in containerizing every service except WireGuard. Under one Docker Compose instance, I added multiple services, including NGINX, Monero, and Minecraft. It might not have been particularly secure nor easy to back up and restore, but it worked, and services would come up on boot. At some point, I got all containers to stop gracefully so I could restart things faster.
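For flavor, here’s a minimal sketch of what that single Compose file looked like in spirit. The images, ports, and volume names are illustrative, not my actual configuration:

```yaml
# docker-compose.yml — illustrative sketch, not the original file
services:
  nginx:
    image: nginx:stable
    restart: unless-stopped        # come back up on boot
    ports:
      - "80:80"
      - "443:443"
    stop_grace_period: 30s         # give workers time to drain on shutdown
  monerod:
    image: ghcr.io/sethforprivacy/simple-monerod:latest
    restart: unless-stopped
    volumes:
      - monero-data:/home/monero/.bitmonero
  minecraft:
    image: itzg/minecraft-server:latest
    restart: unless-stopped
    environment:
      EULA: "TRUE"
    stop_grace_period: 60s         # let the world save before SIGKILL
volumes:
  monero-data:
```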
After moving to my new place, it was time to re-evaluate everything. To focus the server on containerized services alone, I switched to an OpenWrt router and moved WireGuard onto it. This had the added benefit that the entire configuration of a freshly wiped router could be scripted. It was also at that point that I learned about IPv6 delegated prefixes and could finally allow IPv6 connections to my services correctly.
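On OpenWrt, that kind of scripting boils down to a series of uci calls over SSH. A sketch of the idea, where the interface names, keys, and addresses are all placeholders:

```sh
# Sketch: provision WireGuard on a freshly wiped OpenWrt router.
# Keys, addresses, and ports below are placeholders.
uci set network.wg0=interface
uci set network.wg0.proto='wireguard'
uci set network.wg0.private_key='<server-private-key>'
uci set network.wg0.listen_port='51820'
uci add_list network.wg0.addresses='10.0.0.1/24'

uci add network wireguard_wg0                 # one peer section
uci set network.@wireguard_wg0[-1].public_key='<client-public-key>'
uci add_list network.@wireguard_wg0[-1].allowed_ips='10.0.0.2/32'
uci commit network

# Allow inbound IPv6 to a server inside the delegated prefix. The firewall
# accepts a negative prefix length ("/-64") to match only the host suffix,
# so the rule survives the ISP handing out a new prefix.
uci add firewall rule
uci set firewall.@rule[-1].name='Allow-HTTPS-to-server'
uci set firewall.@rule[-1].src='wan'
uci set firewall.@rule[-1].dest='lan'
uci set firewall.@rule[-1].family='ipv6'
uci set firewall.@rule[-1].proto='tcp'
uci set firewall.@rule[-1].dest_port='443'
uci set firewall.@rule[-1].dest_ip='::a:b:c:d/-64'
uci commit firewall
/etc/init.d/network restart
/etc/init.d/firewall restart
```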
From there, I focused on hardening the server’s security. Since Ubuntu Server didn’t support TPM-backed FDE, I had to use Ubuntu Desktop. Since Docker runs containers as actual root by default, I had to update each container to run as a non-root user and drop all root capabilities. In case of container breakout, I also mapped each non-root user to a non-existent user on the host. To keep file permissions sane, the container users would map to a real group on the host. It was also at this time that I added an SMB share to get Time Machine backups working. Once everything was up, I wiped the machine to verify that my setup scripts could reproduce it.
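In Compose terms, the per-service hardening looked roughly like this. The UID, GID, and image are illustrative (nginxinc/nginx-unprivileged stands in for any service that can run without root):

```yaml
# Hardened service sketch — UID/GID and image are illustrative
services:
  nginx:
    image: nginxinc/nginx-unprivileged:stable
    user: "20001:3000"             # UID unknown to the host, GID a real host group
    cap_drop:
      - ALL                        # no root capabilities, even if UID 0 leaked
    security_opt:
      - no-new-privileges:true     # block setuid escalation inside the container
    ports:
      - "80:8080"                  # unprivileged image listens above 1024
```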
CoreOS Era
A few months in, my server failed to reboot properly. It took a bit too long to realize the SSD was faulty. Seeing the mess that was my setup script, I decided to look into better options. I came upon Fedora CoreOS, which promised provisioning of an entire server from a single Ignition configuration file. Better yet, I could reinstall the OS without touching additional partitions and drives if needed, making recovery from any drive failure almost trivial. I started by generating the Ignition file that would properly configure and mount encrypted LUKS drives. Since Ignition doesn’t currently support Btrfs RAID, I had to use mdadm RAID.
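Ignition files are usually written as Butane YAML and compiled with the `butane` CLI. A sketch of the storage section for that layout, with the device paths and names as placeholders:

```yaml
# storage.bu — compile with: butane --pretty --strict storage.bu > storage.ign
variant: fcos
version: 1.5.0
storage:
  raid:
    - name: data
      level: raid1                       # mdadm mirror, since Btrfs RAID isn't supported
      devices:
        - /dev/disk/by-id/ata-DISK_A     # placeholder device paths
        - /dev/disk/by-id/ata-DISK_B
  luks:
    - name: data-crypt
      device: /dev/md/data
      wipe_volume: true
      clevis:
        tpm2: true                       # unlock via the TPM at boot
  filesystems:
    - path: /var/srv
      device: /dev/mapper/data-crypt
      format: btrfs
      wipe_filesystem: true
      with_mount_unit: true              # generate the systemd mount unit too
```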
Instead of Docker Compose, CoreOS prefers Podman and Quadlets. Podman runs non-root by default, and each user gets their own set of containers and networks. Quadlets essentially allow control of containers through systemd. Podman’s default security model makes what I was attempting with Docker Compose much easier. Instead of making a network for each group and manually messing with UIDs and GIDs, I could simply add a user for each group of services, and their networks would be isolated by default. I also don’t have to mess with running as non-root inside the containers, as rootless Podman maps the container’s root user to the user that started it. As a last barrier, folder permissions keep users from accessing other users’ data. All of this ensures that even in the event of container takeover and breakout, there isn’t much an attacker can do.
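A Quadlet is just a unit-style file that Podman’s systemd generator turns into a service. A sketch for one rootless service user, where the image, port, and the sibling web.network file are hypothetical:

```ini
# ~/.config/containers/systemd/nginx.container — Quadlet sketch
[Unit]
Description=NGINX reverse proxy

[Container]
Image=docker.io/nginxinc/nginx-unprivileged:stable
PublishPort=8080:8080
Network=web.network              # hypothetical .network Quadlet in the same directory
AutoUpdate=registry              # opt in to podman auto-update

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, the container starts with `systemctl --user start nginx.service`, and `loginctl enable-linger <user>` lets each service user’s containers come up at boot without a login session.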
Last but not least, I added scripts to update container images and to back up and restore data; a rough sketch of that flow is below. As a final test, I wiped everything, reprovisioned, and restored the backup. At the end of the day, I finally have a system I can be sure is not only secure but also easy to restore when needed.
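For completeness, here’s the shape of the per-user update-and-backup flow. The unit names, paths, and the choice of restic are assumptions standing in for my actual scripts:

```sh
#!/usr/bin/env bash
# Sketch of the update + backup flow for one service user.
# Unit names, paths, and restic are placeholders for the real scripts.
set -euo pipefail

podman auto-update                     # pull newer images for AutoUpdate=registry units
systemctl --user stop nginx.service    # quiesce the service before the backup
restic -r /var/srv/backups/web backup "$HOME/data"
systemctl --user start nginx.service
```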