I describe a simple method I use to self-host portable server applications using tmux and bubblewrap.
Let's say you want to self-host a Gemini capsule and a weblog, perhaps on a Raspberry Pi or a VPS. Typically, you'd install (or get a pre-installed) operating system like Debian/Ubuntu. You might then `apt install` a webserver like nginx, and `pip3 install` a Gemini server like JetForce.
Some problems come to mind with this kind of setup:
- You need root access. This isn't usually a problem, but for the less technical it can be a foot-gun. Even for the technical: if, like me, you type `sudo` from muscle-memory without due care, you can hose your system.
- When you install things with package managers, files get sprayed all over your file system. At this point you lose some control over the system: were any services installed? Were any suid binaries installed? What access do they have? Do they run on boot?
- If you `pip3 install` a package, how do you secure it? How do you make it run on boot? These require some deep technical skills.
- Over the years, you will have to re-learn and re-setup your machine, because SysV Init changes to systemd (for example), so oops, /etc/init.d/ stuff doesn't work anymore and you'd best learn to use journald to read logs, etc. More hurdles for the less technical self-hosters.
- If you want to move to another OS or rebuild the machine, it's a huge barrier, unless you've been diligent and kept up-to-date automation scripts (e.g. Ansible playbooks). I've tried, but I often install things just to try them out, then end up depending on them; my automation scripts fall way behind, I give up, and I end up with a snowflake server.
Getting back some control
In order to get back some control, I thought about how nice it would be if:
- The OS is an immutable image, like with Android. I "flash" it, then don't touch it. Every so often, I "flash" updates.
- Services I want to run are sandboxed apps (like on Android). They get their own "home" directory, have no access to the directories of other services, and read-only access to system files.
- They can start on boot, and get restarted if they crash.
- They are the type of portable services that you unzip-and-run, preferably Go or C based statically compiled binaries. No root access needed to install them, just unzip into a directory and run them.
- The services I install and run are kept independent of the system services, so that when I list services, only my user-installed ones are listed.
The current solution
This is my solution for implementing the above ideas:
- I install the unzip-and-run services into subfolders of `~/services`. I strongly prefer services written in compiled languages with no system dependencies, e.g. Caddy server, HAProxy, or SyncThing.
- I use `tmux` to run the services. If you don't know tmux at all, this might seem like a barrier or added complexity, but learning it is both easy and worthwhile. A tmux session called "services" is started on boot, and each service runs in a named tmux window, e.g. the `caddy` web server runs in a window called "caddy". When I log out, the services continue to run.
- I can list running services by listing the tmux windows: `tmux list-windows -t services`
- I can stop a service by either sending a Ctrl-C to its window (`tmux send-keys -t services:myservice C-c`), or by killing the window (`tmux kill-window -t services:myservice`).
- When I start a service, I run it in a while loop so that it restarts if it crashes, after a short delay.
- When I start a service, I `tee` the output both to a log file and to the console. This way, I can `tmux attach -t services` and interact with each service from the console, or I can parse the corresponding log files.
- When I start a service, I open it with bubblewrap (i.e. `bwrap`). You can read about bubblewrap in the Arch Wiki. It is my favourite security tool on Linux, and I use it on both desktop and server. It creates a sandbox much like a chroot, but with added control: mounting tmpfs directories, read-only system directories, changing the user/pid/network namespaces for isolation, and much more. This is similar in some ways to a container, but it's much more lightweight and lets the processes share the OS and system files.
- Using bwrap, for each service I mount a new `/home/user/app` directory in the sandbox, which maps to the folder of the service itself (`~/services/myservice`), and the `/home/user` folder in the sandbox is mounted from a subfolder of `~/sandboxes`. Say the service wants to store files in `~/.config/myservice/myservice.conf`; the actual file then gets written to `~/sandboxes/home/user/services/service/.config/myservice/myservice.conf`. Each service sees a different `~/.config` folder and thus cannot access the others' files.
- To start the services on boot, I use a nifty feature of `cron` by adding this to my crontab: `@reboot cd ~/services/ && ./start`
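The while-loop and `tee` steps above can be sketched as a small wrapper. This is a minimal illustration, not the author's actual script: the real setup also runs the loop inside a tmux window and wraps the command with bwrap (the flags shown in the comment are representative, not exact).

```shell
#!/bin/sh
# Minimal restart-and-log wrapper (illustrative). The real setup runs this
# inside a tmux window and wraps the command with bwrap, roughly:
#   bwrap --ro-bind / / --dev /dev --proc /proc --tmpfs /tmp \
#         --bind ~/sandboxes/myservice /home/user \
#         --bind ~/services/myservice /home/user/app \
#         /home/user/app/start.sh
run_service() {
    name="$1"; shift
    log="/tmp/log/$name.log"
    mkdir -p /tmp/log
    tries=0
    while [ "$tries" -lt 3 ]; do       # the real loop runs forever
        "$@" 2>&1 | tee -a "$log"      # output goes to log file AND console
        echo "$name exited; restarting" | tee -a "$log"
        sleep 1
        tries=$((tries + 1))
    done
}

# Demo with a stand-in "service" that prints a line and exits:
run_service demo sh -c 'echo hello from demo'
```

Because the service's stdout goes through `tee`, you can watch it live in the attached tmux window while the log file accumulates the same lines.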
In my hypothetical scenario from the introduction, this is how I'd go about setting up my server:
- Install a fresh Linux OS (my preference is Alpine Linux for its simplicity and security). Alpine can be made to run from RAM, which is very useful on a Raspberry Pi as it saves your microSD card from writes, massively improving reliability.
- Install tmux and bwrap if they aren't already installed.
- Log in as the user created during install. Download the `services` folder linked above into the user's home folder.
- Add the line from the `crontab` file to the user's crontab.
- To add a service, download its binaries into a subfolder of `~/services/`, e.g. `~/services/caddy/`, and make a script file called `start.sh` to start it, i.e. `~/services/caddy/start.sh`. Don't forget to `chmod +x start.sh`.
- Start the service: `~/services/bws start caddy`
- The logs are written to `/tmp/log/caddy.log`, but you can also attach a tmux session using `~/services/bws attach caddy` and view them there.
- To auto-start the service on boot, add the line to the `start` script: `~/services/bws start caddy`
- A good way to explore the sandbox of a running service is to run the `ttyd` service and poke around.
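As an illustration of the `start.sh` step for the caddy example, the snippet below writes one out. The binary name and flags are assumptions (Caddy happens to use `run --config`); adjust for whatever service you actually download:

```shell
#!/bin/sh
# Write an illustrative ~/services/caddy/start.sh and mark it executable.
mkdir -p "$HOME/services/caddy"
cat > "$HOME/services/caddy/start.sh" <<'EOF'
#!/bin/sh
# Run the bundled binary from the service's own folder, so relative
# paths (config file, data directory) resolve inside the sandbox.
cd "$(dirname "$0")"
exec ./caddy run --config ./Caddyfile
EOF
chmod +x "$HOME/services/caddy/start.sh"
```

The `cd "$(dirname "$0")"` line is the important part: it keeps the service self-contained in its own folder regardless of where it is launched from.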
Here are some benefits I find for using this method:
- It is simpler to understand and see what's going on. My user-selected services run separately from system-level services, so I don't need to deal with the latter's complexity.
- It can be run root-less (I remove `doas` to harden the system). I still use `su` to install updates, set up a firewall or `fail2ban`, or install any system-wide packages.
- It avoids the issue of files from different services being owned by different users, which makes backups and sharing files between services difficult.
- Its security is easier to reason about, and the attack surface is reduced; e.g. directory traversal bugs can reach far less of the system. I can also run "untrusted" binaries knowing the damage would be limited.
- I can switch to another OS with minimal fuss: I simply copy my home folder across and add the crontab line. This method should work on most Linux OSes, maybe even the BSDs and macOS (barring the bubblewrap sandbox, which would need to be replaced with jails or similar).
- This method could potentially be used by less technical users, and users without root access. I've used it at work on some servers I don't have root access to.
This simple method is only really useful for simple self-hosting. Here are some downsides I found:
- Services that need to bind to privileged ports (like 80 and 443 for web) would need to run as root. I mitigate this by adding `iptables` rules to forward port 80 to port 8080, for example; then the service can run on port 8080.
- You need to manually update the services when new versions are available. You could automate it with a bash script and cronjob though, and with the reduced attack surface, you may not need to be as pedantic about updates.
- Services that need to write to system directories (like /var or /etc) can be problematic. It is possible to use an overlayfs file system to allow writes to system folders (that get saved elsewhere), but this adds complexity.
- Services that use PHP, Python, or other interpreted languages can be tricky to run, since you'd typically need to install those languages system-wide, with potential version conflicts, etc. If you're just running one Python service though, it's not too much of a problem.
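For reference, the port-forward mitigation mentioned above can be done with a nat-table redirect rule. This is a sketch for IPv4/iptables with the example port numbers from the text; it must be run as root and won't persist across reboots unless saved:

```shell
# Redirect inbound TCP port 80 to 8080 so the service can bind an
# unprivileged port (run once as root).
iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
```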
Some features I'd like to explore in future:
- Ability to detect when a service is using too much CPU/RAM/disk and stop it. Running `earlyoom` can mitigate the RAM issue, for example.
- Make the service logs readable by `fail2ban`, a very useful and important security tool for self-hosters.
- Ability to adjust the sandbox per service, e.g. some services don't need internet access, so `bwrap --unshare-net` could cut it off.
- Log rotation or trimming to keep the log files from getting too big
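On the log-size point, a minimal trimming sketch could run daily from cron. The log path matches the setup above, but the 1000-line cutoff, the `trim-logs` name, and the demo log are all arbitrary choices for illustration:

```shell
#!/bin/sh
# Keep only the last 1000 lines of each service log in /tmp/log.
# Could be scheduled from the crontab, e.g.:  0 3 * * * ~/services/trim-logs
mkdir -p /tmp/log
seq 5000 > /tmp/log/demo.log        # demo only: create an oversized log
for f in /tmp/log/*.log; do
    [ -f "$f" ] || continue
    tail -n 1000 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```

Trimming with `tail` loses the older lines entirely; proper rotation (e.g. keeping compressed older copies) would need a tool like logrotate, at the cost of more setup.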