Rebuilding the Homelab: Services & Software
Frequently, after expressions of "wtf?" and "that's crazy", I get asked what exactly it is that I'm hosting in my homelab. Well, let's walk through the software setup.
cluster management
I'm using Proxmox as my hypervisor to manage the servers in my cluster. It lets me spin up both full virtual machines and LXC containers from a pretty easy interface.
I also have a cluster of 8 Raspberry Pis running k3s, with one main node and 7 worker nodes - this is where I do all of my Kubernetes learning.
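Since most of my poking at the cluster goes through the Proxmox API anyway, here's a minimal sketch of listing every LXC container across the nodes with the proxmoxer Python library. The hostname, token name, and token value are placeholders, not my actual setup.

```python
from proxmoxer import ProxmoxAPI

# Connect to the cluster using an API token (host and token values are placeholders)
proxmox = ProxmoxAPI(
    "proxmox.example.lan",
    user="root@pam",
    token_name="homelab",
    token_value="xxxx-xxxx-xxxx",
    verify_ssl=False,
)

# Walk every node in the cluster and print its LXC containers
for node in proxmox.nodes.get():
    for ct in proxmox.nodes(node["node"]).lxc.get():
        print(f'{node["node"]}: {ct["vmid"]} {ct["name"]} ({ct["status"]})')
```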
Each machine in the cluster has a specific purpose, and each is vanity-named after a planet in the solar system. I know, how original, lol. The containers or VMs hosted on them are named after the moons of that particular planet.
- Saturn hosts the applications that run directly in LXC containers - mostly things I've come across thanks to https://helper-scripts.com or other places.
- Jupiter hosts my Docker hosts. I currently run 3 separate LXC containers there, each with 4 cores and 32GB of RAM allocated:
  - io hosts my self-hosted applications running in Docker containers
  - europa hosts all of my database and queue containers
  - callisto runs services that power my Twitch stream applications, as well as other internal stuff
- Pluto runs software that's central to the homelab servers themselves.
- Mars is a Mac Studio M1 Ultra (technically not part of the cluster, and not physically on the rack). Currently, it's running things like my Ollama server.
- Eris is my NVIDIA Jetson Orin Nano. I'm using it for edge compute for in-house AI projects (like my long-running, never-finished smart mirror project, lol).
io: docker home of self-hosted apps
On io, I am currently running a few applications that are becoming more and more central to my workflows.
- WebUI, my main interface for using open-source LLMs in a chat-like UI, has completely replaced my usage of OpenAI for anything. Llama 3 runs amazingly well on the M1 (there's a quick sketch of hitting the Ollama API just after this list).
- Planka, which is my kanban board and how I'm tracking my various tasks and todos.
- Budibase, to play with some no-code tools backed by databases I host.
- Homepage, which is my portal where I see a bunch of things at a glance and click through to all my applications.
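The chat UI is ultimately just talking to the Ollama server over on Mars, so here's a minimal sketch of hitting the Ollama REST API directly from Python. The hostname is a placeholder, and it assumes the llama3 model has already been pulled.

```python
import requests

# Ask the Ollama server for a one-shot completion.
# The hostname is a placeholder for wherever your Ollama instance lives.
resp = requests.post(
    "http://mars.example.lan:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": "Give me three ideas for naming homelab hosts.",
        "stream": False,  # return a single JSON object instead of a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```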
I also tend to throw quick application trials up on here, before assigning them a hostname and all that jazz.
europa: databases and queues
On here, I have all sorts of stuff running: MongoDB, PostgreSQL, MariaDB, Redpanda, RabbitMQ, CouchDB, and Neo4j. I'll dive into their uses in a future post, but I'm currently using all but 1 of these in at least one "production" application (Redpanda is the outlier - I'm testing it as a Kafka replacement before I start doing some fun experiments).
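Because Redpanda speaks the Kafka protocol, pointing a stock Kafka client at it is all it takes. Here's a minimal sketch using the kafka-python library - the broker address and topic name are placeholders, not my real setup.

```python
from kafka import KafkaProducer

# Redpanda is Kafka API-compatible, so a plain Kafka client works unchanged.
# Broker address and topic name are placeholders for your own setup.
producer = KafkaProducer(bootstrap_servers="europa.example.lan:9092")

producer.send("homelab-events", key=b"sensor-1", value=b'{"temp_c": 21.4}')
producer.flush()  # make sure the message actually leaves the buffer
producer.close()
```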
callisto: the stream projects docker host
Here's where I'm hosting my Twitch Golang bot and some backing services that control things like my Hue lights and my Nanoleaf panels. It's also where I start hosting stream projects before pushing them to production-ready places. Some services will always live here, like my not-yet-finished smart mirror project.
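As an example of what those backing services do, here's a minimal sketch of flipping a Hue light from Python with the phue library. The bridge IP and light name are placeholders, and the link button on the bridge has to be pressed the first time you connect.

```python
from phue import Bridge

# Bridge IP and light name are placeholders for your own setup.
# On the very first run, press the link button on the bridge before connecting.
bridge = Bridge("192.168.1.50")
bridge.connect()

# Turn the light on and dim it to roughly 60% brightness (0-254 scale)
bridge.set_light("Studio Lamp", "on", True)
bridge.set_light("Studio Lamp", "bri", 150)
```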
other software
Well, I have some other things running on other machines. For instance, I run two instances of Pi-hole on my Raspberry Pi cluster. I also run Home Assistant, which has my temperature/humidity sensors, as well as a monitor for the power my rack uses (in total, the bill to run my rack is roughly $11.00/month - not bad for how much I get out of it).
That power bill I mentioned also includes my TrueNAS machine, running six 12TB HDDs. I don't use the container system on there, given I have my Proxmox cluster, but I may if there's something I want to run that makes use of my NAS storage.
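Those power numbers come out of Home Assistant, and its REST API makes it easy to pull a reading from anywhere on the network. Here's a minimal sketch - the Home Assistant URL, the long-lived access token, and the sensor entity ID are all placeholders for my actual setup.

```python
import requests

# URL, token, and entity ID are placeholders for your own Home Assistant instance.
HA_URL = "http://homeassistant.example.lan:8123"
TOKEN = "long-lived-access-token"

resp = requests.get(
    f"{HA_URL}/api/states/sensor.rack_power",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
state = resp.json()
print(state["state"], state["attributes"].get("unit_of_measurement", ""))
```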
I also run multiple stages of backups:
- For daily backups and snapshots, I use Time Machine on macOS, and I take ZFS snapshots on my servers (there's a small sketch of the snapshot rotation just after this list).
- For weekly backups, I use Carbon Copy Cloner to make a bootable clone image of each macOS system, and I keep one active bootable weekly clone per machine. For the servers, I do a weekly Proxmox Backup as well.
- For monthly retention, I keep the last 4 weeks of cloned disk images for each Mac host. When I update the weekly clone, I also add the latest image to this drive and delete the oldest one. This drive gets stored offsite.
- I'm planning on reintroducing Backblaze into my setup for additional offsite backup storage.
- Lastly, I use Dropbox for a selection of files and assets that make sense to have in readily available, multi-system cloud storage (but understand, Dropbox is not a backup system, though it is better than nothing).
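The daily ZFS snapshots are just a scheduled job. Here's a minimal Python sketch of the idea - take a dated snapshot, then prune anything older than two weeks. The dataset name is a placeholder, and this is a simplification of whatever tooling you'd actually trust for retention.

```python
import subprocess
from datetime import datetime, timedelta

# Dataset name is a placeholder; the real job runs per-dataset on each server.
DATASET = "tank/appdata"
KEEP_DAYS = 14

# Take today's snapshot, e.g. tank/appdata@daily-20240601
today = datetime.now().strftime("%Y%m%d")
subprocess.run(["zfs", "snapshot", f"{DATASET}@daily-{today}"], check=True)

# List existing snapshots and destroy any daily-* snapshot older than the cutoff
cutoff = datetime.now() - timedelta(days=KEEP_DAYS)
out = subprocess.run(
    ["zfs", "list", "-H", "-t", "snapshot", "-o", "name", "-r", DATASET],
    check=True, capture_output=True, text=True,
).stdout
for name in out.splitlines():
    tag = name.split("@", 1)[1]
    if tag.startswith("daily-"):
        taken = datetime.strptime(tag.removeprefix("daily-"), "%Y%m%d")
        if taken < cutoff:
            subprocess.run(["zfs", "destroy", name], check=True)
```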
I also have OpenTelemetry collectors running on each rackmount host, and I send all of that data over to Honeycomb, where I can keep track of metrics and traces for system events. It's already been super useful for debugging the occasional errant spike in host-level system resources.
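The collectors themselves are configured in YAML, but to show the shape of what ends up in Honeycomb, here's a minimal Python sketch that sends a single test span over OTLP. The API key, service name, and span attributes are placeholders.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# API key and service name are placeholders for the real Honeycomb environment.
exporter = OTLPSpanExporter(
    endpoint="https://api.honeycomb.io/v1/traces",
    headers={"x-honeycomb-team": "YOUR_HONEYCOMB_API_KEY"},
)

provider = TracerProvider(resource=Resource.create({"service.name": "homelab-host"}))
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("homelab.example")
with tracer.start_as_current_span("disk-usage-check") as span:
    span.set_attribute("host.name", "saturn")  # placeholder attribute

provider.force_flush()  # make sure the span is exported before the script exits
```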
So yeah, this is my humble homelab to date, and it has been fantastic for all sorts of experimentation and running software. If you're running a homelab, or you're curious about the stuff I've got going, hit me up on X (formerly Twitter).