My HomeLab

Working for Red Hat and focusing on Red Hat products, specifically OpenShift, I try to utilize the same products at home. This gives me an environment similar to what my customers use and keeps my mind sharp on various products (and potential pitfalls). Every VM and host runs CentOS 8 (deployed a year ago, before the CentOS Stream fiasco began). When I upgrade, I will be deploying CentOS Stream and not any of the new upstarts such as Rocky Linux. Maybe there's a blog post in there somewhere later.

Hardware

Currently my homelab is on revision 5428, which consists of a single "server". This server is an AMD Ryzen 2700X with 64GB of RAM, backed by a 1TB NVMe SSD. Storage for my environment is handled by a Synology DS1618+ filled with 16TB Seagate Exos drives. This nets me ~43TB usable, of which I'm currently consuming about 20TB (I run a large media server). I also have a single 2TB RAID1 SSD array for VM storage over NFS, currently served over 1Gbps but upgrading to 10Gbps soon. I already have the 10GbE card for the Synology; I just need to acquire a 10GbE card for the server (or servers, if I upgrade anytime soon).

For networking, I have a dumb 24-port NetGear switch and a Synology RT2600AC router. This router is pretty dope and covers all the functionality I need, which is really just a DNS server I don't have to install myself. It also manages DHCP for the network and gives pretty decent data visualization.

Storage

As mentioned above, I have a Synology DS1618+ unit. Before purchasing an actual NAS, I was using mdadm to manage a software RAID and manually configured all of the shares I needed. Later, when I started storing our family photos and videos, it became a pain to configure Samba shares so my wife's Windows laptop could access them. In the long run, it was just easier to get a unit that did all of this for me (and could do a bit more if needed). So far it has worked out amazingly, and I wouldn't go back to managing my own storage.

This unit has four RJ45 ports that can all be configured if required, and it has an expansion slot for either a 10GbE NIC or, I believe, an M.2 NVMe SSD expansion card (I chose the 10GbE route).

Virtualization

For virtualization, the single server has oVirt (the upstream project for Red Hat Virtualization) installed and runs as a single host, with the hosted-engine running locally. The configuration is a single-node cluster in a single datacenter, which presents some challenges I will outline later on.
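
For anyone curious, the initial setup is roughly the following (a sketch; hosted-engine --deploy is interactive and walks you through storage, network, and engine VM sizing, so the details depend on your answers):

```
# Install the hosted-engine setup tooling on the CentOS host
yum install -y ovirt-hosted-engine-setup

# Interactive deploy: creates the engine VM and places it on the
# storage domain you point it at (in my case, storage on this host)
hosted-engine --deploy

# Afterwards, confirm the engine VM is up and healthy
hosted-engine --vm-status
```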

oVirt has a pretty awesome UI and functions quite well as far as virtualization platforms go, especially with it being open source. Currently, these are the VMs that live on this node:

  • hosted-engine (Virtualization Manager)
  • bastion (Proxy and only VM exposed to the internet)
  • Plex
  • Minecraft
  • OKD Nodes (3 nodes in total)
  • Services (just a VM for testing things)

Containerization

Continuing with the usage of open source projects, I run my own OKD 3.11 cluster (OKD, formerly OpenShift Origin, is the community distribution of Red Hat OpenShift). For those that don't know or have been living under a rock, Kubernetes is a container orchestration system and has more or less gone from buzzword to the standard for application platforms. OpenShift is a distribution of Kubernetes with quite a few handy add-ons built in. Some of these may not make sense to small teams or solo developers, but the features OpenShift provides for an enterprise (or any business looking for some real features) are hard to match.

OpenShift

First and foremost, OpenShift puts container security at the top of everything it does. Containers provide some isolation just by existing, but they can easily be exploited when running on plain Docker or a more vanilla Kubernetes platform. By default, OpenShift enforces that containers/pods cannot run as root, which already puts it a leap ahead of most Kubernetes deployments in the wild. It also brings some challenges for newcomers to the platform: most people getting started find a Kubernetes tutorial and just apply the code, then are shocked to discover that most tutorials assume the container will run as root (although this is getting better as time goes on). There are other nice security features, but I will leave those for another post.
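
Here is roughly what that looks like in practice (a sketch; the image and names are just illustrative):

```
# Deploy a stock image that expects to run as root (nginx binds port 80
# and writes to /var/cache/nginx, both of which require root).
oc new-app nginx --name=nginx-test

# Under the default "restricted" SCC the container is started with a
# random non-root UID, so the pod crash-loops with permission errors.
oc get pods
oc logs dc/nginx-test

# The quick-and-dirty escape hatch (fine in a lab, not recommended in
# production) is to let the project's default service account run
# images as any UID:
oc adm policy add-scc-to-user anyuid -z default
```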

Another neat built-in item is the pairing of an image registry with a build system. Most developers wouldn't think of a build system as an important part of Kubernetes, since they most likely have Docker installed locally. But what if you work in an environment where you can't install Docker locally, or don't have the option of a VM running Docker to do builds for you? That is where OpenShift's build system comes in: it can build container images from source, a Dockerfile, or other inputs, and push them into the registry built into OpenShift. You don't need to worry about spinning up your own image registry when OpenShift just gives you one. Obviously, with multiple clusters this becomes its own interesting problem and a central registry (Quay.io maybe?) would be more suitable. But for the sake of a homelab, this is more than sufficient.
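
To make that concrete, a typical in-cluster build looks something like this (a sketch using the sclorg sample app; the app name is illustrative):

```
# Source-to-Image (S2I) build: OpenShift detects the runtime, builds
# the image inside the cluster, and pushes it to the integrated registry.
oc new-app nodejs~https://github.com/sclorg/nodejs-ex --name=demo

# Follow the build as it runs in the cluster
oc logs -f bc/demo

# The result is an ImageStream pointing at the integrated registry
oc get is demo
```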

Applications

In my lab, OKD is where I test various tools against a live Kubernetes cluster. I have tried Docker Desktop on both Mac and Windows, but I prefer a real multi-node cluster because it sits a little closer to what is being done in the wild. Some of the projects I have running here are outlined below:

Not all of these are in "production" use, but I have them around for testing and experimenting.

Challenges

One of the biggest issues I have is the current state of the single oVirt node. If there is an issue with the node (which rarely happens, but can), the hosted-engine goes down with it. Because the engine lives on "local" storage, getting it started back up requires some finesse with the order in which Linux services come up. I have a script that does this for me, but getting there was quite troublesome.
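
The script boils down to something like the following (a sketch; the storage step depends on where the engine's storage domain actually lives, and the ordering here is specific to my single-node setup):

```
#!/bin/bash
# Bring the hosted-engine back up after a cold start of the single node.

# 1. Make sure the storage backing the engine's storage domain is
#    available before the HA services try to start the engine VM
#    (in my case, an NFS export served from this host).
systemctl start nfs-server

# 2. Restart the hosted-engine HA services so they rescan storage.
systemctl restart ovirt-ha-broker ovirt-ha-agent

# 3. Clear maintenance mode and start the engine VM.
hosted-engine --set-maintenance --mode=none
hosted-engine --vm-start

# 4. Poll until the engine reports itself healthy (EngineUp).
hosted-engine --vm-status
```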

The other issue is simply a lack of resources. I understand that there are devs and others out there who would kill to have 64GB on their main machine, let alone on an entirely separate server. For me, however, I hit resource limits quite often (mostly memory related) and am unable to deploy the latest version of OKD the way I would like. Hopefully, when chips and things become more available, this can be rectified in homelab v2.0.


Conclusion

This is the current iteration of my homelab, with more to come in the near future as nature heals and chips become available. The next iteration will be 10GbE with two servers in the oVirt cluster, so I can take advantage of failover instead of jumping through hoops when the hosted engine fails. Please feel free to ask questions or request links for any of the items/software listed here.