Homelab Fun and Games

Homepage

I recently made a post on LinkedIn about the benefits of operating a homelab as if you were an enterprise with separate teams, in my case platform and application teams. A number of comments on the post asked for more information on my homelab setup, so in this post let's dig into it.

Background

I started my homelab simply, with a single machine running a Single-Node OpenShift (SNO) cluster. This machine had a Ryzen 3900X with 128 GB of RAM and has since been upgraded to a 5950X. It also doubled (and still does) as my gaming machine, and I dual boot between OpenShift and Windows. This made it easier to justify the initial cost of the machine since I was getting more use out of it than I would have with a dedicated server.

Note that I made a conscious decision to go with consumer-level hardware rather than enterprise gear. I wanted to focus on OpenShift and did not want to get into complex network management or racked equipment. My house does not have a basement, so the only place I have for equipment is my home office, and rack servers tend to be on the loud side. I've also opted for a consumer-grade router and a separate network switch to run a 2.5G Ethernet network in my home office, rather than a full-blown 10G network with a managed switch, pfSense, etc.

I originally started with libvirt and three OpenShift nodes running as KVM VMs on my single server; however, this was using a lot of resources for the control plane. Once OpenShift SNO fully supported upgrades I switched to a single bare metal SNO installation, which was more performant and allowed for better utilization of resources. The only downside with SNO is that I cannot run workloads that explicitly require three nodes, but fortunately nothing I needed had that requirement.

This was my setup for a couple of years; however, a colleague with a similar setup had been running a second server to act as a hub cluster (i.e. Red Hat Advanced Cluster Manager and Red Hat Advanced Cluster Security) and I decided to follow suit. I ended up buying a used Dell 7820 workstation with dual Xeon 5118 processors off eBay earlier this year and then expanded its memory to 160 GB. As I joke with my wife, though, one purchase always begets a second purchase…

This second purchase was triggered by the fact that with two servers I could no longer expose both OpenShift clusters to the Internet using simple port forwarding on my Asus router, given the clusters use the same ports (80, 443 and 6443). I needed a reverse proxy (HAProxy, NGINX, etc.) to do this, and thus a third machine to run it on. My two OpenShift cluster machines are started and stopped each day to save on power costs; this third machine, however, needed to run 24/7, so I wanted something efficient.

I ended up buying a Beelink SER 6 off of Amazon on one of the many sales that Beelink runs there. With 8 cores and 32 GB of RAM it has plenty of horsepower to run a reverse proxy as well as other services.

So with the background out of the way, let’s look at the hardware I’m using in the next section.

Hardware

In my homelab I currently have three servers, two of which are running individual OpenShift clusters (Home and Hub) and the third is the Beelink (Infra) running infrastructure services. Here are the complete specs for these machines:

Home Cluster (Custom)
  • Processor: Ryzen 5950X, 16 cores
  • Memory: 128 GB
  • Storage: 1 TB NVMe (Windows), 1 TB NVMe (OpenShift), 1 TB NVMe (OpenShift Storage), 256 GB SSD (Arch Linux)
  • Network: 2.5 Gb (motherboard), 1 Gb (motherboard)
  • GPU: Nvidia 4080
  • Software: OpenShift

Hub Cluster (Dell 7820 Workstation)
  • Processor: 2 x Xeon 5118, 24 cores
  • Memory: 160 GB
  • Storage: 256 GB NVMe (OpenShift), 2 TB NVMe (OpenShift Storage)
  • Network: 1 Gb (motherboard), 2.5 Gb (PCIe adapter)
  • GPU: Radeon HD 6450
  • Software: OpenShift, Advanced Cluster Manager, Advanced Cluster Security

Infra Server (Beelink SER 6)
  • Processor: Ryzen 7 7735HS, 8 cores
  • Memory: 32 GB
  • Storage: 512 GB NVMe
  • Network: 2.5 Gb (motherboard)
  • GPU: Radeon iGPU
  • Software: Arch Linux, HAProxy, Keycloak, GLAuth, Homepage, Pi-hole, Prometheus, Upsnap, Uptime Kuma

Note that the Home server is multi-purpose: during the day it runs OpenShift, while at night it's my gaming machine. Arch Linux on the SSD boots with systemd-boot and lets me flip between OpenShift and Windows as needed, with OpenShift being the default boot option.
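For those curious, a systemd-boot setup for this kind of dual booting is just a couple of small text files on the EFI partition. The sketch below is illustrative rather than my exact configuration; the entry names and timeout are assumptions:

```
# /boot/loader/loader.conf
default openshift.conf   # boot OpenShift unless another entry is picked
timeout 5

# /boot/loader/entries/windows.conf
# Windows is chainloaded via its own EFI boot manager
title   Windows
efi     /EFI/Microsoft/Boot/bootmgfw.efi
```

With the timeout set, the boot menu appears long enough to pick Windows in the evening, and unattended power-ons fall through to OpenShift.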

The storage in these machines is a mixture of new and used drives, which is why some drives are smaller or are SSDs rather than NVMe drives. As any good homelabber knows, reuse is a great way to save a few bucks.

I also have a QNAP TS-251D, a two-bay unit with 2 x 4 TB spinning platters, which I mostly use for backups and media files. On an as-needed basis I will run a MinIO container on it for object storage to support a temporary demo. Having said that, this is the part of my homelab I most regret buying; using the cloud for backups would have been sufficient.

Networking

My networking setup is relatively vanilla compared to some of the other homelabbers in Red Hat; networking is not my strong suit, so I tend to stick with the minimum that meets my requirements.

It all starts with HAProxy on the Infra server routing traffic to the appropriate backend depending on the service that was requested. For TLS ports this routing is managed using SNI in HAProxy to determine the correct backend, and you can see my configuration here.
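To give a flavour of it, here is a minimal sketch of SNI-based TCP routing in HAProxy; the hostnames and IP addresses are made-up examples, see the linked configuration for the real thing:

```
# /etc/haproxy/haproxy.cfg (sketch; hostnames/IPs are illustrative)
frontend https_in
    bind *:443
    mode tcp
    option tcplog
    # Wait for the TLS ClientHello so the SNI value is available
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }

    # Route by SNI suffix to the matching cluster's ingress
    use_backend home_ingress if { req_ssl_sni -m end .apps.home.example.com }
    use_backend hub_ingress  if { req_ssl_sni -m end .apps.hub.example.com }

backend home_ingress
    mode tcp
    server home 192.168.1.10:443 check

backend hub_ingress
    mode tcp
    server hub 192.168.1.11:443 check
```

Because this is TCP passthrough, TLS terminates at each cluster's own ingress and HAProxy never needs the certificates.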

My homelab uses split DNS, where my homelab services can be resolved both externally (i.e. outside my house) and internally (i.e. in my home office). Split DNS means that when I access services from my house I get an internal IP address, and when I'm out I get an external IP address. The benefit of this is avoiding the round trip to the Internet when I'm at home; it's also useful if your ISP doesn't let you route traffic back to your own router (i.e. hairpinning).

To manage this I use pi-hole to provide DNS services in my home as per the diagram below:

network

The external DNS for my homelab is managed through Route 53; this costs approximately $1.50 a month, which is very reasonable. Since the IP address of my router is not static and can be changed by my ISP at any time, I use the built-in Asus Dynamic DNS (DDNS) feature. Then in Route 53 I simply set up the DNS for the individual services as CNAME records aliased to the Asus DDNS name, which keeps my external IP addresses up to date without any additional scripting.

For the internal addresses I have dnsmasq configured on Pi-hole to return the local IP addresses instead of the external addresses provided by Route 53. So when I'm home, Pi-hole resolves my DNS and gives me local addresses; when I'm out and about, Route 53 resolves my DNS and gives me external addresses. This setup has been completely seamless and has worked well.
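The dnsmasq side of this is just a handful of address records; the domains and IPs below are illustrative examples, not my real ones:

```
# /etc/dnsmasq.d/02-homelab.conf on the Pi-hole host (sketch)
# Return internal addresses for homelab names; each entry also covers
# subdomains. Anything not matched is forwarded upstream and resolves
# through Route 53 as usual.
address=/apps.home.example.com/192.168.1.10
address=/api.home.example.com/192.168.1.10
address=/apps.hub.example.com/192.168.1.11
address=/api.hub.example.com/192.168.1.11
```
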

Infrastructure Services

In the hardware section I listed the various infrastructure services I'm running; in this section let's take a deeper dive into what I am running and why.

HAProxy. As mentioned in the networking section, it provides the reverse proxy that enables me to expose multiple services to the Internet. For plain HTTP services it also provides TLS termination.

GLAuth. A simple LDAP server that is used to centrally manage users and groups in my homelab. While it is functional, in retrospect I wish I had used OpenLDAP; however, I started with GLAuth because my original Infra server, as a test, was a NUC from 2011 that was far less capable than the Beelink. At some point I plan to swap it out for OpenLDAP.

Keycloak. It provides Single Sign-On (SSO) via OpenID Connect to all of the services in my homelab and connects to GLAuth for identity federation. It also provides SSO with Red Hat's Google authentication system, so if I want to give a Red Hatter access to my homelab for troubleshooting or testing purposes I can do so.

Homepage. Provides a homepage for my homelab. Is it useful? Not sure, but it was fun configuring it and putting it together. I have it set as my browser home page, and there is a certain satisfaction whenever I open my browser and see it.

Homepage

Upsnap. For efficiency reasons I start and stop my OpenShift servers to save on power consumption and do a tiny bit for the environment. However, this means I need a way to easily turn the machines on and off without having to dig under my desk. This is where Upsnap comes into play: it provides a simple UI for wake-on-LAN that enables me to easily power my servers up or down as needed.

As a bonus I can VPN into my homelab (this isn’t exposed to the Internet) to manage the power state of my servers remotely.

upsnap
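Upsnap handles all of this through its UI, but under the hood wake-on-LAN is nothing more than a UDP broadcast of a "magic packet". Here's a minimal Python sketch of what that amounts to (the MAC address shown is a placeholder, not one of my machines):

```python
import socket

def magic_packet(mac: str) -> bytes:
    """Build a wake-on-LAN magic packet: 6 x 0xFF followed by the
    target MAC address repeated 16 times (102 bytes total)."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("expected a 6-byte MAC address")
    return b"\xff" * 6 + mac_bytes * 16

def wake(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    """Broadcast the magic packet on the local network; port 9 (discard)
    is the customary WOL port."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

Calling `wake("aa:bb:cc:dd:ee:ff")` from any always-on box on the LAN (like the Infra server) is enough to power up a sleeping machine whose NIC has WOL enabled in the BIOS.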

Uptime Kuma. This monitors the health of all of my services and sends alerts to my Slack workspace if any of the services are down. It also monitors for the expiration of TLS certificates and sends me an alert in advance if a certificate will expire soon. It’s very configurable and I was able to tweak it so my OpenShift servers are considered in a maintenance window when I shut them down at night and on weekends.

Uptime-Kuma

Pi-hole. Covered in the networking section, this is used to manage the internal DNS in my homelab. It's also able to do ad blocking, but I've disabled that as I've personally found it more trouble than it's worth.

pi-hole

Prometheus. A standalone instance of Prometheus that scrapes metrics from some of my services, like Keycloak. At the moment these metrics are only used in Homepage, but I'm planning to install Grafana at some point to support some dashboards.

My infrastructure server is configured using Ansible, you can view the roles I have for it in my homelab-automation repo. A couple of notes on this repo:

  • It's pretty common for me to run roles ad hoc, so you may see the playbook configure-infra-server.yaml constantly changing in terms of which roles are commented out.
  • This repo generates the TLS certificates I need, but I haven't gotten to the point of running it as a cron job. At the moment, when Uptime Kuma warns me a certificate is about to expire, I just run the letsencrypt roles to re-generate and provision the certs on the Infra server. (Note that on OpenShift this is all handled automatically by cert-manager.)
  • The Keycloak role is a work in progress as fully configuring Keycloak is somewhat involved.
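As a rough idea of the shape of it, the playbook is a standard roles-based play along these lines; the role names here are illustrative, see the homelab-automation repo for the real list:

```yaml
# configure-infra-server.yaml (sketch; role names are examples)
- name: Configure infra server
  hosts: infra
  become: true
  roles:
    - haproxy
    - pihole
    - uptime_kuma
    # - letsencrypt   # uncomment and run ad hoc to re-generate certs
```

Commenting roles in and out of a play like this is crude but effective for ad hoc runs; Ansible tags would be the tidier alternative.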

OpenShift

I run two OpenShift clusters using Single Node OpenShift (SNO) as discussed previously. Both clusters are configured and managed through Advanced Cluster Manager (ACM) and OpenShift GitOps (aka Argo CD). While it's too long to go into details here, I basically have some policies configured in ACM that bootstrap the OpenShift GitOps operator, along with a bootstrap cluster configuration application using the app-of-apps pattern, onto the clusters managed by ACM.

In this GitOps Guide to the Galaxy YouTube video I go into a lot of detail on how this works. Note, however, that I'm always iterating and some things have changed since then, but it's still good for seeing the big picture.

Once ACM has installed the operator and the Argo CD application on the cluster, sync waves are used to provision the cluster configuration in an ordered fashion, as illustrated by the image below (though the image itself is a bit dated).

cluster-config
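Sync waves in Argo CD are just annotations on the manifests; lower waves are applied and healthy before higher waves start. For example (an illustrative resource, not one of mine specifically):

```yaml
# Namespaces go in an early (negative) wave so the workloads that land
# in them during later waves have somewhere to live
apiVersion: v1
kind: Namespace
metadata:
  name: cluster-config
  annotations:
    argocd.argoproj.io/sync-wave: "-10"
```
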

Periodically I will temporarily add clusters from the cloud or the internal Red Hat demo system to show specific things to customers; bootstrapping these clusters becomes trivial with ACM using this process.

I run quite a few things on my clusters; here are a few highlights:

LVM Storage Operator. This provides dynamic RWO storage to the cluster. It works by managing a storage device (NVMe in my case) and partitioning it as needed using Logical Volume Manager (LVM). This is a great way to have easy-to-manage storage in a SNO cluster with minimal resources consumed by the operator.
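Configuration is a single LVMCluster custom resource pointing at the device; the sketch below shows the general shape, with the device path and sizing values as assumptions rather than my actual settings:

```yaml
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: vg1
        default: true                # back the default StorageClass
        deviceSelector:
          paths:
            - /dev/nvme1n1           # the dedicated OpenShift Storage drive
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90            # leave some headroom in the volume group
          overprovisionRatio: 10
```
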

External Secrets Operator. I use GitOps to manage and configure my clusters and thus need a way to manage secrets securely. I started with Sealed Secrets, which worked well enough, but once I added the second cluster I found it was becoming more of a maintenance burden. Using the External Secrets Operator with the Doppler back-end externalizes all of the secrets and makes it easy to access secrets on either cluster as needed. I wrote a blog on my use of this operator here.

When ACM bootstraps a cluster with the GitOps operator and initial application, it also copies over the secret needed for ESO to access Doppler from the Hub cluster to the target cluster.
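For illustration, an ExternalSecret using the Doppler back-end looks roughly like the sketch below; the names and keys are examples, and the referenced ClusterSecretStore is the resource holding the Doppler service token:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: keycloak-oauth
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: doppler            # ClusterSecretStore configured with the Doppler token
    kind: ClusterSecretStore
  target:
    name: keycloak-oauth     # the plain Kubernetes Secret ESO creates/updates
  data:
    - secretKey: clientSecret
      remoteRef:
        key: KEYCLOAK_CLIENT_SECRET
```

Because the manifest contains only references, it's safe to commit to Git; the actual secret material never leaves Doppler except at reconcile time.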

Cert Manager. Managing certificates is never fun, but the cert-manager operator makes it a trivial exercise. I use this operator with Let's Encrypt to provide certificates for the OpenShift API and wildcard endpoints, as well as for specific cluster workloads that need a cert.
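As an example, the wildcard certificate ends up being a small Certificate resource like the sketch below; the domain and issuer name are assumptions, and the issuer would be a ClusterIssuer doing a DNS-01 challenge against Route 53:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: ingress-wildcard
  namespace: openshift-ingress
spec:
  secretName: ingress-wildcard-tls   # consumed by the ingress controller
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
  dnsNames:
    - "*.apps.home.example.com"
```

cert-manager then handles issuance and renewal on its own, which is exactly the part that's still manual on the Infra server.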

Advanced Cluster Security (ACS). This is used to provide runtime security on my clusters: it scans images, monitors runtime deployments, etc., and is an invaluable tool for managing the security posture of my homelab. The Hub cluster runs ACS Central (the user interface) as well as an agent; the Home cluster just runs the agent, which connects back to Central on the Hub.

OpenShift Configuration

My OpenShift configuration is spread across a few repos and, well, is involved. As a result it is not possible to do a deep dive on it here, but I will list my repos and provide some details below.

  • cluster-config: Contains my cluster configuration for the Home and Hub clusters, as well as clusters I may add from AWS and the Red Hat demo system.
  • acm-hub-bootstrap: Used to bootstrap the ACM Hub cluster; it has a bash script that installs the ACM control plane along with the policies and other components needed to bootstrap and manage other clusters.
  • cluster-config-pins: I use commit pinning to manage promotion between clusters (i.e. roll out a change on the lab cluster, then non-prod and then prod). This repo holds the pins; it's a work in progress as I've just started doing this, but I'm finding it works well for me.
  • helm-charts: Holds the various Helm charts used in my homelab. One notable chart is the one I use to generate the applications used by the bootstrap app-of-apps.

Alerting

I set up a free workspace in Slack and configured it as the destination for all of my alerts from OpenShift as well as other services such as Uptime Kuma. Since I use my homelab for customer demos, being proactively informed of issues has been really useful.

slack-alerts

Conclusion

This post reviewed my homelab setup, in a somewhat rambling fashion, as of December 2023. If you have any questions, thoughts, or ways I could do things better, feel free to add a comment to this post.

PS4 Controller and PC

I do a lot of traveling for my job and one thing I enjoy doing on the road is gaming on my laptop. While the keyboard and mouse work great for many games, sometimes nothing beats a controller. I also prefer a wireless controller so that I can connect the laptop to the hotel TV, if so desired, for some bigger-screen gaming. So the question is: what is the best wireless controller out there for the road warrior who wants to do some PC gaming?

In my opinion the answer is the PS4 controller. It has a number of advantages over other controllers: it is wireless (Bluetooth) and uses the same standard micro USB connector as an Android phone, thereby cutting down on cables. It has all the same buttons as the 360 controller and, with third-party help, can mimic it flawlessly on Windows and Linux.

I previously tried the Logitech F710 wireless controller, but it has batteries that need constant replacing and it requires a dongle. The wireless 360 controller is excellent, but it has a non-standard charging connector and also requires a dongle.

On Windows, I use DS4Tool to get the PS4 controller to mimic the 360 controller. On Linux, I'm using ds4drv for the odd time a game requires a 360 controller (looking at you, Dead Island).

In short, I highly recommend the PS4 controller, give it a try and I’m sure you won’t be disappointed.

Moto 360 Charging Image Retention


There have been some reports on the web about issues with regards to the charging image the Moto 360 displays and image retention along with possible burn-in. In short, the Moto 360 displays a largely static image that shows the percent charged when placed in the charger. Since this image is displayed for several hours with minor changes there is the possibility of image retention occurring and in extreme cases burn-in.

Unfortunately Android Wear doesn’t have an option to turn off the screen while charging. While there are a few programs that purport to do this, what they really do is throw up a black image but the screen is still on. While this does eliminate the image retention issue, it’s by no means ideal.

The only way I have found to reliably disable the screen while charging is to use the following sequence of steps:

  1. Place watch on charger
  2. On your phone, open the Android Wear app, go into Settings and enable Ambient Screen by clicking the check box
  3. Wait 2 or 3 seconds, then disable it
  4. Watch screen should now be off

While this solution works well, you do have to do these steps every time you place the watch on the charger. Hopefully Google will add an option for this to Wear in a subsequent version.

Here's a screenshot showing the Ambient Screen setting:

Screenshot_2014-10-20-12-43-09


Review of Clevo w230ss Laptop

Introduction

I recently opted to replace my beloved Sony Vaio Z (SVZ) laptop with a new Clevo w230ss laptop. While the Vaio Z had served me well, it had a couple of limitations that were really starting to annoy me. Specifically, the Vaio Z was limited to 8 GB of RAM, and the 256 GB SSD could only be upgraded by replacing it with an expensive 512 GB Sony proprietary model. Finally, the external AMD GPU of the Vaio, while neat in concept, would only work in Linux with the open source driver, and performance was atrocious.

Prior to the Vaio Z I had owned a Clevo machine that I purchased from Mythlogic. While not particularly attractive and a bit of a brick, it was powerful, easily upgradeable and user maintainable. This laptop is still going strong and being abused as my 13-year-old son's gaming laptop; its durability in this role has been very impressive.

Based on that positive experience, I purchased the Clevo w230ss from Reflex Notebook. The purchase went well with no issues, and the laptop showed up at my door approximately four weeks later. I opted to use a Canadian company instead of Mythlogic this time as I wanted to support a Canadian alternative for Clevo/Sager resellers and didn't feel like driving to Ann Arbor for the tax savings. I do highly recommend Mythlogic though; they were an excellent bunch to deal with last time.

Anyways, I thought it would be fun to do a short review of the laptop, particularly from a Linux perspective since Linux is my primary OS for day-to-day work. Please keep in mind that I use Arch Linux, a rolling release distro, meaning you are always running the latest and greatest. Thus there is a chance that while some things work fine for me, they may not be functional under other distros such as Ubuntu, which may have a slightly older kernel.

So with all of that in mind, let's move on to the review.

Appearance

Clevo w230ss


The w230ss has an appearance that could best be described as functional. It's not a particularly sexy laptop, like a MacBook or my Vaio Z, but it's not ugly either. The two-tone finish, silver on the inside and black on the outside, is reasonably attractive and fits in well in a corporate environment. In some ways the w230ss reminds me of "sleeper" cars: cars that at a glance seem like perfectly ordinary family sedans but hide an enormous amount of power under the hood.

The back of the LCD is a rubberized material which feels nice but attracts skin oil like there is no tomorrow. On the whole though I like the rubberized finish.

Ports

This is the biggest issue with the laptop. While the number of ports is certainly sufficient (3 USB 3.0, 1 USB 2.0, HDMI, VGA, Ethernet), the layout is pretty crap-tastic. The right side of the laptop has the 3 USB 3.0 ports, HDMI, VGA and Ethernet, and unfortunately the ports start right at the front, which means that if you use a mouse and are right-handed, plugging anything in will interfere with the mouse. The only workaround I can see is buying some right-angled cables off the net to keep the cables close to the side of the laptop.

The left side has the single USB 2.0 port, and once again it is right at the front, along with the 1/8” mic and headphone jacks. Even if you don't use the USB port on the left, left-handed mouse users will find the left side is no better than the right since the fan vents there. In winter it's great as your hand gets nice and toasty when gaming; other times, not so much.

The one port I wish was included is DisplayPort, so that high-resolution monitors could be used. While HDMI is getting better in this regard, it still has limitations at higher resolutions which won't be addressed until HDMI 2.0 is available.

Keyboard

The keyboard feel is much better than my Vaio Z's and it is a pleasure to use, one of the better laptop chiclet keyboards I've used over the last few years. As a software developer I am extremely thankful that Clevo designed the keyboard with full-size arrow keys; the trend started by Apple towards half-size arrow keys on laptops has been driving me insane.

Also, the inclusion of dedicated keys for Home, End, Page Up and Page Down is a welcome sight as I constantly use these keys to navigate code. On most smaller laptops, such as my Vaio Z, accessing these keys typically involves using the Fn key in combination with the arrow keys, which I find quite irritating.

Finally, the keyboard has three levels of back-lighting (None, Low, High) and the back-light key works fine under Linux for controlling it. However, I don't see any way to automate the control of the keyboard light as it doesn't appear to have a device listed under /sys/class/leds; there is a phy0_led but its brightness value doesn't change.

Trackpad

It works under Linux but since I never use it I can’t speak to the quality of it.

Display

My laptop came with the 1080p FHD display; displays with higher resolutions are available, however I would highly recommend checking that your preferred Linux Desktop Environment (DE) supports HiDPI displays well before purchasing one.

The 1080p display on this laptop is nice and bright compared to my Sony Vaio Z's. Viewing angles are very good and text is easy to make out. Some users might opt to use scaling, but in Gnome I have no issues reading text with scaling off (scaling factor of 1); everything is crisp and sharp. The display is matte, not glossy, and thus has minimal reflectivity even in office environments with overhead fluorescent lighting.

Power Management

I'm using Arch Linux with TLP to provide active power management, and I also use Bumblebee to disable the Nvidia GPU unless it is needed. TLP does a great job of managing power; with it enabled, the screenshot below shows the tunables from Powertop, and the ones in the Bad category are not things you would typically want to tune.

Powertop Screenshot


Under Arch I get about 5 to 5.5 hours of light use on battery with the screen dimmed to about 75%. The power draw is 15.5 watts as per powerstat.

Temperature and Fans

There are some complaints on various forums with respect to fan noise on the w230st and, to a lesser extent, the w230ss. This concerned me as I like my laptop to be as silent as possible when using it for work. I'm pleased to report, though, that I find those complaints mostly unfounded and have been very happy with the noise profile of this laptop.

When doing basic tasks in my home office, the CPU temperatures are around 45 degrees Celsius and the laptop is dead quiet, with the fans never spooling up. At the office, when I'm coding, running servers, etc., the CPU temperatures are between 50 and 60 degrees. In this scenario the fans will spin up lightly every once in a while, but I find it barely noticeable and not distracting. It's certainly not the on/off pattern so many people were complaining about with the w230st.

In terms of gaming, I've only tried The Witcher 2, in both Linux and Windows. Unfortunately the Linux port of The Witcher 2 is awful and the performance was atrocious, so I didn't bother looking at temps and quickly moved over to Windows. Under Windows, the CPU temperature is about 80 degrees while the GPU is at 71. This is on high settings with Bloom disabled.

When playing The Witcher 2 the fans run constantly and the volume is correspondingly louder; after all, you can't pack this much power into a small laptop and expect silent fans when gaming. Speaking for myself, though, the fan noise is drowned out by the game music and effects to the point that I just don't find it very noticeable. I will say that if I accidentally place a drink within 6 inches of the exhaust fan on the left side it will quickly warm up; I've ruined more than one cold drink this way.

Dual Booting

As I mentioned in another post, I dual boot between Linux and Windows using rEFInd as the EFI boot manager. No issues with this setup, except that I had to adjust a Windows registry setting to ensure that time and time zones were treated the same in both OSes, as per this blog post here.
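The setting in question is the commonly cited RealTimeIsUniversal flag, which tells Windows to treat the hardware clock as UTC the way Linux does; as a .reg file it looks like this:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\TimeZoneInformation]
"RealTimeIsUniversal"=dword:00000001
```

The alternative is to tell Linux to keep the hardware clock in local time instead, but letting the clock stay in UTC and fixing Windows is generally the cleaner option.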

One benefit of rEFInd is that it will auto-detect any EFI boot loaders on USB sticks, making it trivial to boot maintenance tools like an Arch live USB or GParted. This is much more convenient than trying to mess with the BIOS boot loader settings. Another benefit is not having to deal with GRUB any longer.

Summary

So in summary, this is an excellent alternative for the Linux user looking for a powerful laptop with gaming capabilities.