Homelab Fun and Games

Homepage

I made a recent post on LinkedIn about the benefits of operating a homelab as if you were an enterprise with separate teams, in my case platform and application teams. There were a number of comments on the post asking for more information on my homelab setup, so in this post let's dig into it.

Background

I started my Homelab simply with a single machine running a Single-Node OpenShift (SNO) cluster. This machine had a Ryzen 3900x with 128 GB of RAM and has since been upgraded to a 5950x. It also doubled (and still does) as my gaming machine, and I dual boot between OpenShift and Windows. This made it easier to justify the initial cost of the machine since I was getting more use out of it than I would have with a dedicated server.

Note that I made a conscious decision to go with consumer-level hardware rather than enterprise gear. I wanted to focus more on OpenShift and did not want to get into complex network management or racked equipment. My house does not have a basement, so the only place I have for equipment is my home office, and rack servers tend to be on the loud side. I've also opted to use a consumer-grade router and a separate network switch to run a 2.5 Gb Ethernet network in my home office versus a full-blown 10 Gb network with a managed switch, pfSense, etc.

I originally started with libvirt and three OpenShift nodes running as KVM VMs on my single server; however, this was using a lot of resources for the control plane. Once OpenShift SNO fully supported upgrades I switched over to a single SNO bare metal installation, which was more performant and allowed for better utilization of resources. The only downside with SNO is that I cannot run any workloads that explicitly require three nodes, but fortunately nothing I needed had that requirement.

This was my setup for a couple of years; however, a colleague with a similar setup had been running a second server to act as a hub cluster (i.e. Red Hat Advanced Cluster Manager and Red Hat Advanced Cluster Security) and I decided to follow suit. I ended up buying a used Dell 7820 workstation with dual Xeon 5118 processors off of eBay earlier this year and then expanded its memory to 160 GB. As I joke with my wife though, one purchase always begets a second purchase…

This second purchase was triggered by the fact that with two servers I could no longer expose both OpenShift clusters to the Internet using simple port forwarding in my Asus router, given the clusters run on the same ports (80, 443 and 6443). I needed a reverse proxy (HAProxy, NGINX, etc.) to be able to do this and thus a third machine to run it on. My two OpenShift cluster machines are started and stopped each day to save on power costs; however, this third machine needed to run 24/7, so I wanted something efficient.

I ended up buying a Beelink SER 6 off of Amazon on one of the many sales that Beelink runs there. With 8 cores and 32 GB of RAM it has plenty of horsepower to run a reverse proxy as well as other services.

So with the background out of the way, let’s look at the hardware I’m using in the next section.

Hardware

In my homelab I currently have three servers, two of which are running individual OpenShift clusters (Home and Hub) and the third is the Beelink (Infra) running infrastructure services. Here are the complete specs for these machines:

             Home Cluster                    Hub Cluster                      Infra Server
Model        Custom                          Dell 7820 Workstation            Beelink SER 6
Processor    Ryzen 5950x (16 cores)          2 x Xeon 5118 (24 cores)         Ryzen 7 7735HS (8 cores)
Memory       128 GB                          160 GB                           32 GB
Storage      1 TB nvme (Windows)             256 GB nvme (OpenShift)          512 GB nvme
             1 TB nvme (OpenShift)           2 TB nvme (OpenShift Storage)
             1 TB nvme (OpenShift Storage)
             256 GB SSD (Arch Linux)
Network      2.5 Gb (Motherboard)            1 Gb (Motherboard)               2.5 Gb (Motherboard)
             1 Gb (Motherboard)              2.5 Gb (PCIe adapter)
GPU          Nvidia 4080                     Radeon HD 6450                   Radeon iGPU
Software     OpenShift                       OpenShift                        Arch Linux
                                             Advanced Cluster Manager         HAProxy
                                             Advanced Cluster Security        Keycloak
                                                                              GLAuth
                                                                              Homepage
                                                                              Pi-hole
                                                                              Prometheus
                                                                              Upsnap
                                                                              Uptime Kuma

Note that the Home server is multi-purpose: during the day it runs OpenShift while at night it's my gaming machine. Arch Linux on the SSD boots with systemd-boot and lets me flip between OpenShift and Windows as needed, with OpenShift being the default boot option.

The storage on the machines is a mixture of new and used drives, hence why some drives are smaller or are SSDs rather than nvme drives. Like any good homelabber, I find reuse is a great way to save a few bucks.

I also have a QNAP TS-251D, a two-bay unit with 2 x 4 TB spinning platters, which I mostly use for backups and media files. On an as-needed basis I will run a Minio container on it for object storage to support a temporary demo. Having said that, this is the part of my homelab I probably regret buying the most; using the cloud for backups would have been sufficient.

Networking

My networking setup is relatively vanilla compared to some of the other homelabbers in Red Hat; networking is not my strong suit, so I tend to stick with the minimum that meets my requirements.

It all starts with HAProxy on the Infra server routing traffic to the appropriate backend depending on the service that was requested. This routing is managed by using SNI in HAProxy for TLS ports to determine the correct backend, and you can see my configuration here.
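While my actual configuration lives in the repo linked above, here is a minimal sketch of what SNI-based TCP routing looks like in HAProxy; the hostnames and backend addresses are placeholders, not my real setup:

[text]
frontend https-in
    bind *:443
    mode tcp
    # Wait for the TLS ClientHello so the SNI can be inspected without terminating TLS
    tcp-request inspect-delay 5s
    tcp-request content accept if { req_ssl_hello_type 1 }
    use_backend home-cluster if { req_ssl_sni -m end .apps.home.example.com }
    use_backend hub-cluster  if { req_ssl_sni -m end .apps.hub.example.com }

backend home-cluster
    mode tcp
    server home 192.168.1.10:443 check

backend hub-cluster
    mode tcp
    server hub 192.168.1.11:443 check
[/text]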

My homelab uses split DNS, where my homelab services can be resolved both externally (i.e. outside my house) and internally (i.e. in my home office). Split DNS means that when I access services from my house I get an internal IP address, and when I'm out I get an external IP address. The benefit of this is avoiding the round trip to the Internet when I'm at home; plus, if your ISP doesn't let you route traffic back to your own router (i.e. hairpinning), this can be useful.

To manage this I use pi-hole to provide DNS services in my home as per the diagram below:

network

The external DNS for my homelab is managed through Route 53, which costs approximately $1.50 a month, very reasonable. Since the IP address of my router is not static and can be changed as needed by my ISP, I use the built-in Asus Dynamic DNS feature. Then in Route 53 I simply set up the DNS for the individual services to use a CNAME record aliased to the Asus DDNS name, which keeps my external IP addresses up to date without any additional scripting required.
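Conceptually the records end up looking something like this; the names are hypothetical (asuscomm.com is the Asus DDNS domain) and the IP is a documentation address:

[text]
; DDNS name kept current by the router
homelab.asuscomm.com.     60   IN  A      203.0.113.27
; Per-service records in Route 53 just alias the DDNS name
keycloak.example.com.     300  IN  CNAME  homelab.asuscomm.com.
homepage.example.com.     300  IN  CNAME  homelab.asuscomm.com.
[/text]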

For the internal addresses I have dnsmasq configured on pi-hole to return the local IP addresses instead of the external addresses provided by Route 53. So when I'm home, pi-hole resolves my DNS and gives me local addresses; when I'm out and about, Route 53 resolves my DNS and gives me external IP addresses. This setup has been completely seamless and has worked well.
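On the pi-hole side this boils down to a few dnsmasq overrides; a hypothetical example, where the file name, service names and the LAN address of the Infra server running HAProxy are all placeholders:

[text]
# /etc/dnsmasq.d/02-homelab.conf -- answer with the LAN address of the
# Infra server instead of the public IP that Route 53 would return
address=/keycloak.example.com/192.168.1.5
address=/homepage.example.com/192.168.1.5
[/text]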

Infrastructure Services

In the hardware section I listed the various infrastructure services I'm running; in this section let's take a deeper dive to review what I am running and why.

HAProxy. As mentioned in the networking section, it provides the reverse proxy enabling me to expose multiple services to the Internet. For pure http services it also provides TLS termination.

GLAuth. A simple LDAP server that is used to centrally manage users and groups in my homelab. While it is functional, in retrospect I wish I had used OpenLDAP; however, I started with GLAuth because my original Infra server, used as a test, was a NUC from 2011 and was far less capable than the Beelink. At some point I plan to swap it out for OpenLDAP.

Keycloak. It provides Single Sign-On (SSO) via OpenID Connect to all of the services in my Homelab and connects to GLAuth for identity federation. It also provides SSO with Red Hat's Google authentication system, so if I want to give a Red Hatter access to my homelab for troubleshooting or testing purposes I can do so.

Homepage. Provides a homepage for my Homelab. Is it useful? Not sure, but it was fun configuring it and putting it together. I have it set as my browser home page and there is a certain satisfaction whenever I open my browser and see it.

Homepage

Upsnap. For efficiency reasons I start and stop my OpenShift servers to save on power consumption and help do a tiny bit for the environment. However, this means I need a way to easily turn the machines on and off without having to dig under my desk. This is where Upsnap comes into play: it provides a simple UI for wake-on-lan that enables me to easily power my servers up or down as needed.

As a bonus I can VPN into my homelab (this isn’t exposed to the Internet) to manage the power state of my servers remotely.
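Under the hood this is just standard wake-on-lan magic packets; the shell equivalent, assuming the wol package is installed and using a placeholder MAC address, would be:

[bash]
# Send a magic packet to wake the Home cluster machine
wol 00:11:22:33:44:55
[/bash]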

upsnap

Uptime Kuma. This monitors the health of all of my services and sends alerts to my Slack workspace if any of the services are down. It also monitors for the expiration of TLS certificates and sends me an alert in advance if a certificate will expire soon. It’s very configurable and I was able to tweak it so my OpenShift servers are considered in a maintenance window when I shut them down at night and on weekends.

Uptime-Kuma

Pi-hole. Covered in the networking section, this is used to manage my internal DNS in my homelab. It's also able to do ad blocking, but I've disabled that as I've personally found it more trouble than it's worth.

pi-hole

Prometheus. A standalone instance of Prometheus that scrapes metrics for some of my services like Keycloak. At the moment these metrics are only being used in Homepage, but I'm planning on getting Grafana installed at some point to support some dashboards.
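The scrape configuration itself is nothing exotic; a hypothetical fragment of prometheus.yml for the Keycloak job, where the target address is a placeholder and I'm assuming Keycloak's metrics endpoint is enabled:

[text]
scrape_configs:
  - job_name: keycloak
    metrics_path: /metrics
    scheme: https
    static_configs:
      - targets: ['keycloak.example.com:8443']
[/text]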

My infrastructure server is configured using Ansible; you can view the roles I have for it in my homelab-automation repo. A couple of notes on this repo:

  • It's pretty common that I run roles ad hoc, so you may see the playbook configure-infra-server.yaml constantly change in terms of which roles are commented out (see the sketch after this list).
  • This repo generates the TLS certificates I need, but I haven't gotten to the point of running it as a cron job. At the moment, when uptime-kuma warns me a certificate is about to expire, I just run the letsencrypt roles to re-generate and provision the certs on the Infra server. (Note that on OpenShift this is all handled automatically by cert-manager.)
  • The Keycloak role is a work in progress as fully configuring Keycloak is somewhat involved.
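To give a feel for it, the playbook is roughly shaped like this; a sketch with hypothetical role names, not the repo verbatim:

[text]
# configure-infra-server.yaml -- roles get commented in and out
# depending on what I'm running ad hoc
- hosts: infra
  become: true
  roles:
    - haproxy
    - letsencrypt
    # - keycloak
    # - pihole
[/text]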

OpenShift

I run two OpenShift clusters using Single Node OpenShift (SNO) as discussed previously. Both clusters are configured and managed through Advanced Cluster Manager (ACM) and OpenShift GitOps (aka Argo CD). While it's too long to go into the details here, I basically have some policies configured in ACM that bootstrap the OpenShift GitOps operator, along with a bootstrap cluster configuration application using the app-of-apps pattern, onto the clusters managed by ACM.
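For the unfamiliar, the app-of-apps pattern just means the bootstrap Application points Argo CD at a chart or folder that in turn generates further Applications. A minimal hypothetical sketch, where the repo URL and paths are placeholders:

[text]
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cluster-config-bootstrap
  namespace: openshift-gitops
spec:
  project: default
  source:
    repoURL: https://github.com/example/cluster-config
    path: bootstrap/home
    targetRevision: main
  destination:
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
[/text]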

In this GitOps Guide to the Galaxy YouTube video I go into a lot of detail on how this works. Note, however, that I'm always iterating and some things have changed since then, but it's still good for seeing the big picture.

Once the operator and the Argo CD application are installed on the cluster by ACM, sync waves are used to provision the cluster configuration in an ordered fashion, as illustrated by the image below (though the image itself is a bit dated).

cluster-config
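Ordering in Argo CD is done with the sync-wave annotation on each resource, with lower waves syncing first. For example:

[text]
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "2"
[/text]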

Periodically I will temporarily add clusters from the cloud or the internal Red Hat demo system to show specific things to customers; bootstrapping these clusters becomes trivial with ACM using this process.

I run quite a few things on my clusters; here are a few highlights of specific items I use:

LVM Storage Operator. This provides dynamic RWO storage to the cluster. It works by managing a storage device (nvme in my case) and partitioning it as needed using Logical Volume Manager (LVM). This is a great way to have easy-to-manage storage in a SNO cluster with minimal resources consumed by the operator.
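Configuration amounts to a single LVMCluster resource; a hypothetical sketch, where the device path and sizing stand in for my actual values:

[text]
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: lvmcluster
  namespace: openshift-storage
spec:
  storage:
    deviceClasses:
      - name: vg1
        deviceSelector:
          paths:
            - /dev/nvme1n1
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10
[/text]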

External Secrets Operator. I use GitOps to manage and configure my clusters and thus need a way to manage secrets securely. I started with Sealed Secrets, which worked well enough, but once I added the second cluster I found it was becoming more of a maintenance burden. Using the External Secrets Operator with the Doppler back-end externalizes all of the secrets and makes it easy to access them on either cluster as needed. I wrote a blog on my use of this operator here.

When ACM bootstraps a cluster with the GitOps operator and initial application, it also copies over the secret needed for ESO to access Doppler from the Hub cluster to the target cluster.
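In practice each secret then becomes a small ExternalSecret manifest in Git; a hypothetical example against a Doppler-backed ClusterSecretStore, where the names and keys are placeholders:

[text]
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: keycloak-client
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: doppler
    kind: ClusterSecretStore
  target:
    name: keycloak-client
  data:
    - secretKey: clientSecret
      remoteRef:
        key: KEYCLOAK_CLIENT_SECRET
[/text]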

Cert Manager. Managing certificates is never fun, but the cert-manager operator makes it a trivial exercise. I use this operator with Let's Encrypt to provide certificates for the OpenShift API and wildcard endpoints, as well as specific cluster workloads that need a cert.
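As an illustration, a wildcard certificate request is just a small Certificate resource; a sketch assuming a ClusterIssuer named letsencrypt and a placeholder domain:

[text]
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-apps
  namespace: openshift-ingress
spec:
  secretName: wildcard-apps-tls
  issuerRef:
    name: letsencrypt
    kind: ClusterIssuer
  dnsNames:
    - "*.apps.home.example.com"
[/text]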

Advanced Cluster Security (ACS). This is used to provide runtime security on my clusters: it scans images, monitors runtime deployments, etc., and is an invaluable tool for managing the security posture of my Homelab. The Hub cluster runs ACS Central (the user interface) as well as an agent; the Home cluster just runs the agent, which connects back to Central on the Hub.

OpenShift Configuration

My OpenShift configuration is spread across a few repos and, well, is involved. As a result it is not possible to do a deep dive on it here; however, I will list my repos below with some details on each.

cluster-config. This repo contains my cluster configuration for the home and hub clusters, as well as clusters I may add from AWS and the Red Hat demo system.

acm-hub-bootstrap. This is used to bootstrap the ACM Hub cluster; it has a bash script that installs the ACM control plane along with the policies and other components needed to bootstrap and manage other clusters.

cluster-config-pins. I use commit pinning to manage promotion between clusters (i.e. roll out a change on the lab cluster, then non-prod and then prod). This repo holds the pins; it's a work in progress as I've just started doing this, but I'm finding it works well for me.

helm-charts. This repo holds the various helm charts used in my homelab. One notable chart is the one I use to generate the applications used by the bootstrap app-of-apps.

Alerting

I set up a free workspace in Slack and configured it as a destination for all of my alerts from OpenShift as well as other services such as uptime-kuma. Since I use my homelab for customer demos, being proactively informed of issues has been really useful.

slack-alerts
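On the OpenShift side this amounts to a Slack receiver in the Alertmanager configuration; a hypothetical fragment, where the webhook URL and channel are placeholders:

[text]
receivers:
  - name: slack
    slack_configs:
      - api_url: https://hooks.slack.com/services/XXX/YYY/ZZZ
        channel: '#homelab-alerts'
        send_resolved: true
route:
  receiver: slack
[/text]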

Conclusion

This post reviewed my homelab setup, in a somewhat rambling fashion, as of December 2023. If you have any questions, thoughts or ways I could do things better, feel free to add a comment to this post.

Lavender Redux

A few months ago I posted a theme called Lavender for Gnome Shell and GTK. This theme took the excellent Numix theme and tweaked the colors to use the same scheme as Sam Hewitt's excellent but now unsupported Orchis theme. Unfortunately the Numix theme hasn't been keeping up with newer versions of GTK, so with Gnome 3.18 I ended up re-basing Lavender on Adwaita instead.

Similar to the original Lavender theme, the goals were as follows:

  1. Tweak the colors to match the Orchis theme in order to work well with Sam Hewitt’s excellent Moka icons
  2. Minimize customization of the base theme files to keep the theme maintainable, so that as new versions of the base theme (Adwaita in this case) come out it's easy to upgrade
  3. As a corollary to #2, any CSS changes should reside in an external file if at all possible, as I really don't want to re-edit Adwaita every time a new version comes out.

So in a nutshell, this is a slight tweaking of Adwaita. If you don't like Adwaita you probably won't like this either. Here's a screenshot of what it looks like:

Screenshot from 2015-10-13 10-09-16

While I miss the dark titlebars from Numix, I'm happy with the results at this point. Hopefully once the Numix Sass port is ready for prime time I'll switch back to that as a base.

A couple of additional comments on this tweaked theme:

  1. I fixed the annoying line in Chrome between the titlebar and the tabs; it's also fixed in other apps like Firefox
  2. There is a Chrome extension which is just a copy of the Adwaita scrollbar extension with the color changed to match this theme
  3. Nautilus has been slightly tweaked with alternating row colors and sidebar color to match the original Orchis theme

You can download the Lavender theme here. Note this has only been tested with GTK 3.18; I'm not sure how it will work with earlier versions.

Victor Vran on Linux

I've been playing Victor Vran lately on Arch Linux; it's an excellent action RPG that works great on Linux. I'm playing it on my laptop using the discrete Nvidia GPU (860m) via Bumblebee with no issues at all. Visually the game is impressive with plenty of effects. Controller support is top notch: I'm using my PS3 controller over bluetooth with no issues at all.

I like the way the game plays, and character development is quite interesting: there are no skills; rather, the type of build you have is determined by what outfits, weapons, destiny cards and powers you have equipped. If you want to try a different build type, just swap items around; there's no worrying about making a mistake choosing the wrong skill.

Highly recommended; get it on Steam, currently 10% off.


Lavender GTK Theme

I really like the work that Sam Hewitt's Moka Project has put together with their icons, Moka gnome shell theme and Orchis GTK theme. Unfortunately the Orchis theme isn't fully compatible with GTK 3.14 and has a number of small issues that were annoying me. As much as I stuck with it due to the outstanding design of Orchis, those issues kept bugging me.

I initially ended up forking Orchis, fixing a few of the most egregious issues (at least to me), and submitting a pull request back to the project. As I was looking at fixing more things, though, it became apparent that this would take more time than I had available due to my general unfamiliarity with GTK theming and the nuances of the changes between 3.12 and 3.14 at the theme level. Also, while I'm comfortable with CSS, having done Java development for web sites at various points in my career, I'm certainly not a designer nor a CSS guru.

So I ended up looking for alternatives and came across the Numix theme. It was also well designed and supported 3.14; however, the color palette was not to my taste. Having said that, I had a look at how it was coded out of curiosity and noticed that it would be very easy to change its color palette. Rolling up my sleeves, I spent a couple of days playing with things and soon had a variant of Numix that used the Orchis color palette and assets and worked acceptably for my needs; thus was born Lavender.

Lavender Theme

Lavender is my first attempt at customizing a GTK theme. It includes the GTK 2, GTK 3 and metacity themes from Numix with minor modifications, to the best of my ability, to support the Orchis color scheme. It also replaces the Numix assets for radio buttons, checkboxes, etc. with the equivalent ones from Orchis. Lavender is not as attractive as Orchis, which is a superb design, but it gets the job done and works well with GTK 3.14, so it meets my needs.

Lavender also includes a slightly modified version of the Moka gnome shell theme. The primary change is a small purple bar shown to the left of running applications, similar to what Vertex does, to make them easier to pick out. As I get older I find I'm having trouble seeing contrast differences in blacks, so this change was geared to my older eyes.

Finally, let me be clear that my effort pales in comparison to what the folks who originally built Numix and Moka/Orchis have put into their themes. Quite literally this work would not exist without those two giants; if you like this theme please, please donate to both of these projects using the links below.

Moka Donation Page: mokaproject.com/donate/
Numix Donation Page: numixproject.org/

You can download Lavender at DeviantArt.

Create an UEFI Arch USB Rescue Stick

I've been running Arch for a while now and one of the items on my todo list has been to create a USB rescue stick in case my installation ever gets borked by an upgrade. The process of creating the stick is really straightforward and I thought I would document the steps I used here. The first thing you should review is the Arch Installation Guide, as it will be referred to at various points. This guide assumes you are already running Arch Linux on your PC.

The first step is to partition and format the stick. I used GParted but any similar tool will do. In GParted I created a GPT table on the stick followed by two partitions. The first partition is FAT32 and only 384 MB in size; it will be the EFI boot partition. Remember to set the boot flag for the EFI partition. The second partition will contain the Arch Linux installation and is set to Ext4. The screenshot below of GParted shows the final configuration.

GParted UEFI USB Stick

The next step is to get the Arch Linux install scripts on the PC you are going to use to create the USB rescue stick. This is simply done with the following command:

[bash]
# Provides pacstrap, genfstab and arch-chroot; run as root
pacman -S arch-install-scripts
[/bash]

Next we need to mount the partitions on the stick; replace [x] in the commands below with the right letter for your USB stick.

[bash]
# Mount the root partition, then the EFI partition under /mnt/boot
mount /dev/sd[x]2 /mnt
mkdir -p /mnt/boot
mount /dev/sd[x]1 /mnt/boot
[/bash]

Once the stick is mounted we can install Arch Linux on it and generate an initial fstab:

[bash]
# Install the base system onto the stick and generate an fstab from the current mounts
pacstrap -i /mnt base base-devel
genfstab -U -p /mnt >> /mnt/etc/fstab
[/bash]

Next, follow the install guide as per the Configure the system section (https://wiki.archlinux.org/index.php/installation_guide#Configure_the_system) and use arch-chroot to switch to the stick and do an initial configuration of the new install.
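The chroot step itself is a one-liner:

[bash]
# Change root into the new installation on the stick
arch-chroot /mnt
[/bash]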

I chose to use rEFInd as the boot loader as it is very straightforward to install and configure. To install and do an initial configuration, run the following from the change root shell (i.e. within arch-chroot /mnt), remembering to once again replace the [x] with the letter for your device.

[bash]
pacman -S refind-efi
refind-install --usedefault /dev/sd[x] --alldrivers
[/bash]

Once this is done, look in /etc/fstab and note the UUID of the second partition (the Ext4 partition) on the stick. Then edit the file /boot/EFI/BOOT/refind.conf and add the following at the end of the file, replacing the UUID in the options line with the one you noted from /etc/fstab.

[text]
menuentry "Linux Rescue" {
    icon EFI/BOOT/icons/os_linux.png
    loader /vmlinuz-linux
    initrd /initramfs-linux.img
    options "rw root=UUID=[YOUR UUID HERE]"
}
[/text]

I would also recommend commenting out all of the other menuentry items besides this one, but that's optional.

Once done, exit arch-chroot and restart your computer. Using the BIOS boot menu, select the stick to boot from and, if everything went well, it should boot to a text-based login. At this point you can continue configuring Arch for your specific needs.

GNOME Boxes and Samba Shares

This is a follow-up to my earlier post about using GNOME Boxes to manage a Windows virtual machine. One of the comments I made was that I used Samba on the host (Arch Linux) to share the host file system with the Windows guest. I originally mentioned this pretty superficially, and I got a comment asking for further details, so I thought it would make a good follow-up blog post.

This blog post assumes Arch Linux as the host; if you are using a different variant of Linux, check your distribution's documentation on installing Samba. For Arch Linux, the Arch wiki does an excellent job of explaining how to install and configure Samba, and this is what I followed with one exception.

That exception is that I opted to enable the smbd.socket service instead of smbd.service. Also, I didn't bother enabling nmbd.service, which is used for NetBIOS, since I only use Samba for my VM and not to share my local file system on the network at large.

Once you have followed the Arch install procedure, you need to create and modify smb.conf so that the host folders are available to the guest. As per the Arch wiki, this simply involves copying /etc/samba/smb.conf.default to /etc/samba/smb.conf and adding an entry for each folder at the end of the file. Here is my entry for the Documents folder as an example:

[text]
[Documents]
comment = Documents
browseable = yes
writeable = yes
path = /home/[username]/Documents
valid users = [username]
public = no
read only = no
create mask = 0700
directory mask = 0700
[/text]

Make sure to replace [username] in the above with the name of the user you log into Linux with. I keep things simple by using the same username/password in the host and the guest; if you don't, you may need to do some tinkering to enter credentials in the guest to access the shares.

Once this is done, run the Windows guest and open File Explorer. I believe the default IP for the host in QEMU when using the slirp network stack is 10.0.2.2, so in Windows File Explorer try accessing the network share using \\10.0.2.2\Documents. If everything is configured correctly you should see the host's folders and files under Documents. If it works, you can opt to create permanent shares by creating mapped drives (right click Computer in Windows File Explorer and select Map Network Drive…).

So that's it in a nutshell; pretty straightforward and hopefully it helps.

WPS Office (Kingsoft), Arch and Gnome Shell

I've used LibreOffice for quite a while, but mostly as a document viewer as I've found its compatibility with the various complex MS Office documents I have to deal with somewhat lacking. It's also a large package and a bit slow for my tastes, even though performance is much improved in later versions.

I've heard a lot of positive things about WPS Office (formerly Kingsoft) and I decided to give it a whirl. I haven't used it long enough to post any kind of review; however, I did want to comment on two minor issues I ran into and how I fixed them.

The first issue I had is that the font selector in WPS Writer always reverted to Dejavu whenever I tried to use a Microsoft font like Arial or Times New Roman. After much head scratching, I realized that the Infinality font package (highly recommended for good looking fonts in Arch) had configured font substitution in /etc/fonts/conf.d. After reading the documentation here, I realized I needed to run the command sudo fc-presets set and select the ms option; after that the issue was resolved.

As an aside, dealing with Microsoft fonts in Linux is a bit of a pain. I had originally used the ttf-ms-win8 package, but this required you to have the actual fonts on hand, which was a chore to assemble. Also, every time the package was updated I would be missing new fonts added to it that I would then have to track down. As part of sorting out my font issue in WPS Office, I removed the ttf-ms-win8 package and simply copied the fonts over manually from my Windows partition as per the instructions here.

The second issue I had was that the Gnome Shell dock wasn't displaying the right icon when running the WPS Office applications, as per the picture below. Notice the ugly default Writer icon (the blue W) instead of the nice icon from the icon theme I am using.

Screenshot from 2014-11-12 13:43:36

Another issue I found was that the dock menu option "Add to Favorites", which appears when right clicking the icon in the dock, was not showing, and thus I could not pin the app to the dock.

After some investigation, I found out that the Gnome Shell dock expects the name of the desktop file to match the binary. To get this working, I copied the WPS desktop files to new names as per the following table.

Application    Existing                  New
Writer         wps-office-wps.desktop    wps.desktop
Spreadsheet    wps-office-et.desktop     et.desktop
Presentation   wps-office-wpp.desktop    wpp.desktop

Note that I could have opted to simply move the existing files to the new names, but that means that whenever the package is upgraded the files would re-appear, resulting in duplicate applications showing up in Shell. Instead, I simply copied the desktop files and then used menulibre to hide the old ones.
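The copy itself is straightforward, assuming the desktop files live in the standard /usr/share/applications directory:

[bash]
cd /usr/share/applications
# Copy rather than move so a package upgrade doesn't resurrect duplicates
sudo cp wps-office-wps.desktop wps.desktop
sudo cp wps-office-et.desktop et.desktop
sudo cp wps-office-wpp.desktop wpp.desktop
[/bash]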

After these changes, the lovely icons from the icon theme started appearing, as per the picture below, and the dock menu works as expected.

Screenshot from 2014-11-12 15:32:09

Borderlands 2 and Linux

The Borderlands 2 port for Linux was released this week and was initially on sale for $5, so I couldn't resist picking up a copy. I fired up the game last night and played for a couple of hours; this is a quality port for Linux. On my laptop under Arch Linux with an Nvidia 860m, the game played flawlessly at 1920×1200 with a good framerate and smooth play. The only issue I had was a bit of audio stuttering during the opening movie sequence, but the game itself ran great.

I highly recommend this game for Linux; it's well worth picking up, particularly if you have Nvidia hardware.

Gnome Boxes and Windows Guest

I recently decided to try switching my Windows virtual machine from VirtualBox to Gnome Boxes, simply to see if the integration with Gnome Shell would be better. What follows is a summary of my experience getting things running optimally; hopefully it will be helpful to others interested in trying the switch. Note that I was using Arch and Gnome 3.12 at the time this was written.

While not strictly necessary, the first step for me was to uninstall VirtualBox and all of its dependent packages such as the guest additions and host packages. After that, I installed the gnome-boxes package, which automatically pulls in Qemu. I would also recommend installing virt-manager and virt-viewer in case you need to perform more advanced tasks than Boxes will allow.

With the packages installed, I tried Boxes and found I could not do anything because libvirtd was not starting automatically as it should. A bit of explanation: under the hood Gnome Boxes uses libvirt, a library that abstracts virtualization implementations, to interact with Qemu. In turn libvirt has two different scopes for virtual machines: system and session.

The system scope is when libvirtd is running as a daemon and the virtual machines have wider access to resources due to the higher privileges of the daemon. Session scope is when the VM is running in the current user's context and only has access to what the current user is permitted.

Gnome Boxes uses the session scope, so there is no need to enable the libvirtd daemon in Arch for Boxes; however, as mentioned, there is an annoying issue where libvirtd doesn't start automatically as it should. Therefore when you start up Boxes you get an error message similar to the one below:

Unable to open qemu+unix:///session: Failed to connect socket to
'/run/user/1000/libvirt/libvirt-sock': No such file or directory

You can check out this thread (https://bbs.archlinux.org/viewtopic.php?id=186874) for more information on it; I simply worked around it by adding a script libvirtd.sh to /etc/profile.d to start it on login:

[bash]
# /etc/profile.d/libvirtd.sh -- start libvirtd in the background on login
libvirtd -d
[/bash]

Make sure to include the -d switch to run it in the background, otherwise your login will hang.

Once that was fixed, I used Boxes to convert my Windows 7 VirtualBox image over to Qemu and fired it up. It ran fine but performance was terrible; after doing some reading I found there are a few must-do items to achieve decent Windows performance under Boxes and Qemu:

a. Uninstall the VirtualBox Guest Additions if you have them installed.

b. Install the Windows Guest Tools here, which include the QXL video driver, clipboard support, etc. This makes the VM experience much more performant and seamless, similar to what the VirtualBox Guest Additions provide.

c. Install the virtio drivers in the Windows guest for optimum network performance in the VM; they can be found here.

At this point I had a well-running VM, but I still had an issue where whenever I closed and re-opened the Windows VM, Boxes would complain that it couldn't be started and I had to start it from scratch. This was really irritating, but after a fair amount of troubleshooting I discovered the cause was the invtsc CPU feature being enabled in the VM. Using virsh, I modified the VM definition for the image in ~/.local/share/gnome-boxes/images to remove this feature and everything worked fine afterwards.

Finally, the last issue: in VirtualBox sharing folders between the host and guest is a snap, while in Boxes there is no way to configure this easily in the UI. Looking at Qemu and KVM, it looks like there is a way to directly share folders, but I opted to simply use Samba on the host to expose them to the Windows guest. To do this, I installed the samba package in Arch, copied the file /etc/samba/smb.conf.default to /etc/samba/smb.conf and modified the newly copied file to expose the desired shares. In the guest, you can use the gateway address (10.0.2.2) to access the folders, i.e. \\10.0.2.2\Documents.

So after a bit of work everything is up and running at about the same performance level as VirtualBox. I love the integration of Boxes with Gnome Shell, such as the search provider, and will likely stay with Boxes. I do have one minor gripe: I wish there was an option to stop Boxes from starting my VM in full screen mode; however, apparently this is addressed in Gnome 3.14, which should be available for Arch shortly.

Update on Clevo w230ss Review

I wrote a review of the Clevo w230ss laptop last month, specifically covering the Linux perspective. I'm pleased to report that the laptop still works great and I remain very satisfied with it. However, I've found one minor issue with the laptop and Linux that I thought I'd mention for others who may be interested in this combination.

I was on a plane using the laptop to watch a movie when I had to get up to let a seat mate go to the washroom. I closed the lid, but when I sat back down and opened it to resume the movie I couldn't get any sound out of the headphone jack. The only thing that solved it was a cold boot (i.e. powering off fully and starting again); even a restart didn't do the trick.

Turns out this is a known bug in the kernel, and you can view the bug report here. Due to complaints about the quality of the headphone output in the previous model, the w230st, Clevo added a DAC to the w230ss, which is triggering this issue. Hopefully this will get addressed shortly.