I have a single-server homelab (Ryzen 3900X with 128 GB of RAM) that I use to run OpenShift (plus it doubles as my gaming PC). The host is running Fedora 32 at the time of this writing, and I run OpenShift on libvirt using a playbook created by Luis Javier Arizmendi Alonso that sets everything up, including NFS storage. The NFS server runs on the host machine, and the OpenShift nodes running in VMs access it via the host IP to provision PVs. Luis's playbook sets up a dynamic NFS provisioner in OpenShift, and it all works wonderfully.
However, there are times when you do need block storage. While NFS is capable of handling some workloads that would traditionally require block storage (small databases, for example), I was having issues with more intensive workloads like Kafka. Fortunately I had a spare 500 GB SSD lying around from my retired gaming computer, and I figured I could drop it into my homelab server and use it as block storage. Thus began my journey of learning far more about iSCSI than I ever wanted to know as a developer…
Here are the steps I used to get static block storage going. I'm definitely interested in better ways to do this; in particular, if someone has dynamic block storage going in libvirt, drop me a line! Note that these instructions were written for Fedora 32, which is what my host is running.
The first step is to partition the SSD using LVM into chunks that we can eventually serve up as PVs in OpenShift. This process is pretty straightforward: first we create a physical volume and a volume group called 'iscsi'. Note that my SSD is on '/dev/sda'; your mileage will vary, so replace '/dev/sdX' below with whatever device you are using, and be careful not to overwrite something that is in use.
pvcreate /dev/sdX
vgcreate iscsi /dev/sdX
Next we create a logical volume. I've opted to create a thin pool, which means storage doesn't get allocated until it's actually used. This lets you over-provision storage if you need to, though obviously some care is required. To create the thin pool, run the following:
lvcreate -l 100%FREE -T iscsi/thin_pool
Once we have our pool created, we need to create the actual volumes that will be available as PVs. I've chosen to create a mix of PV sizes, as shown below; feel free to vary them depending on your use case. Having said that, note the naming convention I am using, which flows up into our iSCSI and PV configuration; I highly recommend you use a similar convention for consistency.
lvcreate -V 100G -T iscsi/thin_pool -n block0_100
lvcreate -V 100G -T iscsi/thin_pool -n block1_100
lvcreate -V 50G -T iscsi/thin_pool -n block2_50
lvcreate -V 50G -T iscsi/thin_pool -n block3_50
lvcreate -V 10G -T iscsi/thin_pool -n block4_10
lvcreate -V 10G -T iscsi/thin_pool -n block5_10
lvcreate -V 10G -T iscsi/thin_pool -n block6_10
lvcreate -V 10G -T iscsi/thin_pool -n block7_10
lvcreate -V 10G -T iscsi/thin_pool -n block8_10
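Since the volume names encode the index and size, the nine lvcreate calls above lend themselves to a small loop. Here is a sketch that only prints the commands (a dry run via echo; remove the echo to actually create the volumes):

```shell
# Dry-run generator for the lvcreate commands above.
# Each entry is index:size; echo prints the command instead of running it.
for spec in 0:100 1:100 2:50 3:50 4:10 5:10 6:10 7:10 8:10; do
  idx=${spec%%:*}    # part before the colon, e.g. 0
  size=${spec##*:}   # part after the colon, e.g. 100
  echo lvcreate -V "${size}G" -T iscsi/thin_pool -n "block${idx}_${size}"
done
```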
Note that if you make a mistake and want to remove a volume, you can do so with the following command:
lvremove iscsi/block5_10
Next we need to install some iSCSI packages on the host in order to configure and run the iSCSI daemon:
dnf install iscsi-initiator-utils targetcli
I've opted to use targetcli to configure iSCSI rather than hand-bombing a bunch of files; it provides a nice CLI over the process, which I, being an iSCSI newbie, greatly appreciated. When you run targetcli it will drop you into a prompt as follows:
[gnunn@lab-server ~]$ sudo targetcli
[sudo] password for gnunn:
targetcli shell version 2.1.53
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/>
The prompt basically follows standard Linux file system conventions, and you can use commands like 'cd' and 'ls' to navigate it. The first thing we are going to do is create the block devices that map to our LVM logical volumes. In the targetcli prompt this is done with the following commands; note the naming convention being used, which ties these devices to our volumes:
cd backstores/block
create dev=/dev/mapper/iscsi-block0_100 name=disk0-100
create dev=/dev/mapper/iscsi-block1_100 name=disk1-100
create dev=/dev/mapper/iscsi-block2_50 name=disk2-50
create dev=/dev/mapper/iscsi-block3_50 name=disk3-50
create dev=/dev/mapper/iscsi-block4_10 name=disk4-10
create dev=/dev/mapper/iscsi-block5_10 name=disk5-10
create dev=/dev/mapper/iscsi-block6_10 name=disk6-10
create dev=/dev/mapper/iscsi-block7_10 name=disk7-10
create dev=/dev/mapper/iscsi-block8_10 name=disk8-10
Next we create the iSCSI target. Note that my host name is lab-server, so I used that in the name below; feel free to modify it as you prefer. I'll admit I'm still a little fuzzy on iSCSI naming conventions, so suggestions are welcome from those of you with more experience.
cd /iscsi
create iqn.2003-01.org.linux-iscsi.lab-server:openshift
Next we create the LUNs, which map to our block devices and represent the storage that will be exported:
cd /iscsi/iqn.2003-01.org.linux-iscsi.lab-server:openshift/tpg1/luns
create storage_object=/backstores/block/disk0-100
create storage_object=/backstores/block/disk1-100
create storage_object=/backstores/block/disk2-50
create storage_object=/backstores/block/disk3-50
create storage_object=/backstores/block/disk4-10
create storage_object=/backstores/block/disk5-10
create storage_object=/backstores/block/disk6-10
create storage_object=/backstores/block/disk7-10
create storage_object=/backstores/block/disk8-10
Next we create the ACLs, which control access to the LUNs. Note that in my case my lab server is running on a private network behind a firewall, so I have not bothered with any sort of authentication. If that is not the case for you, I definitely recommend spending some time on adding it.
cd /iscsi/iqn.2003-01.org.linux-iscsi.lab-server:openshift/tpg1/acls
create iqn.2003-01.org.linux-iscsi.lab-server:client
create iqn.2003-01.org.linux-iscsi.lab-server:openshift-client
Note that I've created two ACLs: one for a generic client and one specific to my OpenShift cluster. These names are initiator IQNs, so a client is only allowed to connect if the initiator name it presents (on Linux, set in /etc/iscsi/initiatorname.iscsi) matches one of these entries.
Finally, the last step is the portal. A default portal is created that binds to 0.0.0.0, i.e. all interfaces, on port 3260. My preference is to remove it and bind to a specific IP address on the host. My host has two Ethernet ports, so here I am binding to the 2.5 gigabit port, which has a static IP address; your IP address will obviously vary.
cd /iscsi/iqn.2003-01.org.linux-iscsi.lab-server:openshift/tpg1/portals
delete 0.0.0.0 ip_port=3260
create 192.168.1.83
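As an aside, targetcli also accepts commands as arguments, so the whole interactive session above can be captured in a repeatable script. Here is a sketch that only prints the commands (the run helper echoes instead of executing; drop the echo and run as root to apply, with saveconfig at the end to make sure the configuration is persisted):

```shell
#!/bin/sh
# Dry-run sketch of the interactive targetcli session as a script.
# run() echoes each command; remove the echo to execute for real.
TARGET=iqn.2003-01.org.linux-iscsi.lab-server:openshift
run() { echo targetcli "$@"; }

# Block backstores for each LVM volume (index:size pairs).
for spec in 0:100 1:100 2:50 3:50 4:10 5:10 6:10 7:10 8:10; do
  idx=${spec%%:*}; size=${spec##*:}
  run /backstores/block create dev="/dev/mapper/iscsi-block${idx}_${size}" name="disk${idx}-${size}"
done

run /iscsi create "$TARGET"

# One LUN per backstore.
for spec in 0:100 1:100 2:50 3:50 4:10 5:10 6:10 7:10 8:10; do
  idx=${spec%%:*}; size=${spec##*:}
  run "/iscsi/$TARGET/tpg1/luns" create storage_object="/backstores/block/disk${idx}-${size}"
done

run "/iscsi/$TARGET/tpg1/acls" create iqn.2003-01.org.linux-iscsi.lab-server:client
run "/iscsi/$TARGET/tpg1/acls" create iqn.2003-01.org.linux-iscsi.lab-server:openshift-client
run "/iscsi/$TARGET/tpg1/portals" delete 0.0.0.0 ip_port=3260
run "/iscsi/$TARGET/tpg1/portals" create 192.168.1.83
run saveconfig
```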
Once you have done all this, running 'ls /' in targetcli should produce a result similar to the following:
o- / .............................................................................. [...]
  o- backstores ................................................................... [...]
  | o- block ....................................................... [Storage Objects: 9]
  | | o- disk0-100 .......... [/dev/mapper/iscsi-block0_100 (100.0GiB) write-thru activated]
  | | | o- alua ........................................................ [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............................ [ALUA state: Active/optimized]
  | | o- disk1-100 .......... [/dev/mapper/iscsi-block1_100 (100.0GiB) write-thru activated]
  | | | o- alua ........................................................ [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............................ [ALUA state: Active/optimized]
  | | o- disk2-50 ............ [/dev/mapper/iscsi-block2_50 (50.0GiB) write-thru activated]
  | | | o- alua ........................................................ [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............................ [ALUA state: Active/optimized]
  | | o- disk3-50 ............ [/dev/mapper/iscsi-block3_50 (50.0GiB) write-thru activated]
  | | | o- alua ........................................................ [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............................ [ALUA state: Active/optimized]
  | | o- disk4-10 ............ [/dev/mapper/iscsi-block4_10 (10.0GiB) write-thru activated]
  | | | o- alua ........................................................ [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............................ [ALUA state: Active/optimized]
  | | o- disk5-10 ............ [/dev/mapper/iscsi-block5_10 (10.0GiB) write-thru activated]
  | | | o- alua ........................................................ [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............................ [ALUA state: Active/optimized]
  | | o- disk6-10 ............ [/dev/mapper/iscsi-block6_10 (10.0GiB) write-thru activated]
  | | | o- alua ........................................................ [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............................ [ALUA state: Active/optimized]
  | | o- disk7-10 ............ [/dev/mapper/iscsi-block7_10 (10.0GiB) write-thru activated]
  | | | o- alua ........................................................ [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ............................ [ALUA state: Active/optimized]
  | | o- disk8-10 ............ [/dev/mapper/iscsi-block8_10 (10.0GiB) write-thru activated]
  | |   o- alua ........................................................ [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ............................ [ALUA state: Active/optimized]
  | o- fileio ...................................................... [Storage Objects: 0]
  | o- pscsi ....................................................... [Storage Objects: 0]
  | o- ramdisk ..................................................... [Storage Objects: 0]
  o- iscsi ................................................................. [Targets: 1]
  | o- iqn.2003-01.org.linux-iscsi.lab-server:openshift ......................... [TPGs: 1]
  |   o- tpg1 .................................................... [no-gen-acls, no-auth]
  |     o- acls ............................................................... [ACLs: 2]
  |     | o- iqn.2003-01.org.linux-iscsi.lab-server:client ............. [Mapped LUNs: 9]
  |     | | o- mapped_lun0 .................................. [lun0 block/disk0-100 (rw)]
  |     | | o- mapped_lun1 .................................. [lun1 block/disk1-100 (rw)]
  |     | | o- mapped_lun2 ................................... [lun2 block/disk2-50 (rw)]
  |     | | o- mapped_lun3 ................................... [lun3 block/disk3-50 (rw)]
  |     | | o- mapped_lun4 ................................... [lun4 block/disk4-10 (rw)]
  |     | | o- mapped_lun5 ................................... [lun5 block/disk5-10 (rw)]
  |     | | o- mapped_lun6 ................................... [lun6 block/disk6-10 (rw)]
  |     | | o- mapped_lun7 ................................... [lun7 block/disk7-10 (rw)]
  |     | | o- mapped_lun8 ................................... [lun8 block/disk8-10 (rw)]
  |     | o- iqn.2003-01.org.linux-iscsi.lab-server:openshift-client ... [Mapped LUNs: 9]
  |     |   o- mapped_lun0 .................................. [lun0 block/disk0-100 (rw)]
  |     |   o- mapped_lun1 .................................. [lun1 block/disk1-100 (rw)]
  |     |   o- mapped_lun2 ................................... [lun2 block/disk2-50 (rw)]
  |     |   o- mapped_lun3 ................................... [lun3 block/disk3-50 (rw)]
  |     |   o- mapped_lun4 ................................... [lun4 block/disk4-10 (rw)]
  |     |   o- mapped_lun5 ................................... [lun5 block/disk5-10 (rw)]
  |     |   o- mapped_lun6 ................................... [lun6 block/disk6-10 (rw)]
  |     |   o- mapped_lun7 ................................... [lun7 block/disk7-10 (rw)]
  |     |   o- mapped_lun8 ................................... [lun8 block/disk8-10 (rw)]
  |     o- luns ............................................................... [LUNs: 9]
  |     | o- lun0 .......... [block/disk0-100 (/dev/mapper/iscsi-block0_100) (default_tg_pt_gp)]
  |     | o- lun1 .......... [block/disk1-100 (/dev/mapper/iscsi-block1_100) (default_tg_pt_gp)]
  |     | o- lun2 ............ [block/disk2-50 (/dev/mapper/iscsi-block2_50) (default_tg_pt_gp)]
  |     | o- lun3 ............ [block/disk3-50 (/dev/mapper/iscsi-block3_50) (default_tg_pt_gp)]
  |     | o- lun4 ............ [block/disk4-10 (/dev/mapper/iscsi-block4_10) (default_tg_pt_gp)]
  |     | o- lun5 ............ [block/disk5-10 (/dev/mapper/iscsi-block5_10) (default_tg_pt_gp)]
  |     | o- lun6 ............ [block/disk6-10 (/dev/mapper/iscsi-block6_10) (default_tg_pt_gp)]
  |     | o- lun7 ............ [block/disk7-10 (/dev/mapper/iscsi-block7_10) (default_tg_pt_gp)]
  |     | o- lun8 ............ [block/disk8-10 (/dev/mapper/iscsi-block8_10) (default_tg_pt_gp)]
  |     o- portals ......................................................... [Portals: 1]
  |       o- 192.168.1.83:3260 .................................................... [OK]
  o- loopback .............................................................. [Targets: 0]
  o- vhost ................................................................. [Targets: 0]
At this point you can leave targetcli by typing 'exit' at the prompt. Next we need to open the iSCSI port in firewalld and enable the services:
firewall-cmd --add-service=iscsi-target --permanent
firewall-cmd --reload
systemctl enable iscsid
systemctl start iscsid
systemctl enable target
systemctl start target
Note that the target service restores the configuration you created in targetcli whenever the host restarts. If you do not enable and start it, you will find an empty configuration the next time the computer boots.
Now that the host side is configured, we can go ahead and create the static PVs for OpenShift as well as a non-provisioning storage class. You can view the PVs I'm using in git here; I won't paste them into the blog since it's a long file. We wrap these PVs in a non-provisioning storage class so we can request them easily on demand from our applications.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: iscsi
provisioner: no-provisioning
parameters:
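For reference, here is a sketch of what one of the static iSCSI PVs can look like, wired up to disk0-100/lun0 on the portal configured above. The PV name, fsType, and reclaim policy here are my own illustrative choices; double-check the lun number against the targetcli 'ls /' output:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: iscsi-block0-100
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: iscsi
  persistentVolumeReclaimPolicy: Retain
  iscsi:
    targetPortal: 192.168.1.83:3260
    iqn: iqn.2003-01.org.linux-iscsi.lab-server:openshift
    lun: 0
    fsType: ext4
    readOnly: false
```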
To test out the PVs, here is an example PVC:
apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "block"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "100Gi"
  storageClassName: "iscsi"
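And a minimal pod to exercise the claim; the pod name, image, and mount path here are just placeholders for illustration:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: block-test
spec:
  containers:
    - name: shell
      image: registry.access.redhat.com/ubi8/ubi-minimal
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: data
          mountPath: /data   # the iSCSI-backed volume appears here
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: block     # the PVC defined above
```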
And that's it: you now have access to block storage in your homelab environment. I've used it quite a bit with Kafka and it works great. I'm looking into benchmarking it against AWS EBS to see how the performance compares, and I will follow up in another blog.