OpenShift Home Lab and Block Storage

I have a single-server homelab environment (Ryzen 3900X with 128 GB of RAM) that I use to run OpenShift (plus it doubles as my gaming PC). The host is running Fedora 32 at the time of this writing, and I run OpenShift on libvirt using a playbook created by Luis Javier Arizmendi Alonso that sets everything up, including NFS storage. The NFS server runs on the host machine, and the OpenShift nodes running in VMs access the NFS server via the host IP to provision PVs. Luis’s playbook sets up a dynamic NFS provisioner in OpenShift and it all works wonderfully.

However, there are times when you do need block storage. While NFS is capable of handling some loads that would traditionally require block, small databases for example, I was having issues with more intensive workloads like Kafka. Fortunately I had a spare 500 GB SSD lying around from my retired gaming computer, and I figured I could drop it into my homelab server and use it as block storage. Hence began my journey of learning way more about iSCSI than I ever wanted to know as a developer…

Here are the steps I used to get static block storage going. I’m definitely interested if there are better ways to do it; in particular, if someone has dynamic block storage going in libvirt, drop me a line! Note these instructions were written for Fedora 32, which is what my host is running.

The first step is to partition the SSD using LVM into chunks that we can eventually serve up as PVs in OpenShift. This process is pretty straightforward: first we create a physical volume and a volume group called ‘iscsi’. Note my SSD is on ‘/dev/sda’; your mileage will vary, so replace ‘/dev/sdX’ below with whatever device you are using. Be careful not to overwrite something that is in use.

pvcreate /dev/sdX
vgcreate iscsi /dev/sdX

Next we create a logical volume. I’ve opted to create a thin pool, which means that storage doesn’t get allocated until it’s actually used. This allows you to over-provision storage if you need to, though obviously some care is required. To create the thin pool run the following:

lvcreate -l 100%FREE -T iscsi/thin_pool

Once we have our pool created, we need to create the actual volumes that will be available as PVs. I’ve chosen to create a mix of PV sizes as per below; feel free to vary this depending on your use case. Having said that, note the naming convention I am using, which will flow up into our iSCSI and PV configuration; I highly recommend you use a similar convention for consistency.

lvcreate -V 100G -T iscsi/thin_pool -n block0_100
lvcreate -V 100G -T iscsi/thin_pool -n block1_100
lvcreate -V 50G -T iscsi/thin_pool -n block2_50
lvcreate -V 50G -T iscsi/thin_pool -n block3_50
lvcreate -V 10G -T iscsi/thin_pool -n block4_10
lvcreate -V 10G -T iscsi/thin_pool -n block5_10
lvcreate -V 10G -T iscsi/thin_pool -n block6_10
lvcreate -V 10G -T iscsi/thin_pool -n block7_10
lvcreate -V 10G -T iscsi/thin_pool -n block8_10
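
To verify the volumes and keep an eye on how much of the thin pool is actually being consumed, you can list the logical volumes in the volume group:

lvs iscsi

The Data% column shows how much of each thin volume, and of the pool itself, has actually been allocated, which is worth watching if you over-provision.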

Note if you make a mistake and want to remove a volume, you can do so by running the following command:

lvremove iscsi/block5_10

Next we need to install a couple of iSCSI packages in order to configure and run the iSCSI daemon on the host.

dnf install iscsi-initiator-utils targetcli

I’ve opted to use targetcli to configure iSCSI rather than hand bombing a bunch of files; it provides a nice CLI interface over the process which I, being an iSCSI newbie, greatly appreciated. When you run targetcli it will drop you into a prompt as follows:

[gnunn@lab-server ~]$ sudo targetcli
[sudo] password for gnunn: 
targetcli shell version 2.1.53
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.

/> 

The prompt basically follows standard Linux file system conventions and you can use commands like ‘cd’ and ‘ls’ to navigate it. The first thing we are going to do is create our block devices, which map to the LVM logical volumes we created earlier. In the targetcli prompt this is done with the following commands; note the naming convention being used, which ties these devices to our volumes:

cd backstores/block
create dev=/dev/mapper/iscsi-block0_100 name=disk0-100
create dev=/dev/mapper/iscsi-block1_100 name=disk1-100
create dev=/dev/mapper/iscsi-block2_50 name=disk2-50
create dev=/dev/mapper/iscsi-block3_50 name=disk3-50
create dev=/dev/mapper/iscsi-block4_10 name=disk4-10
create dev=/dev/mapper/iscsi-block5_10 name=disk5-10
create dev=/dev/mapper/iscsi-block6_10 name=disk6-10
create dev=/dev/mapper/iscsi-block7_10 name=disk7-10
create dev=/dev/mapper/iscsi-block8_10 name=disk8-10

Next we create the iSCSI target. Note that my host name is lab-server so I used that in the name below; feel free to modify as you prefer. I’ll admit I’m still a little fuzzy on iSCSI naming conventions, so suggestions are welcome from those of you with more experience.

cd /iscsi
create iqn.2003-01.org.linux-iscsi.lab-server:openshift

Next we create the LUNs; they map to our block devices and represent the storage that will be available:

cd /iscsi/iqn.2003-01.org.linux-iscsi.lab-server:openshift/tpg1/luns
create storage_object=/backstores/block/disk0-100
create storage_object=/backstores/block/disk1-100
create storage_object=/backstores/block/disk2-50
create storage_object=/backstores/block/disk3-50
create storage_object=/backstores/block/disk4-10
create storage_object=/backstores/block/disk5-10
create storage_object=/backstores/block/disk6-10
create storage_object=/backstores/block/disk7-10
create storage_object=/backstores/block/disk8-10

Next we create the ACLs, which control access to the LUNs. Note in my case my lab server is running on a private network behind a firewall, so I have not bothered with any sort of authentication. If this is not the case for you then I would definitely recommend spending some time looking into adding it.

cd /iscsi/iqn.2003-01.org.linux-iscsi.lab-server:openshift/tpg1/acls
create iqn.2003-01.org.linux-iscsi.lab-server:client
create iqn.2003-01.org.linux-iscsi.lab-server:openshift-client

Note I’ve created two ACLs, one as a generic client and one specific to my OpenShift cluster.

The last step is the portal. A default portal will be created that binds to port 3260 on all interfaces (0.0.0.0); my preference is to remove it and bind to a specific IP address on the host. My host has two Ethernet ports, so here I am binding it to the 2.5 gigabit port, which has a static IP address; your IP address will obviously vary.

cd /iscsi/iqn.2003-01.org.linux-iscsi.lab-server:openshift/tpg1/portals
delete 0.0.0.0 ip_port=3260
create 192.168.1.83

Once you have done all this, you should have a result that looks similar to the following when you run ‘ls /’ in targetcli:

o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 9]
  | | o- disk0-100 .................................................. [/dev/mapper/iscsi-block0_100 (100.0GiB) write-thru activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- disk1-100 .................................................. [/dev/mapper/iscsi-block1_100 (100.0GiB) write-thru activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- disk2-50 ..................................................... [/dev/mapper/iscsi-block2_50 (50.0GiB) write-thru activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- disk3-50 ..................................................... [/dev/mapper/iscsi-block3_50 (50.0GiB) write-thru activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- disk4-10 ..................................................... [/dev/mapper/iscsi-block4_10 (10.0GiB) write-thru activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- disk5-10 ..................................................... [/dev/mapper/iscsi-block5_10 (10.0GiB) write-thru activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- disk6-10 ..................................................... [/dev/mapper/iscsi-block6_10 (10.0GiB) write-thru activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- disk7-10 ..................................................... [/dev/mapper/iscsi-block7_10 (10.0GiB) write-thru activated]
  | | | o- alua ................................................................................................... [ALUA Groups: 1]
  | | |   o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | | o- disk8-10 ..................................................... [/dev/mapper/iscsi-block8_10 (10.0GiB) write-thru activated]
  | |   o- alua ................................................................................................... [ALUA Groups: 1]
  | |     o- default_tg_pt_gp ....................................................................... [ALUA state: Active/optimized]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 1]
  | o- iqn.2003-01.org.linux-iscsi.lab-server:openshift .................................................................. [TPGs: 1]
  |   o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
  |     o- acls .......................................................................................................... [ACLs: 2]
  |     | o- iqn.2003-01.org.linux-iscsi.lab-server:client ........................................................ [Mapped LUNs: 9]
  |     | | o- mapped_lun0 ............................................................................. [lun0 block/disk0-100 (rw)]
  |     | | o- mapped_lun1 ............................................................................. [lun1 block/disk1-100 (rw)]
  |     | | o- mapped_lun2 .............................................................................. [lun2 block/disk2-50 (rw)]
  |     | | o- mapped_lun3 .............................................................................. [lun3 block/disk3-50 (rw)]
  |     | | o- mapped_lun4 .............................................................................. [lun4 block/disk4-10 (rw)]
  |     | | o- mapped_lun5 .............................................................................. [lun5 block/disk5-10 (rw)]
  |     | | o- mapped_lun6 .............................................................................. [lun6 block/disk6-10 (rw)]
  |     | | o- mapped_lun7 .............................................................................. [lun7 block/disk7-10 (rw)]
  |     | | o- mapped_lun8 .............................................................................. [lun8 block/disk8-10 (rw)]
  |     | o- iqn.2003-01.org.linux-iscsi.lab-server:openshift-client .............................................. [Mapped LUNs: 9]
  |     |   o- mapped_lun0 ............................................................................. [lun0 block/disk0-100 (rw)]
  |     |   o- mapped_lun1 ............................................................................. [lun1 block/disk1-100 (rw)]
  |     |   o- mapped_lun2 .............................................................................. [lun2 block/disk2-50 (rw)]
  |     |   o- mapped_lun3 .............................................................................. [lun3 block/disk3-50 (rw)]
  |     |   o- mapped_lun4 .............................................................................. [lun4 block/disk4-10 (rw)]
  |     |   o- mapped_lun5 .............................................................................. [lun5 block/disk5-10 (rw)]
  |     |   o- mapped_lun6 .............................................................................. [lun6 block/disk6-10 (rw)]
  |     |   o- mapped_lun7 .............................................................................. [lun7 block/disk7-10 (rw)]
  |     |   o- mapped_lun8 .............................................................................. [lun8 block/disk8-10 (rw)]
  |     o- luns .......................................................................................................... [LUNs: 9]
  |     | o- lun0 .............................................. [block/disk0-100 (/dev/mapper/iscsi-block0_100) (default_tg_pt_gp)]
  |     | o- lun1 .............................................. [block/disk1-100 (/dev/mapper/iscsi-block1_100) (default_tg_pt_gp)]
  |     | o- lun2 ................................................ [block/disk2-50 (/dev/mapper/iscsi-block2_50) (default_tg_pt_gp)]
  |     | o- lun3 ................................................ [block/disk3-50 (/dev/mapper/iscsi-block3_50) (default_tg_pt_gp)]
  |     | o- lun4 ................................................ [block/disk4-10 (/dev/mapper/iscsi-block4_10) (default_tg_pt_gp)]
  |     | o- lun5 ................................................ [block/disk5-10 (/dev/mapper/iscsi-block5_10) (default_tg_pt_gp)]
  |     | o- lun6 ................................................ [block/disk6-10 (/dev/mapper/iscsi-block6_10) (default_tg_pt_gp)]
  |     | o- lun7 ................................................ [block/disk7-10 (/dev/mapper/iscsi-block7_10) (default_tg_pt_gp)]
  |     | o- lun8 ................................................ [block/disk8-10 (/dev/mapper/iscsi-block8_10) (default_tg_pt_gp)]
  |     o- portals .................................................................................................... [Portals: 1]
  |       o- 192.168.1.83:3260 ................................................................................................ [OK]
  o- loopback ......................................................................................................... [Targets: 0]
  o- vhost ............................................................................................................ [Targets: 0]

At this point you can exit targetcli by typing ‘exit’ at the prompt. Next we need to expose the iSCSI port in firewalld and enable the services:

firewall-cmd --add-service=iscsi-target --permanent
firewall-cmd --reload
systemctl enable iscsid
systemctl start iscsid
systemctl enable target
systemctl start target

Note that the target service ensures the configuration you created in targetcli is restored whenever the host is restarted. If you do not enable and start it, you will find an empty configuration the next time the machine boots.
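
Also note that targetcli saves the configuration when you exit cleanly; if you tweak things later and want to persist the changes immediately, you can do so explicitly:

sudo targetcli saveconfig

This writes the running configuration to /etc/target/saveconfig.json, which is what the target service restores at boot.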

Now that the host side is configured, we can go ahead and create the static PVs for OpenShift as well as the non-provisioning storage class. You can view the PVs I’m using in git here; I won’t paste them all into the blog since it’s a long file. We wrap these PVs in a non-provisioning storage class so we can request them easily on demand from our applications.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: iscsi
provisioner: no-provisioning
parameters:
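
For reference, each static PV points at one of the LUNs via the iSCSI volume source. Below is a sketch of a single PV matching the first 100 GB LUN; the portal and IQN come from the targetcli configuration above, so adjust them to match your own setup. Note as well that the initiator name configured on your nodes (in /etc/iscsi/initiatorname.iscsi) needs to match one of the ACLs created earlier for the login to succeed.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: block0-100
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: iscsi
  iscsi:
    targetPortal: 192.168.1.83:3260
    iqn: iqn.2003-01.org.linux-iscsi.lab-server:openshift
    lun: 0
    fsType: ext4
    readOnly: false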

To test out the PVs, here is an example PVC:

apiVersion: "v1"
kind: "PersistentVolumeClaim"
metadata:
  name: "block"
spec:
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: "100Gi"
  storageClassName: "iscsi"
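
To exercise the claim end to end, a throwaway pod can mount it; this is just a sketch using a UBI image, with the mount path chosen arbitrarily:

apiVersion: v1
kind: Pod
metadata:
  name: block-test
spec:
  containers:
    - name: test
      image: registry.access.redhat.com/ubi8/ubi
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: block

Once the pod is running, the claim should show as Bound and you can write to /data to confirm the iSCSI volume is working.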

And that’s it, you now have access to block storage in your homelab environment. I’ve used this quite a bit with Kafka and it works great. I’m looking into doing some benchmarking of this versus AWS EBS to see how the performance compares and will follow up in another blog.

Sending Alerts to Slack in OpenShift

In OpenShift 4 it’s pretty easy to configure sending alerts to a variety of destinations, including Slack. While my work tends to be more focused on the development side of the house than operations, my goal for my homelab cluster is to be as production-like as possible, hence the need to configure receivers.

To send messages to Slack, the first thing you will need to do is set up a Slack organization if you do not already have one, and then set up channels to receive the alerts. In my case I opted to create three channels: alerts-critical, alerts-default and alerts-watchdog. These channels mirror the default filtering used in OpenShift 4; of course you may want to adjust as necessary based on whatever routes and filtering rules you have in place.

Once you have your channels in place for receiving alerts, you can add incoming webhooks by following the documentation here. Each webhook you create in Slack will have a URL in the following format:

https://hooks.slack.com/services/XXXXX/XXXXX/XXXXX

The XXXXX will of course be replaced by something specific to your channel. Please note that Slack webhooks do not require authentication, so you should not expose these URLs in public git repositories as it may lead to your channels getting spammed.
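
Before wiring a webhook into Alertmanager, it’s worth sanity-checking it with curl; this is the standard Slack test message, with the URL replaced by your own:

curl -X POST -H 'Content-type: application/json' --data '{"text":"Hello from the homelab"}' https://hooks.slack.com/services/XXXXX/XXXXX/XXXXX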

Once you have that, you need to configure the default alertmanager-main secret. The OpenShift documentation does a good job of explaining the process from both a GUI and a YAML perspective; in my case I prefer using YAML since I am using a GitOps approach to manage it.
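
If you are following the YAML route, applying the configuration boils down to replacing the secret from a local file, something along these lines (assuming your configuration lives in alertmanager.yaml):

oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o yaml | oc -n openshift-monitoring replace -f -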

An example of my Slack receivers configuration is below with the complete example on github:

  - name: Critical
    slack_configs:
    - send_resolved: false
      api_url: https://hooks.slack.com/services/XXXXXX/XXXXXX/XXXXXX
      channel: alerts-critical
      username: '{{ template "slack.default.username" . }}'
      color: '{{ if eq .Status "firing" }}danger{{ else }}good{{ end }}'
      title: '{{ template "slack.default.title" . }}'
      title_link: '{{ template "slack.default.titlelink" . }}'
      pretext: '{{ .CommonAnnotations.summary }}'
      text: |-
        {{ range .Alerts }}
          *Alert:* {{ .Labels.alertname }} - `{{ .Labels.severity }}`
          *Description:* {{ .Annotations.message }}
          *Started:* {{ .StartsAt }}
          *Details:*
          {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
          {{ end }}
        {{ end }}
      fallback: '{{ template "slack.default.fallback" . }}'
      icon_emoji: '{{ template "slack.default.iconemoji" . }}'
      icon_url: '{{ template "slack.default.iconurl" . }}'

This configuration is largely adapted from this excellent blog post by Hart Hoover called Pretty AlertManager Alerts in Slack. With this configuration, the alerts appear as follows in Slack:

As mentioned previously, the Slack webhook should be treated as sensitive, and as a result I’m using Sealed Secrets to encrypt the secret in my git repo, which is then applied by ArgoCD as part of my GitOps process to configure the cluster. As a security measure, in order for Sealed Secrets to overwrite the existing alertmanager-main secret you need to prove you own the secret by placing an annotation on it. I’m using a pre-sync hook in ArgoCD to do that via a job:

apiVersion: batch/v1
kind: Job
metadata:
  name: annotate-secret-job
  namespace: openshift-monitoring
  annotations:
    argocd.argoproj.io/hook: PreSync
spec:
  template:
    spec:
      containers:
        - image: registry.redhat.io/openshift4/ose-cli:v4.6
          command:
            - /bin/bash
            - -c
            - |
              oc annotate secret alertmanager-main sealedsecrets.bitnami.com/managed="true" --overwrite=true
          imagePullPolicy: Always
          name: annotate-secret
      dnsPolicy: ClusterFirst
      restartPolicy: OnFailure
      serviceAccount: annotate-secret-job
      serviceAccountName: annotate-secret-job
      terminationGracePeriodSeconds: 30
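
For completeness, the sealed version of the alertmanager-main secret can be generated by piping a plain Secret manifest through kubeseal; a sketch, with the file names as placeholders:

oc -n openshift-monitoring create secret generic alertmanager-main --from-file=alertmanager.yaml --dry-run=client -o yaml | kubeseal --format yaml > alertmanager-main-sealed.yaml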

The complete alertmanager implementation is part of my cluster-config repo which I will be covering more in a subsequent blog post.

Empowering Developers in the OpenShift Console

For customers coming from OpenShift 3, one thing that gets noticed right away is the change in consoles. While the Administrator perspective is analogous to the cluster console in 3.11, what happened to the default console which was the bread and butter experience for developers?

The good news is that OpenShift 4 includes a new Developer perspective, which provides an alternative experience tailored specifically for developers out of a unified console. Features of the Developer perspective include a topology view giving an at-a-glance overview of the applications in a namespace, as well as the ability to quickly add new applications from a variety of sources such as git, container images, templates, Helm charts, operators and more.

In this blog we will examine some of these new features and discuss how you can get the most out of the capabilities available in the Developer perspective. While the OpenShift documentation does cover many of these and I will link to the docs when needed, I think it’s worthwhile to review them in a concise form in order to understand the art of the possible with respect to empowering Developers in the OpenShift console.

Many of these features can be accessed by regular users; however, some require cluster-admin rights and are intended for a cluster administrator to provision on behalf of their developer community. Cluster administrators can choose the features that make sense for their developers, providing an optimal experience based on their organization’s requirements.

Labels and Annotations in the Topology View

The topology view provides an overview of the application; it enables users to understand the composition of the application at a glance by depicting the component resource object (Deployment, DeploymentConfig, StatefulSet, etc.), component health, the runtime used, relationships to other resources and more.

The OpenShift documentation on topology goes into great detail on this view; however, it focuses on using it from a GUI perspective and only mentions at the end how it is powered. I would like to cover this in more detail, since in many cases our manifests are stored and managed in git repos rather than in the console itself.

In short, how the topology view is rendered is determined by the labels and annotations on your resource objects. The labels and annotations that are available are defined in this git repo here. These annotations and labels, which are applied to Deployments, DeploymentConfigs, etc., are a mix of recommended Kubernetes labels (https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels) as well as newer OpenShift recommended labels and annotations to drive the topology view.

An example of these labels and annotations can be seen in the following diagram:

Using this image as an example, we can see the client component is using the Node.js runtime and makes calls to the server component. The metadata for the client Deployment that causes it to be rendered this way is as follows:

metadata:
  name: client
  annotations:
    app.openshift.io/connects-to: server
    app.openshift.io/vcs-ref: master
    app.openshift.io/vcs-uri: 'https://github.com/gnunn-gitops/product-catalog-client'
  labels:
    app: client
    app.kubernetes.io/name: client
    app.kubernetes.io/component: frontend
    app.kubernetes.io/instance: client
    app.openshift.io/runtime: nodejs
    app.kubernetes.io/part-of: product-catalog

The key labels and annotations that are being used in this example are as follows:

• Label app.kubernetes.io/part-of – The overall application this component is part of. In the image above, this is the ‘product-catalog’ bounding box which encapsulates the database, client and server components.
• Label app.kubernetes.io/name – The name of the component; in the image above it corresponds to “database”, “server” and “client”.
• Label app.kubernetes.io/component – The role of the component, i.e. frontend, backend, database, etc.
• Label app.kubernetes.io/instance – The instance of the component. In the simple example above I have set the instance to be the same as the name, but that’s not required. The instance label is used by the connects-to annotation to render the arrows depicting the relationships between components.
• Label app.openshift.io/runtime – The runtime used by the component, i.e. Java, NodeJS, Quarkus, etc. The topology view uses this to render the icon. A list of the icons available in OpenShift can be found in github in the OpenShift console repo in the catalog-item-icon.tsx file; note that you should select the branch that matches your OpenShift version, i.e. the “release-4.5” branch for OCP 4.5.
• Annotation app.openshift.io/connects-to – Renders the directional arrow showing the relationship between components. This is set to the instance label of the component you want to show the relationship to.
• Annotation app.openshift.io/vcs-uri – The git repo where the source code for the application is located. By default this adds a link to the circle that can be clicked to navigate to the git repo. However, if CodeReady Workspaces is installed (included for free in OpenShift), this instead creates a link to CRW to open the code in a workspace; if the git repo has a devfile.yaml in the root of the repository, the devfile will be used to create the workspace. The example image above shows the link to CRW in the bottom right corner.
• Annotation app.openshift.io/vcs-ref – The reference to the version of the source code used for the component. It can be a branch, tag or commit SHA.

A complete list of all of the labels and annotations can be found here.
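
If you just want to experiment without editing manifests in git, the same metadata can be applied from the CLI; for example, assuming a Deployment named client:

oc label deployment client app.openshift.io/runtime=nodejs app.kubernetes.io/part-of=product-catalog
oc annotate deployment client app.openshift.io/connects-to=server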

Pinning Common Searches

In the Developer perspective the view is deliberately simplified compared to the Administrator perspective to focus specifically on the needs of the developer. However, predicting those needs is always difficult, and as a result it’s not uncommon for users to need to find additional Resources.

To enable this, the Developer perspective provides the Search capability, which enables you to find any Resource in OpenShift quickly and easily. As per the image below, highlighted in red, it also has a feature tucked away in the upper right side called “Add to Navigation”; if you click that, your search gets added to the menu bar on the left.

This is great for items you commonly look for: instead of having to repeat the search over and over, you can just bookmark it into the UI. Once you click that button, the search, in this case for Persistent Volume Claims, will appear in the bar on the left as per below.

CodeReady Workspaces

CodeReady Workspaces (CRW) is included in OpenShift and provides an IDE in a browser; I typically describe it as “Visual Studio Code on the web”. While it’s easy to install, the installation is done via an operator, so it does require a cluster administrator to make it available.

The real power of CRW, in my opinion, is the ability to have the complete stack with all of the tools and technologies needed to work effectively on the application. No longer does a developer need to spend days setting up their laptop; instead, simply create a devfile.yaml in the root of your git repository and it will configure CRW to use the stack appropriate for the application. Clicking on the CodeReady Workspaces icon in the Developer Topology view will open up a workspace with everything ready to go based on the devfile.yaml in the repo.
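
As a rough illustration, a minimal devfile for a Node.js application might look like the following; this is a sketch, and the image, directory and command are illustrative, so substitute the stack appropriate for your application:

apiVersion: 1.0.0
metadata:
  name: product-catalog-client
components:
  - type: dockerimage
    alias: nodejs
    image: registry.access.redhat.com/ubi8/nodejs-14
    memoryLimit: 512Mi
    mountSources: true
commands:
  - name: install dependencies
    actions:
      - type: exec
        component: nodejs
        command: npm install
        workdir: ${CHE_PROJECTS_ROOT}/product-catalog-client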

In short, one click takes you from this:

To this:

In my consulting days, setting up my workstation for the application I was working on was often the bane of my existence, involving following some hand-written and often outdated instructions. This would have made my life so much easier.

Now it should be noted that running an IDE in OpenShift does require additional compute resources on the cluster; however, personally I feel the benefits of this tool make it a worthwhile trade-off.

The OpenShift documentation does a great job of covering this feature so have a look there for detailed information.

Adding your own Helm Charts

In OpenShift 4.6 a new feature has been added which permits you to add your organization’s Helm charts to the Developer Console through the use of the HelmChartRepository object. This enables developers using the platform to access these charts through the Developer Console and quickly instantiate them using a GUI driven approach.

Unfortunately, unlike OpenShift templates, which can be added to the cluster globally or to specific namespaces, the HelmChartRepository object is cluster scoped only and requires a cluster administrator to use. As a result this feature is currently intended for cluster administrators to provide a curated set of Helm charts to the platform’s user base as a whole.

An example HelmChartRepository is shown below:

apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: demo-helm-charts
spec:
  connectionConfig:
    url: 'https://gnunn-gitops.github.io/helm-charts'
  name: Demo Helm Charts

When this is added to an OpenShift cluster, the single chart in that repo, Product Catalog, appears in the Developer Console as per below and can be instantiated by developers as needed. The console will automatically display the latest version of that chart.

If you add a JSON schema (values.schema.json) to your Helm chart, as per this example, the OpenShift console can render a form in the GUI for users to fill out without having to deal directly with YAML.
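
A bare-bones values.schema.json along those lines might look like this; the properties are of course specific to your chart’s values and these are just placeholders:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "replicaCount": {
      "type": "integer",
      "title": "Replica count",
      "default": 1,
      "minimum": 1
    },
    "image": {
      "type": "string",
      "title": "Container image"
    }
  },
  "required": ["replicaCount"]
}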

If you are looking for a tutorial on how to create a Helm repo, I found the one here, “Create a public Helm chart repository with GitHub Pages”, quite good.
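
The short version of publishing a chart is packaging it and generating an index that points at wherever the packages will be served from; a sketch, assuming a chart directory named product-catalog and GitHub Pages hosting the repo:

helm package product-catalog
helm repo index . --url https://gnunn-gitops.github.io/helm-charts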

Adding Links to the Console

In many organizations it’s quite common to have a broad ecosystem surrounding your OpenShift cluster, such as wikis, enterprise registries and third-party tools, to support your platform users. The OpenShift console enables a cluster administrator to add links to various parts of the user interface, making it easy for your users to discover and navigate to this additional information and tooling.

The available locations for ConsoleLink include:

  • ApplicationMenu – Places the item in the application menu as per the image below. In this image we have custom ConsoleLink items for ArgoCD (GitOps tool) and Quay (Enterprise Registry); an example ConsoleLink for this location is shown after this list.

  • HelpMenu – Places the item in the OpenShift help menu (aka the question mark). In the image below we have a ConsoleLink that takes us to the ArgoCD documentation.

  • UserMenu – Inserts the link into the User menu which is in the top right hand side of the OpenShift console.
  • NamespaceDashboard – Inserts the link into the Project dashboard for the selected namespaces.
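
As an example, here is a sketch of a ConsoleLink for the ApplicationMenu location similar to the ArgoCD entry mentioned above; the URLs and section name are placeholders:

apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: argocd
spec:
  href: 'https://argocd.apps.example.com'
  location: ApplicationMenu
  text: ArgoCD
  applicationMenu:
    section: Tools
    imageURL: 'https://argocd.apps.example.com/assets/images/argo.png'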

A great blog entry that covered console links, as well as other console customizations, can be found on the OpenShift Blog.

Web Terminal

This one is a little more bleeding edge as it is currently in Technology Preview: a newer feature in OpenShift is the ability to integrate a web terminal into the OpenShift console. This enables developers to bring up a CLI whenever they need it without having to have the oc binary on hand. The terminal is automatically logged into OpenShift as the same user that is in the console.

The Web Terminal installs as an operator into the OpenShift cluster, so again a cluster admin is required to install it. Once the operator is installed, creating an instance of the web terminal is easily done through the use of a CR:

apiVersion: workspace.devfile.io/v1alpha1
kind: DevWorkspace
metadata:
  name: web-terminal
  labels:
    console.openshift.io/terminal: 'true'
  annotations:
    controller.devfile.io/restricted-access: 'true'
  namespace: openshift-operators
spec:
  routingClass: web-terminal
  started: true
  template:
    components:
      - plugin:
          id: redhat-developer/web-terminal/4.5.0
          name: web-terminal

While this is Technology Preview and not recommended for production usage, it is something to keep an eye on as it moves towards GA.