Sending Alerts to Slack in OpenShift

In OpenShift 4 it’s pretty easy to configure sending alerts to a variety of destinations, including Slack. While my work tends to be more focused on the development side of the house than operations, my goal for my homelab cluster is to be as production-like as possible, hence the need to configure alert receivers.

To send messages to Slack, the first thing you will need to do is set up a Slack organization if you do not already have one and then create channels to receive the alerts. In my case I opted to create three channels: alerts-critical, alerts-default and alerts-watchdog. These channels mirror the default filtering used in OpenShift 4; of course you may want to adjust as necessary based on whatever filtering routes and rules you have in place.
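
For context, the channel split lines up with the default Alertmanager routing, which sends the Watchdog alert and critical alerts to their own receivers. A simplified sketch of that routing is shown below; the exact defaults vary by OpenShift version, so treat the values as illustrative:

route:
  group_by:
    - namespace
  receiver: Default
  routes:
    # The Watchdog alert fires continuously to prove the alerting pipeline works
    - match:
        alertname: Watchdog
      receiver: Watchdog
    # Anything labelled severity=critical goes to the Critical receiver
    - match:
        severity: critical
      receiver: Critical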

Once you have your channels in place for receiving alerts, you can add incoming webhooks by following the documentation here. Each webhook you create in Slack will have a URL in the following format:

https://hooks.slack.com/services/XXXXX/XXXXX/XXXXX

The XXXXX will of course be replaced by something specific to your channel. Please note that Slack webhooks do not require authentication, so you should not expose these URLs in public git repositories as it may lead to your channels getting spammed.

Once you have that, you need to configure the default alertmanager-main secret. The OpenShift documentation does a good job of explaining the process from both a GUI and a YAML perspective; in my case I prefer using YAML since I am using a GitOps approach to manage it.
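
For reference, outside of a GitOps workflow the same change can be applied by replacing the secret directly with the oc client. Roughly (the dry-run flag syntax differs slightly between oc versions, and alertmanager.yaml here is your local copy of the configuration):

oc -n openshift-monitoring create secret generic alertmanager-main \
  --from-file=alertmanager.yaml --dry-run=client -o yaml \
  | oc -n openshift-monitoring replace -f -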

An example of my Slack receivers configuration is below with the complete example on github:

  - name: Critical
    slack_configs:
    - send_resolved: false
      api_url: https://hooks.slack.com/services/XXXXXX/XXXXXX/XXXXXX
      channel: alerts-critical
      username: '{{ template "slack.default.username" . }}'
      color: '{{ if eq .Status "firing" }}danger{{ else }}good{{ end }}'
      title: '{{ template "slack.default.title" . }}'
      title_link: '{{ template "slack.default.titlelink" . }}'
      pretext: '{{ .CommonAnnotations.summary }}'
      text: |-
        {{ range .Alerts }}
          *Alert:* {{ .Labels.alertname }} - `{{ .Labels.severity }}`
          *Description:* {{ .Annotations.message }}
          *Started:* {{ .StartsAt }}
          *Details:*
          {{ range .Labels.SortedPairs }} • *{{ .Name }}:* `{{ .Value }}`
          {{ end }}
        {{ end }}
      fallback: '{{ template "slack.default.fallback" . }}'
      icon_emoji: '{{ template "slack.default.iconemoji" . }}'
      icon_url: '{{ template "slack.default.iconurl" . }}'

This configuration is largely adapted from this excellent blog post by Hart Hoover called Pretty AlertManager Alerts in Slack. With this configuration, the alerts appear as follows in Slack:

As mentioned previously, the Slack webhook should be treated as sensitive, so I’m using Sealed Secrets to encrypt the secret in my git repo, which is then applied by ArgoCD as part of my GitOps process to configure the cluster. As a security measure, in order for Sealed Secrets to overwrite the existing alertmanager-main secret you need to prove you own the secret by placing an annotation on it. I’m doing that with an ArgoCD pre-sync hook that runs a job:


apiVersion: batch/v1
kind: Job
metadata:
  name: annotate-secret-job
  namespace: openshift-monitoring
  annotations:
    argocd.argoproj.io/hook: PreSync
spec:
  template:
    spec:
      containers:
        - image: registry.redhat.io/openshift4/ose-cli:v4.6
          command:
            - /bin/bash
            - -c
            - |
              oc annotate secret alertmanager-main sealedsecrets.bitnami.com/managed="true" --overwrite=true
          imagePullPolicy: Always
          name: annotate-secret
      dnsPolicy: ClusterFirst
      restartPolicy: OnFailure
      serviceAccount: annotate-secret-job
      serviceAccountName: annotate-secret-job
      terminationGracePeriodSeconds: 30

The complete alertmanager implementation is part of my cluster-config repo which I will be covering more in a subsequent blog post.

Empowering Developers in the OpenShift Console

For customers coming from OpenShift 3, one thing that gets noticed right away is the change in consoles. While the Administrator perspective is analogous to the cluster console in 3.11, what happened to the default console which was the bread and butter experience for developers?

The good news is that in OpenShift 4 there is a new Developer perspective which provides an alternative experience tailored specifically for developers out of a unified console. The Developer perspective includes a topology view giving an at-a-glance overview of the applications in a namespace, as well as the ability to quickly add new applications from a variety of sources such as git, container images, templates, Helm charts, operators and more.

In this blog we will examine some of these new features and discuss how you can get the most out of the capabilities available in the Developer perspective. While the OpenShift documentation does cover many of these and I will link to the docs when needed, I think it’s worthwhile to review them in a concise form in order to understand the art of the possible with respect to empowering Developers in the OpenShift console.

Many of these features can be accessed by regular users, however some do require cluster-admin rights and are intended for a cluster administrator to provision on behalf of their developer community. Cluster administrators can choose the features that make sense for their developers, providing an optimal experience based on their organization’s requirements.

Labels and Annotations in the Topology View

The topology view provides an overview of the application. It enables users to understand the composition of the application at a glance by depicting the component’s resource object (Deployment, DeploymentConfig, StatefulSet, etc), component health, the runtime used, relationships to other resources and more.

The OpenShift documentation on topology goes into great detail on this view, however it focuses on using it from a GUI perspective and only mentions anecdotally at the end how it is powered. Thus I would like to cover this in more detail, since in many cases our manifests are stored and managed in git repos rather than in the console itself.

In short, how the topology view is rendered is determined by the labels and annotations on your resource objects. The labels and annotations that are available are defined in this git repo here. These annotations and labels, which are applied to your Deployments, DeploymentConfigs, etc, are a mix of recommended kubernetes labels (https://kubernetes.io/docs/concepts/overview/working-with-objects/common-labels) as well as new OpenShift recommended labels and annotations that drive the topology view.

An example of these labels and annotations can be seen in the following diagram:

Using this image as an example, we can see the client component is using the Node.js runtime and makes calls to the server component. The metadata for the client Deployment that causes it to be rendered in this way is as follows:

metadata:
  name: client
  annotations:
    app.openshift.io/connects-to: server
    app.openshift.io/vcs-ref: master
    app.openshift.io/vcs-uri: 'https://github.com/gnunn-gitops/product-catalog-client'
  labels:
    app: client
    app.kubernetes.io/name: client
    app.kubernetes.io/component: frontend
    app.kubernetes.io/instance: client
    app.openshift.io/runtime: nodejs
    app.kubernetes.io/part-of: product-catalog

The key labels and annotations that are being used in this example are as follows:

  • Label app.kubernetes.io/part-of: The overall application that this component is part of. In the image above, this is the ‘product-catalog’ bounding box which encapsulates the database, client and server components.
  • Label app.kubernetes.io/name: The name of the component; in the image above it corresponds to “database”, “server” and “client”.
  • Label app.kubernetes.io/component: The role of the component, i.e. frontend, backend, database, etc.
  • Label app.kubernetes.io/instance: The instance of the component. In the simple example above I have set the instance to be the same as the name, but that is not required. The instance label is used by the connects-to annotation to render the arrows that depict the relationship between different components.
  • Label app.openshift.io/runtime: The runtime used by the component, i.e. Java, NodeJS, Quarkus, etc. The topology view uses this to render the icon. A list of icons that are available in OpenShift can be found in the OpenShift console repo on github in the catalog-item-icon.tsx file. Note that you should select the branch that matches your OpenShift version, i.e. the “release-4.5” branch for OCP 4.5.
  • Annotation app.openshift.io/connects-to: Renders the directional line showing the relationship between components. This is set to the instance label of the component you want to show the relationship to.
  • Annotation app.openshift.io/vcs-uri: The git repo where the source code for the application is located. By default this will add a link to the circle that can be clicked to navigate to the git repo. However, if CodeReady Workspaces is installed (included for free in OpenShift), this will instead create a link to CRW to open the code in a workspace; the example image above shows the link to CRW in the bottom right corner. If the git repo has a devfile.yaml in the root of the repository, the devfile will be used to create the workspace.
  • Annotation app.openshift.io/vcs-ref: The reference to the version of source code used for the component. It can be a branch, tag or commit SHA.

A complete list of all of the labels and annotations can be found here.

Pinning Common Searches

In the Developer perspective the view is deliberately simplified from the Administrator perspective to focus specifically on the needs of the Developer. However predicting those needs is always difficult and as a result it’s not uncommon for users to need to find additional Resources.

To enable this, the Developer perspective provides the Search capability which enables you to find any Resource in OpenShift quickly and easily. As per the image below, highlighted in red, it also has a feature tucked away in the upper right side called “Add to Navigation”; if you click that, your search gets added to the menu bar on the left.

This is great for items you commonly look for: instead of having to repeat the search over and over you can just bookmark it into the UI. Once you click that button, the search, in this case for Persistent Volume Claims, will appear in the bar on the left as per below.

CodeReady Workspaces

CodeReady Workspaces (CRW) is included in OpenShift and provides an IDE in a browser; I typically describe it as “Visual Studio Code on the web”. While it’s easy to install, the installation is done via an operator so it does require a cluster administrator to make it available.

The real power of CRW in my opinion is the ability to have the complete stack with all of the tools and technologies needed to work effectively on the application. No longer does the developer need to spend days setting up their laptop; instead, simply create a devfile.yaml in the root of your git repository and it will configure CRW to use the stack appropriate for the application. Clicking on the CodeReady Workspaces icon in the Developer Topology view will open up a workspace with everything ready to go based on the devfile.yaml in the repo.
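
To give a sense of what that looks like, below is a minimal devfile sketch for a Node.js application using the devfile 1.0 format consumed by CRW 2.x. The image and name are purely illustrative, so check the CodeReady Workspaces documentation for the images appropriate to your stack:

apiVersion: 1.0.0
metadata:
  name: product-catalog-client
components:
  # Tooling container that the workspace terminals and build commands run in
  - type: dockerimage
    alias: nodejs
    image: quay.io/eclipse/che-nodejs10-ubi:latest
    memoryLimit: 512Mi
    mountSources: true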

In short, one click takes you from this:

To this:

In my consulting days, setting up my workstation for the application I was working on was often the bane of my existence, involving following some hand-written and often outdated instructions; this would have made my life so much easier.

Now it should be noted that running an IDE in OpenShift does require additional compute resources on the cluster, however personally I feel the benefits of this tool make it a worthwhile trade-off.

The OpenShift documentation does a great job of covering this feature so have a look there for detailed information.

Adding your own Helm Charts

In OpenShift 4.6 a new feature has been added which permits you to add your organization’s Helm Charts to the developer console through the use of the HelmChartRepository object. This enables developers using the platform to access the Helm Chart through the Developer Console and quickly instantiate the chart using a GUI driven approach.

Unfortunately, unlike OpenShift templates which can be added to the cluster globally or to specific namespaces, the HelmChartRepository object is cluster scoped only and requires a cluster administrator to manage. As a result this feature is currently intended to be used by cluster administrators to provide a curated set of Helm charts for the platform user base as a whole.

An example HelmChartRepository is shown below:

apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: demo-helm-charts
spec:
  connectionConfig:
    url: 'https://gnunn-gitops.github.io/helm-charts'
  name: Demo Helm Charts

When this is added to an OpenShift cluster, the single chart in that repo, Product Catalog, appears in the Developer Console as per below and can be instantiated by developers as needed. The console will automatically display the latest version of that chart.

If you add a json schema (values.schema.json) to your Helm chart, as per this example, the OpenShift console can render a form in the GUI for users to fill out without having to directly deal with yaml.
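
As a rough illustration, values.schema.json is just a standard JSON Schema describing the chart values; the property below is a hypothetical example and would be replaced with whatever values your chart actually exposes:

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "replicaCount": {
      "type": "integer",
      "title": "Replica Count",
      "description": "Number of application replicas to deploy",
      "default": 1
    }
  },
  "required": [
    "replicaCount"
  ]
}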

If you are looking for a tutorial on how to create a Helm repo, I found the one here, “Create a public Helm chart repository with GitHub Pages”, quite good.

Adding Links to the Console

In many organizations it’s quite common to have a broad eco-system surrounding your OpenShift cluster, such as wikis, enterprise registries, third-party tools, etc, to support your platform users. The OpenShift console enables a cluster administrator to add additional links to various parts of the user interface to make it easy for your users to discover and navigate to that additional information and tooling.

The available locations for a ConsoleLink include the following; a sample ConsoleLink manifest is shown after the list:

  • ApplicationMenu – Places the item in the application menu as per the image below. In this image we have custom ConsoleLink items for ArgoCD (GitOps tool) and Quay (Enterprise Registry).

  • HelpMenu – Places the item in the OpenShift help menu (aka the question mark). In the image below we have a ConsoleLink that takes us to the ArgoCD documentation.

  • UserMenu – Inserts the link into the User menu which is in the top right hand side of the OpenShift console.
  • NamespaceDashboard – Inserts the link into the Project dashboard for the selected namespaces.
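
Here is a sketch of what a ConsoleLink for the application menu might look like; the URLs and section name are placeholders for your own environment:

apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: argocd
spec:
  href: 'https://argocd.example.com'
  location: ApplicationMenu
  text: ArgoCD
  applicationMenu:
    section: GitOps Tools
    imageURL: 'https://argocd.example.com/favicon.ico'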

A great blog entry covering console links, as well as other console customizations, can be found on the OpenShift Blog.

Web Terminal

This one is a little more bleeding edge as it is currently in Technical Preview: a newer feature in OpenShift is the ability to integrate a web terminal into the console. This enables developers to bring up a CLI whenever they need it without having to have the oc binary on hand. The terminal is automatically logged into OpenShift using the same user that is in the console.

The Web Terminal installs as an operator into the OpenShift cluster, so again a cluster admin is required to install it. Once the operator is installed, creating an instance of the web terminal is easily done through the use of a CR:

apiVersion: workspace.devfile.io/v1alpha1
kind: DevWorkspace
metadata:
  name: web-terminal
  labels:
    console.openshift.io/terminal: 'true'
  annotations:
    controller.devfile.io/restricted-access: 'true'
  namespace: openshift-operators
spec:
  routingClass: web-terminal
  started: true
  template:
    components:
      - plugin:
          id: redhat-developer/web-terminal/4.5.0
          name: web-terminal

While this is Technical Preview and not recommended for Production usage, it is something to keep an eye on as it moves towards GA.

Building Scala and SBT Applications on OpenShift

In this article we will look at how to build Scala applications in OpenShift. While Scala is a Java Virtual Machine (JVM) language, applications written in Scala typically use a build tool called SBT (https://www.scala-sbt.org) which is not currently supported by OpenShift. However the great thing about OpenShift is that it is extensible and adding support for Scala applications is very straightforward as will be shown in this article.

As a quick level set, when building and deploying applications for OpenShift there are typically three options available as follows:

  • Source-To-Image. This is a capability available in OpenShift via builder images that enable developers to simply point a builder to a git repository and have the application compiled with an image created automatically by the platform.
  • Jenkins Pipeline. In this scenario we leverage Jenkins to create a pipeline for our application and use an appropriate build agent (i.e. slave) for our technology. Out of the box OpenShift includes agents for Maven and Nodejs but not Scala/SBT.
  • Build the container image outside of OpenShift and push the resulting image into the internal OpenShift registry. This is essentially treating the platform as a Container-As-A-Service (CaaS) rather than leveraging the Platform-As-A-Service (PaaS) capabilities available in OpenShift.

All of the above approaches are perfectly valid and the choice enterprises make in this regard is typically driven by a variety of factors including the technology used, toolset and organizational requirements.

For the purposes of this article I am going to focus on building Scala applications using the second option, Jenkins Pipelines, which is typically my preferred approach for production applications. Note that this option is not exclusive of Option 1; you can certainly use S2I in a Jenkins pipeline to build your application from source. However I do prefer using a build agent, simply because there are a variety of activities (code scanning, pushing to a repository, pre-processing, etc) that need to be performed as part of the build process that can often be application specific, and a build agent simply gives us more flexibility in this regard.

Additionally I typically recommend to my customers to use Red Hat’s JDK image whenever possible in order to benefit from the support that is provided as part of an OpenShift (or RHEL) subscription. With a build agent this is a natural activity as part of the pipeline; with S2I you would need to use build chaining, which isn’t quite as straightforward. Having said that, if you are interested in the S2I approach an example of creating an S2I enabled image is available here and it works well.

Creating the build agent

In OpenShift the provided Jenkins image includes the Jenkins kubernetes-plugin. This plugin enables Jenkins to spin up build agents in the Kubernetes environment as needed to support builds. Separate and distinct build agents are typically used to support different technologies and build tools. As mentioned previously, OpenShift includes two build agents, Maven and Node.js, but does not provide a build agent for Scala or SBT.

Fortunately creating our own build agent is quite easy to do since OpenShift provides a base image to inherit from. If using the open source version of OpenShift, OKD, you can use the base image openshift/jenkins-slave-base-centos7, whereas if using the enterprise version of OpenShift I would recommend using the rhel7 version openshift/jenkins-slave-base-rhel7.

In this example we will use the centos7 version of the image. To create the agent image we need to create a Dockerfile defining the image; our example uses the Dockerfile below, which is also available in the github repo sbt-slave.

FROM openshift/jenkins-slave-base-centos7
MAINTAINER Gerald Nunn <gnunn@redhat.com>
 
ENV SBT_VERSION 1.2.6
ENV SCALA_VERSION 2.12.7
ENV IVY_DIR=/var/cache/.ivy2
ENV SBT_DIR=/var/cache/.sbt
 
USER root
 
RUN INSTALL_PKGS="sbt-$SBT_VERSION" \
 && curl -s https://bintray.com/sbt/rpm/rpm > bintray-sbt-rpm.repo \
 && mv bintray-sbt-rpm.repo /etc/yum.repos.d/ \
 && yum install -y --enablerepo=centosplus $INSTALL_PKGS \
 && rpm -V $INSTALL_PKGS \
 && yum install -y https://downloads.lightbend.com/scala/$SCALA_VERSION/scala-$SCALA_VERSION.rpm \
 && yum clean all -y
 
USER 1001

Notice that this image defines environment variables for the Scala and SBT versions. While the image is built with specific versions pre-loaded, SBT will automatically use the version specified in your build if it differs. To build the image, use the following command:

docker build . -t jenkins-slave-sbt-centos7

After building the image, you can view the image as follows:

$ docker images | grep jenkins-slave-sbt-centos7
 
jenkins-slave-sbt-centos7                                          latest              d2a60298e18d        6 weeks ago         1.23GB

Next we need to make the image available to OpenShift. One option is to push it directly into your OpenShift registry; another option is to push it into an external registry, which is what we will do here by pushing the image into Docker Hub. Note you will need an account on Docker Hub in order to do this. The first step to push the image into Docker Hub (or another registry) is to tag the image with the repository. My repository in Docker Hub is gnunn so that is what I am using here; your repository will be different so please change gnunn to the name of your repository.

docker tag jenkins-slave-sbt-centos7:latest gnunn/jenkins-slave-sbt-centos7:latest

After that we can now push the image, again change gnunn to the name of your repository:

docker push gnunn/jenkins-slave-sbt-centos7:latest

Building the Application in OpenShift

At this point we are almost ready to start building our Scala application. The first thing we need to do is create a project in OpenShift for the example. We can do that using the command below, calling the project scala-example.

oc new-project scala-example

Next we need to configure Jenkins to be aware of our new agent image. This can be done in a couple of different ways. The easiest but most manual way is to simply log in to Jenkins and update the Kubernetes plugin to add this new agent image via the Jenkins console.

However I’m not a fan of this approach since it is manual and requires the Jenkins configuration to be updated every time a new instance of Jenkins is deployed. A better approach is to update the configuration via a configmap which the Kubernetes plugin in Jenkins will use to set its configuration. An example configmap is shown below:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    app: cicd-pipeline
    role: jenkins-slave
  name: jenkins-slaves
data:
  sbt-template: |-
    <org.csanchez.jenkins.plugins.kubernetes.PodTemplate>
      <inheritFrom></inheritFrom>
      <name>sbt</name>
      <privileged>false</privileged>
      <alwaysPullImage>false</alwaysPullImage>
      <instanceCap>2147483647</instanceCap>
      <idleMinutes>0</idleMinutes>
      <label>sbt</label>
      <serviceAccount>jenkins</serviceAccount>
      <nodeSelector></nodeSelector>
      <customWorkspaceVolumeEnabled>false</customWorkspaceVolumeEnabled>
      <workspaceVolume class="org.csanchez.jenkins.plugins.kubernetes.volumes.workspace.EmptyDirWorkspaceVolume">
        <memory>false</memory>
      </workspaceVolume>
      <volumes />
      <containers>
        <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
          <name>jnlp</name>
          <image>docker.io/gnunn/jenkins-slave-sbt-centos7</image>
          <privileged>false</privileged>
          <alwaysPullImage>false</alwaysPullImage>
          <workingDir>/tmp</workingDir>
          <command></command>
          <args>${computer.jnlpmac} ${computer.name}</args>
          <ttyEnabled>false</ttyEnabled>
          <resourceRequestCpu>200m</resourceRequestCpu>
          <resourceRequestMemory>512Mi</resourceRequestMemory>
          <resourceLimitCpu>2</resourceLimitCpu>
          <resourceLimitMemory>4Gi</resourceLimitMemory>
          <envVars/>
        </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate>
      </containers>
      <envVars/>
      <annotations/>
      <imagePullSecrets/>
    </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>

Please note this line in the configuration:

<image>docker.io/gnunn/jenkins-slave-sbt-centos7</image>

The reference here will need to be updated for the registry and repository where you pushed the image. If you are using Docker Hub, again change gnunn to your repository. Once you have updated that line, save it into a file called jenkins-slaves.yaml. We can then add the configmap to our project in OpenShift using the following command:

oc create -f jenkins-slaves.yaml

In order to test the build process we are going to need an application. I’ve forked a simple Scala microservice application to the github repo akka-http-microservice. I’ve made some minor modifications to it, notably updating the versions of Scala and SBT as well as adding the assembly plugin to create a fat jar.
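
For anyone new to SBT, those version and plugin changes live in a few small project files. The snippets below are a rough sketch of what they look like; the version numbers are examples only and the ones in the actual repo may differ:

# project/build.properties - pins the SBT version used for the build
sbt.version=1.2.6

// project/plugins.sbt - adds the sbt-assembly plugin that produces the fat jar
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.9")

// build.sbt - sets the Scala version for the project
scalaVersion := "2.12.7"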

First we will create a new build-config that the pipeline will use to feed the generated Scala artifact into the Red Hat JDK image. This build-config will still use source-to-image but will do so using a binary build, since we will already have built the Scala JAR file using our build agent.

If you are using Red Hat’s OpenShift Container Platform, you can create the build using the existing Java S2I image that Red Hat provides:

oc new-build redhat-openjdk18-openshift:1.2 --name=scala-example --binary=true

If you are using the open source OpenShift OKD where the above image is not available, you can use the Fabric8 open source Java S2I image.

oc create -f https://raw.githubusercontent.com/gnunn1/openshift-notes/master/fabric8-s2i-java/imagestream.yaml
oc new-build fabric8-s2i-java:3.0-java8 --name=scala-example --binary=true

Next we create a new application around the build, expose it to the outside world via a route and disable triggers since we want the pipeline to control the deployment:

oc new-app scala-example --allow-missing-imagestream-tags
oc expose dc scala-example --port=9000
oc expose svc scala-example
oc set triggers dc scala-example --containers='scala-example' --from-image='scala-example:latest' --manual=true

Note we specify --allow-missing-imagestream-tags because no images have been created at this point and thus the imagestream has no tags associated with it.

Now we need to create an OpenShift pipeline with a corresponding Jenkinsfile; below is the pipeline this example will be using.

---
apiVersion: v1
kind: BuildConfig
metadata:
  annotations:
    pipeline.alpha.openshift.io/uses: '[{"name": "jenkins", "namespace": "", "kind": "DeploymentConfig"}]'
  labels:
    app: scala-example
    name: scala-example
  name: scala-example-pipeline
spec:
  triggers:
    - type: GitHub
      github:
        secret: secret101
    - type: Generic
      generic:
        secret: secret101
  runPolicy: Serial
  source:
    type: Git
    git:
      uri: 'https://github.com/gnunn1/akka-http-microservice'
      ref: master
  strategy:
    jenkinsPipelineStrategy:
      jenkinsfilePath: Jenkinsfile
    type: JenkinsPipeline

Save the pipeline as pipeline.yaml and then create it in OpenShift using the following command; a Jenkins instance will be deployed automatically when you create the pipeline:

oc create -f pipeline.yaml

Note that the pipeline above references a Jenkinsfile that is included in the source repository in git; you can view it here. This Jenkinsfile contains a simple pipeline that compiles the application, creates the image and then deploys it. For simplicity in this example we will do it all in the same project where the pipeline lives, but typically you would deploy the application into separate projects representing individual environments (DEV, QA, PROD, etc) and not in the same project as Jenkins and the pipeline.
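
Conceptually, the key steps the pipeline runs on the sbt agent boil down to building the fat jar and feeding it into the binary build we created earlier, along the lines of the commands below. The jar path is illustrative and depends on your Scala version and project name; the Jenkinsfile in the repo is the authoritative version:

sbt clean assembly
oc start-build scala-example --from-file=target/scala-2.12/akka-http-microservice-assembly.jar --follow --wait
oc rollout latest dc/scala-example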

At this point you can now build the application: go to the Builds > Pipelines menu on the left, select the scala-example-pipeline and click the Start Pipeline button.

Scala Pipeline

At this point the pipeline will start, though it may take a couple of minutes to start and perform all of its tasks so be patient; once the pipeline is complete you will see the application running.

Scala Application Running

Once the pipeline is complete the application should be running and can be tested by clicking on the route. You will see an error because the application doesn’t expose anything at the context root; add /ip/8.8.8.8 to the end of the URL and you should see a response as per below.

Scala Application Output

Running WebLogic on OpenShift

openshift-weblogic

From time to time I have customers expressing interest in running WebLogic on OpenShift. Having worked extensively with WebLogic in my previous career as an Oracle consultant, it’s immediately obvious that there are a number of complexities in getting the WebLogic domain structure working in a dynamic environment like OpenShift or Kubernetes. To Oracle’s credit, they are working on this and have written an article “WebLogic on Kubernetes, Try It!” with an example based on the Oracle docker images.

This post will discuss the basic steps needed to get this working in OpenShift. While OpenShift is based on Kubernetes, there are some differences, particularly around security, that require some tweaks. Additionally, the article and associated sample have some issues that I wanted to cover.

The instructions in the article are clear for the most part; here are the items that require tweaking to work:

a. Before you create the docker image in the wls-12213-domain directory, you will need to edit the file container-scripts/provision-domain.py. This file creates the managed servers ahead of time to support the statefulset; however, it creates the listen address as ms-X.wls-subdomain.default.svc.cluster.local. The issue here is that it is hard-coding the default namespace, which means you would need to deploy this sample in the default project in OpenShift. In theory, changing it to ms-X.wls-subdomain should work, however it did not for me and I need to investigate further. For now, I changed default to weblogic and deployed the sample into a weblogic project.

b. I also had an issue building the docker image where the chown failed; see the issue I opened here for a workaround.

c. When building the docker image, the sample implies you can select an admin user but you can’t. Just use the default weblogic admin user.

d. The Oracle docker images must be run as the Oracle user. OpenShift by default disallows images running as specific users, therefore a service account must be created and granted the anyuid SCC.

oc create serviceaccount weblogic
oc adm policy add-scc-to-user anyuid -z weblogic

e. The wls-admin-webhook.yml and wls-stateful.yml files need to be updated to use the weblogic service account:

spec:
  containers:
  ...
  serviceAccount: weblogic
  serviceAccountName: weblogic

f. The sample exposes the services using NodePort which should work in OpenShift, however I ended up creating routes as follows:

oc expose svc wls-admin-server
oc expose svc wls-service

g. If you are not familiar with WebLogic, to access the web console use the wls-admin-server route and add /console to the end of it. The username will be weblogic and the password whatever you selected.

Obviously the Oracle work is just a sample and a number of things would need to happen to operationalize it. The biggest IMHO is not having to hardcode the OpenShift namespaces the image will be running in. I’m hoping to resolve this as I play around with it further.

ApplicationCommandLine

In Terminix I’m making some improvements to how I’m handling the command-line by moving away from D’s getopts to the GTK method of dealing with it. The primary driver for this is to get the command line sent from the local application to the primary one. To do this, I registered my Options via setMainOptions, added a handler to addOnCommandLine and included the flag HANDLES_COMMAND_LINE when creating the application.

It all worked swimmingly except for one issue: while the primary instance got the command line just fine, the local instance would just hang and never return. After scratching my head for a bit and asking on the GTK IRC channel, a pointer to a C example gave me the nudge I needed. Looking more closely at the documentation for ApplicationCommandLine, I see this line:

For the case of a remote invocation, the remote process will typically exit when the last reference is dropped on @cmdline.

The issue here is that since D is garbage collected, the ApplicationCommandLine doesn’t get collected in a deterministic fashion. The solution was quite easy: simply destroy it at the end via a scope(exit) block. I’m going to file an issue with GtkD to see if this shouldn’t be made a Scoped reference.
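
Conceptually the handler ends up shaped like the sketch below. The actual addOnCommandLine delegate signature in GtkD takes additional parameters, so treat this as an outline of the scope(exit) idea rather than drop-in code:

import gio.ApplicationCommandLine;

// Sketch of a command line handler
int onCommandLine(ApplicationCommandLine acl) {
    // Drop the reference deterministically so the remote (local) instance
    // exits instead of hanging while waiting for the GC to collect the object
    scope(exit) {
        destroy(acl);
    }
    // ... parse and dispatch the options passed from the local instance ...
    return 0;
}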

GtkD and Localization

On my todo list for a while has been adding localization support to Terminix. Unfortunately at this time D does not have an official i18n package nor are there any libraries supporting GNU gettext, which has become a de facto standard for localization over the years. While there are some localization libraries available for D, I really wanted to stick with gettext given the huge eco-system it has in terms of supporting tooling and websites.

While examining my options, I stumbled on the fact that GLib supports gettext and that GtkD has already wrapped it in the glib.Internationalization class. After refreshing my knowledge of how gettext works I was able to put together a basic working localization system in about an evening’s worth of work. The remainder of the post will describe the steps involved with getting it working.

I knew in Terminix that I would need to support localization, so from day one I had a module gx.i18n.l10n that contained a single function following the GNU gettext pattern as follows:

string _(string text) {
    return text;
}

Whenever something in Terminix would eventually need to be localized, I would simply wrap it in a call to this function. Obviously at this early stage the function does nothing beyond returning the original text. An example of its usage is as follows:

string msg = _("This is a localized string");

For those of you familiar with the usage of gettext in C or other languages this will all seem familiar. For anyone else, in gettext any text you want to localize is wrapped in this function and becomes the key for retrieving a localized version. If no localized version is available, i.e. for a language for which a translation does not yet exist, the key is used as the text that is displayed. In my opinion, this system is much superior to localization systems that require programmers to embed artificial keys in the code, and is one reason why gettext is so popular.

The next step was to simply incorporate the glib.Internationalization class from GtkD, so I updated my _() method as follows:

string _(string text) {
    return Internationalization.dgettext(_textdomain, text);
}

The textdomain parameter above is used by gettext to differentiate this application from others and will be discussed in more detail later. At this point, the application now supports localization but the real work begins as we need to prepare all of the localization materials that gettext requires. In gettext there are three types of files required for localization:

  • Template. This file, with a .pot extension, is used as the template for the localization. For a given application there is typically only one template file.
  • Portable Object (po). These files, with the extension .po, contain the localization for a specific locale, for example en.po or fr.po. These files are in the same format as the template file.
  • Machine Object (mo). These are the binary files that are used at runtime and are created by the msgfmt utility. These files have a 1:1 mapping with a po file in that each mo file is created from a po file.

Here’s an extract showing what a template/po file looks like:

msgid "Match entire word only"
msgstr "Match entire word only"

msgid "IBeam"
msgstr "IBeam"

msgid "Run command as a login shell"
msgstr "Run command as a login shell"

msgid "Exit the terminal"
msgstr "Exit the terminal"

In the extract above, the msgid is the key while the msgstr is the actual localization; as per above, in the template file these will be identical. Additionally, for most applications there is no need to provide a localization file for the locale the developer is using since that locale is already embedded in the source code by default.

The challenge at this point was creating the template file. While the gettext program has a utility called xgettext that can extract all of the localized strings from source code, unfortunately D is not one of the languages supported. I thought about creating a version of xgettext using the excellent libdparse, however I opted for a quick and dirty method as Terminix doesn’t have a large amount of strings needing localization.

What I ended up doing is adding some code to my l10n module to capture all the localization requests and then write them out to a file when the application terminates. This has the advantage of being relatively easy to do but the disadvantage that you have to exercise the app pretty completely to capture all the strings. For Terminix there were only a few areas I couldn’t exercise easily and for those I simply updated the template file after the fact. Below is the code I used to generate the template file; note the use of the Version specification so this code only gets included in a specific build configuration. I’ve removed some of the comments for the sake of conciseness; you can view the original code in github.

module gx.i18n.l10n;
 
import glib.Internationalization;
 
version (Localize) {
 
    import std.experimental.logger;
    import std.file;
    import std.string;
 
    string[string] messages;
 
    void saveFile(string filename) {
        string output;
        foreach(key,value; messages) {
            if (key.indexOf("%") >= 0) {
                output ~= "#, c-format\n";
            }
            if (key.indexOf("\n") >= 0) {
                string lines;
                foreach(s;key.splitLines()) {
                    lines ~= "\"" ~ s ~ "\"\n"; 
                }
                output ~= ("msgid \"\"\n" ~ lines);
                output ~= ("msgstr \"\"\n" ~ lines ~ "\n");
            } else {
                output ~= "msgid \"" ~ key ~ "\"\n";
                output ~= "msgstr \"" ~ key ~ "\"\n\n";
            }
        }
        write(filename, output);
    } 
}
 
void textdomain(string domain) {
    _textdomain = domain;
}
 
string _(string text) {
    version (Localize) {
        trace("Capturing key " ~ text);
        messages[text] = text;
    }
 
    return Internationalization.dgettext(_textdomain, text);
}
 
private:
string _textdomain;

When capturing text, there are a couple of special cases in gettext to be aware of. The first is that the xgettext utility puts a special comment in front of strings that use C style formatting, i.e. %d or %s. I don’t think this comment is used but I wanted to keep it. The second is that the key for multi-line strings is generated with each line separated. That’s why in the code above you see the check for newline and the splitLines call.

Once the template file is completed we are ready to create our first localization. In my case, I created an English localization as Terminix has some programmatic terms (shortcut identifiers) that were intended to be localized to human friendly language rather than shown directly to the user. Creating the en.po file is just a matter of copying the terminix.pot file to en.po. While gettext has a utility for this, msginit, I just opted to copy it for simplicity.

Once the en.po localization was completed it needs to be compiled into an mo file. In Linux, mo files are stored in /usr/share/locale/${LOCALE}/LC_MESSAGES where ${LOCALE} is the standard language/country code. The mo files for the application are named after the textdomain of the application and this is how gettext locates the right file for the application. For example, in Terminix’s case the full path to the English mo file would be /usr/share/locale/en/LC_MESSAGES/terminix.mo.

To compile the mo file, simply use the gettext msgfmt utility as follows:

sudo msgfmt en.po -o /usr/share/locale/en/LC_MESSAGES/terminix.mo

Obviously you would want to script the above process as part of creating an installation package, you can see how Terminix does this here.

Using the GAction Framework

In GUI development you typically want to centralize event handlers given that it is pretty common to have multiple GUI elements (button, menu item, etc) invoking the same event. In many frameworks these centralized event handlers are typically referred to as actions. In GTK the action framework is part of GIO and is referred to as GAction. An overview of the framework can be found in the Gnome wiki in this How Do I entry.

Having done Delphi development 15 years ago I had some familiarity with the concept but it still took a bit of time to get up to speed on GActions. While the Gnome wiki provided some basic information, I still found it confusing in some spots so I thought I’d write a bit about my experience using GActions in Terminix as it specifically relates to the D language and the GtkD framework.

First some basics, every GAction has two elements that identify it, a prefix and a name. The prefix is a way to categorize actions and within a prefix the name of the action must be unique. The prefix and the name together, with a period between them, make a detailed action name and this is how you reference actions when binding them to UI elements. So if you have a prefix of “terminal” and a name of “copy” the detailed name would be “terminal.copy”.

Actions are stored in containers which typically implement the interfaces gio.ActionMapIF and gio.ActionGroupIF. Out of the box, GTK implements these interfaces in Application and ApplicationWindow. Actions registered against these objects are automatically assigned the prefix of “app” and “win” respectively. Typically actions registered against the Application object are used in the Gnome application menu and actions registered against ApplicationWindow operate against that window.

An important thing to understand though is you are not limited to registering actions against just Application and ApplicationWindow, you can register actions against any widget in your application using custom prefixes. The key is using the gio.SimpleActionGroup to create your own container to register actions against. You can then bind the SimpleActionGroup to a widget via the method insertActionGroup. An example of this is shown below:

SimpleActionGroup sagTerminalActions = new SimpleActionGroup();
//...create actions and register to the group
insertActionGroup("terminal", sagTerminalActions);

Understanding how to create your own action groups gives you a lot of flexibility in organizing your actions. Terminix is an application where the UI is heavily nested and generally I wanted to keep each layer isolated from the others. I didn’t want the Terminal to know too much about the containers it is nested within. Being able to create these custom action groups allows the terminal to have its own set of actions without needing a mechanism to bind them to the window.

The other benefit to this approach is that the GIO framework will allow you to have multiple instances of the same action and will determine which instance gets invoked based on the scope. In Terminix I have multiple terminals, each with the same set of “terminal” actions registered. The GAction framework automatically invokes the right action instance based on the terminal that has focus, presumably by walking up the widget hierarchy until it finds the first widget with the action it is looking for.

To register an action, you simply create an instance of gio.SimpleAction, which is a concrete implementation of the ActionIF interface, and then add it to the desired ActionMap. A simple example would look like:

action = new SimpleAction("copy", null);
action.addOnActivate(delegate(Variant, SimpleAction) {
    vte.copyClipboard();
});
sagTerminalActions.add(action);

Once you have your action, the next step is to bind it to a user interface element. One thing to be aware of is that in GTK there are actually two action frameworks, the one in the gio package (SimpleAction, SimpleActionGroup) and one in the gtk package (Action, ActionGroup). The one in GTK is considered deprecated and it is highly recommended to use the GIO action framework where possible. In general, GTK widgets that work with GIO actions use the method setActionName to specify the target action, the method setActionTargetValue is used by the GTK framework.

To bind the action to a UI element, you simply call the setActionName method with the detailed action name, for example:

button.setActionName("terminal.copy");

Actions can be represented in menus using a variety of UI elements such as check and radio buttons. This involves declaring that an action has state which is represented by a GLib Variant in the Activate signal. I won’t spend any time on this topic, however for those interested I wrote an actions demo showing how to do this with popovers and it is available in the GtkD repository here.

If you want your action to have an accelerator, i.e. a keyboard shortcut, you need to register the key and the detailed action name with your global GTK Application instance using the method Application.setAccelsForAction. Here is an example of this:

app.setAccelsForAction("terminal.copy", ["<Ctrl><Shift>c"]);

This concludes the article on GActions; hopefully you can see this provides a powerful framework for centralizing your event handlers within a GTK application. One final note: for the purposes of this article I’ve been using strings directly for prefixes and names for illustration. Obviously, you should declare these as constants in your code to ensure consistency. If you need to change the name of a prefix or action, having the string declared in one spot makes that job much easier.

Iterating over TreeIter

I’m working on a GTK application and found there were a few spots where I needed to iterate over a TreeModel. Doing this in GtkD is a bit of a pain since TreeModelT isn’t implemented as a Range and so you are forced to use the model methods rather than the convenience of foreach.

To make my life easier, I created a small struct that can be used to wrap a TreeModelIF interface and allows it to be used in D’s foreach construct as per this example:

foreach(TreeIter iter; TreeIterRange(lsProfiles)) {
	if (lsProfiles.getValue(iter, COLUMN_UUID).getString() == window.uuid) {
		lsProfiles.setValue(iter, COLUMN_NAME, profile.name); 
	}
}
/**
 * An implementation of a range that allows using foreach over a range
 */
struct TreeIterRange {
 
private:
    TreeModelIF model;
    TreeIter iter;
    bool _empty;
 
public:
    this(TreeModelIF model) {
        this.model = model;
        // getIterFirst returns true when a first row exists, so the range
        // is empty only when it returns false
        _empty = !model.getIterFirst(iter);
    }
 
    @property bool empty() {
        return _empty;
    }
 
    @property auto front() {
        return iter;
    }
 
    void popFront() {
        _empty = !model.iterNext(iter);
    }
 
    /**
     * Based on the example here https://www.sociomantic.com/blog/2010/06/opapply-recipe/#.Vm8mW7grKEI
     */
    int opApply(int delegate(ref TreeIter iter) dg) {
        int result = 0;
        TreeIter iter;
        bool hasNext = model.getIterFirst(iter);
        while (hasNext) {
            result = dg(iter);
            if (result) break;
            hasNext = model.iterNext(iter);
        }
        return result;
    }
}

Creating Complex Popovers in D and GtkD

I’m working on a new terminal emulator in my spare time and one UI element I definitely wanted to make use of was Popovers. While creating a simple Popover is quite easy, I was struggling a bit with how to handle the more complex layouts like horizontal buttons as the documentation does not make it very clear. Fortunately, Christian Hergert of Builder fame was on IRC one night and was able to point me in the right direction.

The Popover I created is shown in the image below. In the terminal emulator I’m working on, it allows you to split the terminal into multiple views or tiles so the user can see multiple views at the same time in an easily managed container. The Popover allows the user to split the terminal horizontally or vertically (note the icons are temporary, I plan on “borrowing” Builder’s icons for this at some point).

Popover

The issue I was struggling with was where to set the display-hint, since the section, being a gio.Menu, doesn’t have methods to set an attribute. What Christian pointed out to me is that you need to create an empty MenuItem to represent the section, add the section to that and then add it to the menu used in the Popover. You can then set the necessary attributes on the empty MenuItem to get the items displayed the way you want.

Here is the code I ended up with:

Popover createPopover(Widget parent) {
    GMenu model = new GMenu();

    GMenuItem splitH = new GMenuItem(null, "view.splith");
    splitH.setAttributeValue("verb-icon", new GVariant("go-next-symbolic"));
    splitH.setIcon(new ThemedIcon("go-next-symbolic"));

    GMenuItem splitV = new GMenuItem(null, "view.splitv");
    splitV.setAttributeValue("verb-icon", new GVariant("go-down-symbolic"));
    splitV.setIcon(new ThemedIcon("go-down-symbolic"));

    GMenu section = new GMenu();
    section.appendItem(splitH);
    section.appendItem(splitV);

    GMenuItem splits = new GMenuItem(null, null);
    splits.setSection(section);
    splits.setLabel("Split");
    splits.setAttributeValue("display-hint", new GVariant("horizontal-buttons"));

    model.appendItem(splits);

    Popover pm = new Popover(parent, model);
    return pm;
}

Learning D and GTK

Quite a while ago I wrote a post about learning Python and GTK; unfortunately, while I played around with it a bit I never really took to Python. The dynamic typing and the usage of whitespace as a delimiter were not my style. Additionally, I was doing a lot of travel for work at the time and could never really muster the energy to overcome my initial dislike of Python.

So as work slowed down a bit I figured I’d try something new this time. I decided to pick up D due to a number of factors including:

  • It’s derived from the C family so has some similarities to Java
  • It’s a compiled language so performance is excellent
  • Has a garbage collector so again similar to Java.
  • And most importantly, has full and up to date bindings for GTK

I had also considered trying Go, but the last point put a bullet in that idea as the available GTK bindings appear to have a ways to go.

So to put the conclusion first, I did end up writing a small utility in D and GTK called Visual Grep. It’s a GTK based GUI for grep that follows the Gnome Human Interface Guidelines (HIG) as much as possible. A screenshot is below; if you are interested in trying it out you can find it on Github.

Visual Grep

Learning D

I’ve had my eye on D for a number of years as a possible language to use for building GUI applications. The language itself is well designed and has a lot of advantages in the GUI space, with safety features such as garbage collection combined with the high performance of a compiled language.

Learning the D language itself is relatively straightforward and I didn’t find the learning curve to be that high. Coming from Java, the biggest hurdle was understanding ranges, which are core to many features in D. I found it helpful as a starting point to think of ranges as an advanced version of Java’s Iterator class. The other key features in D, such as templates, can be learned over time, but ranges are core to D and need to be understood right up front so I’d recommend dedicating some time to them right at the beginning.
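
To make the comparison concrete, here is a toy input range (an illustrative example, not code from Visual Grep) showing the empty/front/popFront contract that foreach consumes:

import std.stdio : writeln;

// A minimal input range that counts from 0 up to (but not including) limit
struct Counter {
    private int current;
    private int limit;

    this(int limit) {
        this.limit = limit;
    }

    @property bool empty() {
        return current >= limit;
    }

    @property int front() {
        return current;
    }

    void popFront() {
        current++;
    }
}

void main() {
    foreach (i; Counter(3)) {
        writeln(i); // prints 0, 1, 2
    }
}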

The next aspect of learning D is understanding the standard library called phobos. This library can be thought of as occupying a similar space as the core J2SE libraries. Learning the library and the features available is not overly difficult, however the documentation available on dlang can often be overly succinct. Additionally, in the Java world if you google any API you can find a wealth of information, discussion and examples. Since D is much less popular, the amount of resources available is consequently much smaller (a theme that will re-occur in subsequent sections).

Having said that, the D community is big enough that questions on forums and the irc channel got answered and I never found myself blocked on any particular issue.

The only issue with D, and reflected in phobos, is the general lack of a library eco-system. In more mainstream environments like Java and Python, there are a plethora of libraries to cover pretty much any use case. Want to create and parse XML, work with PDF files, do localization, interface with USB, etc? You have your pick of libraries. In D, not so much; while most core functions are available, some key ones are not. For example, there is currently no standard i18n package available for localization, logging is deemed experimental, etc.

This lack of resources is also reflected in phobos itself in terms of implementation and execution. For example, D has an excellent design with respect to multi-threaded communication based on message passing. D also supports the concept of declaring variables as immutable which would seem to be a perfect fit with respect to this message passing. From a design perspective it should work, however due to limitations in the implementation of variants in D you can’t actually use an immutable variable in this scenario. Either you pass a shared variable (D’s way of declaring a variable to be shared among multiple threads) or pass a pointer to the immutable variable and de-reference it on the receive.

Finally, the last area of D that I struggled with a bit is pointers. While I have done some work with pointers in the past in Delphi, the usage of pointers in Delphi tended to be very rare. I have to be careful not to exaggerate, as by no means is the usage of pointers in D a constant need when building GTK apps; it’s just that I found I needed to work with them more than I did in Delphi due to the need to interface with C. Having coded only in Java for the last 12 years, coming back to pointers was a bit of a culture shock.

Learning GTK

My previous experience with GUI frameworks includes Java Swing (hate it) and Delphi VCL (love it). D has a very complete set of bindings for GTK called GtkD which is highly recommended; Mike Wey does a great job keeping these up to date for each release of GTK. Based on my experiences with Swing and VCL, learning and getting productive with the basics of GTK was very easy with two notable exceptions.

The first is the GTK TreeView which, despite the name, is really an all encompassing listbox/treeview that serves multiple use cases. While it is undeniably powerful, I found its API a bit obtuse and it was difficult to accomplish the basic use cases. The sheer variety of objects related to the TreeView (TreeView, TreeModel, TreePath, TreeIter, CellRenderer, etc) is a bit overwhelming to the newbie.

The other aspect I struggled with a bit was multi-threading. In GTK, the preferred approach to multi-threading is to set up a callback via gdk-threads-add-idle. While not an issue in GTK per se, GtkD does not really give any examples of how best to integrate this with D. Looking at the design though, I went with integrating this with D’s threaded message passing infrastructure and it turned out to work very well, but I burned quite a lot of time prototyping and testing to make sure I understood how everything worked.

An invaluable resource for learning GTK and GtkD was the grestful application, an open source D/GtkD application for testing REST requests. I borrowed quite a bit of code from that application, including the plumbing for tying a D delegate into the GTK thread idle callback, so a big thanks to that project.

Finally, I’d really recommend having some basic proficiency in another language with full bindings in GTK, either C or Python. The reason for this is that there is very little information out there on using D with GTK but GTK examples in other languages are easily ported to D providing you can work through the code of the example. My C and Python knowledge is pretty sparse but I found I knew enough to get by.

Tool Chain

The tool chain for D is somewhat primitive in comparison to Java but is eminently usable. I do all my development on Linux and so ended up using Mono-D and generally had a pretty positive experience. My only complaints are as follows:

  • When trying to run a program, you can get an error about failing to create a console. Turns out this is an issue with gnome-terminal as documented here. Unsetting the GNOME_DESKTOP_SESSION_ID variable fixed the problem.
  • Code completion seems a bit hit and miss; most of the time it works great but once in a while you hit the ‘.’ and nothing happens.

Debugging can be done in Mono-D using the GDB plugin and I found this to work quite well when I was dealing with segmentation faults in my multi-threading code.

For builds I used DUB which is somewhat similar to Maven in the Java world in that it is based on convention over configuration and manages dependencies. I found DUB worked quite well for me and have been pretty happy with it, though admittedly my two projects are pretty simple.

Conclusion

Based on my experience so far, I think D and GtkD are a great combination for creating GTK applications in Linux. The learning curve is far shorter than with C and the language itself is much more productive; then again, I say that as a C neophyte so take it with a pinch of salt. Additionally, the safety features of D (GC, RAII, exceptions, etc) make for more reliable software in comparison to C.

Python would also be a very good choice, but coming from a Java background D just felt much more comfortable to me.