Deploying Red Hat Advanced Cluster Security (aka StackRox) with GitOps

I’ve been running Red Hat Advanced Cluster Security (RHACS) in my personal cluster via the stackrox Helm chart for quite a while. Now that the RHACS operator is available, I figured it was time to step up my game and integrate it into my GitOps cluster configuration instead of deploying it manually.

Broadly speaking, when installing RHACS manually on a cluster there are four steps you typically need to perform:

  1. Subscribe the operator into your cluster via Operator Hub (it installs into the openshift-operators namespace)
  2. Deploy an instance of Central, which provides the UI, dashboards, etc. (i.e. the single pane of glass) to interact with the product, using the Central CRD API
  3. Create and download a cluster-init bundle in Central for the sensors and deploy it into the stackrox namespace
  4. Deploy the sensors via the SecuredCluster CRD
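To make step 2 concrete, here is a minimal sketch of a Central custom resource. The field values are illustrative assumptions, not a definitive configuration; consult the operator documentation for the full spec:

```yaml
apiVersion: platform.stackrox.io/v1alpha1
kind: Central
metadata:
  name: stackrox-central-services
  namespace: stackrox
spec:
  central:
    # Expose the Central UI via an OpenShift Route
    exposure:
      route:
        enabled: true
```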

Looking at these steps, there are a couple of challenges to overcome before the process can be done via GitOps:

  • The steps need to happen sequentially; in particular, the cluster-init bundle needs to be deployed before the SecuredCluster
  • Retrieving the cluster-init bundle requires interacting with the Central API, as it is not managed via a Kubernetes CRD

Fortunately both of these challenges are easily overcome. For the first challenge we can leverage Sync Waves in Argo CD to deploy items in a defined order. To do this, we simply annotate each object with argocd.argoproj.io/sync-wave set to the desired order, aka wave. For example, here is the operator subscription, which goes first since we placed it in wave “0”:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "0"
  name: rhacs-operator
  namespace: openshift-operators
spec:
  channel: latest
  installPlanApproval: Automatic
  name: rhacs-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  startingCSV: rhacs-operator.v3.62.0
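The later objects get higher wave numbers so Argo CD deploys them afterwards. As an illustration, a SecuredCluster annotated into a later wave might look like the sketch below (the wave number and field values are assumptions for this example, not the repo’s exact manifest):

```yaml
apiVersion: platform.stackrox.io/v1alpha1
kind: SecuredCluster
metadata:
  name: stackrox-secured-cluster-services
  namespace: stackrox
  annotations:
    # Deployed after the operator (wave 0), Central and the
    # cluster-init bundle job in the intervening waves
    argocd.argoproj.io/sync-wave: "3"
spec:
  clusterName: local-cluster
```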

The second challenge, retrieving the cluster-init bundle, is straightforward using the RHACS Central API. To invoke the API we create a small Kubernetes job that Argo CD will deploy after Central is up and running but before the SecuredCluster. The job uses a ServiceAccount with just enough permissions to retrieve the Central admin password and then interact with the API. An abbreviated version of the job, highlighting the meat of it, appears below:

echo "Configuring cluster-init bundle"
DATA='{"name":"local-cluster"}'
curl -k -o /tmp/bundle.json -X POST -u "admin:$PASSWORD" -H "Content-Type: application/json" --data "$DATA" https://central/v1/cluster-init/init-bundles
echo "Bundle received"
echo "Applying bundle"
# No jq in container, python to the rescue
cat /tmp/bundle.json | python3 -c "import sys, json; print(json.load(sys.stdin)['kubectlBundle'])" | base64 -d | oc apply -f -
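To see what that last pipeline is doing, here is a small self-contained Python sketch of the same extraction. The payload below is a hypothetical stand-in for the API response (the real kubectlBundle is a much larger base64-encoded manifest):

```python
import base64
import json

# Hypothetical response payload, mimicking the shape returned by
# POST /v1/cluster-init/init-bundles
manifest = "apiVersion: v1\nkind: Secret\n"
response = json.dumps({
    "meta": {"name": "local-cluster"},
    "kubectlBundle": base64.b64encode(manifest.encode()).decode(),
})

# Same steps the job performs: pull the kubectlBundle field out of
# the JSON, then base64-decode it into applyable YAML
bundle = json.loads(response)["kubectlBundle"]
decoded = base64.b64decode(bundle).decode()
print(decoded)
```

In the job itself, the decoded output is piped straight into oc apply rather than printed.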

The last thing needed to make this work is to define a custom health check in Argo CD for Central. Without this health check, Argo CD will not wait for Central to be fully deployed before moving on to the next item in the wave, which causes issues when the job tries to execute and no Central is available. In your Argo CD resource customizations you need to add the following:
      health.lua: |
        hs = {}
        if obj.status ~= nil and obj.status.conditions ~= nil then
            for i, condition in ipairs(obj.status.conditions) do
              if condition.status == "True" and condition.reason == "InstallSuccessful" then
                  hs.status = "Healthy"
                  hs.message = condition.message
                  return hs
              end
            end
        end
        hs.status = "Progressing"
        hs.message = "Waiting for Central to deploy."
        return hs
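For context, the health check is registered per resource group/kind. With the OpenShift GitOps operator, a sketch of where it lives in the ArgoCD custom resource might look like the following (the resourceCustomizations field shown here is an assumption based on the operator version in use; newer versions use resourceHealthChecks instead):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: ArgoCD
metadata:
  name: openshift-gitops
  namespace: openshift-gitops
spec:
  resourceCustomizations: |
    platform.stackrox.io/Central:
      health.lua: |
        -- the health check script shown above goes here
```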

A full example of the healthcheck is in the repo I use to install the OpenShift GitOps operator here.

At this point you should have a fully functional RHACS deployment in your cluster, managed by the OpenShift GitOps operator (Argo CD). Going further, you can extend the example by using the Central API to integrate with RH-SSO and other components in your infrastructure, using the same job technique we used to fetch the cluster-init bundle.

The complete example of this approach is available in the Red Hat Canada GitOps Catalog repo in the acs-operator folder.