Integrating RHACS with OpenShift Authentication in GitOps

Further to my previous post about deploying Red Hat Advanced Cluster Security (RHACS) via GitOps, the newest version of RHACS supports direct integration with OpenShift OAuth. This means it is no longer necessary to use RH-SSO to integrate with OpenShift authentication, which greatly simplifies the configuration.

To configure RHACS to use this feature in GitOps, we can craft a simple Kubernetes Job that leverages the RHACS REST API to push the configuration into RHACS once Central is up and running. This Job can be found in my cluster-config repo and is also shown below:

apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "10"
  name: create-oauth-auth-provider
  namespace: stackrox
spec:
  template:
    spec:
      containers:
        - image: image-registry.openshift-image-registry.svc:5000/openshift/cli:latest
          env:
          - name: PASSWORD
            valueFrom:
              secretKeyRef:
                name: central-htpasswd
                key: password
          - name: DEFAULT_ROLE
            value: Admin
          - name: UI_ENDPOINT
            value: central-stackrox.apps.home.ocplab.com
          command:
            - /bin/bash
            - -c
            - |
              #!/usr/bin/env bash
              # Wait for central to be ready
              attempt_counter=0
              max_attempts=20
              echo "Waiting for central to be available..."
              until curl -k --output /dev/null --silent --head --fail https://central; do
                  if [ ${attempt_counter} -eq ${max_attempts} ];then
                    echo "Max attempts reached"
                    exit 1
                  fi
                  printf '.'
                  attempt_counter=$(($attempt_counter+1))
                  echo "Made attempt $attempt_counter, waiting..."
                  sleep 5
              done
              echo "Configuring OpenShift OAuth Provider"
              echo "Test if OpenShift OAuth Provider already exists"
              response=$(curl -k -u "admin:$PASSWORD" "https://central/v1/authProviders?name=OpenShift" | python3 -c "import sys, json; print(json.load(sys.stdin)['authProviders'], end = '')")
              if [[ "$response" != "[]" ]] ; then
                echo "OpenShift Provider already exists, exiting"
                exit 0
              fi
              export DATA='{"name":"OpenShift","type":"openshift","active":true,"uiEndpoint":"'${UI_ENDPOINT}'","enabled":true}'
              echo "Posting data: ${DATA}"
              authid=$(curl -k -X POST -u "admin:$PASSWORD" -H "Content-Type: application/json" --data "$DATA" https://central/v1/authProviders | python3 -c "import sys, json; print(json.load(sys.stdin)['id'], end = '')")
              echo "Authentication Provider created with id ${authid}"
              echo "Updating minimum role to ${DEFAULT_ROLE}"
              export DATA='{"previous_groups":[],"required_groups":[{"props":{"authProviderId":"'${authid}'"},"roleName":"'${DEFAULT_ROLE}'"}]}'
              curl -k -X POST -u "admin:$PASSWORD" -H "Content-Type: application/json" --data "$DATA" https://central/v1/groupsbatch
          imagePullPolicy: Always
          name: create-oauth-auth-provider
      dnsPolicy: ClusterFirst
      restartPolicy: Never
      serviceAccount: create-cluster-init
      serviceAccountName: create-cluster-init
      terminationGracePeriodSeconds: 30
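
The single-quote/double-quote splice used to build the JSON payloads in the script above can be easy to get wrong. A minimal sketch of the pattern, using a placeholder endpoint rather than a real cluster value:

```shell
# Build the JSON payload by splicing a shell variable into a single-quoted
# string, the same way the Job script does (endpoint here is a placeholder)
UI_ENDPOINT=central-stackrox.apps.example.com
DATA='{"name":"OpenShift","type":"openshift","active":true,"uiEndpoint":"'${UI_ENDPOINT}'","enabled":true}'

# Confirm the result is valid JSON and the variable was interpolated
echo "$DATA" | python3 -c "import sys, json; print(json.load(sys.stdin)['uiEndpoint'])"
```

Because the payload contains no whitespace, passing it unquoted to curl happens to work, but quoting it as `--data "$DATA"` guards against word splitting if the payload ever gains spaces.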

The Job will wait for Central to be available, so it can be deployed simultaneously with the operator as per my last article. While you could optionally run this as a post-sync hook in Argo CD, this Job only needs to run once, so I've opted not to annotate it with the post-sync hook.
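
For reference, if you did want Argo CD to treat the Job as a hook and re-run it on every sync, the standard hook annotations would look like the fragment below. This is only a sketch of the alternative mentioned above, using Argo CD's documented resource hook annotations:

```yaml
metadata:
  annotations:
    argocd.argoproj.io/hook: PostSync
    # Delete the completed Job so the next sync can recreate it
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
```

Note that since the script exits early when the OpenShift provider already exists, re-running it on every sync would be harmless but redundant.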