API Testing in OpenShift Pipelines with Newman

If you are writing REST-based API applications, you probably have some familiarity with Postman, which lets you test your APIs via an interactive GUI. However, did you know that there is a CLI equivalent of Postman called Newman that works with your existing Postman collections? Newman enables you to re-use those collections to integrate API testing into automated processes where a GUI would not be appropriate. While we will not go into the details of Postman or Newman here, if you are new to the tools you can check out this blog, which provides a good overview of both.

Integrating Newman into OpenShift Pipelines, aka Tekton, is straightforward. In this blog we are going to look at how I am using it in my product catalog demo to test the Quarkus back-end API as part of the CI/CD process powered by OpenShift Pipelines. This CI/CD process is shown in the diagram below (click for a bigger version); note the two tasks where we do our API testing in the Development and Test environments, dev-test and test-test (unfortunate name) respectively. These tests run after the new image is built and deployed in each environment and are thus integration tests rather than unit tests.

Product Catalog Server CICD

One of the things I love about Tekton, and thus OpenShift Pipelines, is its extensibility: it is very easy to add new capabilities by creating custom tasks that use either an existing container image or one you have built yourself. If you are not familiar with OpenShift Pipelines or Tekton, I would highly recommend checking out the concepts documentation, which provides a good overview.

The first step to using Newman in OpenShift Pipelines is to create a custom task for it. Tasks in Tekton represent a sequence of steps to accomplish a specific goal or, as the name implies, task. Each step uses a specified container image to perform its function. Fortunately in our case there is an existing container image for Newman at docker.io/postman/newman that we can leverage without having to create our own. Our definition for the newman task appears below:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: newman
spec:
  params:
  - name: COLLECTION
    description: The collection to run, typically a remote URL pointing to the collection
    type: string
  - name: ENVIRONMENT
    description: The environment file to use from the newman-env configmap
    default: "newman-env.json"
  steps:
    - name: collections-test
      image: docker.io/postman/newman:latest
      command:
        - newman
      args:
        - run
        - $(inputs.params.COLLECTION)
        - -e
        - /config/$(inputs.params.ENVIRONMENT)
        - --bail
      volumeMounts:
        - name: newman-env
          mountPath: /config
  volumes:
    - name: newman-env
      configMap:
        name: newman-env
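
Since the task simply wraps the stock Newman image, the step amounts to running something like the following locally (assuming Newman is installed and the collection and environment files are at hand):

newman run product-catalog-server-tests.json -e newman-dev-env.json --bail

The --bail flag stops the run on the first failing test, so a failure surfaces immediately in the pipeline logs.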

There are two parameters declared as part of this task, COLLECTION and ENVIRONMENT. The COLLECTION parameter references a URL to the test suite that you want to run; it is typically created using the Postman GUI and exported as a JSON file. For the pipeline in the product catalog we use this product-catalog-server-tests.json. Each item in the collection represents a request/response against the API along with some simple tests to ensure conformance with the desired results.

For example, when requesting a list of products, we test that the response code was 200 and 12 products were returned as per the picture below:

Postman
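
For reference, a request like this is stored in the exported collection JSON together with its test script. A trimmed-down item might look roughly like the following (structure per the Postman collection format, values illustrative):

{
  "name": "Get Products",
  "request": {
    "method": "GET",
    "url": "{{scheme}}://{{host}}/api/product"
  },
  "event": [
    {
      "listen": "test",
      "script": {
        "exec": [
          "pm.test('response is ok', () => pm.response.to.have.status(200));",
          "pm.test('data valid', () => pm.expect(pm.response.json().length).to.eql(12));"
        ]
      }
    }
  ]
}

The {{scheme}} and {{host}} variables are supplied by the environment files described next.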

The ENVIRONMENT parameter selects a file from a configmap holding the customizations the test suite requires for the specific environment being tested. For example, the API in the development and test environments has a different URL, so we parameterize it in order to re-use the same test suite across all environments. You can see the environments for dev and test in my github repo. The task is designed so that a single configmap, newman-env, contains all of the environments as separate files, as per the example here:

apiVersion: v1
data:
  newman-dev-env.json: "{\n\t\"id\": \"30c331d4-e961-4606-aecb-5a60e8e15213\",\n\t\"name\": \"product-catalog-dev-service\",\n\t\"values\": [\n\t\t{\n\t\t\t\"key\": \"host\",\n\t\t\t\"value\": \"server.product-catalog-dev:8080\",\n\t\t\t\"enabled\": true\n\t\t},\n\t\t{\n\t\t\t\"key\": \"scheme\",\n\t\t\t\"value\": \"http\",\n\t\t\t\"enabled\": true\n\t\t}\n\t],\n\t\"_postman_variable_scope\": \"environment\"\n}"
  newman-test-env.json: "{\n\t\"id\": \"30c331d4-e961-4606-aecb-5a60e8e15213\",\n\t\"name\": \"product-catalog-dev-service\",\n\t\"values\": [\n\t\t{\n\t\t\t\"key\": \"host\",\n\t\t\t\"value\": \"server.product-catalog-test:8080\",\n\t\t\t\"enabled\": true\n\t\t},\n\t\t{\n\t\t\t\"key\": \"scheme\",\n\t\t\t\"value\": \"http\",\n\t\t\t\"enabled\": true\n\t\t}\n\t],\n\t\"_postman_variable_scope\": \"environment\"\n}"
kind: ConfigMap
metadata:
  name: newman-env
  namespace: product-catalog-cicd

In the raw configmap the environments are hard to read due to the escaped formatting; below is what newman-dev-env.json looks like when formatted properly. Notice the host points to the service in the product-catalog-dev namespace.

{
	"id": "30c331d4-e961-4606-aecb-5a60e8e15213",
	"name": "product-catalog-dev-service",
	"values": [
		{
			"key": "host",
			"value": "server.product-catalog-dev:8080",
			"enabled": true
		},
		{
			"key": "scheme",
			"value": "http",
			"enabled": true
		}
	],
	"_postman_variable_scope": "environment"
}

Now that we have our task, our test suite and our environments, we need to add the task to the pipeline for each environment we want to test. You can see the complete pipeline here; an excerpt showing the pipeline testing the dev environment appears below:

    - name: dev-test
      taskRef:
        name: newman
      runAfter:
        - deploy-dev
      params:
        - name: COLLECTION
          value: https://raw.githubusercontent.com/gnunn-gitops/product-catalog-server/master/tests/product-catalog-server-tests.json
        - name: ENVIRONMENT
          value: newman-dev-env.json

When the task runs, Newman logs the results of the tests and, if any of them fail, returns a non-zero exit code which propagates up to the pipeline and causes the pipeline itself to fail. Here is the result from testing the Dev environment:

newman
Quarkus Product Catalog
→ Get Products
GET http://server.product-catalog-dev:8080/api/product [200 OK, 3.63KB, 442ms]
✓ response is ok
✓ data valid
→ Get Existing Product
GET http://server.product-catalog-dev:8080/api/product/1 [200 OK, 388B, 14ms]
✓ response is ok
✓ Data is correct
→ Get Missing Product
GET http://server.product-catalog-dev:8080/api/product/99 [404 Not Found, 115B, 18ms]
✓ response is missing
→ Login
POST http://server.product-catalog-dev:8080/api/auth [200 OK, 165B, 145ms]
→ Get Missing User
GET http://server.product-catalog-dev:8080/api/user/8 [404 Not Found, 111B, 12ms]
✓ Is status code 404
→ Get Existing User
GET http://server.product-catalog-dev:8080/api/user/1 [200 OK, 238B, 20ms]
✓ response is ok
✓ Data is correct
→ Get Categories
GET http://server.product-catalog-dev:8080/api/category [200 OK, 458B, 16ms]
✓ response is ok
✓ data valid
→ Get Existing Category
GET http://server.product-catalog-dev:8080/api/category/1 [200 OK, 192B, 9ms]
✓ response is ok
✓ Data is correct
→ Get Missing Category
GET http://server.product-catalog-dev:8080/api/category/99 [404 Not Found, 116B, 9ms]
✓ response is missing
┌─────────────────────────┬───────────────────┬───────────────────┐
│                         │          executed │            failed │
├─────────────────────────┼───────────────────┼───────────────────┤
│              iterations │                 1 │                 0 │
├─────────────────────────┼───────────────────┼───────────────────┤
│                requests │                 9 │                 0 │
├─────────────────────────┼───────────────────┼───────────────────┤
│            test-scripts │                 8 │                 0 │
├─────────────────────────┼───────────────────┼───────────────────┤
│      prerequest-scripts │                 0 │                 0 │
├─────────────────────────┼───────────────────┼───────────────────┤
│              assertions │                13 │                 0 │
├─────────────────────────┴───────────────────┴───────────────────┤
│ total run duration: 883ms                                        │
├──────────────────────────────────────────────────────────────────┤
│ total data received: 4.72KB (approx)                             │
├──────────────────────────────────────────────────────────────────┤
│ average response time: 76ms [min: 9ms, max: 442ms, s.d.: 135ms]  │
└──────────────────────────────────────────────────────────────────┘

To summarize, integrating API testing with OpenShift Pipelines is quick and easy. While this example showed the process using Newman, other API testing tools can be integrated following a similar approach.

OpenShift User Application Monitoring and Grafana the GitOps way!

Update: All of the work outlined in this article is now available as a kustomize overlay in the Red Hat Canada GitOps repo here.

Traditionally in OpenShift, the monitoring stack provided out-of-the-box (OOTB) could only be used for cluster monitoring. Administrators could not configure it to support their own application workloads, necessitating the deployment of a separate monitoring stack (typically community Prometheus and Grafana). However this has changed in OpenShift 4.6: the cluster monitoring operator now supports deploying a separate Prometheus instance for application workloads.

One great capability of the OpenShift cluster monitoring stack is that it deploys Thanos to aggregate metrics from both the cluster and application monitoring stacks, thus providing a central point for queries. At this point in time you still need to deploy your own Grafana instance for visualizations, but I expect a future version of OpenShift will support custom dashboards right in the console alongside the default ones. The monitoring architecture for OpenShift 4.6 is shown in the diagram below (click for the architecture documentation):

Monitoring Architecture

In this blog entry we cover deploying the user application monitoring feature (super easy) as well as a Grafana instance (not super easy) using GitOps, specifically in this case with ArgoCD. This blog post is going to assume some familiarity with Prometheus and Grafana and will concentrate on the more challenging aspects of using GitOps to deploy everything.

The first thing we need to do is enable user application monitoring in OpenShift; this would typically be done as part of your cluster configuration. To do so, as per the docs, we simply need to deploy the following configmap in the openshift-monitoring namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true

You can see this in my GitOps cluster-config here. Once deployed, you should see the user monitoring components running in the openshift-user-workload-monitoring project.
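
A quick check from the CLI confirms the stack is up; you should see prometheus-operator, prometheus-user-workload and thanos-ruler-user-workload pods running (names vary slightly by release):

oc get pods -n openshift-user-workload-monitoring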

Now that the user monitoring is up and running, we can configure monitoring of our applications by adding a ServiceMonitor object to define the scrape targets. This is typically done as part of the application deployment by application teams; it is a separate activity from enabling the user monitoring itself, which is done in the cluster configuration by cluster administrators. Here is an example from my product-catalog demo that monitors my Quarkus back-end:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: server
  namespace: product-catalog-dev
spec:
  endpoints:
  - path: /metrics
    port: http
    scheme: http
  selector:
    matchLabels:
      quarkus-prometheus: "true"

The ServiceMonitor above specifies that any Kubernetes Service in the same namespace with the label quarkus-prometheus set to "true" will have its metrics collected on the port named 'http' using the path '/metrics'. Of course, your application needs to expose Prometheus metrics, and most modern frameworks like Quarkus make this easy. From a GitOps perspective the ServiceMonitor is just another piece of yaml to deploy along with the application, as you can see in my product-catalog manifests here.
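
For completeness, here is roughly what a matching Service looks like; the important parts are the quarkus-prometheus label and the named http port, everything else (selector, port numbers) is illustrative:

apiVersion: v1
kind: Service
metadata:
  name: server
  namespace: product-catalog-dev
  labels:
    quarkus-prometheus: "true"
spec:
  selector:
    app: server
  ports:
    - name: http
      port: 8080
      targetPort: 8080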

As an aside, please note that the user monitoring in OpenShift does not support the namespace selector in ServiceMonitor for security reasons; as a result, the ServiceMonitor must be deployed in the same namespace as the targets it defines. Thus if you have the same application in three different namespaces (say dev, test and prod) you will need to deploy the ServiceMonitor in each of those namespaces independently.

Now if I were to stop here it would hardly merit a blog post; for most folks, once they deploy the user monitoring the next step is deploying something to visualize the metrics, and in this example that will be Grafana. Deploying the Grafana operator via GitOps in OpenShift is somewhat involved since we will use the Operator Lifecycle Manager (OLM) to do it, and OLM is asynchronous. Specifically, with OLM you push the Subscription and OperatorGroup and OLM then installs and deploys the operator in the background. From a GitOps perspective, managing the deployment of the operator and its Custom Resources (CRs) becomes tricky since the CRs cannot be applied until the operator's Custom Resource Definitions (CRDs) have been installed.
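
To make the asynchronous part concrete, the only OLM resources that actually get committed to git are a Subscription and an OperatorGroup along these lines (the channel and catalog source shown are illustrative and depend on the operator version you target); the CSV, the operator deployment and the CRDs all appear afterwards, on OLM's schedule:

apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: product-catalog-monitor
  namespace: product-catalog-monitor
spec:
  targetNamespaces:
    - product-catalog-monitor
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: grafana-operator
  namespace: product-catalog-monitor
spec:
  channel: alpha
  name: grafana-operator
  source: community-operators
  sourceNamespace: openshift-marketplace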

Fortunately ArgoCD has a number of features to work around this; specifically, adding the `argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true` annotation to our resources instructs ArgoCD not to error out if some resources cannot be applied initially. You can also combine this with retries in your ArgoCD application for more complex operators that take significant time to initialize; for Grafana, though, the annotation seems to be sufficient. In my product-catalog example, I am adding this annotation across all resources using kustomize:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

namespace: product-catalog-monitor

commonAnnotations:
    argocd.argoproj.io/sync-options: SkipDryRunOnMissingResource=true

bases:
- https://github.com/redhat-canada-gitops/catalog/grafana-operator/overlays/aggregate?ref=grafana
- ../../../manifests/app/monitor/base

resources:
- namespace.yaml
- operator-group.yaml
- cluster-monitor-view-rb.yaml

patchesJson6902:
- target:
    version: v1
    group: rbac.authorization.k8s.io
    kind: ClusterRoleBinding
    name: grafana-proxy
  path: patch-proxy-namespace.yaml
- target:
    version: v1alpha1
    group: integreatly.org
    kind: Grafana
    name: grafana
  path: patch-grafana-sar.yaml

It’s beyond the scope of this blog to go into a detailed description of kustomize, but in a nutshell it is a patching framework that enables you to aggregate resources from local or remote bases as well as add new resources. In the kustomization above, we are using the Red Hat Canada standard deployment of Grafana, which includes OpenShift OAuth integration, and combining it with my application-specific Grafana resources such as datasources and dashboards, which is what we will look at next.

Continuing along, we need to set up the plumbing to connect Grafana to the cluster monitoring Thanos instance in the openshift-monitoring namespace. This blog article, Custom Grafana dashboards for Red Hat OpenShift Container Platform 4, does a great job of walking you through the process and I am not going to repeat it here; however, please do read that article before carrying on.

The first step is to define a GrafanaDataSource object:

apiVersion: integreatly.org/v1alpha1
kind: GrafanaDataSource
metadata:
  name: prometheus
spec:
  datasources:
    - access: proxy
      editable: true
      isDefault: true
      jsonData:
        httpHeaderName1: 'Authorization'
        timeInterval: 5s
        tlsSkipVerify: true
      name: Prometheus
      secureJsonData:
        httpHeaderValue1: 'Bearer ${BEARER_TOKEN}'
      type: prometheus
      url: 'https://thanos-querier.openshift-monitoring.svc.cluster.local:9091'
  name: prometheus.yaml

Notice that in httpHeaderValue1 we are expected to provide a bearer token. This token comes from the grafana-serviceaccount and can only be determined at runtime, which makes it a bit of a challenge from a GitOps perspective. To manage this, we deploy a Kubernetes job as an ArgoCD PostSync hook in order to patch the GrafanaDataSource with the appropriate token:


apiVersion: batch/v1
kind: Job
metadata:
  name: patch-grafana-ds
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: HookSucceeded
spec:
  template:
    spec:
      containers:
        - image: registry.redhat.io/openshift4/ose-cli:v4.6
          command:
            - /bin/bash
            - -c
            - |
              set -e
              echo "Patching grafana datasource with token for authentication to prometheus"
              TOKEN=`oc serviceaccounts get-token grafana-serviceaccount -n product-catalog-monitor`
              oc patch grafanadatasource prometheus --type='json' -p='[{"op":"add","path":"/spec/datasources/0/secureJsonData/httpHeaderValue1","value":"Bearer '${TOKEN}'"}]'
          imagePullPolicy: Always
          name: patch-grafana-ds
      dnsPolicy: ClusterFirst
      restartPolicy: OnFailure
      serviceAccount: patch-grafana-ds-job
      serviceAccountName: patch-grafana-ds-job
      terminationGracePeriodSeconds: 30

This job runs under a dedicated ServiceAccount which gives it just enough access to retrieve the token and patch the datasource; once that's done, the job is deleted by ArgoCD.
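
The RBAC for that service account is deliberately small; something along the lines of the following is all the job needs (the exact definitions live in the repo, this is a sketch):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: patch-grafana-ds-job
  namespace: product-catalog-monitor
rules:
  # needed by 'oc serviceaccounts get-token' to read the SA token secret
  - apiGroups: [""]
    resources: ["serviceaccounts", "secrets"]
    verbs: ["get", "list"]
  # needed to patch the GrafanaDataSource with the token
  - apiGroups: ["integreatly.org"]
    resources: ["grafanadatasources"]
    verbs: ["get", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: patch-grafana-ds-job
  namespace: product-catalog-monitor
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: patch-grafana-ds-job
subjects:
  - kind: ServiceAccount
    name: patch-grafana-ds-job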

The other thing we want to do is control access to Grafana: basically, we want to grant access to any OpenShift user who has view permission on the Grafana route in the namespace. The Grafana operator uses the OpenShift OAuth Proxy to integrate with OpenShift. This proxy allows you to define a Subject Access Review (SAR) to determine who is authorized to use Grafana; the SAR is simply a check against a particular object that acts as the gate for access. For example, to only allow cluster administrators to have access to the Grafana instance we can specify that the user must be able to get namespaces:

-openshift-sar={"resource": "namespaces", "verb": "get"}

In our case we want anyone who has view access to the grafana route in the namespace where Grafana is hosted, product-catalog-monitor, to have access. So our SAR appears as follows:

-openshift-sar={"namespace":"product-catalog-monitor","resource":"routes","name":"grafana-route","verb":"get"}

To make this easy for kustomize to patch, the Red Hat Canada grafana implementation passes the SAR as an environment variable. To patch the value we can include a kustomize patch as follows:

- op: replace
  path: /spec/containers/0/env/0/value
  value: '-openshift-sar={"namespace":"product-catalog-monitor","resource":"routes","name":"grafana-route","verb":"get"}'

You can see this patch being applied at the environment level in my product-catalog example here. In my GitOps standards, environments is where the namespace is created and thus it makes sense that any namespace patching that is required is done at this level.

After this it is simply a matter of including the remaining resources, such as the binding that grants the cluster-monitoring-view role to the grafana-serviceaccount so that Grafana is authorized to retrieve the metrics.
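
Conceptually that binding just attaches the built-in cluster-monitoring-view ClusterRole to the grafana-serviceaccount, something like this (the binding name is illustrative):

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-monitor-view
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-monitoring-view
subjects:
  - kind: ServiceAccount
    name: grafana-serviceaccount
    namespace: product-catalog-monitor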

If everything has gone well to this point you should be able to create a dashboard to view your application metrics.

Initializing Databases in OpenShift Deployments

When deploying a database in OpenShift there is typically a need to initialize it with a schema and perhaps an initial dataset or some reference data. This can be done in a variety of ways, such as having the application initialize it, using a Kubernetes job, etc. In OpenShift 3, one technique that was quite common was to leverage the DeploymentConfig lifecycle post hook, for example:

apiVersion: v1
kind: DeploymentConfig
metadata:
  name: my-database
spec:
  strategy:
    recreateParams:
      post:
        execNewPod:
          command:
            - /bin/sh
            - '-c'
            - >-
              curl -o ~/php-react.sql
              https://raw.githubusercontent.com/gnunn1/openshift-basic-pipeline/master/react-crud/database/php-react.sql
              && /opt/rh/rh-mysql57/root/usr/bin/mysql -h $MYSQL_SERVICE_HOST -u
              $MYSQL_USER -D $MYSQL_DATABASE -p$MYSQL_PASSWORD -P 3306 <
              ~/php-react.sql
          containerName: ${DATABASE_SERVICE_NAME}
        failurePolicy: abort
...

While not necessarily a production grade technique, this was particularly useful for me when creating self-contained demos where I did not want to require someone to manually set up a bunch of infrastructure or provision datasets.

In OpenShift 4 there is a trend towards using the standard Deployment rather than the OpenShift-specific DeploymentConfig, with the console and the CLI defaulting to Deployments in most cases. While Deployments and DeploymentConfigs are very similar, there are some key differences in capability between the two, as outlined in the documentation. I won’t rehash them all here, but one DeploymentConfig feature missing from Deployments is the lifecycle hook, so how do we accomplish the above using a Deployment?

One technique I’ve found that works well is to leverage the s2i (source-to-image) capabilities of Red Hat’s database containers; however, instead of building a custom container we can have the generic image do our initialization at runtime. This works because if you look at the assemble script the database containers use for s2i, you can see that all it does is copy files from one location to another. The script does not perform any actual initialization at build time, which means we can simply mount our initialization files directly into the container at the right location without building a custom image first.
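
To illustrate the point, the assemble script in those images boils down to something like this (paraphrased, check the image source for the real script):

#!/bin/bash
# Simplified view of the database image's s2i assemble script: it only copies
# the provided source (init scripts and SQL files) into the image. Nothing is
# executed at build time, so mounting the same files at runtime works equally well.
set -o errexit
echo "---> Installing application source ..."
cp -Rp /tmp/src/. ./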

You can see the technique in action in my product-catalog demo repository. In this repo I deploy, using kustomize, a MariaDB database which is then initialized with a schema and an initial dataset. To do this I have a configmap that contains a script, a DDL SQL file to define the schema and a DML SQL file to insert an initial dataset. Here is an abridged example:

kind: ConfigMap
apiVersion: v1
metadata:
  name: productdb-init
data:
  90-init-database.sh: |
    init_database() {
        local thisdir
        local init_data_file
        thisdir=$(dirname ${BASH_SOURCE[0]})
 
        init_data_file=$(readlink -f ${thisdir}/../mysql-data/schema.sql)
        log_info "Initializing the database schema from file ${init_data_file}..."
        mysql $mysql_flags ${MYSQL_DATABASE} < ${init_data_file}
 
        init_data_file=$(readlink -f ${thisdir}/../mysql-data/import.sql)
        log_info "Initializing the database data from file ${init_data_file}..."
        mysql $mysql_flags ${MYSQL_DATABASE} < ${init_data_file}
    }
 
    if ! [ -v MYSQL_RUNNING_AS_SLAVE ] && $MYSQL_DATADIR_FIRST_INIT ; then
        init_database
    fi
  import.sql: >
    INSERT INTO `categories` (`id`, `name`, `description`, `created`,
    `modified`) VALUES
    (1, 'Smartphone', 'Not a stupid phone', '2015-08-02 23:56:46', '2016-12-20
    06:51:25'),
    (2, 'Tablet', 'A small smartphone-laptop mix', '2015-08-02 23:56:46',
    '2016-12-20 06:51:42'),
    (3, 'Ultrabook', 'Ultra portable and powerful laptop', '2016-12-20
    13:51:15', '2016-12-20 06:51:52');
 
    INSERT INTO `products` (`id`, `name`, `description`, `price`, `category_id`,
    `created`, `modified`) VALUES
    (1, 'ASUS Zenbook 3', 'The most powerful and ultraportable Zenbook ever',
    1799, 3, '2016-12-20 13:53:00', '2016-12-20 06:53:00'),
    (2, 'Dell XPS 13', 'Super powerful and portable ultrabook with ultra thin
    bezel infinity display', 2199, 3, '2016-12-20 13:53:34', '2016-12-20
    06:53:34');

  schema.sql: >-
    DROP TABLE IF EXISTS `categories`;
 
    CREATE TABLE `categories` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `created` date DEFAULT NULL,
        `description` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
        `modified` datetime(6) DEFAULT NULL,
        `name` varchar(128) COLLATE utf8mb4_unicode_ci NOT NULL,
        PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8mb4
    COLLATE=utf8mb4_unicode_ci;
 
    --
    -- Table structure for table `products`
    --
 
    DROP TABLE IF EXISTS `products`;
 
    CREATE TABLE `products` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `created` date DEFAULT NULL,
        `description` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
        `modified` datetime(6) DEFAULT NULL,
        `name` varchar(128) COLLATE utf8mb4_unicode_ci NOT NULL,
        `price` double NOT NULL,
        `category_id` int(11) NOT NULL,
        PRIMARY KEY (`id`),
        KEY `FKog2rp4qthbtt2lfyhfo32lsw9` (`category_id`),
        CONSTRAINT `FKog2rp4qthbtt2lfyhfo32lsw9` FOREIGN KEY (`category_id`) REFERENCES `categories` (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=13 DEFAULT CHARSET=utf8mb4
    COLLATE=utf8mb4_unicode_ci;
 
    --
    -- Table structure for table `users`
    --
 
    DROP TABLE IF EXISTS `users`;
 
    CREATE TABLE `users` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `created_at` datetime(6) DEFAULT NULL,
        `email` varchar(100) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
        `iteration_count` int(11) DEFAULT NULL,
        `password_hash` varchar(100) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
        `salt` varchar(100) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
        PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8mb4
    COLLATE=utf8mb4_unicode_ci;

There are definitely some improvements that could be made here; for example, the size of a configmap is limited, so it might be better to load the DDL and DML files from git or another location rather than inlining them into the configmap, along the lines of the sketch below.
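
For instance, the script portion could fetch the SQL from a raw git URL instead of reading the mounted files (the URL is a placeholder and this assumes curl is available in the image):

init_database() {
    local init_data_file=/tmp/schema.sql
    # hypothetical: pull the DDL from git rather than from the configmap mount
    curl -sSfL -o ${init_data_file} https://raw.githubusercontent.com/<org>/<repo>/master/database/schema.sql
    log_info "Initializing the database schema from file ${init_data_file}..."
    mysql $mysql_flags ${MYSQL_DATABASE} < ${init_data_file}
}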

Once you have the configmap then it’s simply a matter of mounting it at the appropriate location:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: productdb
spec:
  template:
    spec:
      containers:
        - name: productdb
          image: registry.redhat.io/rhel8/mariadb-103:1
          ...
          volumeMounts:
          - mountPath: /var/lib/mysql/data
            name: productdb-data
          - mountPath: /opt/app-root/src/mysql-init/90-init-data.sh
            name: productdb-init
            subPath: 90-init-database.sh
          - mountPath: /opt/app-root/src/mysql-data/import.sql
            name: productdb-init
            subPath: import.sql
          - mountPath: /opt/app-root/src/mysql-data/schema.sql
            name: productdb-init
            subPath: schema.sql
      volumes:
      - configMap:
          name: productdb-init
        name: productdb-init
     ...

The complete Deployment example can be viewed here.

So that’s basically it. While I’ve only tested this with MariaDB, I would expect the same technique to work with the MySQL and PostgreSQL database images as well. As mentioned previously, I would not consider this a production-ready technique, but it is a useful tool when putting together examples or demos.

Updated GitOps Standards

I maintain a small document on GitHub outlining the GitOps standards I use in my own repositories. With kustomize, I find it is very important for an organization to have a standardized folder layout, or else it becomes challenging for everyone to understand what kustomize is doing. A common frame of reference makes all the difference.

I’ve recently tweaked my standards, feel free to check them out at https://github.com/gnunn-gitops/standards. Comments always welcome as I’m very interested in learning what other folks are doing.