Initializing Databases in OpenShift Deployment

When deploying a database in OpenShift there is typically a need to initialize the database with a schema and perhaps an initial dataset or some reference data. This can be done in a variety of ways, such as having the application initialize it or using a Kubernetes Job. In OpenShift 3, one common technique was to leverage the DeploymentConfig lifecycle post hook to do the initialization, for example:

apiVersion: v1
kind: DeploymentConfig
metadata:
  name: my-database
spec:
  strategy:
    recreateParams:
      post:
        execNewPod:
          command:
            - /bin/sh
            - '-c'
            - >-
              curl -o ~/php-react.sql
              https://raw.githubusercontent.com/gnunn1/openshift-basic-pipeline/master/react-crud/database/php-react.sql
              && /opt/rh/rh-mysql57/root/usr/bin/mysql -h $MYSQL_SERVICE_HOST -u
              $MYSQL_USER -D $MYSQL_DATABASE -p$MYSQL_PASSWORD -P 3306 <
              ~/php-react.sql
          containerName: ${DATABASE_SERVICE_NAME}
        failurePolicy: abort
...

While not necessarily a production-grade technique, this was particularly useful for me when creating self-contained demos where I did not want to require someone to manually set up a bunch of infrastructure or provision datasets.

In OpenShift 4 there is a trend towards using the standard Kubernetes Deployment rather than the OpenShift-specific DeploymentConfig, with most cases in the console and the CLI defaulting to Deployments. While Deployments and DeploymentConfigs are very similar, there are some key differences in the capabilities between the two, as outlined in the documentation. I won’t re-hash them all here, but one feature Deployments lack is the lifecycle hook, so how do we accomplish the above using a Deployment?

For me, one technique I’ve found that works well is to leverage the s2i (source-to-image) capabilities of Red Hat’s database containers; however, instead of building a custom container, we can have s2i do our initialization at runtime with the generic image. This works because, if you look at the assemble script the database containers use for s2i, you can see that all it does is copy files from one location to another. The script doesn’t actually do any initialization at build time, which means we can simply mount our initialization files directly into the image at the right location without building a container first.
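To illustrate the point, here is a minimal sketch of what that assemble step amounts to. The paths and file names below are simplified stand-ins for the image’s actual layout (roughly /tmp/src and /opt/app-root/src), not the real script:

```shell
#!/bin/sh
# Sketch of the idea behind the database image's s2i assemble step:
# at build time it only copies the s2i source into the application root;
# no database initialization runs. Paths are demo stand-ins.
SRC_DIR=/tmp/s2i-demo-src
APP_ROOT=/tmp/s2i-demo-approot
rm -rf "$SRC_DIR" "$APP_ROOT"
mkdir -p "$SRC_DIR/mysql-init" "$SRC_DIR/mysql-data" "$APP_ROOT"

# Simulate an s2i source tree carrying the init script and SQL files
echo 'init_database' > "$SRC_DIR/mysql-init/90-init-database.sh"
echo 'SELECT 1;'     > "$SRC_DIR/mysql-data/schema.sql"

# The "assemble" step is effectively just this copy...
cp -R "$SRC_DIR/." "$APP_ROOT/"

# ...which is why mounting the same files directly into the generic
# image at the application root works without a build.
ls "$APP_ROOT/mysql-init" "$APP_ROOT/mysql-data"
```

Since nothing else happens at build time, a ConfigMap volume mounted at the same paths is indistinguishable, from the runtime scripts’ point of view, from a built image.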

You can see the technique in action in my product-catalog demo repository. The repo deploys, using kustomize, a MariaDB database which is then initialized with a schema and an initial dataset. To do this I have a ConfigMap that contains a script, a DDL SQL file to define the schema, and a DML SQL file to insert an initial dataset. Here is an abridged example:

kind: ConfigMap
apiVersion: v1
metadata:
  name: productdb-init
data:
  90-init-database.sh: |
    init_database() {
        local thisdir
        local init_data_file
        thisdir=$(dirname ${BASH_SOURCE[0]})
 
        init_data_file=$(readlink -f ${thisdir}/../mysql-data/schema.sql)
        log_info "Initializing the database schema from file ${init_data_file}..."
        mysql $mysql_flags ${MYSQL_DATABASE} < ${init_data_file}
 
        init_data_file=$(readlink -f ${thisdir}/../mysql-data/import.sql)
        log_info "Initializing the database data from file ${init_data_file}..."
        mysql $mysql_flags ${MYSQL_DATABASE} < ${init_data_file}
    }
 
    if ! [ -v MYSQL_RUNNING_AS_SLAVE ] && $MYSQL_DATADIR_FIRST_INIT ; then
        init_database
    fi
  import.sql: >
    INSERT INTO `categories` (`id`, `name`, `description`, `created`,
    `modified`) VALUES
    (1, 'Smartphone', 'Not a stupid phone', '2015-08-02 23:56:46', '2016-12-20
    06:51:25'),
    (2, 'Tablet', 'A small smartphone-laptop mix', '2015-08-02 23:56:46',
    '2016-12-20 06:51:42'),
    (3, 'Ultrabook', 'Ultra portable and powerful laptop', '2016-12-20
    13:51:15', '2016-12-20 06:51:52');
 
    INSERT INTO `products` (`id`, `name`, `description`, `price`, `category_id`,
    `created`, `modified`) VALUES
    (1, 'ASUS Zenbook 3', 'The most powerful and ultraportable Zenbook ever',
    1799, 3, '2016-12-20 13:53:00', '2016-12-20 06:53:00'),
    (2, 'Dell XPS 13', 'Super powerful and portable ultrabook with ultra thin
    bezel infinity display', 2199, 3, '2016-12-20 13:53:34', '2016-12-20
    06:53:34');

  schema.sql: >-
    DROP TABLE IF EXISTS `categories`;
 
    CREATE TABLE `categories` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `created` date DEFAULT NULL,
        `description` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
        `modified` datetime(6) DEFAULT NULL,
        `name` varchar(128) COLLATE utf8mb4_unicode_ci NOT NULL,
        PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=4 DEFAULT CHARSET=utf8mb4
    COLLATE=utf8mb4_unicode_ci;
 
    --
    -- Table structure for table `products`
    --
 
    DROP TABLE IF EXISTS `products`;
 
    CREATE TABLE `products` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `created` date DEFAULT NULL,
        `description` varchar(255) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
        `modified` datetime(6) DEFAULT NULL,
        `name` varchar(128) COLLATE utf8mb4_unicode_ci NOT NULL,
        `price` double NOT NULL,
        `category_id` int(11) NOT NULL,
        PRIMARY KEY (`id`),
        KEY `FKog2rp4qthbtt2lfyhfo32lsw9` (`category_id`),
        CONSTRAINT `FKog2rp4qthbtt2lfyhfo32lsw9` FOREIGN KEY (`category_id`) REFERENCES `categories` (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=13 DEFAULT CHARSET=utf8mb4
    COLLATE=utf8mb4_unicode_ci;
 
    --
    -- Table structure for table `users`
    --
 
    DROP TABLE IF EXISTS `users`;
 
    CREATE TABLE `users` (
        `id` int(11) NOT NULL AUTO_INCREMENT,
        `created_at` datetime(6) DEFAULT NULL,
        `email` varchar(100) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
        `iteration_count` int(11) DEFAULT NULL,
        `password_hash` varchar(100) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
        `salt` varchar(100) COLLATE utf8mb4_unicode_ci DEFAULT NULL,
        PRIMARY KEY (`id`)
    ) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8mb4
    COLLATE=utf8mb4_unicode_ci;

Now there are definitely some improvements that could be made here. For example, the size of a ConfigMap is limited, so it could be better to load the DDL and DML files from git or another location rather than inlining them in the ConfigMap.
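One hypothetical way to do that, keeping the same init hook, is to have the script fetch the files at startup instead of reading them from the ConfigMap. This assumes curl is available in the image, as in the earlier DeploymentConfig example, and the URLs below are placeholders, not real locations:

```yaml
# Hypothetical variant of the ConfigMap: only the script is inlined and
# the SQL files are fetched at startup (URLs are placeholders)
kind: ConfigMap
apiVersion: v1
metadata:
  name: productdb-init
data:
  90-init-database.sh: |
    init_database() {
        local sql_dir
        sql_dir=$(mktemp -d)
        curl -sSfL -o ${sql_dir}/schema.sql https://example.com/repo/raw/main/database/schema.sql
        curl -sSfL -o ${sql_dir}/import.sql https://example.com/repo/raw/main/database/import.sql
        mysql $mysql_flags ${MYSQL_DATABASE} < ${sql_dir}/schema.sql
        mysql $mysql_flags ${MYSQL_DATABASE} < ${sql_dir}/import.sql
    }

    if ! [ -v MYSQL_RUNNING_AS_SLAVE ] && $MYSQL_DATADIR_FIRST_INIT ; then
        init_database
    fi
```

This keeps the ConfigMap small at the cost of a network dependency at pod startup.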

Once you have the ConfigMap, it’s simply a matter of mounting it at the appropriate locations:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: productdb
spec:
  template:
    spec:
      containers:
        - name: productdb
          image: registry.redhat.io/rhel8/mariadb-103:1
          ...
          volumeMounts:
          - mountPath: /var/lib/mysql/data
            name: productdb-data
          - mountPath: /opt/app-root/src/mysql-init/90-init-database.sh
            name: productdb-init
            subPath: 90-init-database.sh
          - mountPath: /opt/app-root/src/mysql-data/import.sql
            name: productdb-init
            subPath: import.sql
          - mountPath: /opt/app-root/src/mysql-data/schema.sql
            name: productdb-init
            subPath: schema.sql
      volumes:
      - configMap:
          name: productdb-init
        name: productdb-init
      ...

The complete Deployment example can be viewed here.
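Since the repo uses kustomize, the pieces could also be tied together with a configMapGenerator rather than a hand-written ConfigMap. A sketch, where the file names are assumptions rather than the repo’s actual layout:

```yaml
# Hypothetical kustomization.yaml: generate the init ConfigMap from
# plain files instead of inlining them (file names are placeholders)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - productdb-deployment.yaml
configMapGenerator:
  - name: productdb-init
    files:
      - 90-init-database.sh
      - import.sql
      - schema.sql
# Note: the generator appends a content hash to the ConfigMap name and
# automatically rewrites references to it in the Deployment, so a change
# to the SQL files triggers a rollout.
```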

So that’s basically it. While I’ve only tested this with MariaDB, I would expect the same technique to work with the MySQL and PostgreSQL database images as well. As mentioned previously, I would not consider this a production-ready technique, but it is a useful tool when putting together examples or demos.
