Nov 11, 2017

Run a MongoDB cluster with authentication on Kubernetes using StatefulSets.

With StatefulSets, running a MongoDB cluster with persistent storage is easy; the Kubernetes blog post in [1] explains how to do that. But when we try to enable authentication, there is a small problem with that solution: we need to start the cluster without the replica set option (--replSet), add the admin user and a key file, and then restart mongod with the replica set option and the key file.
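
To make that sequence concrete, here is a minimal sketch of the manual steps we are about to automate, using the same paths and credentials as the manifest below (the user-creation step runs on the first node only):

    # 1. Start mongod with authentication but without a replica set.
    mongod --auth

    # 2. From another shell, create the admin user.
    mongo --eval 'db = db.getSiblingDB("admin"); db.createUser({ user: "admin", pwd: "password", roles: [{ role: "root", db: "admin" }]});'

    # 3. Shut mongod down and restart it as a replica set member with the key file.
    mongod --shutdown
    mongod --replSet rs0 --clusterAuthMode keyFile --keyFile /etc/secrets/mongo.key --setParameter authenticationMechanisms=SCRAM-SHA-1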

In order to do that, we need to modify the sample code from [1] and add a pod lifecycle postStart hook.

    spec:
      terminationGracePeriodSeconds: 10
      containers:
        - name: mongo
          image: mongo:3.4.9
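          # Pick the startup mode based on a lock file created by the postStart hook:
          # with the lock file present, join the replica set with keyFile auth;
          # otherwise start standalone with --auth so the admin user can be created.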
          command:
          - /bin/sh
          - -c
          - >
            if [ -f /data/db/admin-user.lock ]; then
              mongod --replSet rs0 --clusterAuthMode keyFile --keyFile /etc/secrets/mongo.key --setParameter authenticationMechanisms=SCRAM-SHA-1;
            else
              mongod --auth;
            fi;
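          # postStart runs alongside the main process on every container start; on the
          # first start it creates the admin user, then shuts mongod down so the
          # container restarts into replica-set mode.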
          lifecycle:
            postStart:
              exec:
                command:
                - /bin/sh
                - -c
                - >
                  if [ ! -f /data/db/admin-user.lock ]; then
                    sleep 5;
                    touch /data/db/admin-user.lock;
                    if [ "$HOSTNAME" = "mongo-0" ]; then
                      mongo --eval 'db = db.getSiblingDB("admin"); db.createUser({ user: "admin", pwd: "password", roles: [{ role: "root", db: "admin" }]});';
                    fi;
                    mongod --shutdown;
                  fi;
          ports:
            - containerPort: 27017

If you look at the command section of the above StatefulSet, you can see that it checks for a lock file: if the file exists, it starts mongod with the replica set option; otherwise it starts mongod with just authentication enabled.

Then notice the postStart section, where we define what should happen when a container in the pod starts; the post in [2] gives a good overview of the pod lifecycle. The hook again checks for the lock file. If the file does not exist, it creates it, and if the hostname is mongo-0 it adds the MongoDB admin user to the cluster. Then it shuts the node down, which causes the container to restart. Since we are using persistent storage, the lock file created on the first run is still there, so this time the command starts mongod with the replica set option and the postStart hook skips its bootstrap branch.
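
To check which branch ran, look for the lock file and watch the container logs (a quick sanity check, assuming the StatefulSet is named mongo as above):

    # The lock file should exist after the first, bootstrap-triggered restart.
    kubectl exec mongo-0 -- ls /data/db/admin-user.lock

    # The logs should show mongod starting with --replSet rs0.
    kubectl logs mongo-0 -c mongo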

Running the MongoDB cluster

Generate a random key to enable keyFile authentication for replication, as described in [3].

    openssl rand -base64 741 > mongodb-keyfile
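
Per [3], mongod refuses a key file that is readable by group or others. For a local test that means restricting it with chmod; inside the cluster, the file mode comes from the secret mount instead (see the volume sketch below):

    chmod 400 mongodb-keyfile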

Create a secret from that key file:

    kubectl create secret generic mongo-key --from-file=mongodb-keyfile
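
The startup command expects the key at /etc/secrets/mongo.key, so the StatefulSet has to mount the secret there. A minimal sketch of that wiring, assuming the gist's manifest follows the same layout (the items mapping renames the secret key created by --from-file):

    # In the container spec:
    volumeMounts:
      - name: mongo-key
        mountPath: /etc/secrets
        readOnly: true
    # In the pod spec:
    volumes:
      - name: mongo-key
        secret:
          secretName: mongo-key
          defaultMode: 0400            # owner read-only, as mongod requires
          items:
            - key: mongodb-keyfile     # key name created by --from-file
              path: mongo.key          # filename the startup command expects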

Create the StatefulSet and headless service:

    kubectl create -f https://gist.githubusercontent.com/thilinapiy/0c5abc2c0c28efe1bbe2165b0d8dc115/raw/d3d0e64dfd35158907d076422c362f289d124dfc/mongo-statefulset.yaml
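
For reference, the headless service that gives each pod a stable DNS name looks roughly like this (the label selector here is an assumption; the gist's manifest is authoritative):

    apiVersion: v1
    kind: Service
    metadata:
      name: mongo
    spec:
      ports:
        - port: 27017
          targetPort: 27017
      clusterIP: None    # headless: DNS resolves directly to the pod IPs
      selector:
        role: mongo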

Scale up if you need more high availability:

    kubectl scale --replicas=3 statefulset mongo
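
Once the pods are running, you can check that authentication works by connecting as the admin user created during bootstrap (credentials as hard-coded in the manifest above; change them for anything real):

    kubectl exec -it mongo-0 -- mongo -u admin -p password --authenticationDatabase admin --eval 'db.runCommand({ connectionStatus: 1 })'
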
Let me know if you need further help.

1. http://blog.kubernetes.io/2017/01/running-mongodb-on-kubernetes-with-statefulsets.html
2. https://blog.openshift.com/kubernetes-pods-life
3. https://docs.mongodb.com/manual/tutorial/enforce-keyfile-access-control-in-existing-replica-set/