How to Clone a CouchDB Helm Release on GKE

May 28, 2019 (updated June 3, 2019) · Blog, Kubernetes, Storage

In this tutorial, we will create a clone of the CouchDB database that has been deployed on Google Kubernetes Engine (GKE). We can then alter the clone records and verify that the data from the parent database remains intact.

Before starting this tutorial, make sure ROBIN Storage is installed on GKE, your CouchDB database is deployed with data loaded, the Helm release is registered with ROBIN, and you have taken a snapshot of the CouchDB database.

Application cloning improves collaboration across Dev/Test/Ops teams. Teams can share the app and its data quickly, avoiding the procedural delays of re-creating environments, and each team can work on its own clone without affecting the others. Clones are useful when you want to run a report against a database without loading the source database application, perform UAT tests, or validate patches before applying them to the production database.

ROBIN clones are ready-to-use “thin copies” of the entire app/database, not just its storage volumes. Thin-copy means that data from the snapshot is NOT physically copied, so clones can be created very quickly. ROBIN clones are fully writable, and any modifications made to a clone are not visible to the source app/database.

ROBIN lets you clone not just the storage volumes (PVCs) but the entire database application, including all its resources such as Pods, StatefulSets, PVCs, Services, and ConfigMaps, with a single command.

To create a clone from the snapshot created earlier, run the following command, substituting the snapshot ID we retrieved above for Your_Snapshot_ID.

robin clone create clone-data Your_Snapshot_ID --wait

Let’s verify ROBIN has cloned all relevant Kubernetes resources.

kubectl get pod -o=wide | grep clone-data

You should see the cloned Pods listed in the output.

ROBIN automatically clones all the Kubernetes resources needed to stand up a fully functional clone of our database, not just the persistent storage volumes (PVCs). Once cloning completes, the cloned database is ready for use.

Now let us get the Service IP address of our cloned CouchDB database and note it down.

export CLONE_IP_ADDRESS=$(kubectl get service clone-data-couch-svc-couchdb -o jsonpath='{.spec.clusterIP}')
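With the ClusterIP in hand, a quick reachability check can confirm the cloned server is up. This sketch assumes CouchDB's default port, 5984, and must be run from inside the cluster, since a ClusterIP is not routable externally:

```shell
# Hit CouchDB's root endpoint on the clone; a healthy server returns
# a JSON welcome message. Assumes the default CouchDB port, 5984.
curl -s "http://$CLONE_IP_ADDRESS:5984/"
```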

Let us now get the admin username and password of our cloned CouchDB from its Kubernetes Secret:

export CLONE_COUCHDB_USER=$(kubectl get secret clone-data-couch-couchdb -o go-template='{{ .data.adminUsername }}' | base64 --decode)

export CLONE_COUCHDB_PASSWORD=$(kubectl get secret clone-data-couch-couchdb -o go-template='{{ .data.adminPassword }}' | base64 --decode)
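Kubernetes stores Secret values base64-encoded, which is why the commands above pipe through base64 --decode. A standalone illustration of the decoding step, using a sample value rather than the real credential:

```shell
# Kubernetes Secrets hold base64-encoded data; decoding recovers the
# plain-text value. "YWRtaW4=" is a sample value, not the real credential.
ENCODED="YWRtaW4="
DECODED=$(echo "$ENCODED" | base64 --decode)
echo "$DECODED"   # prints: admin
```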

To verify we have successfully created a clone of our CouchDB database, query the cloned database and confirm it returns the data we loaded earlier.
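A check along these lines works, assuming CouchDB's default port 5984; the robindb database and employees document come from the earlier steps, and the variables were exported above:

```shell
# Fetch the employees document from robindb on the clone; it should
# contain the same 5 records as the original database. Assumes the
# default CouchDB port, 5984.
curl -s -u "$CLONE_COUCHDB_USER:$CLONE_COUCHDB_PASSWORD" \
  "http://$CLONE_IP_ADDRESS:5984/robindb/employees"
```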

We have now successfully created a clone of our original CouchDB database and verified that it also contains the robindb database, with 5 records under the employees document, just as in the original.

We can now alter the clone's records. This does not impact the data in the original database; we can verify that the parent's data remains intact and unchanged. From here on, the cloned database can be treated as a fully independent database.
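For example, a modification applied only to the clone might look like the following sketch. CouchDB requires a document's current _rev for updates and deletes; Current_Rev here is a placeholder for that value:

```shell
# Delete the employees document on the clone only; the parent database
# is unaffected. Replace Current_Rev with the document's current _rev
# value (CouchDB rejects modifications without it). Assumes port 5984.
curl -s -u "$CLONE_COUCHDB_USER:$CLONE_COUCHDB_PASSWORD" \
  -X DELETE "http://$CLONE_IP_ADDRESS:5984/robindb/employees?rev=Current_Rev"
```

Re-running the earlier query against the original database afterwards should still return all 5 records, confirming the clone is fully independent.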

We can also snapshot the cloned database itself and perform rollback operations on it.

This concludes the Clone CouchDB database on GKE tutorial.


Author Madhura Gaunekar
