Restore Data

Please check the guide on how to perform a backup first.

TL;DR

This page explains how to restore a MariaDB Galera cluster from a backup (i.e. the Data Database and the Metadata Database).

Restore Data

There are multiple guides available on how to restore. We prioritize the Helm chart documentation over the official MariaDB 11.3.2 documentation, as it simplifies many steps.

If you are not sure what to do, first attempt to restore the database assuming it was an orderly shutdown. Only if that fails, attempt a disaster recovery for a crash shutdown.

Orderly Shutdown

This section follows the official MariaDB 11.3.2 documentation, adapted to the Bitnami chart. This is the case when some (or all) nodes in the cluster fail to reach a quorum because they switched to a non-primary state and refuse to serve queries to protect data integrity [1].

Follow the recommendation of the safe-to-bootstrap feature on each node N=0, 1 and 2 (or more). Look for safe_to_bootstrap: 1.

kubectl -n $NAMESPACE exec pod/data-db-$N -- cat \
    /bitnami/mariadb/data/grastate.dat
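The grastate.dat file is a small key-value file. A minimal sketch of reading the safe_to_bootstrap flag with awk, using an illustrative sample file (the uuid and seqno values below are made up, not from a real cluster):

```shell
# Write a sample grastate.dat (illustrative values only)
cat > /tmp/grastate.dat <<'EOF'
# GALERA saved state
version: 2.1
uuid:    6e4b2b2a-0000-0000-0000-000000000000
seqno:   1523
safe_to_bootstrap: 1
EOF

# Extract the flag; a node with value 1 is safe to bootstrap from
SAFE=$(awk '/^safe_to_bootstrap:/ {print $2}' /tmp/grastate.dat)
echo "safe_to_bootstrap=$SAFE"
```

On a live cluster you would pipe the output of the kubectl command above through the same awk filter.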

Alternatively, use the MySQL daemon mysqld to read the InnoDB storage engine logs and report the last known transaction position (the sequence number).

kubectl -n $NAMESPACE exec pod/data-db-$N -- mysqld --wsrep-recover
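The recovered position is printed as a "Recovered position: uuid:seqno" log line. A sketch of extracting the sequence number from such a line with sed, using a made-up sample line (the timestamp, uuid and seqno are illustrative):

```shell
# Sample line as emitted by mysqld --wsrep-recover (illustrative values)
LOG='2024-01-01  1:00:00 0 [Note] WSREP: Recovered position: 6e4b2b2a-0000-0000-0000-000000000000:1523'

# Capture the digits after the last colon, i.e. the sequence number
SEQNO=$(echo "$LOG" | sed -n 's/.*Recovered position: .*:\([0-9-]*\)$/\1/p')
echo "seqno=$SEQNO"
```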

Alternatively, if you cannot use the pods, you can read the sequence number directly from the PVC volumes to find the most advanced node.

kubectl run -i --rm --tty volpod --overrides='
{
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "volpod"
    },
    "spec": {
        "containers": [{
            "command": [
                "cat",
                "/mnt/data/grastate.dat"
            ],
            "image": "bitnami/minideb",
            "name": "mycontainer",
            "volumeMounts": [{
                "mountPath": "/mnt",
                "name": "galeradata"
            }]
        }],
        "restartPolicy": "Never",
        "volumes": [{
            "name": "galeradata",
            "persistentVolumeClaim": {
                "claimName": "data-data-db-N"
            }
        }]
    }
}' --image="bitnami/minideb"
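Whichever method you used, the node with the highest seqno is the most advanced one. A sketch of picking that node from collected per-node values (the node:seqno pairs below are hypothetical):

```shell
# Hypothetical seqno values collected from nodes 0, 1 and 2 (node:seqno)
SEQNOS="0:1520 1:1523 2:1519"

# Sort numerically by seqno and keep the node index of the highest entry
M=$(echo "$SEQNOS" | tr ' ' '\n' | sort -t: -k2 -n | tail -1 | cut -d: -f1)
echo "bootstrap from node M=$M"
```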

If there exists a node with safe_to_bootstrap: 1, use this node M to bootstrap from. Otherwise, if no node is safe to bootstrap from, follow the Crash Shutdown section below. First, delete the StatefulSet without deleting its pods (--cascade=orphan), so that Helm can recreate it with the bootstrap settings.

kubectl --namespace $NAMESPACE \
   delete \
   statefulset \
   data-db \
   --cascade=orphan

Then tell node M to form a new primary component by itself. This process is non-destructive and increases the chance of a fast Incremental State Transfer (IST) for the other nodes.

helm upgrade \
    --install \
    my-release \
    "oci://registry.datalab.tuwien.ac.at/dbrepo/helm/dbrepo" \
    --namespace $NAMESPACE \
    --set datadb.rootUser.password=XXXX \
    --set datadb.galera.mariabackup.password=YYYY \
    --set datadb.galera.bootstrap.forceBootstrap=true \
    --set datadb.galera.bootstrap.bootstrapFromNode=M \
    --set datadb.galera.bootstrap.forceSafeToBootstrap=true \
    --set datadb.podManagementPolicy=Parallel

Then delete the failing pods (they will be recreated immediately).

kubectl -n $NAMESPACE \
    delete \
    pods \
    -l app.kubernetes.io/name=datadb
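Once the pods have rejoined, verify that the cluster reached a primary component of the expected size, e.g. by querying the wsrep status variables. A sketch of checking the query output, run here against a sample result (the query command in the comment and the values below are illustrative; substitute your root password):

```shell
# On a live cluster you would fetch the status via kubectl, e.g.:
#   kubectl -n $NAMESPACE exec pod/data-db-0 -c mariadb-galera -- \
#       mysql -uroot -pXXXX -e "SHOW STATUS LIKE 'wsrep_cluster_%'"
# Sample output of that query (illustrative), parsed locally here:
STATUS='wsrep_cluster_size 3
wsrep_cluster_status Primary'

SIZE=$(echo "$STATUS" | awk '$1=="wsrep_cluster_size" {print $2}')
STATE=$(echo "$STATUS" | awk '$1=="wsrep_cluster_status" {print $2}')
echo "size=$SIZE state=$STATE"
```

A healthy three-node cluster reports size 3 and status Primary.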

Then remove the forced bootstrapping.

helm upgrade \
    --install \
    my-release \
    "oci://registry.datalab.tuwien.ac.at/dbrepo/helm/dbrepo" \
    --namespace $NAMESPACE \
    --set datadb.rootUser.password=XXXX \
    --set datadb.galera.mariabackup.password=YYYY \
    --set datadb.galera.bootstrap.forceBootstrap=false \
    --set datadb.galera.bootstrap.bootstrapFromNode=M \
    --set datadb.galera.bootstrap.forceSafeToBootstrap=false \
    --set datadb.podManagementPolicy=Parallel

Crash Shutdown

This section follows the official MariaDB 11.3.2 documentation. If you need to perform a disaster recovery, start by restoring the backups for each node.

kubectl -n $NAMESPACE \
    cp \
    backup-data-db-0.tar.gz \
    data-db-0:/bitnami/mariadb/backup.tar.gz
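Before copying, it is worth verifying that the archive is intact. A sketch using a locally created sample archive as a stand-in for backup-data-db-0.tar.gz (the demo path and file are made up for illustration):

```shell
# Create a small sample archive as a stand-in for backup-data-db-0.tar.gz
mkdir -p /tmp/backup-demo && echo "demo" > /tmp/backup-demo/file
tar czf /tmp/backup-demo.tar.gz -C /tmp backup-demo

# gzip -t checks the compressed stream; tar tzf lists contents without extracting
gzip -t /tmp/backup-demo.tar.gz && echo "gzip: OK"
tar tzf /tmp/backup-demo.tar.gz
```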

Unpack the compressed archive.

kubectl -n $NAMESPACE exec pod/data-db-0 -c mariadb-galera -- \
    sh -c 'tar xzvf /bitnami/mariadb/backup.tar.gz -C /bitnami/mariadb \
        && rm -f /bitnami/mariadb/backup.tar.gz'

Prepare the backup for restore.

kubectl -n $NAMESPACE exec pod/data-db-0 -c mariadb-galera -- \
    mariadb-backup \
    --prepare \
    --target-dir=/bitnami/mariadb/backup

And ultimately restore the backup.

kubectl -n $NAMESPACE exec pod/data-db-0 -c mariadb-galera -- \
    mariadb-backup \
    --move-back \
    --target-dir=/bitnami/mariadb/backup

Repeat the same procedure for the Metadata Database. Copy the backup into the pod.

kubectl -n $NAMESPACE \
    cp \
    backup-metadata-db-0.tar.gz \
    metadata-db-0:/bitnami/mariadb/backup.tar.gz

Unpack the compressed archive.

kubectl -n $NAMESPACE exec pod/metadata-db-0 -c mariadb-galera -- \
    sh -c 'tar xzvf /bitnami/mariadb/backup.tar.gz -C /bitnami/mariadb \
        && rm -f /bitnami/mariadb/backup.tar.gz'

Prepare the backup for restore.

kubectl -n $NAMESPACE exec pod/metadata-db-0 -c mariadb-galera -- \
    mariadb-backup \
    --prepare \
    --target-dir=/bitnami/mariadb/backup

And ultimately restore the backup.

kubectl -n $NAMESPACE exec pod/metadata-db-0 -c mariadb-galera -- \
    mariadb-backup \
    --move-back \
    --target-dir=/bitnami/mariadb/backup

Repeat these steps for nodes 1 and 2 (or more).
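To avoid typos when repeating the commands, the per-node copy step can be generated in a loop. A dry-run sketch that only prints the commands instead of executing them (NAMESPACE and the per-node archives are assumed to exist in your environment):

```shell
# Dry run: print the copy command for each remaining node instead of executing it
NAMESPACE=${NAMESPACE:-dbrepo}
CMDS=""
for N in 1 2; do
  CMDS="$CMDS
kubectl -n $NAMESPACE cp backup-data-db-$N.tar.gz data-db-$N:/bitnami/mariadb/backup.tar.gz"
done
echo "$CMDS"
```

Remove the echo and run each generated command (followed by the unpack, prepare and move-back steps) once the output looks correct.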

Troubleshooting

If your restored cluster still fails to reach a quorum (e.g. because nodes are out of sync and the State Snapshot Transfer (SST) mechanism fails to transmit the snapshots), you can try to perform a manual SST.


[1] https://mariadb.com/docs/galera-cluster/readme/mariadb-galera-cluster-guide#two-nodes-disappear-from-the-cluster