PostgreSQL Backup Management & Restoration
This guide covers how to verify the status of your PostgreSQL backups and how to perform a disaster recovery (DR) simulation to restore data.
Checking Backup Status
Regularly verify that base backups are completing according to your schedule.
List Backups
To see a list of available backups:
kubectl get backup.postgresql.cnpg.io -n satoshi-sandbox
Example Output:
NAME                                         AGE    CLUSTER           METHOD   PHASE       ERROR
satoshi-staging-base-backup-20260201000000   32d    satoshi-staging   plugin   completed
satoshi-staging-base-backup-20260208000000   25d    satoshi-staging   plugin   completed
satoshi-staging-base-backup-20260301000000   4d7h   satoshi-staging   plugin   completed
staging-pg-on-demand-one                     8d     satoshi-staging   plugin   completed
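With many backups listed, scanning the PHASE column by eye is error-prone. The filter below is a minimal sketch that prints any backup whose phase is not `completed`; the function name `check_backups` and the sample rows (including the failed one) are illustrative — in practice, pipe the real `kubectl get backup` output through the filter.

```shell
# Print the name and phase of every backup that is not "completed".
# In practice, pipe the live listing in:
#   kubectl get backup.postgresql.cnpg.io -n satoshi-sandbox | check_backups
check_backups() {
  awk 'NR > 1 && $5 != "completed" { print $1, $5 }'
}

# Illustrative sample only -- the "failed" row is hypothetical.
sample_listing='NAME AGE CLUSTER METHOD PHASE ERROR
satoshi-staging-base-backup-20260301000000 4d7h satoshi-staging plugin completed
staging-pg-on-demand-one 8d satoshi-staging plugin failed barman-cloud error'

failed="$(printf '%s\n' "$sample_listing" | check_backups)"
echo "$failed"
```

An empty result means every listed backup completed successfully.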
Inspect Backup Details
To inspect the metadata and status of a specific backup:
kubectl describe backup.postgresql.cnpg.io satoshi-staging-base-backup-20260301000000 -n satoshi-sandbox
Key fields to look for:
- Status/Phase: Should be completed.
- Begin/End LSN: Indicates the Log Sequence Numbers covered by the backup.
- Backup ID: The identifier used for restoration.
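A PostgreSQL LSN `X/Y` is a 64-bit position written as two hexadecimal words, so it can be converted to an absolute byte offset (high word × 2^32 + low word) to estimate how much WAL a backup spans. A small sketch; the LSN values below are made up — substitute the Begin/End LSN reported by `kubectl describe backup`:

```shell
# Convert a PostgreSQL LSN ("X/Y", both hex) into an absolute byte
# position: high word * 2^32 + low word.
lsn_to_bytes() {
  high="${1%%/*}"
  low="${1#*/}"
  echo $(( 0x$high * 4294967296 + 0x$low ))
}

# Placeholder LSNs -- replace with the Begin/End LSN from the backup status.
begin_lsn="16/B374D848"
end_lsn="16/D5E32F00"
wal_bytes=$(( $(lsn_to_bytes "$end_lsn") - $(lsn_to_bytes "$begin_lsn") ))
echo "WAL span: ${wal_bytes} bytes"
```

A large span between Begin and End LSN simply means the base backup took longer to copy; it does not by itself indicate a problem.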
Disaster Recovery: Restoring Backups
Recommendation: Perform a restoration test monthly to ensure data integrity and process familiarity.
To avoid impacting the live production environment, we recommend simulating a disaster recovery by restoring the backup to a local Kubernetes cluster (using Kind).
1. Set up Local Cluster
Create a local cluster using kind:
kind create cluster --name kind-1
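By default, `kind create cluster` provisions a single control-plane node, which is enough for this simulation. If you need to pin a node image or add workers later, kind also accepts a configuration file; a minimal sketch (the filename `kind-config.yaml` is just an example):

```yaml
# kind-config.yaml -- minimal single-node cluster for the DR simulation
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
```

Pass it with `kind create cluster --name kind-1 --config kind-config.yaml`.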
Verify connectivity:
kubectl cluster-info --context kind-kind-1
2. Install Prerequisites
The local cluster requires the same operators as production to handle the restore.
- Cert Manager
- CloudNativePG Operator
- Barman Cloud Plugin
Create the system namespace:
kubectl create namespace kube360-system
Apply the add-ons:
# Install Cert-Manager
kubectl apply -f ./k3s/addons/cert-manager/cert-manager.yaml
# Install CloudNativePG Operator
kubectl apply -f ./k3s/addons/cloudnative-pg/cloudnative.yaml --server-side
# Install Barman Cloud Plugin
kubectl apply -f ./k3s/addons/barman-pg/barman-pg.yaml
Verify that all pods are running:
kubectl get pods -n kube360-system
3. Initiate Restore
Apply the restoration manifest (restore_backup.yaml). This manifest defines a new cluster that bootstraps from your Object Store backup.
restore_backup.yaml example:
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: kolme-timestamper-backup
  namespace: kube360-system
spec:
  configuration:
    destinationPath: "s3://kolme-timestamper/tg1/"
    endpointURL: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com"
    wal:
      compression: zstd
    data:
      compression: gzip
    s3Credentials:
      accessKeyId:
        name: cloudflare-r2-timestamper
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: cloudflare-r2-timestamper
        key: AWS_SECRET_ACCESS_KEY
---
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-r2-timestamper
  namespace: kube360-system
data:
  ACCESS_KEY_ID: <REDACTED_BASE64_ENCODED_ID>
  AWS_SECRET_ACCESS_KEY: <REDACTED_BASE64_ENCODED_KEY>
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: timestamper-backup
  namespace: kube360-system
spec:
  instances: 1
  imageName: ghcr.io/cloudnative-pg/postgresql:18.1-system-trixie
  postgresql:
    parameters:
      shared_buffers: "256MB"
      effective_cache_size: "768MB"
      work_mem: "32MB"
      maintenance_work_mem: "512MB"
      archive_timeout: "300"
  storage:
    size: 2Gi
  bootstrap:
    recovery:
      source: origin
  externalClusters:
    - name: origin
      plugin:
        name: barman-cloud.cloudnative-pg.io
        parameters:
          barmanObjectName: kolme-timestamper-backup
          serverName: timestamper
  resources:
    requests:
      memory: "1Gi"
      cpu: "1"
    limits:
      memory: "1Gi"
      cpu: "1"
Apply the file:
kubectl apply -f restore_backup.yaml
Note: Ensure your restore_backup.yaml contains the correct ObjectStore configuration and credentials (Secrets) pointing to the production backup bucket.
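The Secret's data values must be base64-encoded, with no trailing newline — which is why `printf '%s'` is used below rather than `echo`. A quick sketch with a placeholder key:

```shell
# Encode a credential for the Secret's data fields. The value here is a
# placeholder, not a real key -- substitute your actual R2 access key.
access_key_id="EXAMPLE_ACCESS_KEY"
encoded="$(printf '%s' "$access_key_id" | base64)"
echo "$encoded"

# Round-trip to confirm no stray newline was included in the encoding.
decoded="$(printf '%s' "$encoded" | base64 -d)"
echo "$decoded"
```

Paste the encoded value into the `ACCESS_KEY_ID` field (and repeat for `AWS_SECRET_ACCESS_KEY`) of the `cloudflare-r2-timestamper` Secret.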
4. Monitor Restoration
Check the status of the new cluster. Initially it will report that it is initializing while it recovers the base backup from the object store.
kubectl cnpg status timestamper-backup -n kube360-system
Wait until the status shows Cluster in healthy state and Ready instances: 1.
5. Verify Data Integrity
Once the cluster is ready, connect to the database to verify the data exists.
Use the cnpg plugin to connect via psql:
kubectl cnpg psql timestamper-backup -n kube360-system
Inside the SQL prompt, switch to your database and check for tables and data:
-- Switch to the specific database
\c timestamper
-- Set search path if necessary
SET search_path to timestamper, public;
-- List tables
\dt
-- Verify row counts
select count(*) from blocks;
If the tables exist and row counts match expectations, the restore was successful.
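To make the check repeatable, capture the same count from production and from the restored cluster and compare them in a script. A sketch with placeholder numbers; the non-interactive psql invocation shown in the comment is an assumption about how you would capture each count (the cnpg plugin forwards arguments after `--` to psql).

```shell
# Compare a row count captured from production with the restored copy.
# The numbers here are placeholders; in practice, capture each with
# something like:
#   kubectl cnpg psql timestamper-backup -n kube360-system -- \
#     -At -c "select count(*) from timestamper.blocks;"
prod_count=184203      # placeholder value from production
restored_count=184203  # placeholder value from the restored cluster

if [ "$prod_count" -eq "$restored_count" ]; then
  echo "row counts match: $prod_count"
else
  echo "MISMATCH: prod=$prod_count restored=$restored_count" >&2
  exit 1
fi
```

Running a check like this for each critical table after every monthly restore test gives you a recorded pass/fail signal rather than an eyeball comparison.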