Backup using OpenShift API for Data Protection and Red Hat OpenShift Data Foundation
The following use case shows how to use OADP and ODF to back up an application.
Backing up an application using OADP and ODF
In this use case, you back up an application by using OADP and store the backup in an object storage provided by Red Hat OpenShift Data Foundation (ODF).
- You create an object bucket claim (OBC) to configure the backup storage location. You use ODF to configure an Amazon S3-compatible object storage bucket. ODF provides MultiCloud Object Gateway (NooBaa MCG) and Ceph Object Gateway, also known as RADOS Gateway (RGW), object storage services. In this use case, you use NooBaa MCG as the backup storage location.
- You use the NooBaa MCG service with OADP by using the `aws` provider plugin.
- You configure the Data Protection Application (DPA) with the backup storage location (BSL).
- You create a backup custom resource (CR) and specify the application namespace to back up.
- You create and verify the backup.
Prerequisites

- You installed the OADP Operator.
- You installed the ODF Operator.
- You have an application with a database running in a separate namespace.
Procedure

- Create an OBC manifest file to request a NooBaa MCG bucket as shown in the following example:

  ```yaml
  apiVersion: objectbucket.io/v1alpha1
  kind: ObjectBucketClaim
  metadata:
    name: test-obc
    namespace: openshift-adp
  spec:
    storageClassName: openshift-storage.noobaa.io
    generateBucketName: test-backup-bucket
  ```

  where:
  - `test-obc`: Specifies the name of the object bucket claim.
  - `test-backup-bucket`: Specifies the name of the bucket.
- Create the OBC by running the following command:

  ```
  $ oc create -f <obc_file_name>
  ```

  where:
  - `<obc_file_name>`: Specifies the file name of the object bucket claim manifest.
- When you create an OBC, ODF creates a secret and a config map with the same name as the object bucket claim. The secret has the bucket credentials, and the config map has information to access the bucket. To get the bucket name and bucket host from the generated config map, run the following command:

  ```
  $ oc extract --to=- cm/test-obc
  ```

  `test-obc` is the name of the OBC.

  Example output:

  ```
  # BUCKET_NAME
  backup-c20...41fd
  # BUCKET_PORT
  443
  # BUCKET_REGION

  # BUCKET_SUBREGION

  # BUCKET_HOST
  s3.openshift-storage.svc
  ```
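The key/value pairs that `oc extract --to=-` prints can be captured as shell variables for later steps. The following is a minimal sketch: the `parse_extract` helper and the sample data are illustrative, and on a live cluster you would pipe the real command output instead of the sample string.

```shell
# Sketch: turn `oc extract --to=-` output into KEY=value lines.
# Lines starting with "# " name a key; the next line is its value.
parse_extract() {
  awk '/^# /{key=$2; next} key{print key"="$0; key=""}'
}

# Illustrative sample mirroring the documented example output.
sample='# BUCKET_NAME
backup-c20...41fd
# BUCKET_HOST
s3.openshift-storage.svc'

# On a real cluster: oc extract --to=- cm/test-obc | parse_extract
printf '%s\n' "$sample" | parse_extract
```

You can then `eval` the resulting lines, or assign them individually, to reuse the bucket name and host when configuring the DPA.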
- To get the bucket credentials from the generated secret, run the following command:

  ```
  $ oc extract --to=- secret/test-obc
  ```

  Example output:

  ```
  # AWS_ACCESS_KEY_ID
  ebYR....xLNMc
  # AWS_SECRET_ACCESS_KEY
  YXf...+NaCkdyC3QPym
  ```
- Get the public URL for the S3 endpoint from the s3 route in the `openshift-storage` namespace by running the following command:

  ```
  $ oc get route s3 -n openshift-storage
  ```
- Create a `cloud-credentials` file with the object bucket credentials as shown in the following example:

  ```
  [default]
  aws_access_key_id=<AWS_ACCESS_KEY_ID>
  aws_secret_access_key=<AWS_SECRET_ACCESS_KEY>
  ```
- Create the `cloud-credentials` secret with the `cloud-credentials` file content by running the following command:

  ```
  $ oc create secret generic \
    cloud-credentials \
    -n openshift-adp \
    --from-file cloud=cloud-credentials
  ```
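The two steps above can be scripted. The following sketch writes the credentials file from values you supply; the `write_credentials` helper and the placeholder key values are assumptions for illustration, and the real values come from the extracted `test-obc` secret.

```shell
# Sketch: write the cloud-credentials file from extracted key values.
# On a real cluster the two arguments come from:
#   oc extract --to=- secret/test-obc
write_credentials() {
  printf '[default]\naws_access_key_id=%s\naws_secret_access_key=%s\n' "$1" "$2"
}

# Placeholder values for illustration only.
write_credentials "EXAMPLEACCESSKEY" "EXAMPLESECRETKEY" > cloud-credentials

# Then create the secret as in the step above:
#   oc create secret generic cloud-credentials -n openshift-adp \
#     --from-file cloud=cloud-credentials
cat cloud-credentials
```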
- Configure the Data Protection Application (DPA) as shown in the following example:

  ```yaml
  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: oadp-backup
    namespace: openshift-adp
  spec:
    configuration:
      nodeAgent:
        enable: true
        uploaderType: kopia
      velero:
        defaultPlugins:
          - aws
          - openshift
          - csi
        defaultSnapshotMoveData: true
    backupLocations:
      - velero:
          config:
            profile: "default"
            region: noobaa
            s3Url: https://s3.openshift-storage.svc
            s3ForcePathStyle: "true"
            insecureSkipTLSVerify: "true"
          provider: aws
          default: true
          credential:
            key: cloud
            name: cloud-credentials
          objectStorage:
            bucket: <bucket_name>
            prefix: oadp
  ```

  where:
  - `defaultSnapshotMoveData`: Set to `true` to use the OADP Data Mover to move Container Storage Interface (CSI) snapshots to remote object storage.
  - `s3Url`: Specifies the S3 URL of the ODF storage.
  - `<bucket_name>`: Specifies the bucket name.
- Create the DPA by running the following command:

  ```
  $ oc apply -f <dpa_filename>
  ```
- Verify that the DPA is created successfully by running the following command. In the example output, the `status` object has the `type` field set to `Reconciled`, which means that the DPA was created successfully.

  ```
  $ oc get dpa -o yaml
  ```

  Example output:

  ```yaml
  apiVersion: v1
  items:
  - apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      namespace: openshift-adp
      # ...
    spec:
      backupLocations:
      - velero:
          config:
            # ...
    status:
      conditions:
      - lastTransitionTime: "20....9:54:02Z"
        message: Reconcile complete
        reason: Complete
        status: "True"
        type: Reconciled
  kind: List
  metadata:
    resourceVersion: ""
  ```
- Verify that the backup storage location (BSL) is available by running the following command:

  ```
  $ oc get backupstoragelocations.velero.io -n openshift-adp
  ```

  Example output:

  ```
  NAME           PHASE       LAST VALIDATED   AGE   DEFAULT
  dpa-sample-1   Available   3s               15s   true
  ```
- Configure a backup CR as shown in the following example:

  ```yaml
  apiVersion: velero.io/v1
  kind: Backup
  metadata:
    name: test-backup
    namespace: openshift-adp
  spec:
    includedNamespaces:
      - <application_namespace>
  ```

  where:
  - `<application_namespace>`: Specifies the namespace for the application to back up.
- Create the backup CR by running the following command:

  ```
  $ oc apply -f <backup_cr_filename>
  ```
- Verify that the backup object is in the `Completed` phase by running the following command:

  ```
  $ oc describe backup test-backup -n openshift-adp
  ```

  Example output:

  ```
  Name:         test-backup
  Namespace:    openshift-adp
  # ...
  Status:
    Backup Item Operations Attempted:   1
    Backup Item Operations Completed:   1
    Completion Timestamp:               2024-09-25T10:17:01Z
    Expiration:                         2024-10-25T10:16:31Z
    Format Version:                     1.1.0
    Hook Status:
    Phase:                              Completed
    Progress:
      Items Backed Up:  34
      Total Items:      34
    Start Timestamp:    2024-09-25T10:16:31Z
    Version:            1
  Events:  <none>
  ```
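Instead of checking once, you can poll until the backup reaches a terminal phase. The following is a sketch only: the `oc` stub below stands in for the real CLI so the loop can run anywhere, and the phase names correspond to Velero's `Completed`, `PartiallyFailed`, and `Failed` terminal states.

```shell
# Stub for illustration only; remove it on a live cluster, where the real
# call is: oc get backup test-backup -n openshift-adp -o jsonpath='{.status.phase}'
oc() { echo "Completed"; }

phase=""
for attempt in 1 2 3 4 5; do
  phase=$(oc get backup test-backup -n openshift-adp -o jsonpath='{.status.phase}')
  case "$phase" in
    Completed|PartiallyFailed|Failed) break ;;  # terminal phases: stop polling
  esac
  sleep 5
done
echo "backup phase: $phase"
```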