Disaster recovery for a hosted cluster by using OADP
You can use the OpenShift API for Data Protection (OADP) Operator to perform disaster recovery on Amazon Web Services (AWS) and bare metal.
The disaster recovery process with OpenShift API for Data Protection (OADP) involves the following steps:
- Preparing your platform, such as Amazon Web Services (AWS) or bare metal, to use OADP
- Backing up the data plane workload
- Backing up the control plane workload
- Restoring a hosted cluster by using OADP
Prerequisites
You must meet the following prerequisites on the management cluster:
- You created a storage class.
- You have access to the cluster with cluster-admin privileges.
- You have access to the OADP subscription through a catalog source.
- You have access to a cloud storage provider that is compatible with OADP, such as S3, Microsoft Azure, Google Cloud, or MinIO.
- In a disconnected environment, you have access to a self-hosted storage provider, for example Red Hat OpenShift Data Foundation or MinIO, that is compatible with OADP.
- Your hosted control plane pods are up and running.
Preparing AWS to use OADP
To perform disaster recovery for a hosted cluster, you can use OpenShift API for Data Protection (OADP) on Amazon Web Services (AWS) S3 compatible storage. After you create the DataProtectionApplication object, a new velero deployment and node-agent pods are created in the openshift-adp namespace.
To prepare AWS to use OADP, see "Configuring the OpenShift API for Data Protection with Multicloud Object Gateway".
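For reference, a DataProtectionApplication object that targets AWS S3 storage typically looks similar to the following minimal sketch. The bucket, prefix, region, and credential secret names are placeholder assumptions; the linked configuration procedure is the authoritative reference:

  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: dpa-instance
    namespace: openshift-adp
  spec:
    backupLocations:
    - velero:
        provider: aws
        default: true
        objectStorage:
          bucket: <bucket_name>
          prefix: <backup_prefix>
        config:
          region: <aws_region>
          profile: "default"
        credential:
          key: cloud
          name: cloud-credentials
    configuration:
      velero:
        defaultPlugins:
        - openshift
        - aws
        - csi
      nodeAgent:
        enable: true
        uploaderType: kopia

The data mover functionality that this procedure uses later (snapshotMoveData) relies on the csi plugin and the node agent being enabled.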
Additional resources
- Backing up the data plane workload
- Backing up the control plane workload
Preparing bare metal to use OADP
To perform disaster recovery for a hosted cluster, you can use OpenShift API for Data Protection (OADP) on bare metal. After you create the DataProtectionApplication object, a new velero deployment and node-agent pods are created in the openshift-adp namespace.
To prepare bare metal to use OADP, see "Configuring the OpenShift API for Data Protection with AWS S3 compatible storage".
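To confirm that OADP is ready, you can check that the velero and node-agent pods are running. The output is illustrative; pod names and counts vary by environment:

  $ oc get pods -n openshift-adp

  Example output

  NAME                      READY   STATUS    RESTARTS   AGE
  node-agent-9cq4q          1/1     Running   0          94s
  node-agent-m4lts          1/1     Running   0          94s
  velero-588db7f655-n842v   1/1     Running   0          95s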
Additional resources
- Backing up the data plane workload
- Backing up the control plane workload
Backing up the data plane workload
If the data plane workload is not important, you can skip this procedure. To back up the data plane workload by using the OADP Operator, see "Backing up applications".
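For orientation only, a data plane backup is a standard OADP Backup CR that targets your application namespaces. The following is a minimal sketch, assuming that OADP is configured on the cluster where the applications run; <app_backup_name> and <application_namespace> are placeholders, and "Backing up applications" describes the supported procedure:

  apiVersion: velero.io/v1
  kind: Backup
  metadata:
    name: <app_backup_name>
    namespace: openshift-adp
  spec:
    includedNamespaces:
    - <application_namespace>
    storageLocation: default
    ttl: 2h0m0s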
Additional resources
- Restoring a hosted cluster by using OADP
Backing up the control plane workload
You can back up the control plane workload by creating the Backup custom resource (CR). The steps vary depending on whether your platform is AWS or bare metal.
Backing up the control plane workload on AWS
You can back up the control plane workload by creating the Backup custom resource (CR).
To monitor and observe the backup process, see "Observing the backup and restore process".
- Pause the reconciliation of the HostedCluster resource by running the following command:

  $ oc --kubeconfig <management_cluster_kubeconfig_file> \
    patch hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \
    --type json -p '[{"op": "add", "path": "/spec/pausedUntil", "value": "true"}]'
- Get the infrastructure ID of your hosted cluster by running the following command:

  $ oc get hostedcluster -n local-cluster <hosted_cluster_name> -o=jsonpath="{.spec.infraID}"

  Note the infrastructure ID to use in the next step.
- Pause the reconciliation of the cluster.cluster.x-k8s.io resource by running the following command:

  $ oc --kubeconfig <management_cluster_kubeconfig_file> \
    patch cluster.cluster.x-k8s.io \
    -n local-cluster-<hosted_cluster_name> <hosted_cluster_infra_id> \
    --type json -p '[{"op": "add", "path": "/spec/paused", "value": true}]'
- Pause the reconciliation of the NodePool resource by running the following command:

  $ oc --kubeconfig <management_cluster_kubeconfig_file> \
    patch nodepool -n <hosted_cluster_namespace> <node_pool_name> \
    --type json -p '[{"op": "add", "path": "/spec/pausedUntil", "value": "true"}]'
- Pause the reconciliation of the AgentCluster resource by running the following command:

  $ oc --kubeconfig <management_cluster_kubeconfig_file> \
    annotate agentcluster -n <hosted_control_plane_namespace> \
    cluster.x-k8s.io/paused=true --all
- Pause the reconciliation of the AgentMachine resource by running the following command:

  $ oc --kubeconfig <management_cluster_kubeconfig_file> \
    annotate agentmachine -n <hosted_control_plane_namespace> \
    cluster.x-k8s.io/paused=true --all
- Annotate the HostedCluster resource to prevent the deletion of the hosted control plane namespace by running the following command:

  $ oc --kubeconfig <management_cluster_kubeconfig_file> \
    annotate hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \
    hypershift.openshift.io/skip-delete-hosted-controlplane-namespace=true
- Create a YAML file that defines the Backup CR:

  Example backup-control-plane.yaml file

  apiVersion: velero.io/v1
  kind: Backup
  metadata:
    name: <backup_resource_name>
    namespace: openshift-adp
    labels:
      velero.io/storage-location: default
  spec:
    hooks: {}
    includedNamespaces:
    - <hosted_cluster_namespace>
    - <hosted_control_plane_namespace>
    includedResources:
    - sa
    - role
    - rolebinding
    - pod
    - pvc
    - pv
    - bmh
    - configmap
    - infraenv
    - priorityclasses
    - pdb
    - agents
    - hostedcluster
    - nodepool
    - secrets
    - services
    - deployments
    - hostedcontrolplane
    - cluster
    - agentcluster
    - agentmachinetemplate
    - agentmachine
    - machinedeployment
    - machineset
    - machine
    excludedResources: []
    storageLocation: default
    ttl: 2h0m0s
    snapshotMoveData: true
    datamover: "velero"
    defaultVolumesToFsBackup: true

  - Replace <backup_resource_name> with the name of your Backup resource.
  - The includedNamespaces field selects the namespaces to back up objects from. You must include your hosted cluster namespace and the hosted control plane namespace. Replace <hosted_cluster_namespace> with the name of the hosted cluster namespace, for example, clusters. Replace <hosted_control_plane_namespace> with the name of the hosted control plane namespace, for example, clusters-hosted.
  - You must create the infraenv resource in a separate namespace. Do not delete the infraenv resource during the backup process.
  - Setting snapshotMoveData to true enables CSI volume snapshots and automatically uploads the control plane workload to the cloud storage.
  - Setting defaultVolumesToFsBackup to true sets fs-backup as the default backup method for persistent volumes (PVs). This setting is useful when you use a combination of Container Storage Interface (CSI) volume snapshots and the fs-backup method.

  Note: If you want to use CSI volume snapshots, you must add the backup.velero.io/backup-volumes-excludes=<pv_name> annotation to your PVs.
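  For example, one way to add that annotation, where the PV name is a placeholder for the volume that you want to exclude from fs-backup:

    $ oc annotate pv <pv_name> backup.velero.io/backup-volumes-excludes=<pv_name>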
- Apply the Backup CR by running the following command:

  $ oc apply -f backup-control-plane.yaml
- Verify that the value of status.phase is Completed by running the following command:

  $ oc get backups.velero.io <backup_resource_name> -n openshift-adp \
    -o jsonpath='{.status.phase}'
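  When the backup finishes successfully, the command prints the phase:

  Example output

  Completed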
Additional resources
- Restoring a hosted cluster by using OADP
Backing up the control plane workload on a bare-metal platform
You can back up the control plane workload by creating the Backup custom resource (CR).
To monitor and observe the backup process, see "Observing the backup and restore process".
- Pause the reconciliation of the HostedCluster resource by running the following command:

  $ oc --kubeconfig <management_cluster_kubeconfig_file> \
    patch hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \
    --type json -p '[{"op": "add", "path": "/spec/pausedUntil", "value": "true"}]'
- Get the infrastructure ID of your hosted cluster by running the following command:

  $ oc --kubeconfig <management_cluster_kubeconfig_file> \
    get hostedcluster -n <hosted_cluster_namespace> \
    <hosted_cluster_name> -o=jsonpath="{.spec.infraID}"

  Note the infrastructure ID to use in the next step.
- Pause the reconciliation of the cluster.cluster.x-k8s.io resource by running the following command:

  $ oc --kubeconfig <management_cluster_kubeconfig_file> \
    annotate cluster -n <hosted_control_plane_namespace> \
    <hosted_cluster_infra_id> cluster.x-k8s.io/paused=true
- Pause the reconciliation of the NodePool resource by running the following command:

  $ oc --kubeconfig <management_cluster_kubeconfig_file> \
    patch nodepool -n <hosted_cluster_namespace> <node_pool_name> \
    --type json -p '[{"op": "add", "path": "/spec/pausedUntil", "value": "true"}]'
- Pause the reconciliation of the AgentCluster resource by running the following command:

  $ oc --kubeconfig <management_cluster_kubeconfig_file> \
    annotate agentcluster -n <hosted_control_plane_namespace> \
    cluster.x-k8s.io/paused=true --all
- Pause the reconciliation of the AgentMachine resource by running the following command:

  $ oc --kubeconfig <management_cluster_kubeconfig_file> \
    annotate agentmachine -n <hosted_control_plane_namespace> \
    cluster.x-k8s.io/paused=true --all
- If you are backing up and restoring to the same management cluster, annotate the HostedCluster resource to prevent the deletion of the hosted control plane namespace by running the following command:

  $ oc --kubeconfig <management_cluster_kubeconfig_file> \
    annotate hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \
    hypershift.openshift.io/skip-delete-hosted-controlplane-namespace=true
- Create a YAML file that defines the Backup CR:

  Example backup-control-plane.yaml file

  apiVersion: velero.io/v1
  kind: Backup
  metadata:
    name: <backup_resource_name>
    namespace: openshift-adp
    labels:
      velero.io/storage-location: default
  spec:
    hooks: {}
    includedNamespaces:
    - <hosted_cluster_namespace>
    - <hosted_control_plane_namespace>
    - <agent_namespace>
    includedResources:
    - sa
    - role
    - rolebinding
    - pod
    - pvc
    - pv
    - bmh
    - configmap
    - infraenv
    - priorityclasses
    - pdb
    - agents
    - hostedcluster
    - nodepool
    - secrets
    - services
    - deployments
    - hostedcontrolplane
    - cluster
    - agentcluster
    - agentmachinetemplate
    - agentmachine
    - machinedeployment
    - machineset
    - machine
    excludedResources: []
    storageLocation: default
    ttl: 2h0m0s
    snapshotMoveData: true
    datamover: "velero"
    defaultVolumesToFsBackup: true

  - Replace <backup_resource_name> with the name of your Backup resource.
  - The includedNamespaces field selects the namespaces to back up objects from. You must include your hosted cluster namespace and the hosted control plane namespace. Replace <hosted_cluster_namespace> with the name of the hosted cluster namespace, for example, clusters. Replace <hosted_control_plane_namespace> with the name of the hosted control plane namespace, for example, clusters-hosted. Replace <agent_namespace> with the namespace where your Agent, BMH, and InfraEnv CRs are located, for example, agents.
  - Setting snapshotMoveData to true enables CSI volume snapshots and automatically uploads the control plane workload to the cloud storage.
  - Setting defaultVolumesToFsBackup to true sets fs-backup as the default backup method for persistent volumes (PVs). This setting is useful when you use a combination of Container Storage Interface (CSI) volume snapshots and the fs-backup method.

  Note: If you want to use CSI volume snapshots, you must add the backup.velero.io/backup-volumes-excludes=<pv_name> annotation to your PVs.
- Apply the Backup CR by running the following command:

  $ oc apply -f backup-control-plane.yaml
- Verify that the value of status.phase is Completed by running the following command:

  $ oc get backups.velero.io <backup_resource_name> -n openshift-adp \
    -o jsonpath='{.status.phase}'
Additional resources
- Restoring a hosted cluster by using OADP
Restoring a hosted cluster by using OADP
You can restore a hosted cluster into the same management cluster or into a new management cluster.
Restoring a hosted cluster into the same management cluster by using OADP
You can restore the hosted cluster by creating the Restore custom resource (CR).
- If you are using an in-place update, the InfraEnv resource does not need spare nodes. Instead, you need to re-provision the worker nodes from the new management cluster.
- If you are using a replace update, you need some spare nodes for the InfraEnv resource to deploy the worker nodes.
Important: After you back up your hosted cluster, you must destroy it to initiate the restore process. To initiate node provisioning, you must back up workloads in the data plane before deleting the hosted cluster.
- You completed the steps in "Removing a cluster by using the console" to delete your hosted cluster.
- You completed the steps in "Removing remaining resources after removing a cluster".
To monitor and observe the backup process, see "Observing the backup and restore process".
- Verify that no pods and persistent volume claims (PVCs) are present in the hosted control plane namespace by running the following command:

  $ oc get pod,pvc -n <hosted_control_plane_namespace>

  Expected output

  No resources found
- Create a YAML file that defines the Restore CR:

  Example restore-hosted-cluster.yaml file

  apiVersion: velero.io/v1
  kind: Restore
  metadata:
    name: <restore_resource_name>
    namespace: openshift-adp
  spec:
    backupName: <backup_resource_name>
    restorePVs: true
    existingResourcePolicy: update
    excludedResources:
    - nodes
    - events
    - events.events.k8s.io
    - backups.velero.io
    - restores.velero.io
    - resticrepositories.velero.io

  - Replace <restore_resource_name> with the name of your Restore resource.
  - Replace <backup_resource_name> with the name of your Backup resource.
  - Setting restorePVs to true initiates the recovery of persistent volumes (PVs) and their pods.
  - Setting existingResourcePolicy to update ensures that existing objects are overwritten with the backed up content.

  Important: You must create the infraenv resource in a separate namespace. Do not delete the infraenv resource during the restore process. The infraenv resource is mandatory for the new nodes to be reprovisioned.
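  Before you apply the Restore CR, you can optionally confirm that the infraenv resource is still present. The namespace placeholder assumes that your InfraEnv CRs are in a dedicated agent namespace, as in the backup procedure:

    $ oc get infraenv -n <agent_namespace>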
- Apply the Restore CR by running the following command:

  $ oc apply -f restore-hosted-cluster.yaml
- Verify that the value of status.phase is Completed by running the following command:

  $ oc get restores.velero.io <restore_resource_name> -n openshift-adp \
    -o jsonpath='{.status.phase}'
- After the restore process is complete, start the reconciliation of the HostedCluster and NodePool resources that you paused during backing up of the control plane workload:
  - Start the reconciliation of the HostedCluster resource by running the following command:

    $ oc --kubeconfig <management_cluster_kubeconfig_file> \
      patch hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \
      --type json \
      -p '[{"op": "add", "path": "/spec/pausedUntil", "value": "false"}]'

  - Start the reconciliation of the NodePool resource by running the following command:

    $ oc --kubeconfig <management_cluster_kubeconfig_file> \
      patch nodepool -n <hosted_cluster_namespace> <node_pool_name> \
      --type json \
      -p '[{"op": "add", "path": "/spec/pausedUntil", "value": "false"}]'
- Start the reconciliation of the Agent provider resources that you paused during backing up of the control plane workload:
  - Start the reconciliation of the AgentCluster resource by running the following command:

    $ oc --kubeconfig <management_cluster_kubeconfig_file> \
      annotate agentcluster -n <hosted_control_plane_namespace> \
      cluster.x-k8s.io/paused- --overwrite=true --all

  - Start the reconciliation of the AgentMachine resource by running the following command:

    $ oc --kubeconfig <management_cluster_kubeconfig_file> \
      annotate agentmachine -n <hosted_control_plane_namespace> \
      cluster.x-k8s.io/paused- --overwrite=true --all
- Remove the hypershift.openshift.io/skip-delete-hosted-controlplane-namespace- annotation from the HostedCluster resource to avoid manually deleting the hosted control plane namespace by running the following command:

  $ oc --kubeconfig <management_cluster_kubeconfig_file> \
    annotate hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \
    hypershift.openshift.io/skip-delete-hosted-controlplane-namespace- \
    --overwrite=true --all
Restoring a hosted cluster into a new management cluster by using OADP
You can restore the hosted cluster into a new management cluster by creating the Restore custom resource (CR).
- If you are using an in-place update, the InfraEnv resource does not need spare nodes. Instead, you need to re-provision the worker nodes from the new management cluster.
- If you are using a replace update, you need some spare nodes for the InfraEnv resource to deploy the worker nodes.
- You configured the new management cluster to use OpenShift API for Data Protection (OADP). The new management cluster must have the same Data Protection Application (DPA) as the management cluster that you backed up from so that the Restore CR can access the backup storage.
- You configured the networking settings of the new management cluster to resolve the DNS of the hosted cluster; see the example check after this list:
  - The DNS of the host must resolve to the IP addresses of both the new management cluster and the hosted cluster.
  - The hosted cluster must resolve to the IP address of the new management cluster.
- To monitor and observe the backup process, see "Observing the backup and restore process".
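For example, you can spot-check DNS resolution with dig. The API hostname here is an illustrative assumption; the exact records depend on your base domain and ingress setup:

  $ dig +short api.<hosted_cluster_name>.<base_domain>

The command should print the IP address that you expect for the hosted cluster API server.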
Important: Complete the following steps on the new management cluster that you are restoring the hosted cluster to, not on the management cluster that you created the backup from.
- Create a YAML file that defines the Restore CR:

  Example restore-hosted-cluster.yaml file

  apiVersion: velero.io/v1
  kind: Restore
  metadata:
    name: <restore_resource_name>
    namespace: openshift-adp
  spec:
    includedNamespaces:
    - <hosted_cluster_namespace>
    - <hosted_control_plane_namespace>
    - <agent_namespace>
    backupName: <backup_resource_name>
    cleanupBeforeRestore: CleanupRestored
    veleroManagedClustersBackupName: <managed_cluster_name>
    veleroCredentialsBackupName: <credentials_backup_name>
    veleroResourcesBackupName: <resources_backup_name>
    restorePVs: true
    preserveNodePorts: true
    existingResourcePolicy: update
    excludedResources:
    - pod
    - nodes
    - events
    - events.events.k8s.io
    - backups.velero.io
    - restores.velero.io
    - resticrepositories.velero.io
    - pv
    - pvc

  - Replace <restore_resource_name> with the name of your Restore resource.
  - The includedNamespaces field selects the namespaces to restore objects to. You must include your hosted cluster namespace and the hosted control plane namespace. Replace <hosted_cluster_namespace> with the name of the hosted cluster namespace, for example, clusters. Replace <hosted_control_plane_namespace> with the name of the hosted control plane namespace, for example, clusters-hosted. Replace <agent_namespace> with the namespace where your Agent, BMH, and InfraEnv CRs are located, for example, agents.
  - Replace <backup_resource_name> with the name of your Backup resource.
  - You can omit the veleroManagedClustersBackupName, veleroCredentialsBackupName, and veleroResourcesBackupName fields if you are not using Red Hat Advanced Cluster Management.
  - Setting restorePVs to true initiates the recovery of persistent volumes (PVs) and their pods.
  - Setting existingResourcePolicy to update ensures that existing objects are overwritten with the backed up content.
- Apply the Restore CR by running the following command:

  $ oc --kubeconfig <restore_management_kubeconfig> apply -f restore-hosted-cluster.yaml
- Verify that the value of status.phase is Completed by running the following command:

  $ oc --kubeconfig <restore_management_kubeconfig> \
    get restore.velero.io <restore_resource_name> \
    -n openshift-adp -o jsonpath='{.status.phase}'
- Verify that all CRs are restored by running the following commands:

  $ oc --kubeconfig <restore_management_kubeconfig> get infraenv -n <agent_namespace>

  $ oc --kubeconfig <restore_management_kubeconfig> get agent -n <agent_namespace>

  $ oc --kubeconfig <restore_management_kubeconfig> get bmh -n <agent_namespace>

  $ oc --kubeconfig <restore_management_kubeconfig> get hostedcluster -n <hosted_cluster_namespace>

  $ oc --kubeconfig <restore_management_kubeconfig> get nodepool -n <hosted_cluster_namespace>

  $ oc --kubeconfig <restore_management_kubeconfig> get agentmachine -n <hosted_control_plane_namespace>

  $ oc --kubeconfig <restore_management_kubeconfig> get agentcluster -n <hosted_control_plane_namespace>
- If you plan to use the new management cluster as your main management cluster going forward, complete the following steps. Otherwise, if you plan to use the management cluster that you backed up from as your main management cluster, complete steps 5 through 8 in "Restoring a hosted cluster into the same management cluster by using OADP".
- Remove the Cluster API deployment from the management cluster that you backed up from by running the following command:

  $ oc --kubeconfig <backup_management_kubeconfig> delete deploy cluster-api \
    -n <hosted_control_plane_namespace>

  Because only one Cluster API can access a cluster at a time, this step ensures that the Cluster API for the new management cluster functions correctly.
- After the restore process is complete, start the reconciliation of the HostedCluster and NodePool resources that you paused during backing up of the control plane workload:
  - Start the reconciliation of the HostedCluster resource by running the following command:

    $ oc --kubeconfig <restore_management_kubeconfig> \
      patch hostedcluster -n <hosted_cluster_namespace> <hosted_cluster_name> \
      --type json \
      -p '[{"op": "replace", "path": "/spec/pausedUntil", "value": "false"}]'

  - Start the reconciliation of the NodePool resource by running the following command:

    $ oc --kubeconfig <restore_management_kubeconfig> \
      patch nodepool -n <hosted_cluster_namespace> <node_pool_name> \
      --type json \
      -p '[{"op": "replace", "path": "/spec/pausedUntil", "value": "false"}]'

  - Verify that the hosted cluster is reporting that the hosted control plane is available by running the following command:

    $ oc --kubeconfig <restore_management_kubeconfig> get hostedcluster

  - Verify that the hosted cluster is reporting that the cluster operators are available by running the following command:

    $ oc get co --kubeconfig <hosted_cluster_kubeconfig>
- Start the reconciliation of the Agent provider resources that you paused during backing up of the control plane workload:
  - Start the reconciliation of the AgentCluster resource by running the following command:

    $ oc --kubeconfig <restore_management_kubeconfig> \
      annotate agentcluster -n <hosted_control_plane_namespace> \
      cluster.x-k8s.io/paused- --overwrite=true --all

  - Start the reconciliation of the AgentMachine resource by running the following command:

    $ oc --kubeconfig <restore_management_kubeconfig> \
      annotate agentmachine -n <hosted_control_plane_namespace> \
      cluster.x-k8s.io/paused- --overwrite=true --all

  - Start the reconciliation of the Cluster resource by running the following command:

    $ oc --kubeconfig <restore_management_kubeconfig> \
      annotate cluster -n <hosted_control_plane_namespace> \
      cluster.x-k8s.io/paused- --overwrite=true --all
- Verify that the node pool is working as expected by running the following command:

  $ oc --kubeconfig <restore_management_kubeconfig> \
    get nodepool -n <hosted_cluster_namespace>

  Example output

  NAME       CLUSTER    DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
  hosted-0   hosted-0   3               3               False         False        4.17.11   False             False
- Optional: To ensure that no conflicts exist and that the new management cluster has continued functionality, remove the HostedCluster resources from the backup management cluster by completing the following steps:
  - In the management cluster that you backed up from, set the spec.preserveOnDelete parameter of the ClusterDeployment resource to true by running the following command:

    $ oc --kubeconfig <backup_management_kubeconfig> patch \
      -n <hosted_control_plane_namespace> \
      ClusterDeployment/<hosted_cluster_name> -p \
      '{"spec":{"preserveOnDelete":true}}' \
      --type=merge

    This step ensures that the hosts are not deprovisioned.
  - Delete the machines by running the following commands:

    $ oc --kubeconfig <backup_management_kubeconfig> patch machine <machine_name> \
      -n <hosted_control_plane_namespace> -p \
      '[{"op":"remove","path":"/metadata/finalizers"}]' \
      --type=json

    $ oc --kubeconfig <backup_management_kubeconfig> \
      delete machine <machine_name> \
      -n <hosted_control_plane_namespace>
  - Delete the AgentCluster and Cluster resources by running the following commands:

    $ oc --kubeconfig <backup_management_kubeconfig> \
      delete agentcluster <hosted_cluster_name> \
      -n <hosted_control_plane_namespace>

    $ oc --kubeconfig <backup_management_kubeconfig> \
      patch cluster <cluster_name> \
      -n <hosted_control_plane_namespace> \
      -p '[{"op":"remove","path":"/metadata/finalizers"}]' \
      --type=json

    $ oc --kubeconfig <backup_management_kubeconfig> \
      delete cluster <cluster_name> \
      -n <hosted_control_plane_namespace>
  - If you use Red Hat Advanced Cluster Management, delete the managed cluster by running the following commands:

    $ oc --kubeconfig <backup_management_kubeconfig> \
      patch managedcluster <hosted_cluster_name> \
      -n <hosted_cluster_namespace> \
      -p '[{"op":"remove","path":"/metadata/finalizers"}]' \
      --type=json

    $ oc --kubeconfig <backup_management_kubeconfig> \
      delete managedcluster <hosted_cluster_name> \
      -n <hosted_cluster_namespace>
  - Delete the HostedCluster resource by running the following command:

    $ oc --kubeconfig <backup_management_kubeconfig> \
      delete hostedcluster \
      -n <hosted_cluster_namespace> <hosted_cluster_name>
Observing the backup and restore process
When using OpenShift API for Data Protection (OADP) to back up and restore a hosted cluster, you can monitor and observe the process.
- Observe the backup process by running the following command:

  $ watch "oc get backups.velero.io -n openshift-adp <backup_resource_name> -o jsonpath='{.status}'"
- Observe the restore process by running the following command:

  $ watch "oc get restores.velero.io -n openshift-adp <restore_resource_name> -o jsonpath='{.status}'"
- Observe the Velero logs by running the following command:

  $ oc logs -n openshift-adp -l deploy=velero -f
- Observe the progress of all of the OADP objects by running the following command:

  $ watch "echo BackupRepositories:; echo; oc get backuprepositories.velero.io -A; echo; echo BackupStorageLocations:; echo; oc get backupstoragelocations.velero.io -A; echo; echo DataUploads:; echo; oc get datauploads.velero.io -A; echo; echo DataDownloads:; echo; oc get datadownloads.velero.io -n openshift-adp; echo; echo VolumeSnapshotLocations:; echo; oc get volumesnapshotlocations.velero.io -A; echo; echo Backups:; echo; oc get backup -A; echo; echo Restores:; echo; oc get restore -A"
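For orientation, while a backup is running, the first watch command prints a status object similar to the following illustrative example. The field values are placeholders, and the exact fields depend on your Velero version:

  {"expiration":"2025-01-01T00:00:00Z","phase":"InProgress","progress":{"itemsBackedUp":120,"totalItems":500},"startTimestamp":"2024-12-31T22:00:00Z","version":1}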
Using the velero CLI to describe the Backup and Restore resources
When using OpenShift API for Data Protection, you can get more details of the Backup and Restore resources by using the velero command-line interface (CLI).
- Create an alias to use the velero CLI from a container by running the following command:

  $ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
- Get details of your Restore custom resource (CR) by running the following command:

  $ velero restore describe <restore_resource_name> --details

  Replace <restore_resource_name> with the name of your Restore resource.
- Get details of your Backup CR by running the following command:

  $ velero backup describe <backup_resource_name> --details

  Replace <backup_resource_name> with the name of your Backup resource.