Performing an image-based upgrade for single-node OpenShift clusters using GitOps ZTP
You can use a single resource on the hub cluster, the ImageBasedGroupUpgrade custom resource (CR), to manage an image-based upgrade on a selected group of managed clusters through all stages.
Topology Aware Lifecycle Manager (TALM) reconciles the ImageBasedGroupUpgrade CR and creates the underlying resources to complete the defined stage transitions, either in a manually controlled or a fully automated upgrade flow.
For more information about the image-based upgrade, see "Understanding the image-based upgrade for single-node OpenShift clusters".
Managing the image-based upgrade at scale using the ImageBasedGroupUpgrade CR on the hub
The ImageBasedGroupUpgrade CR combines the ImageBasedUpgrade and ClusterGroupUpgrade APIs.
For example, you can define the cluster selection and rollout strategy with the ImageBasedGroupUpgrade API in the same way as the ClusterGroupUpgrade API.
The stage transitions are different from the ImageBasedUpgrade API.
The ImageBasedGroupUpgrade API allows you to combine several stage transitions, also called actions, into one step that shares one rollout strategy.
```yaml
apiVersion: lcm.openshift.io/v1alpha1
kind: ImageBasedGroupUpgrade
metadata:
  name: <filename>
  namespace: default
spec:
  clusterLabelSelectors:
  - matchExpressions:
    - key: name
      operator: In
      values:
      - spoke1
      - spoke4
      - spoke6
  ibuSpec:
    seedImageRef:
      image: quay.io/seed/image:4.21.0
      version: 4.21.0
    pullSecretRef:
      name: "<seed_pull_secret>"
    extraManifests:
    - name: example-extra-manifests
      namespace: openshift-lifecycle-agent
    oadpContent:
    - name: oadp-cm
      namespace: openshift-adp
  plan:
  - actions: ["Prep", "Upgrade", "FinalizeUpgrade"]
    rolloutStrategy:
      maxConcurrency: 200
      timeout: 2400
```
- Clusters to upgrade.
- Target platform version, the seed image to be used, and the secret required to access the image.
Note
If you add the seed image pull secret in the hub cluster, in the same namespace as the ImageBasedGroupUpgrade resource, the secret is added to the manifest list for the Prep stage. The secret is recreated in each spoke cluster in the openshift-lifecycle-agent namespace.
- Optional: Applies additional manifests, which are not in the seed image, to the target cluster. Also applies ConfigMap objects for custom catalog sources.
- List of ConfigMap resources that contain the OADP Backup and Restore CRs.
- Upgrade plan details.
- Number of clusters to update in a batch.
- Timeout limit to complete the action in minutes.
Supported action combinations
Actions are the list of stage transitions that TALM completes in the steps of an upgrade plan for the selected group of clusters.
Each action entry in the ImageBasedGroupUpgrade CR is a separate step and a step contains one or several actions that share the same rollout strategy.
You can achieve more control over the rollout strategy for each action by separating actions into steps.
These actions can be combined differently in your upgrade plan and you can add subsequent steps later.
Wait until the previous steps either complete or fail before adding a step to your plan.
The first action of an added step for clusters that failed a previous step must be either Abort or Rollback.
Important
You cannot remove actions or steps from an ongoing plan.
The following table shows example plans for different levels of control over the rollout strategy:
| Example plan | Description |
|---|---|
| One step that lists all actions, for example ["Prep", "Upgrade", "FinalizeUpgrade"], with a single rollout strategy | All actions share the same strategy |
| Several steps, where some steps group multiple actions, for example ["Prep", "Upgrade"] followed by ["FinalizeUpgrade"], each step with its own rollout strategy | Some actions share the same strategy |
| One step per action, for example ["Prep"], then ["Upgrade"], then ["FinalizeUpgrade"], each step with its own rollout strategy | All actions have different strategies |
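For example, a plan in which some actions share the same strategy might look like the following sketch; the maxConcurrency and timeout values are illustrative assumptions:

```yaml
plan:
- actions: ["Prep", "Upgrade"]   # one step: both actions share one rollout strategy
  rolloutStrategy:
    maxConcurrency: 100
    timeout: 2400
- actions: ["FinalizeUpgrade"]   # separate step with its own rollout strategy
  rolloutStrategy:
    maxConcurrency: 500
    timeout: 10
```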
Important
Clusters that fail one of the actions will skip the remaining actions in the same step.
The ImageBasedGroupUpgrade API accepts the following actions:
- Prep: Start preparing the upgrade resources by moving to the Prep stage.
- Upgrade: Start the upgrade by moving to the Upgrade stage.
- FinalizeUpgrade: Finalize the upgrade on selected clusters that completed the Upgrade action by moving to the Idle stage.
- Rollback: Start a rollback only on successfully upgraded clusters by moving to the Rollback stage.
- FinalizeRollback: Finalize the rollback by moving to the Idle stage.
- AbortOnFailure: Cancel the upgrade on selected clusters that failed the Prep or Upgrade actions by moving to the Idle stage.
- Abort: Cancel an ongoing upgrade only on clusters that are not yet upgraded by moving to the Idle stage.
The following action combinations are supported. A pair of brackets signifies one step in the plan section:
- ["Prep"], ["Abort"]
- ["Prep", "Upgrade", "FinalizeUpgrade"]
- ["Prep"], ["AbortOnFailure"], ["Upgrade"], ["AbortOnFailure"], ["FinalizeUpgrade"]
- ["Rollback", "FinalizeRollback"]
Use one of the following combinations when you need to resume or cancel an ongoing upgrade from a completely new ImageBasedGroupUpgrade CR:
- ["Upgrade", "FinalizeUpgrade"]
- ["FinalizeUpgrade"]
- ["FinalizeRollback"]
- ["Abort"]
- ["AbortOnFailure"]
Labeling for cluster selection
Use the spec.clusterLabelSelectors field for initial cluster selection.
In addition, TALM labels the managed clusters according to the results of their last stage transition.
When a stage completes or fails, TALM marks the relevant clusters with the following labels:
- lcm.openshift.io/ibgu-<stage>-completed
- lcm.openshift.io/ibgu-<stage>-failed
Use these cluster labels to cancel or roll back an upgrade on a group of clusters after troubleshooting issues that you might encounter.
Important
If you are using the ImageBasedGroupUpgrade CR to upgrade your clusters, ensure that the lcm.openshift.io/ibgu-<stage>-completed or lcm.openshift.io/ibgu-<stage>-failed cluster labels are updated properly after performing troubleshooting or recovery steps on the managed clusters.
This ensures that the TALM continues to manage the image-based upgrade for the cluster.
For example, if you want to cancel the upgrade for all managed clusters except for clusters that successfully completed the upgrade, you can add an Abort action to your plan.
The Abort action moves back the ImageBasedUpgrade CR to the Idle stage, which cancels the upgrade on clusters that are not yet upgraded.
Adding a separate Abort action ensures that the TALM does not perform the Abort action on clusters that have the lcm.openshift.io/ibgu-upgrade-completed label.
The cluster labels are removed after successfully canceling or finalizing the upgrade.
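For example, to list the clusters that failed the Prep stage, you can select managed clusters on the hub by the TALM-applied label. This is a sketch and assumes the ManagedCluster resource that RHACM provides on the hub cluster:

```
$ oc get managedclusters -l lcm.openshift.io/ibgu-prep-failed
```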
Status monitoring
The ImageBasedGroupUpgrade CR provides a better monitoring experience through comprehensive status reporting for all clusters, aggregated in one place.
You can monitor the following status fields:
- status.clusters.completedActions: Shows all completed actions defined in the plan section.
- status.clusters.currentAction: Shows all actions that are currently in progress.
- status.clusters.failedActions: Shows all failed actions along with a detailed error message.
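For example, to extract only the failed actions from the aggregated status, you can use a jsonpath query. This is a sketch that assumes the CR is named <filename> in the default namespace:

```
$ oc get ibgu <filename> -n default -o jsonpath='{.status.clusters[*].failedActions}'
```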
Performing an image-based upgrade on managed clusters at scale in several steps
For use cases when you need better control of when the upgrade interrupts your service, you can upgrade a set of your managed clusters by using the ImageBasedGroupUpgrade CR and adding actions to the plan after the previous step completes.
After evaluating the results of the previous steps, you can move to the next upgrade stage or troubleshoot any failed steps throughout the procedure.
Important
Only certain action combinations are supported and listed in Supported action combinations.
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have created policies and ConfigMap objects for resources used in the image-based upgrade.
- You have installed the Lifecycle Agent and OADP Operators on all managed clusters through the hub cluster.
- Create a YAML file on the hub cluster that contains the ImageBasedGroupUpgrade CR:

```yaml
apiVersion: lcm.openshift.io/v1alpha1
kind: ImageBasedGroupUpgrade
metadata:
  name: <filename>
  namespace: default
spec:
  clusterLabelSelectors:
  - matchExpressions:
    - key: name
      operator: In
      values:
      - spoke1
      - spoke4
      - spoke6
  ibuSpec:
    seedImageRef:
      image: quay.io/seed/image:4.16.0-rc.1
      version: 4.16.0-rc.1
    pullSecretRef:
      name: "<seed_pull_secret>"
    extraManifests:
    - name: example-extra-manifests
      namespace: openshift-lifecycle-agent
    oadpContent:
    - name: oadp-cm
      namespace: openshift-adp
  plan:
  - actions: ["Prep"]
    rolloutStrategy:
      maxConcurrency: 2
      timeout: 2400
```

- Clusters to upgrade.
- Target platform version, the seed image to be used, and the secret required to access the image.

Note
If you add the seed image pull secret in the hub cluster, in the same namespace as the ImageBasedGroupUpgrade resource, the secret is added to the manifest list for the Prep stage. The secret is recreated in each spoke cluster in the openshift-lifecycle-agent namespace.

- Optional: Applies additional manifests, which are not in the seed image, to the target cluster. Also applies ConfigMap objects for custom catalog sources.
- List of ConfigMap resources that contain the OADP Backup and Restore CRs.
- Upgrade plan details.
- Apply the created file by running the following command on the hub cluster:

```
$ oc apply -f <filename>.yaml
```

- Monitor the status updates by running the following command on the hub cluster:

```
$ oc get ibgu -o yaml
```

Example output

```yaml
# ...
status:
  clusters:
  - completedActions:
    - action: Prep
    name: spoke1
  - completedActions:
    - action: Prep
    name: spoke4
  - failedActions:
    - action: Prep
    name: spoke6
# ...
```

The example plan in the previous output starts with the Prep stage only; you add actions to the plan based on the results of the previous step. TALM adds a label to the clusters to mark if the upgrade succeeded or failed. For example, the lcm.openshift.io/ibgu-prep-failed label is applied to clusters that failed the Prep stage.

After investigating the failure, you can add the AbortOnFailure step to your upgrade plan. It moves the clusters labeled with lcm.openshift.io/ibgu-<action>-failed back to the Idle stage. Any resources that are related to the upgrade on the selected clusters are deleted.
- Optional: Add the AbortOnFailure action to your existing ImageBasedGroupUpgrade CR by running the following command:

```
$ oc patch ibgu <filename> --type=json -p \
  '[{"op": "add", "path": "/spec/plan/-", "value": {"actions": ["AbortOnFailure"], "rolloutStrategy": {"maxConcurrency": 5, "timeout": 10}}}]'
```

- Continue monitoring the status updates by running the following command:

```
$ oc get ibgu -o yaml
```
- Add the Upgrade action to your existing ImageBasedGroupUpgrade CR by running the following command:

```
$ oc patch ibgu <filename> --type=json -p \
  '[{"op": "add", "path": "/spec/plan/-", "value": {"actions": ["Upgrade"], "rolloutStrategy": {"maxConcurrency": 2, "timeout": 30}}}]'
```

- Optional: Add the AbortOnFailure action to your existing ImageBasedGroupUpgrade CR by running the following command:

```
$ oc patch ibgu <filename> --type=json -p \
  '[{"op": "add", "path": "/spec/plan/-", "value": {"actions": ["AbortOnFailure"], "rolloutStrategy": {"maxConcurrency": 5, "timeout": 10}}}]'
```

- Continue monitoring the status updates by running the following command:

```
$ oc get ibgu -o yaml
```
- Add the FinalizeUpgrade action to your existing ImageBasedGroupUpgrade CR by running the following command:

```
$ oc patch ibgu <filename> --type=json -p \
  '[{"op": "add", "path": "/spec/plan/-", "value": {"actions": ["FinalizeUpgrade"], "rolloutStrategy": {"maxConcurrency": 10, "timeout": 3}}}]'
```
- Monitor the status updates by running the following command:

```
$ oc get ibgu -o yaml
```

Example output

```yaml
# ...
status:
  clusters:
  - completedActions:
    - action: Prep
    - action: AbortOnFailure
    failedActions:
    - action: Upgrade
    name: spoke1
  - completedActions:
    - action: Prep
    - action: Upgrade
    - action: FinalizeUpgrade
    name: spoke4
  - completedActions:
    - action: AbortOnFailure
    failedActions:
    - action: Prep
    name: spoke6
# ...
```
Performing an image-based upgrade on managed clusters at scale in one step
For use cases when service interruption is not a concern, you can upgrade a set of your managed clusters by using the ImageBasedGroupUpgrade CR with several actions combined in one step with one rollout strategy.
With one rollout strategy, the upgrade time can be reduced but you can only troubleshoot failed clusters after the upgrade plan is complete.
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- You have created policies and ConfigMap objects for resources used in the image-based upgrade.
- You have installed the Lifecycle Agent and OADP Operators on all managed clusters through the hub cluster.
- Create a YAML file on the hub cluster that contains the ImageBasedGroupUpgrade CR:

```yaml
apiVersion: lcm.openshift.io/v1alpha1
kind: ImageBasedGroupUpgrade
metadata:
  name: <filename>
  namespace: default
spec:
  clusterLabelSelectors:
  - matchExpressions:
    - key: name
      operator: In
      values:
      - spoke1
      - spoke4
      - spoke6
  ibuSpec:
    seedImageRef:
      image: quay.io/seed/image:4.21.0
      version: 4.21.0
    pullSecretRef:
      name: "<seed_pull_secret>"
    extraManifests:
    - name: example-extra-manifests
      namespace: openshift-lifecycle-agent
    oadpContent:
    - name: oadp-cm
      namespace: openshift-adp
  plan:
  - actions: ["Prep", "Upgrade", "FinalizeUpgrade"]
    rolloutStrategy:
      maxConcurrency: 200
      timeout: 2400
```

- Clusters to upgrade.
- Target platform version, the seed image to be used, and the secret required to access the image.

Note
If you add the seed image pull secret in the hub cluster, in the same namespace as the ImageBasedGroupUpgrade resource, the secret is added to the manifest list for the Prep stage. The secret is recreated in each spoke cluster in the openshift-lifecycle-agent namespace.

- Optional: Applies additional manifests, which are not in the seed image, to the target cluster. Also applies ConfigMap objects for custom catalog sources.
- List of ConfigMap resources that contain the OADP Backup and Restore CRs.
- Upgrade plan details.
- Number of clusters to update in a batch.
- Timeout limit to complete the action in minutes.
- Apply the created file by running the following command on the hub cluster:

```
$ oc apply -f <filename>.yaml
```

- Monitor the status updates by running the following command:

```
$ oc get ibgu -o yaml
```

Example output

```yaml
# ...
status:
  clusters:
  - completedActions:
    - action: Prep
    failedActions:
    - action: Upgrade
    name: spoke1
  - completedActions:
    - action: Prep
    - action: Upgrade
    - action: FinalizeUpgrade
    name: spoke4
  - failedActions:
    - action: Prep
    name: spoke6
# ...
```
Canceling an image-based upgrade on managed clusters at scale
You can cancel the upgrade on a set of managed clusters that completed the Prep stage.
Important
Only certain action combinations are supported and listed in Supported action combinations.
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- Create a separate YAML file on the hub cluster that contains the ImageBasedGroupUpgrade CR:

```yaml
apiVersion: lcm.openshift.io/v1alpha1
kind: ImageBasedGroupUpgrade
metadata:
  name: <filename>
  namespace: default
spec:
  clusterLabelSelectors:
  - matchExpressions:
    - key: name
      operator: In
      values:
      - spoke4
  ibuSpec:
    seedImageRef:
      image: quay.io/seed/image:4.16.0-rc.1
      version: 4.16.0-rc.1
    pullSecretRef:
      name: "<seed_pull_secret>"
    extraManifests:
    - name: example-extra-manifests
      namespace: openshift-lifecycle-agent
    oadpContent:
    - name: oadp-cm
      namespace: openshift-adp
  plan:
  - actions: ["Abort"]
    rolloutStrategy:
      maxConcurrency: 5
      timeout: 10
```

All managed clusters that completed the Prep stage are moved back to the Idle stage.
- Apply the created file by running the following command on the hub cluster:

```
$ oc apply -f <filename>.yaml
```
- Monitor the status updates by running the following command:

```
$ oc get ibgu -o yaml
```

Example output

```yaml
# ...
status:
  clusters:
  - completedActions:
    - action: Prep
    currentActions:
    - action: Abort
    name: spoke4
# ...
```
Rolling back an image-based upgrade on managed clusters at scale
Roll back the changes on a set of managed clusters if you encounter unresolvable issues after a successful upgrade.
You need to create a separate ImageBasedGroupUpgrade CR and define the set of managed clusters that you want to roll back.
Important
Only certain action combinations are supported and listed in Supported action combinations.
- You have logged in to the hub cluster as a user with cluster-admin privileges.
- Create a separate YAML file on the hub cluster that contains the ImageBasedGroupUpgrade CR:

```yaml
apiVersion: lcm.openshift.io/v1alpha1
kind: ImageBasedGroupUpgrade
metadata:
  name: <filename>
  namespace: default
spec:
  clusterLabelSelectors:
  - matchExpressions:
    - key: name
      operator: In
      values:
      - spoke4
  ibuSpec:
    seedImageRef:
      image: quay.io/seed/image:4.21.0-rc.1
      version: 4.21.0-rc.1
    pullSecretRef:
      name: "<seed_pull_secret>"
    extraManifests:
    - name: example-extra-manifests
      namespace: openshift-lifecycle-agent
    oadpContent:
    - name: oadp-cm
      namespace: openshift-adp
  plan:
  - actions: ["Rollback", "FinalizeRollback"]
    rolloutStrategy:
      maxConcurrency: 200
      timeout: 2400
```
- Apply the created file by running the following command on the hub cluster:

```
$ oc apply -f <filename>.yaml
```

All managed clusters that match the defined labels are moved back to the Rollback and then the Idle stages to finalize the rollback.
- Monitor the status updates by running the following command:

```
$ oc get ibgu -o yaml
```

Example output

```yaml
# ...
status:
  clusters:
  - completedActions:
    - action: Rollback
    - action: FinalizeRollback
    name: spoke4
# ...
```
Troubleshooting image-based upgrades with Lifecycle Agent
Perform troubleshooting steps on the managed clusters that are affected by an issue.
Important
If you are using the ImageBasedGroupUpgrade CR to upgrade your clusters, ensure that the lcm.openshift.io/ibgu-<stage>-completed or lcm.openshift.io/ibgu-<stage>-failed cluster labels are updated properly after performing troubleshooting or recovery steps on the managed clusters.
This ensures that the TALM continues to manage the image-based upgrade for the cluster.
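For example, if you manually recovered a cluster that failed the Prep stage, you can update its labels so that TALM treats the stage as completed. This is a sketch with a hypothetical cluster name, and the empty label value is an assumption:

```
$ oc label managedcluster spoke1 lcm.openshift.io/ibgu-prep-failed-
$ oc label managedcluster spoke1 lcm.openshift.io/ibgu-prep-completed=""
```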
Collecting logs
You can use the oc adm must-gather CLI to collect information for debugging and troubleshooting.
- Collect data about the Operators by running the following command:

```
$ oc adm must-gather \
  --dest-dir=must-gather/tmp \
  --image=$(oc -n openshift-lifecycle-agent get deployment.apps/lifecycle-agent-controller-manager -o jsonpath='{.spec.template.spec.containers[?(@.name == "manager")].image}') \
  --image=quay.io/konveyor/oadp-must-gather:latest \
  --image=quay.io/openshift/origin-must-gather:latest
```

- Optional: Add this option if you need to gather more information from the OADP Operator.
- Optional: Add this option if you need to gather more information from the SR-IOV Operator.
AbortFailed or FinalizeFailed error
Issue
During the finalize stage or when you stop the process at the Prep stage, Lifecycle Agent cleans up the following resources:

- Stateroot that is no longer required
- Precaching resources
- OADP CRs
- ImageBasedUpgrade CR

If the Lifecycle Agent fails to perform the above steps, it transitions to the AbortFailed or FinalizeFailed states. The condition message and log show which steps failed.

Example error message

```yaml
message: failed to delete all the backup CRs. Perform cleanup manually then add 'lca.openshift.io/manual-cleanup-done' annotation to ibu CR to transition back to Idle
observedGeneration: 5
reason: AbortFailed
status: "False"
type: Idle
```
Resolution

- Inspect the logs to determine why the failure occurred.
- To prompt Lifecycle Agent to retry the cleanup, add the lca.openshift.io/manual-cleanup-done annotation to the ImageBasedUpgrade CR.
  After observing this annotation, Lifecycle Agent retries the cleanup and, if it is successful, the ImageBasedUpgrade stage transitions to Idle.
  If the cleanup fails again, you can manually clean up the resources.
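A minimal sketch of adding the annotation on the managed cluster, assuming the ImageBasedUpgrade CR uses the default name upgrade:

```
$ oc annotate imagebasedupgrades.lca.openshift.io upgrade lca.openshift.io/manual-cleanup-done=""
```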
Cleaning up stateroot manually
Issue
When you stop the process at the Prep stage, Lifecycle Agent cleans up the new stateroot. When finalizing after a successful upgrade or a rollback, Lifecycle Agent cleans up the old stateroot. If this step fails, inspect the logs to determine why the failure occurred.
-
-
Check if there are any existing deployments in the stateroot by running the following command:
$ ostree admin status -
If there are any, clean up the existing deployment by running the following command:
$ ostree admin undeploy <index_of_deployment> -
After cleaning up all the deployments of the stateroot, wipe the stateroot directory by running the following commands:
Warning
Ensure that the booted deployment is not in this stateroot.
$ stateroot="<stateroot_to_delete>"$ unshare -m /bin/sh -c "mount -o remount,rw /sysroot && rm -rf /sysroot/ostree/deploy/${stateroot}"
-
Cleaning up OADP resources manually
Issue
Automatic cleanup of OADP resources can fail due to connection issues between Lifecycle Agent and the S3 backend. By restoring the connection and adding the lca.openshift.io/manual-cleanup-done annotation, the Lifecycle Agent can successfully clean up backup resources.
-
Check the backend connectivity by running the following command:
$ oc get backupstoragelocations.velero.io -n openshift-adpExample outputNAME PHASE LAST VALIDATED AGE DEFAULT dataprotectionapplication-1 Available 33s 8d true -
Remove all backup resources and then add the
lca.openshift.io/manual-cleanup-doneannotation to theImageBasedUpgradeCR.
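After connectivity is restored, a sketch of the manual cleanup might look like the following; the backup deletion command and the default ImageBasedUpgrade CR name upgrade are assumptions:

```
$ oc delete backups.velero.io -n openshift-adp --all
$ oc annotate imagebasedupgrades.lca.openshift.io upgrade lca.openshift.io/manual-cleanup-done=""
```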
LVM Storage volume contents not restored
When LVM Storage is used to provide dynamic persistent volume storage, LVM Storage might not restore the persistent volume contents if it is configured incorrectly.
Missing LVM Storage-related fields in Backup CR
Issue
Your Backup CRs might be missing fields that are needed to restore your persistent volumes. You can check for events in your application pod to determine if you have this issue by running the following command:

```
$ oc describe pod <your_app_name>
```

Example output showing missing LVM Storage-related fields in Backup CR

```
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  58s (x2 over 66s)  default-scheduler  0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
  Normal   Scheduled         56s                default-scheduler  Successfully assigned default/db-1234 to sno1.example.lab
  Warning  FailedMount       24s (x7 over 55s)  kubelet            MountVolume.SetUp failed for volume "pvc-1234" : rpc error: code = Unknown desc = VolumeID is not found
```
-
You must include
logicalvolumes.topolvm.ioin the applicationBackupCR. Without this resource, the application restores its persistent volume claims and persistent volume manifests correctly, however, thelogicalvolumeassociated with this persistent volume is not restored properly after pivot.Example Backup CRapiVersion: velero.io/v1 kind: Backup metadata: labels: velero.io/storage-location: default name: small-app namespace: openshift-adp spec: includedNamespaces: - test includedNamespaceScopedResources: - secrets - persistentvolumeclaims - deployments - statefulsets includedClusterScopedResources: - persistentVolumes - volumesnapshotcontents - logicalvolumes.topolvm.io- To restore the persistent volumes for your application, you must configure this section as shown.
Missing LVM Storage-related fields in Restore CR
Issue
The expected resources for the applications are restored but the persistent volume contents are not preserved after upgrading.

- List the persistent volumes for your applications by running the following command before pivot:

```
$ oc get pv,pvc,logicalvolumes.topolvm.io -A
```

Example output before pivot

```
NAME                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
persistentvolume/pvc-1234   1Gi        RWO            Retain           Bound    default/pvc-db   lvms-vg1                4h45m

NAMESPACE   NAME                           STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     persistentvolumeclaim/pvc-db   Bound    pvc-1234   1Gi        RWO            lvms-vg1       4h45m

NAMESPACE   NAME                                AGE
            logicalvolume.topolvm.io/pvc-1234   4h45m
```

- List the persistent volumes for your applications by running the following command after pivot:

```
$ oc get pv,pvc,logicalvolumes.topolvm.io -A
```

Example output after pivot

```
NAME                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM            STORAGECLASS   REASON   AGE
persistentvolume/pvc-1234   1Gi        RWO            Delete           Bound    default/pvc-db   lvms-vg1                19s

NAMESPACE   NAME                           STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     persistentvolumeclaim/pvc-db   Bound    pvc-1234   1Gi        RWO            lvms-vg1       19s

NAMESPACE   NAME                                AGE
            logicalvolume.topolvm.io/pvc-1234   18s
```
Resolution
The reason for this issue is that the logicalvolume status is not preserved in the Restore CR. This status is important because it is required for Velero to reference the volumes that must be preserved after pivoting. You must include the following fields in the application Restore CR:

Example Restore CR

```yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: sample-vote-app
  namespace: openshift-adp
  labels:
    velero.io/storage-location: default
  annotations:
    lca.openshift.io/apply-wave: "3"
spec:
  backupName: sample-vote-app
  restorePVs: true
  restoreStatus:
    includedResources:
    - logicalvolumes
```

- To preserve the persistent volumes for your application, you must set restorePVs to true.
- To preserve the persistent volumes for your application, you must configure this section as shown.
Debugging failed Backup and Restore CRs
Issue
The backup or restoration of artifacts failed.
Resolution
You can debug Backup and Restore CRs and retrieve logs with the Velero CLI tool. The Velero CLI tool provides more detailed information than the OpenShift CLI tool.

- Describe the Backup CR that contains errors by running the following command:

```
$ oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe backup -n openshift-adp backup-acm-klusterlet --details
```

- Describe the Restore CR that contains errors by running the following command:

```
$ oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero describe restore -n openshift-adp restore-acm-klusterlet --details
```

- Download the backed up resources to a local directory by running the following command:

```
$ oc exec -n openshift-adp velero-7c87d58c7b-sw6fc -c velero -- ./velero backup download -n openshift-adp backup-acm-klusterlet -o ~/backup-acm-klusterlet.tar.gz
```