Creating ConfigMap objects for the image-based upgrade with the Lifecycle Agent using GitOps ZTP
Wrap your OADP resources, extra manifests, and custom catalog sources in a `ConfigMap` object to prepare for the image-based upgrade.
Creating OADP resources for the image-based upgrade with GitOps ZTP
Prepare your OADP resources to restore your application after an upgrade.
Prerequisites

- You have provisioned one or more managed clusters with GitOps ZTP.
- You have logged in as a user with `cluster-admin` privileges.
- You have generated a seed image from a compatible seed cluster.
- You have created a separate partition on the target cluster for the container images that is shared between stateroots. For more information, see "Configuring a shared container partition between ostree stateroots when using GitOps ZTP".
- You have deployed a version of Lifecycle Agent that is compatible with the version used with the seed image.
- You have installed the OADP Operator, the `DataProtectionApplication` CR, and its secret on the target cluster.
- You have created an S3-compatible storage solution and a ready-to-use bucket with proper credentials configured. For more information, see "Installing and configuring the OADP Operator with GitOps ZTP".
- The `openshift-adp` namespace for the OADP `ConfigMap` object must exist on all managed clusters and the hub for the OADP `ConfigMap` to be generated and copied to the clusters.
Procedure

- Ensure that the Git repository that you use with the ArgoCD policies application contains the following directory structure:

  ```
  ├── source-crs/
  │   ├── ibu/
  │   │   ├── ImageBasedUpgrade.yaml
  │   │   ├── PlatformBackupRestore.yaml
  │   │   ├── PlatformBackupRestoreLvms.yaml
  │   │   ├── PlatformBackupRestoreWithIBGU.yaml
  ├── ...
  ├── kustomization.yaml
  ```

  The `source-crs/ibu/PlatformBackupRestoreWithIBGU.yaml` file is provided in the ZTP container image.

  PlatformBackupRestoreWithIBGU.yaml

  ```yaml
  apiVersion: velero.io/v1
  kind: Backup
  metadata:
    name: acm-klusterlet
    annotations:
      lca.openshift.io/apply-label: "apps/v1/deployments/open-cluster-management-agent/klusterlet,v1/secrets/open-cluster-management-agent/bootstrap-hub-kubeconfig,rbac.authorization.k8s.io/v1/clusterroles/klusterlet,v1/serviceaccounts/open-cluster-management-agent/klusterlet,scheduling.k8s.io/v1/priorityclasses/klusterlet-critical,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-work:ibu-role,rbac.authorization.k8s.io/v1/clusterroles/open-cluster-management:klusterlet-admin-aggregate-clusterrole,rbac.authorization.k8s.io/v1/clusterrolebindings/klusterlet,operator.open-cluster-management.io/v1/klusterlets/klusterlet,apiextensions.k8s.io/v1/customresourcedefinitions/klusterlets.operator.open-cluster-management.io,v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials"
    labels:
      velero.io/storage-location: default
    namespace: openshift-adp
  spec:
    includedNamespaces:
    - open-cluster-management-agent
    includedClusterScopedResources:
    - klusterlets.operator.open-cluster-management.io
    - clusterroles.rbac.authorization.k8s.io
    - clusterrolebindings.rbac.authorization.k8s.io
    - priorityclasses.scheduling.k8s.io
    includedNamespaceScopedResources:
    - deployments
    - serviceaccounts
    - secrets
    excludedNamespaceScopedResources: []
  ---
  apiVersion: velero.io/v1
  kind: Restore
  metadata:
    name: acm-klusterlet
    namespace: openshift-adp
    labels:
      velero.io/storage-location: default
    annotations:
      lca.openshift.io/apply-wave: "1"
  spec:
    backupName: acm-klusterlet
  ```

  If your `multiclusterHub` CR does not have `.spec.imagePullSecret` defined and the secret does not exist in the `open-cluster-management-agent` namespace on your hub cluster, remove `v1/secrets/open-cluster-management-agent/open-cluster-management-image-pull-credentials` from the `lca.openshift.io/apply-label` annotation.
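The `lca.openshift.io/apply-label` annotation is a single comma-separated list of resources that the Lifecycle Agent labels for backup, which is hard to review at a glance. As a readability aid, you can print one entry per line. The value below is a shortened excerpt for illustration; in practice you would read the full annotation from the `Backup` CR, for example with `oc get backup acm-klusterlet -n openshift-adp -o yaml`:

```shell
# Shortened excerpt of the lca.openshift.io/apply-label value (illustrative).
labels="apps/v1/deployments/open-cluster-management-agent/klusterlet,v1/secrets/open-cluster-management-agent/bootstrap-hub-kubeconfig"
# Print one labeled resource per line for easier review.
echo "$labels" | tr ',' '\n'
```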
Note

If you perform the image-based upgrade directly on managed clusters, use the `PlatformBackupRestore.yaml` file.

If you use LVM Storage to create persistent volumes, you can use the `source-crs/ibu/PlatformBackupRestoreLvms.yaml` file provided in the ZTP container image to back up your LVM Storage resources.

PlatformBackupRestoreLvms.yaml

```yaml
apiVersion: velero.io/v1
kind: Backup
metadata:
  labels:
    velero.io/storage-location: default
  name: lvmcluster
  namespace: openshift-adp
spec:
  includedNamespaces:
  - openshift-storage
  includedNamespaceScopedResources:
  - lvmclusters
  - lvmvolumegroups
  - lvmvolumegroupnodestatuses
---
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: lvmcluster
  namespace: openshift-adp
  labels:
    velero.io/storage-location: default
  annotations:
    lca.openshift.io/apply-wave: "2"
spec:
  backupName: lvmcluster
```

The `lca.openshift.io/apply-wave` value must be lower than the values specified in the application `Restore` CRs.
- If you need to restore applications after the upgrade, create the OADP `Backup` and `Restore` CRs for your application in the `openshift-adp` namespace:

  - Create the OADP CRs for cluster-scoped application artifacts in the `openshift-adp` namespace:

    Example OADP CRs for cluster-scoped application artifacts for LSO and LVM Storage

    ```yaml
    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      annotations:
        lca.openshift.io/apply-label: "apiextensions.k8s.io/v1/customresourcedefinitions/test.example.com,security.openshift.io/v1/securitycontextconstraints/test,rbac.authorization.k8s.io/v1/clusterroles/test-role,rbac.authorization.k8s.io/v1/clusterrolebindings/system:openshift:scc:test"
      name: backup-app-cluster-resources
      labels:
        velero.io/storage-location: default
      namespace: openshift-adp
    spec:
      includedClusterScopedResources:
      - customresourcedefinitions
      - securitycontextconstraints
      - clusterrolebindings
      - clusterroles
      excludedClusterScopedResources:
      - Namespace
    ---
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: test-app-cluster-resources
      namespace: openshift-adp
      labels:
        velero.io/storage-location: default
      annotations:
        lca.openshift.io/apply-wave: "3"
    spec:
      backupName: backup-app-cluster-resources
    ```

    Replace the example resource names with your actual resources. The `lca.openshift.io/apply-wave` value must be higher than the value in the platform `Restore` CRs and lower than the value in the application namespace-scoped `Restore` CR.
  - Create the OADP CRs for your namespace-scoped application artifacts in the `source-crs/custom-crs` directory:

    Example OADP CRs for namespace-scoped application artifacts when LSO is used

    ```yaml
    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      labels:
        velero.io/storage-location: default
      name: backup-app
      namespace: openshift-adp
    spec:
      includedNamespaces:
      - test
      includedNamespaceScopedResources:
      - secrets
      - persistentvolumeclaims
      - deployments
      - statefulsets
      - configmaps
      - cronjobs
      - services
      - job
      - poddisruptionbudgets
      - <application_custom_resources>
      excludedClusterScopedResources:
      - persistentVolumes
    ---
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: test-app
      namespace: openshift-adp
      labels:
        velero.io/storage-location: default
      annotations:
        lca.openshift.io/apply-wave: "4"
    spec:
      backupName: backup-app
    ```

    Replace `<application_custom_resources>` with the custom resources for your application.

    Example OADP CRs for namespace-scoped application artifacts when LVM Storage is used

    ```yaml
    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      labels:
        velero.io/storage-location: default
      name: backup-app
      namespace: openshift-adp
    spec:
      includedNamespaces:
      - test
      includedNamespaceScopedResources:
      - secrets
      - persistentvolumeclaims
      - deployments
      - statefulsets
      - configmaps
      - cronjobs
      - services
      - job
      - poddisruptionbudgets
      - <application_custom_resources>
      includedClusterScopedResources:
      - persistentVolumes
      - logicalvolumes.topolvm.io
      - volumesnapshotcontents
    ---
    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: test-app
      namespace: openshift-adp
      labels:
        velero.io/storage-location: default
      annotations:
        lca.openshift.io/apply-wave: "4"
    spec:
      backupName: backup-app
      restorePVs: true
      restoreStatus:
        includedResources:
        - logicalvolumes
    ```

    In the LVM Storage example:

    - `<application_custom_resources>`: replace with the custom resources for your application.
    - `logicalvolumes.topolvm.io`: required field.
    - `volumesnapshotcontents`: optional if you use LVM Storage volume snapshots.
    - `restorePVs: true` and the `restoreStatus` section with `logicalvolumes`: required fields.
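Taken together, the apply-wave callouts in this section define a strict ordering: the platform `Restore` CRs (waves 1 and 2) run before the cluster-scoped application `Restore` CR (wave 3), which runs before the namespace-scoped application `Restore` CR (wave 4). A minimal local sanity check over name/wave pairs collected from the `lca.openshift.io/apply-wave` annotations (the temporary file path is illustrative):

```shell
# Name/wave pairs gathered from the Restore CRs in this section.
cat > /tmp/restore-waves.txt <<'EOF'
acm-klusterlet 1
lvmcluster 2
test-app-cluster-resources 3
test-app 4
EOF
# sort -c exits 0 only if the file is already in non-decreasing wave order.
sort -n -k2,2 -c /tmp/restore-waves.txt && echo "apply-wave ordering OK"
```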
Important
The same version of the applications must function on both the current and the target release of OpenShift Container Platform.
- Create a `kustomization.yaml` file with the following content:

  ```yaml
  apiVersion: kustomize.config.k8s.io/v1beta1
  kind: Kustomization
  configMapGenerator:
  - files:
    - source-crs/ibu/PlatformBackupRestoreWithIBGU.yaml
    #- source-crs/custom-crs/ApplicationClusterScopedBackupRestore.yaml
    #- source-crs/custom-crs/ApplicationApplicationBackupRestoreLso.yaml
    name: oadp-cm
    namespace: openshift-adp
  generatorOptions:
    disableNameSuffixHash: true
  ```

  The `configMapGenerator` entry creates the `oadp-cm` `ConfigMap` object on the hub cluster with the `Backup` and `Restore` CRs. The `openshift-adp` namespace must exist on all managed clusters and the hub for the OADP `ConfigMap` to be generated and copied to the clusters.
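Before pushing, it can be worth checking that every file the `configMapGenerator` references actually exists in the repository, because a missing file breaks the generation. A minimal sketch that extracts the referenced paths with `grep`; the generator section is inlined as a temporary file here for illustration, and against your real repository you would run the same `grep` on `kustomization.yaml` itself:

```shell
# Inlined copy of the generator section (illustrative).
cat > /tmp/kustomization-files.yaml <<'EOF'
configMapGenerator:
- files:
  - source-crs/ibu/PlatformBackupRestoreWithIBGU.yaml
  name: oadp-cm
EOF
# Extract each referenced source-crs path and report whether it exists.
for f in $(grep -oE 'source-crs/[A-Za-z0-9/._-]+\.yaml' /tmp/kustomization-files.yaml); do
  [ -f "$f" ] && echo "found: $f" || echo "missing: $f"
done
```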
- Push the changes to your Git repository.
Labeling extra manifests for the image-based upgrade with GitOps ZTP
Label your extra manifests so that the Lifecycle Agent can extract the resources that are labeled with the `lca.openshift.io/target-ocp-version: <target_version>` label.
Prerequisites

- You have provisioned one or more managed clusters with GitOps ZTP.
- You have logged in as a user with `cluster-admin` privileges.
- You have generated a seed image from a compatible seed cluster.
- You have created a separate partition on the target cluster for the container images that is shared between stateroots. For more information, see "Configuring a shared container directory between ostree stateroots when using GitOps ZTP".
- You have deployed a version of Lifecycle Agent that is compatible with the version used with the seed image.
Procedure

- Label your required extra manifests with the `lca.openshift.io/target-ocp-version: <target_version>` label in your existing site `PolicyGenTemplate` CR:

  ```yaml
  apiVersion: ran.openshift.io/v1
  kind: PolicyGenTemplate
  metadata:
    name: example-sno
  spec:
    bindingRules:
      sites: "example-sno"
      du-profile: "4.15"
    mcp: "master"
    sourceFiles:
    - fileName: SriovNetwork.yaml
      policyName: "config-policy"
      metadata:
        name: "sriov-nw-du-fh"
        labels:
          lca.openshift.io/target-ocp-version: "4.15"
      spec:
        resourceName: du_fh
        vlan: 140
    - fileName: SriovNetworkNodePolicy.yaml
      policyName: "config-policy"
      metadata:
        name: "sriov-nnp-du-fh"
        labels:
          lca.openshift.io/target-ocp-version: "4.15"
      spec:
        deviceType: netdevice
        isRdma: false
        nicSelector:
          pfNames: ["ens5f0"]
        numVfs: 8
        priority: 10
        resourceName: du_fh
    - fileName: SriovNetwork.yaml
      policyName: "config-policy"
      metadata:
        name: "sriov-nw-du-mh"
        labels:
          lca.openshift.io/target-ocp-version: "4.15"
      spec:
        resourceName: du_mh
        vlan: 150
    - fileName: SriovNetworkNodePolicy.yaml
      policyName: "config-policy"
      metadata:
        name: "sriov-nnp-du-mh"
        labels:
          lca.openshift.io/target-ocp-version: "4.15"
      spec:
        deviceType: vfio-pci
        isRdma: false
        nicSelector:
          pfNames: ["ens7f0"]
        numVfs: 8
        priority: 10
        resourceName: du_mh
    - fileName: DefaultCatsrc.yaml
      policyName: "config-policy"
      metadata:
        name: default-cat-source
        namespace: openshift-marketplace
        labels:
          lca.openshift.io/target-ocp-version: "4.15"
      spec:
        displayName: default-cat-source
        image: quay.io/example-org/example-catalog:v1
  ```

  Ensure that the `lca.openshift.io/target-ocp-version` label matches either the y-stream or the z-stream of the target OpenShift Container Platform version that is specified in the `spec.seedImageRef.version` field of the `ImageBasedUpgrade` CR. The Lifecycle Agent applies only the CRs that match the specified version. If you do not want to use custom catalog sources, remove the `DefaultCatsrc.yaml` entry.
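Because only labeled resources survive the upgrade, a quick audit of how many `sourceFiles` entries carry the label can catch omissions before you push. The fragment below is inlined as a temporary file for illustration; against your real repository you would run the same `grep` on the `PolicyGenTemplate` file itself:

```shell
# Inlined PolicyGenTemplate fragment (illustrative).
cat > /tmp/pgt-fragment.yaml <<'EOF'
  - fileName: SriovNetwork.yaml
    metadata:
      labels:
        lca.openshift.io/target-ocp-version: "4.15"
  - fileName: DefaultCatsrc.yaml
    metadata:
      labels:
        lca.openshift.io/target-ocp-version: "4.15"
EOF
# Count the sourceFiles entries that carry the target-ocp-version label.
grep -c 'lca.openshift.io/target-ocp-version' /tmp/pgt-fragment.yaml
```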
- Push the changes to your Git repository.