Installing Operators for the image-based upgrade
Prepare your clusters for the upgrade by installing the Lifecycle Agent and the OADP Operator.
To install the OADP Operator with the non-GitOps method, see "Installing the OADP Operator".
Installing the Lifecycle Agent by using the CLI
You can use the OpenShift CLI (`oc`) to install the Lifecycle Agent.
Prerequisites

- You have installed the OpenShift CLI (`oc`).
- You have logged in as a user with `cluster-admin` privileges.
Procedure

1. Create a `Namespace` object YAML file for the Lifecycle Agent:

   ```yaml
   apiVersion: v1
   kind: Namespace
   metadata:
     name: openshift-lifecycle-agent
     annotations:
       workload.openshift.io/allowed: management
   ```

2. Create the `Namespace` CR by running the following command:

   ```
   $ oc create -f <namespace_filename>.yaml
   ```
3. Create an `OperatorGroup` object YAML file for the Lifecycle Agent:

   ```yaml
   apiVersion: operators.coreos.com/v1
   kind: OperatorGroup
   metadata:
     name: openshift-lifecycle-agent
     namespace: openshift-lifecycle-agent
   spec:
     targetNamespaces:
       - openshift-lifecycle-agent
   ```

4. Create the `OperatorGroup` CR by running the following command:

   ```
   $ oc create -f <operatorgroup_filename>.yaml
   ```
5. Create a `Subscription` CR for the Lifecycle Agent:

   ```yaml
   apiVersion: operators.coreos.com/v1alpha1
   kind: Subscription
   metadata:
     name: openshift-lifecycle-agent-subscription
     namespace: openshift-lifecycle-agent
   spec:
     channel: "stable"
     name: lifecycle-agent
     source: redhat-operators
     sourceNamespace: openshift-marketplace
   ```

6. Create the `Subscription` CR by running the following command:

   ```
   $ oc create -f <subscription_filename>.yaml
   ```
Verification

1. To verify that the installation succeeded, inspect the CSV resource by running the following command:

   ```
   $ oc get csv -n openshift-lifecycle-agent
   ```

   Example output:

   ```
   NAME                      DISPLAY                     VERSION   REPLACES   PHASE
   lifecycle-agent.v4.19.0   Openshift Lifecycle Agent   4.19.0               Succeeded
   ```

2. Verify that the Lifecycle Agent is up and running by running the following command:

   ```
   $ oc get deploy -n openshift-lifecycle-agent
   ```

   Example output:

   ```
   NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
   lifecycle-agent-controller-manager   1/1     1            1           14s
   ```
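If the CSV does not reach the `Succeeded` phase, the `Subscription` and `InstallPlan` resources usually indicate why. As a quick sketch using standard OLM resources (the subscription name matches the one created above):

```
$ oc get subscription,installplan -n openshift-lifecycle-agent
$ oc describe subscription openshift-lifecycle-agent-subscription -n openshift-lifecycle-agent
```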
Installing the Lifecycle Agent by using the web console
You can use the OpenShift Container Platform web console to install the Lifecycle Agent.
Prerequisites

- You have logged in as a user with `cluster-admin` privileges.
Procedure

1. In the OpenShift Container Platform web console, navigate to Ecosystem → Software Catalog.
2. Search for the Lifecycle Agent in the list of available Operators, and then click Install.
3. On the Install Operator page, under A specific namespace on the cluster, select openshift-lifecycle-agent.
4. Click Install.

Verification

1. To confirm that the installation is successful:

   a. Click Ecosystem → Installed Operators.
   b. Ensure that the Lifecycle Agent is listed in the `openshift-lifecycle-agent` project with a Status of InstallSucceeded.

   Note: During installation, an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

2. If the Operator is not installed successfully:

   a. Click Ecosystem → Installed Operators, and inspect the Operator Subscriptions and Install Plans tabs for any failures or errors under Status.
   b. Click Workloads → Pods, and check the logs of pods in the `openshift-lifecycle-agent` project.
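If you prefer to run this check from the CLI, the same information is available through `oc`. A minimal sketch, using the deployment name shown in the CLI installation section:

```
$ oc get pods -n openshift-lifecycle-agent
$ oc logs deployment/lifecycle-agent-controller-manager -n openshift-lifecycle-agent
```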
Installing the Lifecycle Agent with GitOps ZTP
Install the Lifecycle Agent with GitOps Zero Touch Provisioning (ZTP) to perform an image-based upgrade.
Procedure

1. Extract the following CRs from the `ztp-site-generate` container image and push them to the `source-crs` directory:

   Example `LcaSubscriptionNS.yaml` file:

   ```yaml
   apiVersion: v1
   kind: Namespace
   metadata:
     name: openshift-lifecycle-agent
     annotations:
       workload.openshift.io/allowed: management
       ran.openshift.io/ztp-deploy-wave: "2"
     labels:
       kubernetes.io/metadata.name: openshift-lifecycle-agent
   ```

   Example `LcaSubscriptionOperGroup.yaml` file:

   ```yaml
   apiVersion: operators.coreos.com/v1
   kind: OperatorGroup
   metadata:
     name: lifecycle-agent-operatorgroup
     namespace: openshift-lifecycle-agent
     annotations:
       ran.openshift.io/ztp-deploy-wave: "2"
   spec:
     targetNamespaces:
       - openshift-lifecycle-agent
   ```

   Example `LcaSubscription.yaml` file:

   ```yaml
   apiVersion: operators.coreos.com/v1alpha1
   kind: Subscription
   metadata:
     name: lifecycle-agent
     namespace: openshift-lifecycle-agent
     annotations:
       ran.openshift.io/ztp-deploy-wave: "2"
   spec:
     channel: "stable"
     name: lifecycle-agent
     source: redhat-operators
     sourceNamespace: openshift-marketplace
     installPlanApproval: Manual
   status:
     state: AtLatestKnown
   ```

   Example directory structure:

   ```
   ├── kustomization.yaml
   ├── sno
   │   ├── example-cnf.yaml
   │   ├── common-ranGen.yaml
   │   ├── group-du-sno-ranGen.yaml
   │   ├── group-du-sno-validator-ranGen.yaml
   │   └── ns.yaml
   ├── source-crs
   │   ├── LcaSubscriptionNS.yaml
   │   ├── LcaSubscriptionOperGroup.yaml
   │   ├── LcaSubscription.yaml
   ```
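   The exact extraction workflow is described in the GitOps ZTP documentation for your release. As a minimal sketch, assuming `podman` is available and a 4.19 `ztp-site-generate` image (the image tag and repository paths are assumptions; adjust them for your release):

   ```
   $ mkdir -p ./out
   $ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.19 extract /home/ztp --tar | tar x -C ./out
   $ cp ./out/source-crs/LcaSubscription*.yaml <your_git_repo>/source-crs/
   ```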
2. Add the CRs to your common `PolicyGenerator`:

   ```yaml
   apiVersion: policy.open-cluster-management.io/v1
   kind: PolicyGenerator
   metadata:
     name: common-latest
   placementBindingDefaults:
     name: common-placement-binding
   policyDefaults:
     namespace: ztp-common
     placement:
       labelSelector:
         common: "true"
         du-profile: "latest"
     remediationAction: inform
     severity: low
     namespaceSelector:
       exclude:
         - kube-*
       include:
         - '*'
     evaluationInterval:
       compliant: 10m
       noncompliant: 10s
   policies:
     - name: common-latest-subscriptions-policy
       policyAnnotations:
         ran.openshift.io/ztp-deploy-wave: "2"
       manifests:
         - path: source-crs/LcaSubscriptionNS.yaml
         - path: source-crs/LcaSubscriptionOperGroup.yaml
         - path: source-crs/LcaSubscription.yaml
   [...]
   ```
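   GitOps ZTP renders this file through kustomize, so it must be referenced from the `kustomization.yaml` shown in the example directory structure. A minimal sketch, assuming you save the `PolicyGenerator` above as `sno/common-ranGen.yaml` (the filename is illustrative and matches the example tree):

   ```yaml
   # kustomization.yaml (sketch): PolicyGenerator manifests are invoked as
   # kustomize generator plugins rather than plain resources.
   apiVersion: kustomize.config.k8s.io/v1beta1
   kind: Kustomization
   generators:
     - sno/common-ranGen.yaml
   ```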
Installing and configuring the OADP Operator with GitOps ZTP
Install and configure the OADP Operator with GitOps ZTP before starting the upgrade.
Procedure

1. Extract the following CRs from the `ztp-site-generate` container image and push them to the `source-crs` directory:

   Example `OadpSubscriptionNS.yaml` file:

   ```yaml
   apiVersion: v1
   kind: Namespace
   metadata:
     name: openshift-adp
     annotations:
       ran.openshift.io/ztp-deploy-wave: "2"
     labels:
       kubernetes.io/metadata.name: openshift-adp
   ```

   Example `OadpSubscriptionOperGroup.yaml` file:

   ```yaml
   apiVersion: operators.coreos.com/v1
   kind: OperatorGroup
   metadata:
     name: redhat-oadp-operator
     namespace: openshift-adp
     annotations:
       ran.openshift.io/ztp-deploy-wave: "2"
   spec:
     targetNamespaces:
       - openshift-adp
   ```

   Example `OadpSubscription.yaml` file:

   ```yaml
   apiVersion: operators.coreos.com/v1alpha1
   kind: Subscription
   metadata:
     name: redhat-oadp-operator
     namespace: openshift-adp
     annotations:
       ran.openshift.io/ztp-deploy-wave: "2"
   spec:
     channel: stable-1.4
     name: redhat-oadp-operator
     source: redhat-operators
     sourceNamespace: openshift-marketplace
     installPlanApproval: Manual
   status:
     state: AtLatestKnown
   ```

   Example `OadpOperatorStatus.yaml` file:

   ```yaml
   apiVersion: operators.coreos.com/v1
   kind: Operator
   metadata:
     name: redhat-oadp-operator.openshift-adp
     annotations:
       ran.openshift.io/ztp-deploy-wave: "2"
   status:
     components:
       refs:
         - kind: Subscription
           namespace: openshift-adp
           conditions:
             - type: CatalogSourcesUnhealthy
               status: "False"
         - kind: InstallPlan
           namespace: openshift-adp
           conditions:
             - type: Installed
               status: "True"
         - kind: ClusterServiceVersion
           namespace: openshift-adp
           conditions:
             - type: Succeeded
               status: "True"
               reason: InstallSucceeded
   ```

   Example directory structure:

   ```
   ├── kustomization.yaml
   ├── sno
   │   ├── example-cnf.yaml
   │   ├── common-ranGen.yaml
   │   ├── group-du-sno-ranGen.yaml
   │   ├── group-du-sno-validator-ranGen.yaml
   │   └── ns.yaml
   ├── source-crs
   │   ├── OadpSubscriptionNS.yaml
   │   ├── OadpSubscriptionOperGroup.yaml
   │   ├── OadpSubscription.yaml
   │   ├── OadpOperatorStatus.yaml
   ```
2. Add the CRs to your common `PolicyGenTemplate`:

   ```yaml
   apiVersion: ran.openshift.io/v1
   kind: PolicyGenTemplate
   metadata:
     name: "example-common-latest"
     namespace: "ztp-common"
   spec:
     bindingRules:
       common: "true"
       du-profile: "latest"
     sourceFiles:
       - fileName: OadpSubscriptionNS.yaml
         policyName: "subscriptions-policy"
       - fileName: OadpSubscriptionOperGroup.yaml
         policyName: "subscriptions-policy"
       - fileName: OadpSubscription.yaml
         policyName: "subscriptions-policy"
       - fileName: OadpOperatorStatus.yaml
         policyName: "subscriptions-policy"
   [...]
   ```
3. Create the `DataProtectionApplication` CR and the S3 secret only for the target cluster:

   a. Extract the following CRs from the `ztp-site-generate` container image and push them to the `source-crs` directory:

      Example `OadpDataProtectionApplication.yaml` file:

      ```yaml
      apiVersion: oadp.openshift.io/v1alpha1
      kind: DataProtectionApplication
      metadata:
        name: dataprotectionapplication
        namespace: openshift-adp
        annotations:
          ran.openshift.io/ztp-deploy-wave: "100"
      spec:
        configuration:
          restic:
            enable: false # (1)
          velero:
            defaultPlugins:
              - aws
              - openshift
            resourceTimeout: 10m
        backupLocations:
          - velero:
              config:
                profile: "default"
                region: minio
                s3Url: $url
                insecureSkipTLSVerify: "true"
                s3ForcePathStyle: "true"
              provider: aws
              default: true
              credential:
                key: cloud
                name: cloud-credentials
              objectStorage:
                bucket: $bucketName # (2)
                prefix: $prefixName # (2)
      status:
        conditions:
          - reason: Complete
            status: "True"
            type: Reconciled
      ```

      (1) The `spec.configuration.restic.enable` field must be set to `false` for an image-based upgrade because persistent volume contents are retained and reused after the upgrade.
      (2) The `bucket` defines the bucket name that is created in the S3 backend. The `prefix` defines the name of the subdirectory that is automatically created in the bucket. The combination of `bucket` and `prefix` must be unique for each target cluster to avoid interference between them. To ensure a unique storage directory for each target cluster, you can use the Red Hat Advanced Cluster Management hub template function, for example, `prefix: {{hub .ManagedClusterName hub}}`.

      Example `OadpSecret.yaml` file:

      ```yaml
      apiVersion: v1
      kind: Secret
      metadata:
        name: cloud-credentials
        namespace: openshift-adp
        annotations:
          ran.openshift.io/ztp-deploy-wave: "100"
      type: Opaque
      ```

      Example `OadpBackupStorageLocationStatus.yaml` file:

      ```yaml
      apiVersion: velero.io/v1
      kind: BackupStorageLocation
      metadata:
        name: dataprotectionapplication-1 # (3)
        namespace: openshift-adp
        annotations:
          ran.openshift.io/ztp-deploy-wave: "100"
      status:
        phase: Available
      ```

      (3) The `name` value in the `BackupStorageLocation` resource must follow the `<DataProtectionApplication.metadata.name>-<index>` pattern. The `<index>` represents the position of the corresponding `backupLocations` entry in the `spec.backupLocations` field of the `DataProtectionApplication` resource, starting from 1. If the `metadata.name` value of the `DataProtectionApplication` resource is changed in the `OadpDataProtectionApplication.yaml` file, update the `metadata.name` field in the `BackupStorageLocation` resource accordingly.

      The `OadpBackupStorageLocationStatus.yaml` CR verifies the availability of backup storage locations created by OADP.
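      To illustrate the index pattern, here is a hypothetical sketch: if the `DataProtectionApplication` CR named `dataprotectionapplication` defined two `backupLocations` entries, status tracking would need two `BackupStorageLocation` CRs whose names carry the matching indexes:

      ```yaml
      # Sketch: status-tracking CRs for a hypothetical DataProtectionApplication
      # named "dataprotectionapplication" with two backupLocations entries.
      apiVersion: velero.io/v1
      kind: BackupStorageLocation
      metadata:
        name: dataprotectionapplication-1 # first spec.backupLocations entry
        namespace: openshift-adp
      status:
        phase: Available
      ---
      apiVersion: velero.io/v1
      kind: BackupStorageLocation
      metadata:
        name: dataprotectionapplication-2 # second spec.backupLocations entry
        namespace: openshift-adp
      status:
        phase: Available
      ```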
   b. Add the CRs to your site `PolicyGenTemplate` with overrides:

      ```yaml
      apiVersion: ran.openshift.io/v1
      kind: PolicyGenTemplate
      metadata:
        name: "example-cnf"
        namespace: "ztp-site"
      spec:
        bindingRules:
          sites: "example-cnf"
          du-profile: "latest"
        mcp: "master"
        sourceFiles:
          ...
          - fileName: OadpSecret.yaml
            policyName: "config-policy"
            data:
              cloud: <your_credentials> # (1)
          - fileName: OadpDataProtectionApplication.yaml
            policyName: "config-policy"
            spec:
              backupLocations: # (2)
                - velero:
                    config:
                      region: minio
                      s3Url: <your_S3_URL> # (3)
                      profile: "default"
                      insecureSkipTLSVerify: "true"
                      s3ForcePathStyle: "true"
                    provider: aws
                    default: true
                    credential:
                      key: cloud
                      name: cloud-credentials
                    objectStorage:
                      bucket: <your_bucket_name> # (4)
                      prefix: <cluster_name> # (4)
          - fileName: OadpBackupStorageLocationStatus.yaml
            policyName: "config-policy"
      ```

      (1) Specify your credentials for your S3 storage backend.
      (2) If more than one `backupLocations` entry is defined in the `OadpDataProtectionApplication` CR, ensure that each location has a corresponding `OadpBackupStorageLocation` CR added for status tracking. Ensure that the name of each additional `OadpBackupStorageLocation` CR is overridden with the correct index, as described in the example `OadpBackupStorageLocationStatus.yaml` file.
      (3) Specify the URL for your S3-compatible bucket.
      (4) The `bucket` defines the bucket name that is created in the S3 backend. The `prefix` defines the name of the subdirectory that is automatically created in the `bucket`. The combination of `bucket` and `prefix` must be unique for each target cluster to avoid interference between them. To ensure a unique storage directory for each target cluster, you can use the Red Hat Advanced Cluster Management hub template function, for example, `prefix: {{hub .ManagedClusterName hub}}`.
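The `cloud` key in the `OadpSecret.yaml` override holds the credentials file that Velero uses to reach the S3 backend. As a minimal sketch for an AWS-style S3 backend such as MinIO (the field names are the standard AWS shared-credentials keys; the values are placeholders, and depending on how you manage the Secret you might need to base64-encode the content):

```
[default]
aws_access_key_id=<access_key_id>
aws_secret_access_key=<secret_access_key>
```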