Installing managed clusters with {rh-rhacm} and ClusterInstance resources
You can provision OpenShift Container Platform clusters at scale with Red Hat Advanced Cluster Management (RHACM) using the assisted service and the GitOps plugin policy generator with core-reduction technology enabled. The GitOps Zero Touch Provisioning (ZTP) pipeline performs the cluster installations. GitOps ZTP can be used in a disconnected environment.
Important
Using PolicyGenTemplate CRs to manage and deploy policies to managed clusters will be deprecated in an upcoming OpenShift Container Platform release.
Equivalent and improved functionality is available using Red Hat Advanced Cluster Management (RHACM) and PolicyGenerator CRs.
For more information about PolicyGenerator resources, see the RHACM Integrating Policy Generator documentation.
GitOps ZTP and Topology Aware Lifecycle Manager
GitOps Zero Touch Provisioning (ZTP) generates installation and configuration CRs from manifests stored in Git. These artifacts are applied to a centralized hub cluster where Red Hat Advanced Cluster Management (RHACM), the assisted service, and the Topology Aware Lifecycle Manager (TALM) use the CRs to install and configure the managed cluster. The configuration phase of the GitOps ZTP pipeline uses the TALM to orchestrate the application of the configuration CRs to the cluster. There are several key integration points between GitOps ZTP and the TALM.
- Inform policies
-
By default, GitOps ZTP creates all policies with a remediation action of inform. These policies cause RHACM to report on the compliance status of clusters relevant to the policies but do not apply the desired configuration. During the GitOps ZTP process, after OpenShift installation, the TALM steps through the created inform policies and enforces them on the target managed clusters. This applies the configuration to the managed cluster. Outside of the GitOps ZTP phase of the cluster lifecycle, this allows you to change policies without the risk of immediately rolling those changes out to affected managed clusters. You can control the timing and the set of remediated clusters by using TALM.
- Automatic creation of ClusterGroupUpgrade CRs
-
To automate the initial configuration of newly deployed clusters, TALM monitors the state of all ManagedCluster CRs on the hub cluster. Any ManagedCluster CR that does not have a ztp-done label applied, including newly created ManagedCluster CRs, causes the TALM to automatically create a ClusterGroupUpgrade CR with the following characteristics:
-
The ClusterGroupUpgrade CR is created and enabled in the ztp-install namespace.
-
The ClusterGroupUpgrade CR has the same name as the ManagedCluster CR.
-
The cluster selector includes only the cluster associated with that ManagedCluster CR.
-
The set of managed policies includes all policies that RHACM has bound to the cluster at the time the ClusterGroupUpgrade is created.
-
Pre-caching is disabled.
-
Timeout set to 4 hours (240 minutes).
The automatic creation of an enabled ClusterGroupUpgrade ensures that initial zero-touch deployment of clusters proceeds without the need for user intervention. Additionally, the automatic creation of a ClusterGroupUpgrade CR for any ManagedCluster without the ztp-done label allows a failed GitOps ZTP installation to be restarted by simply deleting the ClusterGroupUpgrade CR for the cluster.
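The following is an illustrative sketch of the kind of ClusterGroupUpgrade CR that the TALM creates automatically. The cluster name and policy names are placeholders, and only the fields that correspond to the characteristics listed above are shown:
apiVersion: ran.openshift.io/v1alpha1
kind: ClusterGroupUpgrade
metadata:
  name: example-sno
  namespace: ztp-install
spec:
  clusters:
  - example-sno
  enable: true
  managedPolicies:
  - example-common-config-policy
  - example-group-du-sno-config-policy
  preCaching: false
  remediationStrategy:
    timeout: 240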
- Waves
-
Each policy generated from a PolicyGenerator or PolicyGentemplate CR includes a ztp-deploy-wave annotation. This annotation is based on the same annotation from each CR which is included in that policy. The wave annotation is used to order the policies in the auto-generated ClusterGroupUpgrade CR. The wave annotation is not used other than for the auto-generated ClusterGroupUpgrade CR.
Note
All CRs in the same policy must have the same setting for the ztp-deploy-wave annotation. The default value of this annotation for each CR can be overridden in the PolicyGenerator or PolicyGentemplate CR. The wave annotation in the source CR is used for determining and setting the policy wave annotation. This annotation is removed from each built CR which is included in the generated policy at runtime.
The TALM applies the configuration policies in the order specified by the wave annotations. The TALM waits for each policy to be compliant before moving to the next policy. It is important to ensure that the wave annotation for each CR takes into account any prerequisites for those CRs to be applied to the cluster. For example, an Operator must be installed before or concurrently with the configuration for the Operator. Similarly, the CatalogSource for an Operator must be installed in a wave before or concurrently with the Operator Subscription. The default wave value for each CR takes these prerequisites into account.
Note
Multiple CRs and policies can share the same wave number. Having fewer policies can result in faster deployments and lower CPU usage. It is a best practice to group many CRs into relatively few waves.
To check the default wave value in each source CR, run the following command against the out/source-crs directory that is extracted from the ztp-site-generate container image:
$ grep -r "ztp-deploy-wave" out/source-crs
- Phase labels
-
The ClusterGroupUpgrade CR is automatically created and includes directives to annotate the ManagedCluster CR with labels at the start and end of the GitOps ZTP process.
When GitOps ZTP configuration postinstallation commences, the ManagedCluster has the ztp-running label applied. When all policies are remediated to the cluster and are fully compliant, these directives cause the TALM to remove the ztp-running label and apply the ztp-done label.
For deployments that make use of the informDuValidator policy, the ztp-done label is applied when the cluster is fully ready for deployment of applications. This includes all reconciliation and resulting effects of the GitOps ZTP applied configuration CRs. The ztp-done label affects automatic ClusterGroupUpgrade CR creation by TALM. Do not manipulate this label after the initial GitOps ZTP installation of the cluster.
- Linked CRs
-
The automatically created ClusterGroupUpgrade CR has the owner reference set as the ManagedCluster from which it was derived. This reference ensures that deleting the ManagedCluster CR causes the instance of the ClusterGroupUpgrade to be deleted along with any supporting resources.
Overview of deploying managed clusters with GitOps ZTP
Red Hat Advanced Cluster Management (RHACM) uses GitOps Zero Touch Provisioning (ZTP) to deploy single-node OpenShift Container Platform clusters, three-node clusters, and standard clusters. You manage site configuration data as OpenShift Container Platform custom resources (CRs) in a Git repository. GitOps ZTP uses a declarative GitOps approach for a develop once, deploy anywhere model to deploy the managed clusters.
The deployment of the clusters includes:
-
Installing the host operating system (RHCOS) on a blank server
-
Deploying OpenShift Container Platform
-
Creating cluster policies and site subscriptions
-
Making the necessary network configurations to the server operating system
-
Deploying profile Operators and performing any needed software-related configuration, such as performance profile, PTP, and SR-IOV
Overview of the managed site installation process
After you apply the managed site custom resources (CRs) on the hub cluster, the following actions happen automatically:
-
A Discovery image ISO file is generated and booted on the target host.
-
When the ISO file successfully boots on the target host it reports the host hardware information to RHACM.
-
After all hosts are discovered, OpenShift Container Platform is installed.
-
When OpenShift Container Platform finishes installing, the hub installs the klusterlet service on the target cluster.
-
The requested add-on services are installed on the target cluster.
The Discovery image ISO process is complete when the Agent CR for the managed cluster is created on the hub cluster.
Important
The target bare-metal host must meet the networking, firmware, and hardware requirements listed in Recommended single-node OpenShift cluster configuration for vDU application workloads.
Creating the managed bare-metal host secrets
Add the required Secret custom resources (CRs) for the managed bare-metal host to the hub cluster. You need a secret for the GitOps Zero Touch Provisioning (ZTP) pipeline to access the Baseboard Management Controller (BMC) and a secret for the assisted installer service to pull cluster installation images from the registry.
Note
The secrets are referenced from the ClusterInstance CR by name. The namespace
must match the ClusterInstance namespace.
-
Create a YAML secret file containing credentials for the host Baseboard Management Controller (BMC) and a pull secret required for installing OpenShift and all add-on cluster Operators:
-
Save the following YAML as the file example-sno-secret.yaml:
apiVersion: v1
kind: Secret
metadata:
  name: example-sno-bmc-secret
  namespace: example-sno
data:
  password: <base64_password>
  username: <base64_username>
type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  name: pull-secret
  namespace: example-sno
data:
  .dockerconfigjson: <pull_secret>
type: kubernetes.io/dockerconfigjson
- Must match the namespace configured in the related ClusterInstance CR.
- Base64-encoded values for password and username.
- Must match the namespace configured in the related ClusterInstance CR.
- Base64-encoded pull secret.
-
-
Add the relative path to example-sno-secret.yaml to the kustomization.yaml file that you use to install the cluster.
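For reference, a minimal kustomization.yaml entry for the secrets file might look like the following sketch. The surrounding entries are illustrative and depend on how you organize your site repository:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- example-sno-secret.yaml
# ... other site installation files for the cluster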
Configuring Discovery ISO kernel arguments for installations using GitOps ZTP
The GitOps Zero Touch Provisioning (ZTP) workflow uses the Discovery ISO as part of the OpenShift Container Platform installation process on managed bare-metal hosts. You can edit the InfraEnv resource to specify kernel arguments for the Discovery ISO. This is useful for cluster installations with specific environmental requirements.
For example, configure the rd.net.timeout.carrier kernel argument for the Discovery ISO to facilitate static networking for the cluster or to receive a DHCP address before downloading the root file system during installation.
Note
In OpenShift Container Platform 4.19, you can only add kernel arguments. You cannot replace or delete kernel arguments.
-
You have installed the OpenShift CLI (oc).
-
You have logged in to the hub cluster as a user with cluster-admin privileges.
-
Create the InfraEnv CR and edit the spec.kernelArguments specification to configure kernel arguments.
-
Save the following YAML in an InfraEnv-example.yaml file:
Note
The InfraEnv CR in this example uses template syntax such as {{ .Cluster.ClusterName }} that is populated based on values in the ClusterInstance CR. The ClusterInstance CR automatically populates values for these templates during deployment. Do not edit the templates manually.
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  annotations:
    argocd.argoproj.io/sync-wave: "1"
  name: "{{ .Cluster.ClusterName }}"
  namespace: "{{ .Cluster.ClusterName }}"
spec:
  clusterRef:
    name: "{{ .Cluster.ClusterName }}"
    namespace: "{{ .Cluster.ClusterName }}"
  kernelArguments:
    - operation: append
      value: audit=0
    - operation: append
      value: trace=1
  sshAuthorizedKey: "{{ .Site.SshPublicKey }}"
  proxy: "{{ .Cluster.ProxySettings }}"
  pullSecretRef:
    name: "{{ .Site.PullSecretRef.Name }}"
  ignitionConfigOverride: "{{ .Cluster.IgnitionConfigOverride }}"
  nmStateConfigLabelSelector:
    matchLabels:
      nmstate-label: "{{ .Cluster.ClusterName }}"
  additionalNTPSources: "{{ .Cluster.AdditionalNTPSources }}"
- Specify the append operation to add a kernel argument.
- Specify the kernel argument you want to configure. This example configures the audit kernel argument and the trace kernel argument.
-
-
Commit the InfraEnv-example.yaml file to your Git repository and push your changes. The following example shows a sample Git repository structure:
~/example-ztp/install
└── site-install
     ├── clusterinstance-example.yaml
     ├── InfraEnv-example.yaml
     └── kustomization.yaml
-
Update the kustomization.yaml file to use the configMapGenerator field to package the InfraEnv CR into a ConfigMap:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- clusterinstance-example.yaml
configMapGenerator:
- name: custom-infraenv-cm
  namespace: example-cluster
  files:
  - InfraEnv-example.yaml
generatorOptions:
  disableNameSuffixHash: true
- The name of the ClusterInstance CR.
- The name of the ConfigMap that contains the custom InfraEnv CR.
- The namespace must match the ClusterInstance namespace.
-
In your ClusterInstance CR, reference the ConfigMap in the spec.templateRefs field:
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "example-cluster"
  namespace: "example-cluster"
spec:
  clusterName: "example-cluster"
  templateRefs:
    - name: custom-infraenv-cm
      namespace: example-cluster
# ...
- Reference to the ConfigMap CR that contains the custom InfraEnv CR template.
-
Commit the ClusterInstance CR and kustomization.yaml to your Git repository and push your changes.
When the Argo CD pipeline syncs the changes, the SiteConfig Operator uses the custom InfraEnv-example CR from the generated ConfigMap to configure the infrastructure environment, including the custom kernel arguments.
To verify that the kernel arguments are applied, after the Discovery image verifies that OpenShift Container Platform is ready for installation, you can SSH to the target host before the installation process begins. At that point, you can view the kernel arguments for the Discovery ISO in the /proc/cmdline file.
-
Begin an SSH session with the target host:
$ ssh -i /path/to/privatekey core@<host_name>
-
View the system’s kernel arguments by using the following command:
$ cat /proc/cmdline
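If the kernel arguments from the earlier InfraEnv example were appended, they appear at the end of the kernel command line. The following output is a representative sketch only; the exact boot arguments vary by image version and host:
BOOT_IMAGE=(hd0,gpt3)/images/pxeboot/vmlinuz ... ignition.firstboot ignition.platform.id=metal audit=0 trace=1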
Deploying a managed cluster with ClusterInstance and GitOps ZTP
Use the following procedure to create a ClusterInstance custom resource (CR) and related files and initiate the GitOps Zero Touch Provisioning (ZTP) cluster deployment.
Note
You require Red Hat Advanced Cluster Management (RHACM) version 2.12 or later to install the SiteConfig Operator and use the ClusterInstance CR.
-
You have installed the OpenShift CLI (oc).
-
You installed the SiteConfig Operator in the hub cluster.
-
You have logged in to the hub cluster as a user with cluster-admin privileges.
-
You configured the hub cluster for generating the required installation and policy CRs.
-
You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and you must configure it as a source repository for the ArgoCD application. See "Preparing the GitOps ZTP site configuration repository" for more information.
Note
When you create the source repository, ensure that you patch the ArgoCD application with the argocd/deployment/argocd-openshift-gitops-patch.json patch file that you extract from the ztp-site-generate container. See "Configuring the hub cluster with ArgoCD".
-
To be ready for provisioning managed clusters, you require the following for each bare-metal host:
- Network connectivity
-
Your network requires DNS. Managed cluster hosts should be reachable from the hub cluster. Ensure that Layer 3 connectivity exists between the hub cluster and the managed cluster host.
- Baseboard Management Controller (BMC) details
-
GitOps ZTP uses BMC username and password details to connect to the BMC during cluster installation. The GitOps ZTP plugin manages the ManagedCluster CRs on the hub cluster based on the ClusterInstance CR in your site Git repo. You create individual BMCSecret CRs for each host manually.
-
Create the required managed cluster secrets on the hub cluster. These resources must be in a namespace with a name matching the cluster name. For example, in out/argocd/example/clusterinstance/example-sno.yaml, the cluster name and namespace is example-sno.
-
Export the cluster namespace by running the following command:
$ export CLUSTERNS=example-sno
-
Create the namespace:
$ oc create namespace $CLUSTERNS
-
-
Create pull secret and BMC Secret CRs for the managed cluster. The pull secret must contain all the credentials necessary for installing OpenShift Container Platform and all required Operators. See "Creating the managed bare-metal host secrets" for more information.
Note
The secrets are referenced from the ClusterInstance custom resource (CR) by name. The namespace must match the ClusterInstance namespace.
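If you saved the secrets as example-sno-secret.yaml, as described in "Creating the managed bare-metal host secrets", one way to apply them to the hub cluster is the following command. The file name comes from that earlier example and is illustrative:
$ oc apply -f example-sno-secret.yaml
-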
Create a ClusterInstance CR for your cluster in your local clone of the Git repository:
-
Choose the appropriate example for your CR from the out/argocd/example/clusterinstance/ folder. The folder includes example files for single-node, three-node, and standard clusters:
-
example-sno.yaml
-
example-3node.yaml
-
example-standard.yaml
-
-
Change the cluster and host details in the example file to match the type of cluster you want. For example:
Example single-node OpenShift ClusterInstance CR
# example-node1-bmh-secret & assisted-deployment-pull-secret need to be created under same namespace example-ai-sno
---
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "example-ai-sno"
  namespace: "example-ai-sno"
spec:
  baseDomain: "example.com"
  pullSecretRef:
    name: "assisted-deployment-pull-secret"
  clusterImageSetNameRef: "openshift-4.21"
  sshPublicKey: "ssh-rsa AAAA..."
  clusterName: "example-ai-sno"
  networkType: "OVNKubernetes"
  # installConfigOverrides is a generic way of passing install-config
  # parameters through the siteConfig. The 'capabilities' field configures
  # the composable openshift feature. In this 'capabilities' setting, we
  # remove all the optional set of components.
  # Notes:
  # - OperatorLifecycleManager is needed for 4.15 and later
  # - NodeTuning is needed for 4.13 and later, not for 4.12 and earlier
  # - Ingress is needed for 4.16 and later
  installConfigOverrides: |
    {
      "capabilities": {
        "baselineCapabilitySet": "None",
        "additionalEnabledCapabilities": [
          "NodeTuning",
          "OperatorLifecycleManager",
          "Ingress"
        ]
      }
    }
  # Include references to extraManifest ConfigMaps.
  extraManifestsRefs:
    - name: sno-extra-manifest-configmap
  extraLabels:
    ManagedCluster:
      # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples
      du-profile: "latest"
      # These example cluster labels correspond to the bindingRules in the PolicyGenTemplate examples in ../policygentemplates:
      # ../policygentemplates/common-ranGen.yaml will apply to all clusters with 'common: true'
      common: "true"
      # ../policygentemplates/group-du-sno-ranGen.yaml will apply to all clusters with 'group-du-sno: ""'
      group-du-sno: ""
      # ../policygentemplates/example-sno-site.yaml will apply to all clusters with 'sites: "example-sno"'
      # Normally this should match or contain the cluster name so it only applies to a single cluster
      sites: "example-sno"
  clusterNetwork:
    - cidr: 1001:1::/48
      hostPrefix: 64
  machineNetwork:
    - cidr: 1111:2222:3333:4444::/64
  serviceNetwork:
    - cidr: 1001:2::/112
  additionalNTPSources:
    - 1111:2222:3333:4444::2
  # Initiates the cluster for workload partitioning. Setting specific reserved/isolated CPUSets is done via PolicyTemplate
  # please see Workload Partitioning Feature for a complete guide.
  cpuPartitioningMode: AllNodes
  templateRefs:
    - name: ai-cluster-templates-v1
      namespace: open-cluster-management
  nodes:
    - hostName: "example-node1.example.com"
      role: "master"
      bmcAddress: "idrac-virtualmedia+https://[1111:2222:3333:4444::bbbb:1]/redfish/v1/Systems/System.Embedded.1"
      bmcCredentialsName:
        name: "example-node1-bmh-secret"
      bootMACAddress: "AA:BB:CC:DD:EE:11"
      # Use UEFISecureBoot to enable secure boot, UEFI to disable.
      bootMode: "UEFISecureBoot"
      rootDeviceHints:
        deviceName: "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0"
      # disk partition at `/var/lib/containers` with ignitionConfigOverride. Some values must be updated.
      # See DiskPartitionContainer.md in argocd folder for more details
      ignitionConfigOverride: |
        {
          "ignition": { "version": "3.2.0" },
          "storage": {
            "disks": [
              {
                "device": "/dev/disk/by-path/pci-0000:01:00.0-scsi-0:2:0:0",
                "partitions": [
                  { "label": "var-lib-containers", "sizeMiB": 0, "startMiB": 250000 }
                ],
                "wipeTable": false
              }
            ],
            "filesystems": [
              {
                "device": "/dev/disk/by-partlabel/var-lib-containers",
                "format": "xfs",
                "mountOptions": [ "defaults", "prjquota" ],
                "path": "/var/lib/containers",
                "wipeFilesystem": true
              }
            ]
          },
          "systemd": {
            "units": [
              {
                "contents": "# Generated by Butane\n[Unit]\nRequires=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\nAfter=systemd-fsck@dev-disk-by\\x2dpartlabel-var\\x2dlib\\x2dcontainers.service\n\n[Mount]\nWhere=/var/lib/containers\nWhat=/dev/disk/by-partlabel/var-lib-containers\nType=xfs\nOptions=defaults,prjquota\n\n[Install]\nRequiredBy=local-fs.target",
                "enabled": true,
                "name": "var-lib-containers.mount"
              }
            ]
          }
        }
      nodeNetwork:
        interfaces:
          - name: eno1
            macAddress: "AA:BB:CC:DD:EE:11"
        config:
          interfaces:
            - name: eno1
              type: ethernet
              state: up
              ipv4:
                enabled: false
              ipv6:
                enabled: true
                address:
                  # For SNO sites with static IP addresses, the node-specific,
                  # API and Ingress IPs should all be the same and configured on
                  # the interface
                  - ip: 1111:2222:3333:4444::aaaa:1
                    prefix-length: 64
          dns-resolver:
            config:
              search:
                - example.com
              server:
                - 1111:2222:3333:4444::2
          routes:
            config:
              - destination: ::/0
                next-hop-interface: eno1
                next-hop-address: 1111:2222:3333:4444::1
                table-id: 254
      templateRefs:
        - name: ai-node-templates-v1
          namespace: open-cluster-management
Note
For more information about BMC addressing, see the "Additional resources" section. The installConfigOverrides and ignitionConfigOverride fields are expanded in the example for ease of readability.
Note
To override the default BareMetalHost CR for a node, create a custom node template in a ConfigMap and reference it in the node-level spec.nodes.templateRefs field in the ClusterInstance CR. Ensure that you set the argocd.argoproj.io/sync-wave: "3" annotation in your override BareMetalHost CR.
-
You can inspect the default set of extra-manifest MachineConfig CRs in out/argocd/extra-manifest. They are automatically applied to the cluster when it is installed.
-
Optional: To provision additional install-time manifests on the provisioned cluster, package your extra manifest CRs in a ConfigMap and reference it in the extraManifestsRefs field of the ClusterInstance CR. For more information, see "Customizing extra installation manifests in the GitOps ZTP pipeline".
Important
For optimal cluster performance, enable crun for master and worker nodes in single-node OpenShift, single-node OpenShift with additional worker nodes, three-node OpenShift, and standard clusters.
Enable crun in a ContainerRuntimeConfig CR as an additional Day 0 install-time manifest to avoid the cluster having to reboot.
The enable-crun-master.yaml and enable-crun-worker.yaml CR files are in the out/source-crs/optional-extra-manifest/ folder that you can extract from the ztp-site-generate container.
-
-
Add the ClusterInstance CR to the kustomization.yaml file in the generators section, similar to the example shown in out/argocd/example/clusterinstance/kustomization.yaml.
-
Commit the ClusterInstance CR and associated kustomization.yaml changes in your Git repository and push the changes.
The ArgoCD pipeline detects the changes and begins the managed cluster deployment.
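Optional: To check how the SiteConfig Operator is processing the site, you can inspect the status conditions of the ClusterInstance CR on the hub cluster. The following command is a sketch that reuses the example-ai-sno cluster name and namespace from the earlier example; the exact condition names depend on the SiteConfig Operator version:
$ oc get clusterinstance example-ai-sno -n example-ai-sno -o yaml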
-
Verify that the custom roles and labels are applied after the node is deployed:
$ oc describe node example-node.example.com
Name: example-node.example.com
Roles: control-plane,example-label,master,worker
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
custom-label/parameter1=true
kubernetes.io/arch=amd64
kubernetes.io/hostname=cnfdf03.telco5gran.eng.rdu2.redhat.com
kubernetes.io/os=linux
node-role.kubernetes.io/control-plane=
node-role.kubernetes.io/example-label=
node-role.kubernetes.io/master=
node-role.kubernetes.io/worker=
node.openshift.io/os_id=rhcos
- The custom label is applied to the node.
Configuring IPsec encryption for single-node OpenShift clusters using GitOps ZTP and ClusterInstance resources
You can enable IPsec encryption in managed single-node OpenShift clusters that you install using GitOps ZTP and Red Hat Advanced Cluster Management (RHACM). You can encrypt traffic between the managed cluster and IPsec endpoints external to the managed cluster. All network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec in Transport mode.
Important
You can also configure IPsec encryption for single-node OpenShift clusters with an additional worker node by following this procedure. It is recommended to use the MachineConfig custom resource (CR) to configure IPsec encryption for single-node OpenShift clusters and single-node OpenShift clusters with an additional worker node because of their low resource availability.
-
You have installed the OpenShift CLI (oc).
-
You have logged in to the hub cluster as a user with cluster-admin privileges.
-
You have installed the SiteConfig Operator in the hub cluster.
-
You have configured RHACM and the hub cluster for generating the required installation and policy custom resources (CRs) for managed clusters.
-
You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
-
You have installed the butane utility version 0.20.0 or later.
-
You have a PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format.
-
Extract the latest version of the ztp-site-generate container source and merge it with your repository where you manage your custom site configuration data.
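For example, one way to extract the container content into a local out/ directory is shown in the following sketch. The ztp-site-generate-rhel8 image name and tag are assumptions; adjust them to match your OpenShift Container Platform release:
$ mkdir -p ./out
$ podman run --log-driver=none --rm registry.redhat.io/openshift4/ztp-site-generate-rhel8:v4.19 extract /home/ztp --tar | tar x -C ./out
-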
Configure optional-extra-manifest/ipsec/ipsec-endpoint-config.yaml with the required values that configure IPsec in the cluster. For example:
interfaces:
- name: hosta_conn
  type: ipsec
  libreswan:
    left: '%defaultroute'
    leftid: '%fromcert'
    leftmodecfgclient: false
    leftcert: left_server
    leftrsasigkey: '%cert'
    right: <external_host>
    rightid: '%fromcert'
    rightrsasigkey: '%cert'
    rightsubnet: <external_address>
    ikev2: insist
    type: tunnel
- The value of this field must match the name of the certificate used on the remote system.
- Replace <external_host> with the external host IP address or DNS hostname.
- Replace <external_address> with the IP subnet of the external host on the other side of the IPsec tunnel.
- Use the IKEv2 VPN encryption protocol only. Do not use IKEv1, which is deprecated.
-
Add the following certificates to the optional-extra-manifest/ipsec folder:
-
left_server.p12: The certificate bundle for the IPsec endpoints
-
ca.pem: The certificate authority that you signed your certificates with
The certificate files are required for the Network Security Services (NSS) database on each host. These files are imported as part of the Butane configuration in later steps.
-
-
Open a shell prompt at the optional-extra-manifest/ipsec folder of the Git repository where you maintain your custom site configuration data.
-
Run the optional-extra-manifest/ipsec/build.sh script to generate the required Butane and MachineConfig CR files. If the PKCS#12 certificate is protected with a password, set the -W argument.
Example output
out
 └── argocd
      └── example
           └── optional-extra-manifest
                └── ipsec
                     ├── 99-ipsec-master-endpoint-config.bu
                     ├── 99-ipsec-master-endpoint-config.yaml
                     ├── 99-ipsec-worker-endpoint-config.bu
                     ├── 99-ipsec-worker-endpoint-config.yaml
                     ├── build.sh
                     ├── ca.pem
                     ├── left_server.p12
                     ├── enable-ipsec.yaml
                     ├── ipsec-endpoint-config.yml
                     └── README.md
- The ipsec/build.sh script generates the Butane and endpoint configuration CRs.
- You provide the ca.pem and left_server.p12 certificate files that are relevant to your network.
-
Create an ipsec-manifests/ folder in the repository where you manage your custom site configuration data. Add the enable-ipsec.yaml and 99-ipsec-* YAML files to the directory. For example:
site-configs/
├── hub-1/
│   └── clusterinstance-site1-sno-du.yaml
├── ipsec-manifests/
│   ├── enable-ipsec.yaml
│   ├── 99-ipsec-worker-endpoint-config.yaml
│   └── 99-ipsec-master-endpoint-config.yaml
└── kustomization.yaml
-
Create a kustomization.yaml file that uses configMapGenerator to package your IPsec manifests into a ConfigMap:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- hub-1/clusterinstance-site1-sno-du.yaml
configMapGenerator:
- name: ipsec-manifests-cm
  namespace: site1-sno-du
  files:
  - ipsec-manifests/enable-ipsec.yaml
  - ipsec-manifests/99-ipsec-master-endpoint-config.yaml
  - ipsec-manifests/99-ipsec-worker-endpoint-config.yaml
generatorOptions:
  disableNameSuffixHash: true
- The namespace must match the ClusterInstance namespace.
- Disables the hash suffix so the ConfigMap name is predictable.
-
In your ClusterInstance CR, reference the ConfigMap in the extraManifestsRefs field:
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "site1-sno-du"
  namespace: "site1-sno-du"
spec:
  clusterName: "site1-sno-du"
  networkType: "OVNKubernetes"
  extraManifestsRefs:
    - name: ipsec-manifests-cm
# ...
- Reference to the ConfigMap containing the IPsec manifests.
Note
If you have other extra manifests, you can either include them in the same ConfigMap or create multiple ConfigMap resources and reference each of those in the extraManifestsRefs field.
-
Commit the ClusterInstance CR, IPsec manifest files, and kustomization.yaml changes in your Git repository and push the changes to provision the managed cluster and configure IPsec encryption.
The Argo CD pipeline detects the changes and begins the managed cluster deployment.
During cluster provisioning, the SiteConfig Operator applies the CRs contained in the referenced ConfigMap resources as extra manifests.
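Optional: After the cluster is installed, one way to confirm that the IPsec MachineConfig manifests were applied is to list them on the managed cluster. This check is an assumption based on the file names that the build.sh script generates and is not part of the documented verification procedure:
$ oc get machineconfig | grep ipsec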
For information about verifying the IPsec encryption, see "Verifying the IPsec encryption".
Configuring IPsec encryption for multi-node clusters using GitOps ZTP and ClusterInstance resources
You can enable IPsec encryption in managed multi-node clusters that you install using GitOps ZTP and Red Hat Advanced Cluster Management (RHACM). You can encrypt traffic between the managed cluster and IPsec endpoints external to the managed cluster. All network traffic between nodes on the OVN-Kubernetes cluster network is encrypted with IPsec in Transport mode.
-
You have installed the OpenShift CLI (oc).
-
You have logged in to the hub cluster as a user with cluster-admin privileges.
-
You have installed the SiteConfig Operator in the hub cluster.
-
You have configured RHACM and the hub cluster for generating the required installation and policy custom resources (CRs) for managed clusters.
-
You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
-
You have installed the butane utility version 0.20.0 or later.
-
You have a PKCS#12 certificate for the IPsec endpoint and a CA cert in PEM format.
-
You have installed the NMState Operator.
-
Extract the latest version of the ztp-site-generate container source and merge it with your repository where you manage your custom site configuration data.
-
Configure the
optional-extra-manifest/ipsec/ipsec-config-policy.yamlfile with the required values that configure IPsec in the cluster.ConfigurationPolicyobject for creating an IPsec configurationapiVersion: policy.open-cluster-management.io/v1 kind: ConfigurationPolicy metadata: name: policy-config spec: namespaceSelector: include: ["default"] exclude: [] matchExpressions: [] matchLabels: {} remediationAction: inform severity: low evaluationInterval: compliant: noncompliant: object-templates-raw: | {{- range (lookup "v1" "Node" "" "").items }} - complianceType: musthave objectDefinition: kind: NodeNetworkConfigurationPolicy apiVersion: nmstate.io/v1 metadata: name: {{ .metadata.name }}-ipsec-policy spec: nodeSelector: kubernetes.io/hostname: {{ .metadata.name }} desiredState: interfaces: - name: hosta_conn type: ipsec libreswan: left: '%defaultroute' leftid: '%fromcert' leftmodecfgclient: false leftcert: left_server leftrsasigkey: '%cert' right: <external_host> rightid: '%fromcert' rightrsasigkey: '%cert' rightsubnet: <external_address> ikev2: insist type: tunnel- The value of this field must match with the name of the certificate used on the remote system.
- Replace
<external_host>with the external host IP address or DNS hostname. - Replace
<external_address>with the IP subnet of the external host on the other side of the IPsec tunnel. - Use the IKEv2 VPN encryption protocol only. Do not use IKEv1, which is deprecated.
-
Add the following certificates to the optional-extra-manifest/ipsec folder:
-
left_server.p12: The certificate bundle for the IPsec endpoints
-
ca.pem: The certificate authority that you signed your certificates with
The certificate files are required for the Network Security Services (NSS) database on each host. These files are imported as part of the Butane configuration in later steps.
-
-
Open a shell prompt at the optional-extra-manifest/ipsec folder of the Git repository where you maintain your custom site configuration data.
-
Run the
optional-extra-manifest/ipsec/import-certs.shscript to generate the required Butane andMachineConfigCRs to import the external certs.If the PKCS#12 certificate is protected with a password, set the
-Wargument.Example outputout └── argocd └── example └── optional-extra-manifest └── ipsec ├── 99-ipsec-master-import-certs.bu ├── 99-ipsec-master-import-certs.yaml ├── 99-ipsec-worker-import-certs.bu ├── 99-ipsec-worker-import-certs.yaml ├── import-certs.sh ├── ca.pem ├── left_server.p12 ├── enable-ipsec.yaml ├── ipsec-config-policy.yaml └── README.md- The
ipsec/import-certs.shscript generates the Butane and endpoint configuration CRs. - Add the
ca.pemandleft_server.p12certificate files that are relevant to your network.
- The
-
Create an
ipsec-manifests/folder in the repository where you manage your custom site configuration data and add theenable-ipsec.yamland99-ipsec-*YAML files to the directory.Example site configuration directorysite-configs/ ├── hub-1/ │ └── clusterinstance-site1-mno-du.yaml ├── ipsec-manifests/ │ ├── enable-ipsec.yaml │ ├── 99-ipsec-master-import-certs.yaml │ └── 99-ipsec-worker-import-certs.yaml └── kustomization.yaml -
Create a
kustomization.yamlfile that usesconfigMapGeneratorto package your IPsec manifests into aConfigMap:apiVersion: kustomize.config.k8s.io/v1beta1 kind: Kustomization resources: - hub-1/clusterinstance-site1-mno-du.yaml configMapGenerator: - name: ipsec-manifests-cm namespace: site1-mno-du files: - ipsec-manifests/enable-ipsec.yaml - ipsec-manifests/99-ipsec-master-import-certs.yaml - ipsec-manifests/99-ipsec-worker-import-certs.yaml generatorOptions: disableNameSuffixHash: true- The namespace must match the
ClusterInstancenamespace. - Disables the hash suffix so the
ConfigMapname is predictable.
- The namespace must match the
-
In your
ClusterInstanceCR, reference theConfigMapin theextraManifestsRefsfield:apiVersion: siteconfig.open-cluster-management.io/v1alpha1 kind: ClusterInstance metadata: name: "site1-mno-du" namespace: "site1-mno-du" spec: clusterName: "site1-mno-du" networkType: "OVNKubernetes" extraManifestsRefs: - name: ipsec-manifests-cm # ...- Reference to the
ConfigMapcontaining the IPsec certificate import manifests.Note
If you have other extra manifests, you can either include them in the same
ConfigMapor create multipleConfigMapresources and reference them all inextraManifestsRefs.
- Reference to the
-
Include the ipsec-config-policy.yaml config policy file in the source-crs directory in GitOps and reference the file in one of the PolicyGenerator CRs.
-
Commit the ClusterInstance CR, IPsec manifest files, and kustomization.yaml changes in your Git repository and push the changes to provision the managed cluster and configure IPsec encryption.
The Argo CD pipeline detects the changes and begins the managed cluster deployment.
During cluster provisioning, the SiteConfig Operator applies the CRs contained in the referenced ConfigMap resources as extra manifests. The IPsec configuration policy is applied as a Day 2 operation after the cluster is provisioned.
For information about verifying the IPsec encryption, see "Verifying the IPsec encryption".
Verifying the IPsec encryption
You can verify that the IPsec encryption is successfully applied in a managed OpenShift Container Platform cluster.
-
You have installed the OpenShift CLI (oc).
-
You have logged in to the hub cluster as a user with cluster-admin privileges.
-
You have configured the IPsec encryption.
-
Start a debug pod for the managed cluster by running the following command:
$ oc debug node/<node_name> -
Check that the IPsec policy is applied in the cluster node by running the following command:
sh-5.1# ip xfrm policyExample outputsrc 172.16.123.0/24 dst 10.1.232.10/32 dir out priority 1757377 ptype main tmpl src 10.1.28.190 dst 10.1.232.10 proto esp reqid 16393 mode tunnel src 10.1.232.10/32 dst 172.16.123.0/24 dir fwd priority 1757377 ptype main tmpl src 10.1.232.10 dst 10.1.28.190 proto esp reqid 16393 mode tunnel src 10.1.232.10/32 dst 172.16.123.0/24 dir in priority 1757377 ptype main tmpl src 10.1.232.10 dst 10.1.28.190 proto esp reqid 16393 mode tunnel -
Check that the IPsec tunnel is up and connected by running the following command:
sh-5.1# ip xfrm stateExample outputsrc 10.1.232.10 dst 10.1.28.190 proto esp spi 0xa62a05aa reqid 16393 mode tunnel replay-window 0 flag af-unspec esn auth-trunc hmac(sha1) 0x8c59f680c8ea1e667b665d8424e2ab749cec12dc 96 enc cbc(aes) 0x2818a489fe84929c8ab72907e9ce2f0eac6f16f2258bd22240f4087e0326badb anti-replay esn context: seq-hi 0x0, seq 0x0, oseq-hi 0x0, oseq 0x0 replay_window 128, bitmap-length 4 00000000 00000000 00000000 00000000 src 10.1.28.190 dst 10.1.232.10 proto esp spi 0x8e96e9f9 reqid 16393 mode tunnel replay-window 0 flag af-unspec esn auth-trunc hmac(sha1) 0xd960ddc0a6baaccb343396a51295e08cfd8aaddd 96 enc cbc(aes) 0x0273c02e05b4216d5e652de3fc9b3528fea94648bc2b88fa01139fdf0beb27ab anti-replay esn context: seq-hi 0x0, seq 0x0, oseq-hi 0x0, oseq 0x0 replay_window 128, bitmap-length 4 00000000 00000000 00000000 00000000 -
Ping a known IP address in the external host subnet by running the following command. For example, ping an IP address in the rightsubnet range that you set in the ipsec/ipsec-endpoint-config.yaml file:
sh-5.1# ping 172.16.110.8
Example output
PING 172.16.110.8 (172.16.110.8) 56(84) bytes of data.
64 bytes from 172.16.110.8: icmp_seq=1 ttl=64 time=153 ms
64 bytes from 172.16.110.8: icmp_seq=2 ttl=64 time=155 ms
ClusterInstance CR installation reference
For a detailed API reference for the ClusterInstance custom resource, see ClusterInstance API in the Red Hat Advanced Cluster Management (RHACM) documentation.
Managing host firmware settings with GitOps ZTP
Hosts require the correct firmware configuration to ensure high performance and optimal efficiency. You can deploy custom host firmware configurations for managed clusters with GitOps ZTP.
Tune hosts with specific hardware profiles in your lab and ensure they are optimized for your requirements. When you have completed host tuning to your satisfaction, you extract the host profile and save it in your GitOps ZTP repository. Then, you use the host profile to configure firmware settings in the managed cluster hosts that you deploy with GitOps ZTP.
You specify the required hardware profiles by creating HostFirmwareSettings CRs, packaging them in ConfigMap resources, and referencing them in the templateRefs field of your ClusterInstance CR.
The SiteConfig Operator generates the required HostFirmwareSettings and BareMetalHost CRs that are applied to the hub cluster.
Use the following best practices to manage your host firmware profiles.
- Identify critical firmware settings with hardware vendors
-
Work with hardware vendors to identify and document critical host firmware settings required for optimal performance and compatibility with the deployed host platform.
- Use common firmware configurations across similar hardware platforms
-
Where possible, use a standardized host firmware configuration across similar hardware platforms to reduce complexity and potential errors during deployment.
- Test firmware configurations in a lab environment
-
Test host firmware configurations in a controlled lab environment before deploying in production to ensure that settings are compatible with hardware, firmware, and software.
- Manage firmware profiles in source control
-
Manage host firmware profiles in Git repositories to track changes, ensure consistency, and facilitate collaboration with vendors.
Retrieving the host firmware schema for a managed cluster
You can discover the host firmware schema for managed clusters. The host firmware schema for bare-metal hosts is populated with information that the Ironic API returns. The API returns information about host firmware interfaces, including firmware setting types, allowable values, ranges, and flags.
-
You have installed the OpenShift CLI (oc).
-
You have installed Red Hat Advanced Cluster Management (RHACM) and logged in to the hub cluster as a user with cluster-admin privileges.
-
You have provisioned a cluster that is managed by RHACM.
-
Discover the host firmware schema for the managed cluster. Run the following command:
$ oc get firmwareschema -n <managed_cluster_namespace> -o yamlExample outputapiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: FirmwareSchema metadata: creationTimestamp: "2024-09-11T10:29:43Z" generation: 1 name: schema-40562318 namespace: compute-1 ownerReferences: - apiVersion: metal3.io/v1alpha1 kind: HostFirmwareSettings name: compute-1.example.com uid: 65d0e89b-1cd8-4317-966d-2fbbbe033fe9 resourceVersion: "280057624" uid: 511ad25d-f1c9-457b-9a96-776605c7b887 spec: schema: AccessControlService: allowable_values: - Enabled - Disabled attribute_type: Enumeration read_only: false # ...
Retrieving the host firmware settings for a managed cluster
You can retrieve the host firmware settings for managed clusters. This is useful when you have deployed changes to the host firmware and you want to monitor the changes and ensure that they are applied successfully.
-
You have installed the OpenShift CLI (oc).
-
You have installed Red Hat Advanced Cluster Management (RHACM) and logged in to the hub cluster as a user with cluster-admin privileges.
-
You have provisioned a cluster that is managed by RHACM.
-
Retrieve the host firmware settings for the managed cluster. Run the following command:
$ oc get hostfirmwaresettings -n <cluster_namespace> <node_name> -o yamlExample outputapiVersion: v1 items: - apiVersion: metal3.io/v1alpha1 kind: HostFirmwareSettings metadata: creationTimestamp: "2024-09-11T10:29:43Z" generation: 1 name: compute-1.example.com namespace: kni-qe-24 ownerReferences: - apiVersion: metal3.io/v1alpha1 blockOwnerDeletion: true controller: true kind: BareMetalHost name: compute-1.example.com uid: 0baddbb7-bb34-4224-8427-3d01d91c9287 resourceVersion: "280057626" uid: 65d0e89b-1cd8-4317-966d-2fbbbe033fe9 spec: settings: {} status: conditions: - lastTransitionTime: "2024-09-11T10:29:43Z" message: "" observedGeneration: 1 reason: Success status: "True" type: ChangeDetected - lastTransitionTime: "2024-09-11T10:29:43Z" message: Invalid BIOS setting observedGeneration: 1 reason: ConfigurationError status: "False" type: Valid lastUpdated: "2024-09-11T10:29:43Z" schema: name: schema-40562318 namespace: compute-1 settings: AccessControlService: Enabled AcpiHpet: Enabled AcpiRootBridgePxm: Enabled # ...- Indicates that a change in the host firmware settings has been detected
- Indicates that the host has an invalid firmware setting
- The complete list of configured host firmware settings is returned under the status.settings field.
-
Optional: Check the status of the HostFirmwareSettings (hfs) custom resource in the cluster:
$ oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type=="ChangeDetected")].status}'
Example output
True
-
Optional: Check for invalid firmware settings in the cluster host. Run the following command:
$ oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type=="Valid")].status}'
Example output
False
Deploying user-defined firmware to cluster hosts with GitOps ZTP
You can deploy user-defined firmware settings to cluster hosts by creating custom node templates that include HostFirmwareSettings CRs, and referencing them in the ClusterInstance CR.
You can configure hardware profiles to apply to hosts in the following scenarios:
-
All hosts in the cluster
-
Individual hosts in the cluster
Important
You can configure host hardware profiles to be applied in a hierarchy. Node-level profiles override cluster-wide settings.
-
You have installed the OpenShift CLI (oc).
-
You have installed Red Hat Advanced Cluster Management (RHACM) version 2.12 or later and logged in to the hub cluster as a user with cluster-admin privileges.
-
You have installed the SiteConfig Operator in the hub cluster.
-
You created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the Argo CD application.
-
Create the HostFirmwareSettings CR that contains the firmware settings you want to apply. For example, create the following YAML file:
host-firmware-settings.yaml
apiVersion: metal3.io/v1alpha1
kind: HostFirmwareSettings
metadata:
  name: "site1-sno-du"
  namespace: "site1-sno-du"
spec:
  settings:
    BootMode: "Uefi"
    LogicalProc: "Enabled"
    ProcVirtualization: "Enabled"
-
Save the HostFirmwareSettings CR file relative to the kustomization.yaml file that you use to provision the cluster. For example:
site-configs/
└── site1-sno-du/
    ├── clusterinstance-site1-sno-du.yaml
    ├── kustomization.yaml
    └── host-firmware-settings.yaml
-
Create a ConfigMap to store the HostFirmwareSettings CR. You can use a kustomization.yaml file with configMapGenerator to create the ConfigMap. For example:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- clusterinstance-site1-sno-du.yaml
configMapGenerator:
- name: host-firmware-settings-cm
  namespace: site1-sno-du
  files:
  - host-firmware-settings.yaml
generatorOptions:
  disableNameSuffixHash: true
- The namespace must match the ClusterInstance namespace.
- The name of the HostFirmwareSettings CR.
-
To apply a hardware profile to all hosts in the cluster, reference the ConfigMap in the spec.templateRefs field of your ClusterInstance CR. For example:
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "site1-sno-du"
  namespace: "site1-sno-du"
spec:
  clusterName: "site1-sno-du"
  # ...
  templateRefs:
    - name: host-firmware-settings-cm
      namespace: site1-sno-du
  nodes:
    - hostName: "node1.example.com"
# ...
- Applies the firmware profile to all hosts in the cluster.
-
Optional: To apply a hardware profile to a specific host in the cluster, reference the ConfigMap in the spec.nodes[].templateRefs field. For example:
apiVersion: siteconfig.open-cluster-management.io/v1alpha1
kind: ClusterInstance
metadata:
  name: "site1-sno-du"
  namespace: "site1-sno-du"
spec:
  clusterName: "site1-sno-du"
  # ...
  nodes:
    - hostName: "node1.example.com"
      # ...
      templateRefs:
        - name: host-firmware-node1-cm
          namespace: site1-sno-du
    - hostName: "node2.example.com"
# ...
- Applies the firmware profile only to the node1.example.com host.
Note
Node-level templateRefs settings override cluster-level templateRefs settings.
-
Commit the ClusterInstance CR, ConfigMap, and associated kustomization.yaml changes in your Git repository and push the changes.
The Argo CD pipeline detects the changes and begins the managed cluster deployment.
Note
Cluster deployment proceeds even if an invalid firmware setting is detected. To apply a correction using GitOps ZTP, re-deploy the cluster with the corrected hardware profile.
-
Check that the firmware settings have been applied in the managed cluster host. For example, run the following command:
$ oc get hfs -n <managed_cluster_namespace> <managed_cluster_name> -o jsonpath='{.status.conditions[?(@.type=="Valid")].status}'
-
where <managed_cluster_namespace> is the namespace of the managed cluster and <managed_cluster_name> is the name of the managed cluster.
Example output
True
-
Monitoring managed cluster installation progress
The Argo CD pipeline syncs the ClusterInstance CR from the Git repository to the hub cluster. The SiteConfig Operator then processes the ClusterInstance CR and generates the required cluster configuration CRs. You can monitor the progress of the cluster installation from the RHACM dashboard or from the command line.
-
You have installed the OpenShift CLI (oc).
-
You have logged in to the hub cluster as a user with cluster-admin privileges.
When the synchronization is complete, the installation generally proceeds as follows:
-
The Assisted Service Operator installs OpenShift Container Platform on the cluster. You can monitor the progress of cluster installation from the RHACM dashboard or from the command line by running the following commands:
-
Export the cluster name:
$ export CLUSTER=<clusterName>
-
Query the AgentClusterInstall CR for the managed cluster:
$ oc get agentclusterinstall -n $CLUSTER $CLUSTER -o jsonpath='{.status.conditions[?(@.type=="Completed")]}' | jq
-
Get the installation events for the cluster:
$ curl -sk $(oc get agentclusterinstall -n $CLUSTER $CLUSTER -o jsonpath='{.status.debugInfo.eventsURL}') | jq '.[-2,-1]'
-
Troubleshooting GitOps ZTP by validating the installation CRs
The ArgoCD pipeline uses the ClusterInstance and PolicyGenerator or PolicyGentemplate custom resources (CRs) to generate the cluster configuration CRs and Red Hat Advanced Cluster Management (RHACM) policies. Use the following steps to troubleshoot issues that might occur during this process.
-
You have installed the OpenShift CLI (oc).
-
You have logged in to the hub cluster as a user with cluster-admin privileges.
-
Check that the installation CRs were created by using the following command:
$ oc get AgentClusterInstall -n <cluster_name>
If no object is returned, use the following steps to troubleshoot the ArgoCD pipeline flow from ClusterInstance files to the installation CRs.
-
Verify that the ManagedCluster CR was generated using the ClusterInstance CR on the hub cluster:
$ oc get managedcluster
-
If the ManagedCluster is missing, check if the clusters application failed to synchronize the files from the Git repository to the hub cluster:
$ oc get applications.argoproj.io -n openshift-gitops clusters -o yaml
Troubleshooting GitOps ZTP virtual media booting on SuperMicro servers
SuperMicro X11 servers do not support virtual media installations when the image is served using the https protocol. As a result, single-node OpenShift deployments for this environment fail to boot on the target node. To avoid this issue, log in to the hub cluster and disable Transport Layer Security (TLS) in the Provisioning resource. This ensures the image is not served with TLS even though the image address uses the https scheme.
-
You have installed the OpenShift CLI (oc).
-
You have logged in to the hub cluster as a user with cluster-admin privileges.
-
Disable TLS in the Provisioning resource by running the following command:
$ oc patch provisioning provisioning-configuration --type merge -p '{"spec":{"disableVirtualMediaTLS": true}}'
-
Continue the steps to deploy your single-node OpenShift cluster.
Removing a managed cluster site from the GitOps ZTP pipeline
You can remove a managed site and the associated installation and configuration policy CRs from the GitOps Zero Touch Provisioning (ZTP) pipeline.
-
You have installed the OpenShift CLI (oc).
-
You have logged in to the hub cluster as a user with cluster-admin privileges.
-
Remove a site and the associated CRs by removing the associated ClusterInstance and PolicyGenerator or PolicyGentemplate files from the kustomization.yaml file.
-
Add the following syncOptions field to the ArgoCD application that manages the target site:
kind: Application
spec:
  syncPolicy:
    syncOptions:
    - PrunePropagationPolicy=background
When you run the GitOps ZTP pipeline again, the generated CRs are removed.
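As an alternative to editing the application manifest, you can patch the field directly on the hub cluster. The following command is a sketch: it assumes the site-managing application is named clusters in the openshift-gitops namespace, and a merge patch replaces the existing syncOptions list, so include any options that are already set:
$ oc patch applications.argoproj.io -n openshift-gitops clusters --type merge -p '{"spec":{"syncPolicy":{"syncOptions":["PrunePropagationPolicy=background"]}}}'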
-
Optional: If you want to permanently remove a site, you should also remove the ClusterInstance and site-specific PolicyGenerator or PolicyGentemplate files from the Git repository.
-
Optional: If you want to remove a site temporarily, for example when redeploying a site, you can leave the ClusterInstance and site-specific PolicyGenerator or PolicyGentemplate CRs in the Git repository.
Removing obsolete content from the GitOps ZTP pipeline
If a change to the PolicyGenerator or PolicyGentemplate configuration results in obsolete policies, for example, if you rename policies, use the following procedure to remove the obsolete policies.
-
You have installed the OpenShift CLI (oc).
-
You have logged in to the hub cluster as a user with cluster-admin privileges.
-
Remove the affected PolicyGenerator or PolicyGentemplate files from the Git repository, and then commit and push the changes to the remote repository.
Wait for the changes to synchronize through the application and the affected policies to be removed from the hub cluster.
-
Add the updated PolicyGenerator or PolicyGentemplate files back to the Git repository, and then commit and push to the remote repository.
Note
Removing GitOps Zero Touch Provisioning (ZTP) policies from the Git repository, and as a result also removing them from the hub cluster, does not affect the configuration of the managed cluster. The policy and the CRs managed by that policy remain in place on the managed cluster.
-
Optional: As an alternative, after making changes to PolicyGenerator or PolicyGentemplate CRs that result in obsolete policies, you can remove these policies from the hub cluster manually. You can delete policies from the RHACM console using the Governance tab or by running the following command:
$ oc delete policy -n <namespace> <policy_name>
Tearing down the GitOps ZTP pipeline
You can remove the ArgoCD pipeline and all generated GitOps Zero Touch Provisioning (ZTP) artifacts.
-
You have installed the OpenShift CLI (oc).
-
You have logged in to the hub cluster as a user with cluster-admin privileges.
-
Detach all clusters from Red Hat Advanced Cluster Management (RHACM) on the hub cluster.
-
Delete the kustomization.yaml file in the deployment directory using the following command:
$ oc delete -k out/argocd/deployment
-
Commit and push your changes to the site repository.