Preparing to install a cluster on IBM Z® and IBM® LinuxONE using user-provisioned infrastructure
You prepare to install an OpenShift Container Platform cluster on IBM Z® and IBM® LinuxONE by completing the following steps:
- Verifying internet connectivity for your cluster.
- Downloading the installation program.
Note
If you are installing in a disconnected environment, you extract the installation program from the mirrored content. For more information, see Mirroring images for a disconnected installation.
- Installing the OpenShift CLI (oc).
Note
If you are installing in a disconnected environment, install oc to the mirror host.
- Generating an SSH key pair. You can use this key pair to authenticate into the OpenShift Container Platform cluster’s nodes after it is deployed.
- Validating DNS resolution.
Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.19, you require access to the internet to install your cluster.
You must have internet access to perform the following actions:
- Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
- Access Quay.io to obtain the packages that are required to install your cluster.
- Obtain the packages that are required to perform cluster updates.
Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on your provisioning machine.
Prerequisites
- You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space.
Procedure
- Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
- Select your infrastructure provider from the Run it yourself section of the page.
- Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer.
- Place the downloaded file in the directory where you want to store the installation configuration files.
Important
- The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both are required to delete the cluster.
- Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider.
- Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command:
$ tar -xvf openshift-install-linux.tar.gz
- Download your installation pull secret from Red Hat OpenShift Cluster Manager. This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components.
Tip
Alternatively, you can retrieve the installation program from the Red Hat Customer Portal, where you can specify a version of the installation program to download. However, you must have an active subscription to access this page.
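As a quick sanity check after extraction, you can confirm that the installer binary is present and runnable. The following sketch builds a local stand-in archive so the commands can run anywhere; with the real download, only the last two commands apply, and the version string shown is a placeholder, not real installer output:

```shell
# Sketch: simulate extracting the installer archive and checking the binary.
# The real archive is openshift-install-linux.tar.gz; a stand-in is built
# here so the extraction steps can be exercised without the download.
workdir="$(mktemp -d)"
cd "$workdir"
printf '#!/bin/sh\necho "openshift-install 4.19.0 (placeholder)"\n' > openshift-install
chmod +x openshift-install
tar -czf openshift-install-linux.tar.gz openshift-install
rm openshift-install

# The actual steps from the procedure:
tar -xvf openshift-install-linux.tar.gz
./openshift-install version
```

With the genuine archive, `./openshift-install version` reports the installer version, which is a useful check before you generate any configuration files.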
Installing the OpenShift CLI on Linux
To manage your cluster and deploy applications from the command line, install the OpenShift CLI (oc) binary on Linux.
Important
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform.
Download and install the new version of oc.
- Navigate to the Download OpenShift Container Platform page on the Red Hat Customer Portal.
- Select the architecture from the Product Variant list.
- Select the appropriate version from the Version list.
- Click Download Now next to the OpenShift v4.19 Linux Clients entry and save the file.
- Unpack the archive:
$ tar xvf <file>
- Place the oc binary in a directory that is on your PATH. To check your PATH, execute the following command:
$ echo $PATH
- After you install the OpenShift CLI, it is available using the oc command:
$ oc <command>
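The unpack-and-place steps can be sketched as follows. The install directory $HOME/bin is an assumption, and the copy of the oc binary is shown commented out because it depends on where you unpacked the archive:

```shell
# Sketch: put a private bin directory on PATH and confirm it is found.
mkdir -p "$HOME/bin"
# cp oc "$HOME/bin/"    # run this from the directory where you unpacked <file>
export PATH="$HOME/bin:$PATH"

# Confirm the directory now appears in PATH, one entry per line:
echo "$PATH" | tr ':' '\n' | grep -x "$HOME/bin"
```

The export only affects the current shell session; to make it persistent, add the export line to your shell profile, for example ~/.bashrc.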
Installing the OpenShift CLI on Windows
To manage your cluster and deploy applications from the command line, install the OpenShift CLI (oc) binary on Windows.
Important
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform.
Download and install the new version of oc.
- Navigate to the Download OpenShift Container Platform page on the Red Hat Customer Portal.
- Select the appropriate version from the Version list.
- Click Download Now next to the OpenShift v4.19 Windows Client entry and save the file.
- Extract the archive with a ZIP program.
- Move the oc binary to a directory that is on your PATH variable. To check your PATH variable, open the command prompt and execute the following command:
C:\> path
- After you install the OpenShift CLI, it is available using the oc command:
C:\> oc <command>
Installing the OpenShift CLI on macOS
To manage your cluster and deploy applications from the command line, install the OpenShift CLI (oc) binary on macOS.
Important
If you installed an earlier version of oc, you cannot use it to complete all of the commands in OpenShift Container Platform.
Download and install the new version of oc.
- Navigate to the Download OpenShift Container Platform page on the Red Hat Customer Portal.
- Select the architecture from the Product Variant list.
- Select the appropriate version from the Version list.
- Click Download Now next to the OpenShift v4.19 macOS Clients entry and save the file.
Note
For macOS arm64, choose the OpenShift v4.19 macOS arm64 Client entry.
- Unpack and unzip the archive.
- Move the oc binary to a directory on your PATH variable. To check your PATH variable, open a terminal and execute the following command:
$ echo $PATH
- Verify your installation by using an oc command:
$ oc <command>
Generating a key pair for cluster node SSH access
To enable secure, passwordless SSH access to your cluster nodes, provide an SSH public key during the OpenShift Container Platform installation. This ensures that the installation program automatically configures the Red Hat Enterprise Linux CoreOS (RHCOS) nodes for remote authentication through the core user.
The SSH public key gets added to the ~/.ssh/authorized_keys list for the core user on each node. After the key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files, you can use the key pair to SSH in to the RHCOS nodes as the user core. To access the nodes through SSH, the private key identity must be managed by SSH for your local user.
If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes.
Important
Do not skip this procedure in production environments, where disaster recovery and debugging are required.
- If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
$ ssh-keygen -t ed25519 -N '' -f <path>/<file_name>
Specifies the path and file name, such as ~/.ssh/id_ed25519, of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note
If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
- View the public SSH key:
$ cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
$ cat ~/.ssh/id_ed25519.pub
- Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
- If the ssh-agent process is not already running for your local user, start it as a background task:
$ eval "$(ssh-agent -s)"
Example output
Agent pid 31874
Note
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
- Add your SSH private key to the ssh-agent:
$ ssh-add <path>/<file_name>
Specifies the path and file name for your SSH private key, such as ~/.ssh/id_ed25519
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
- When you install OpenShift Container Platform, provide the SSH public key to the installation program.
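The key-generation step can be exercised safely with a throwaway key. In this sketch, a temporary directory stands in for ~/.ssh so nothing in your real SSH configuration is touched:

```shell
# Sketch: generate a throwaway ed25519 key pair.
# The temporary directory is a stand-in for ~/.ssh/id_ed25519.
keydir="$(mktemp -d)"
ssh-keygen -t ed25519 -N '' -f "$keydir/id_ed25519" -q

# The public half is what you provide to the installation program:
cat "$keydir/id_ed25519.pub"
```

As noted above, if your cluster must use the FIPS-validated cryptographic libraries, substitute `-t rsa` or `-t ecdsa` for `-t ed25519`.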
Validating DNS resolution for user-provisioned infrastructure
To prevent network-related installation failures and ensure node connectivity in OpenShift Container Platform, validate your DNS configuration before deploying on user-provisioned infrastructure. This verification confirms that all required records resolve correctly, providing the stable foundation necessary for cluster communication.
Important
The validation steps detailed in this section must succeed before you install your cluster.
Prerequisites
- You have configured the required DNS records for your user-provisioned infrastructure.
Procedure
- From your installation node, run DNS lookups against the record names of the Kubernetes API, the wildcard routes, and the cluster nodes. Validate that the IP addresses contained in the responses correspond to the correct components.
- Perform a lookup against the Kubernetes API record name. Check that the result points to the IP address of the API load balancer:
$ dig +noall +answer @<nameserver_ip> api.<cluster_name>.<base_domain>
Replace <nameserver_ip> with the IP address of the name server, <cluster_name> with your cluster name, and <base_domain> with your base domain name.
Example output
api.ocp4.example.com. 604800 IN A 192.168.1.5
- Perform a lookup against the Kubernetes internal API record name. Check that the result points to the IP address of the API load balancer:
$ dig +noall +answer @<nameserver_ip> api-int.<cluster_name>.<base_domain>
Example output
api-int.ocp4.example.com. 604800 IN A 192.168.1.5
- Test an example *.apps.<cluster_name>.<base_domain> DNS wildcard lookup. All of the application wildcard lookups must resolve to the IP address of the application ingress load balancer:
$ dig +noall +answer @<nameserver_ip> random.apps.<cluster_name>.<base_domain>
Example output
random.apps.ocp4.example.com. 604800 IN A 192.168.1.5
Note
In the example outputs, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation.
You can replace random with another wildcard value. For example, you can query the route to the OpenShift Container Platform console:
$ dig +noall +answer @<nameserver_ip> console-openshift-console.apps.<cluster_name>.<base_domain>
Example output
console-openshift-console.apps.ocp4.example.com. 604800 IN A 192.168.1.5
- Run a lookup against the bootstrap DNS record name. Check that the result points to the IP address of the bootstrap node:
$ dig +noall +answer @<nameserver_ip> bootstrap.<cluster_name>.<base_domain>
Example output
bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96
- Use this method to perform lookups against the DNS record names for the control plane and compute nodes. Check that the results correspond to the IP addresses of each node.
- From your installation node, run reverse DNS lookups against the IP addresses of the load balancer and the cluster nodes. Validate that the record names contained in the responses correspond to the correct components.
- Perform a reverse lookup against the IP address of the API load balancer. Check that the response includes the record names for the Kubernetes API and the Kubernetes internal API:
$ dig +noall +answer @<nameserver_ip> -x 192.168.1.5
Example output
5.1.168.192.in-addr.arpa. 604800 IN PTR api-int.ocp4.example.com.
5.1.168.192.in-addr.arpa. 604800 IN PTR api.ocp4.example.com.
where:
api-int.ocp4.example.com
Specifies the record name for the Kubernetes internal API.
api.ocp4.example.com
Specifies the record name for the Kubernetes API.
Note
A PTR record is not required for the OpenShift Container Platform application wildcard. No validation step is needed for reverse DNS resolution against the IP address of the application ingress load balancer.
- Perform a reverse lookup against the IP address of the bootstrap node. Check that the result points to the DNS record name of the bootstrap node:
$ dig +noall +answer @<nameserver_ip> -x 192.168.1.96
Example output
96.1.168.192.in-addr.arpa. 604800 IN PTR bootstrap.ocp4.example.com.
- Use this method to perform reverse lookups against the IP addresses for the control plane and compute nodes. Check that the results correspond to the DNS record names of each node.
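When you script these checks across many records, comparing each dig answer against the expected IP is the repetitive part. The following sketch factors that comparison into a small helper; the record data is hard-coded from the example outputs in this section, because live dig queries depend on your environment:

```shell
# Sketch: validate that a "dig +noall +answer"-style A record line resolves
# to the expected IP. In practice the first argument would come from:
#   dig +noall +answer @<nameserver_ip> <record_name>
check_record() {
  answer="$1"; expected="$2"
  # Answer format: <name> <ttl> IN A <ip>; field 5 is the address.
  ip="$(printf '%s\n' "$answer" | awk '$4 == "A" { print $5 }')"
  if [ "$ip" = "$expected" ]; then
    echo "OK: $expected"
  else
    echo "MISMATCH: got '$ip', want '$expected'"
  fi
}

# Sample answers taken from the example outputs in this section:
check_record "api.ocp4.example.com. 604800 IN A 192.168.1.5" "192.168.1.5"
check_record "bootstrap.ocp4.example.com. 604800 IN A 192.168.1.96" "192.168.1.96"
```

A loop over the API, internal API, wildcard, bootstrap, and per-node record names can then report every mismatch in one pass before you start the installation.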
- See About remote health monitoring for more information about the Telemetry service.