Installing a cluster on Azure with customizations
In OpenShift Container Platform version 4.19, you can install a cluster with a customized configuration or a customized network configuration on infrastructure that the installation program provisions on Microsoft Azure. To install a cluster with customizations or with network customizations, modify parameters in the install-config.yaml file before you install the cluster. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only the kubeProxy configuration parameters in a running cluster.
Using the Azure Marketplace offering
Using the Azure Marketplace offering lets you deploy an OpenShift Container Platform cluster, which is billed on a pay-per-use basis (hourly, per core) through Azure, while still being supported directly by Red Hat.
To deploy an OpenShift Container Platform cluster using the Azure Marketplace offering, you must first obtain the Azure Marketplace image. The installation program uses this image to deploy worker or control plane nodes. When obtaining your image, consider the following:
-
While the images are the same, the Azure Marketplace publisher is different depending on your region. If you are located in North America, specify redhat as the publisher. If you are located in EMEA, specify redhat-limited as the publisher.
-
The offer includes a rh-ocp-worker SKU and a rh-ocp-worker-gen1 SKU. The rh-ocp-worker SKU represents a Hyper-V generation version 2 VM image. The default instance types used in OpenShift Container Platform are version 2 compatible. If you plan to use an instance type that is only version 1 compatible, use the image associated with the rh-ocp-worker-gen1 SKU. The rh-ocp-worker-gen1 SKU represents a Hyper-V version 1 VM image.
Important
Installing images with the Azure Marketplace is not supported on clusters with 64-bit ARM instances.
You should only modify the RHCOS image for compute machines to use an Azure Marketplace image. Control plane machines and infrastructure nodes do not require an OpenShift Container Platform subscription and use the public RHCOS default image by default, which does not incur subscription costs on your Azure bill. Therefore, you should not modify the cluster default boot image or the control plane boot images. Applying the Azure Marketplace image to them will incur additional licensing costs that cannot be recovered.
-
You have installed the Azure CLI client (az).
-
Your Azure account is entitled for the offer and you have logged into this account with the Azure CLI client.
-
Display all of the available OpenShift Container Platform images by running one of the following commands:
-
North America:
$ az vm image list --all --offer rh-ocp-worker --publisher redhat -o table
Example output
Offer          Publisher       Sku                 Urn                                                       Version
-------------  --------------  ------------------  --------------------------------------------------------  -----------------
rh-ocp-worker  RedHat          rh-ocp-worker       RedHat:rh-ocp-worker:rh-ocp-worker:4.17.2024100419        4.17.2024100419
rh-ocp-worker  RedHat          rh-ocp-worker-gen1  RedHat:rh-ocp-worker:rh-ocp-worker-gen1:4.17.2024100419   4.17.2024100419
-
EMEA:
$ az vm image list --all --offer rh-ocp-worker --publisher redhat-limited -o table
Example output
Offer          Publisher       Sku                 Urn                                                                Version
-------------  --------------  ------------------  -----------------------------------------------------------------  -----------------
rh-ocp-worker  redhat-limited  rh-ocp-worker       redhat-limited:rh-ocp-worker:rh-ocp-worker:4.17.2024100419         4.17.2024100419
rh-ocp-worker  redhat-limited  rh-ocp-worker-gen1  redhat-limited:rh-ocp-worker:rh-ocp-worker-gen1:4.17.2024100419    4.17.2024100419
Note
Use the latest image that is available for compute and control plane nodes. If required, your VMs are automatically upgraded as part of the installation process.
-
-
Inspect the image for your offer by running one of the following commands:
-
North America:
$ az vm image show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
-
EMEA:
$ az vm image show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
-
-
Review the terms of the offer by running one of the following commands:
-
North America:
$ az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
-
EMEA:
$ az vm image terms show --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
-
-
Accept the terms of the offering by running one of the following commands:
-
North America:
$ az vm image terms accept --urn redhat:rh-ocp-worker:rh-ocp-worker:<version>
-
EMEA:
$ az vm image terms accept --urn redhat-limited:rh-ocp-worker:rh-ocp-worker:<version>
-
-
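Optionally, confirm that the terms were accepted before you continue. This is a minimal sketch that assumes the North America publisher; the accepted field in the command output reports true after acceptance:
$ az vm image terms show --urn redhat:rh-ocp-worker:rh-ocp-worker:<version> --query accepted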
Record the image details of your offer. You must update the compute section in the install-config.yaml file with values for publisher, offer, sku, and version before deploying the cluster. You may also update the controlPlane section to deploy control plane machines with the specified image details, or the defaultMachinePlatform section to deploy both control plane and compute machines with the specified image details. Use the latest available image for control plane and compute nodes.
install-config.yaml file with the Azure Marketplace compute nodes
apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
name: worker
platform:
azure:
type: Standard_D4s_v5
osImage:
publisher: redhat
offer: rh-ocp-worker
sku: rh-ocp-worker
version: 413.92.2023101700
replicas: 3
Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Microsoft Azure.
Important
Do not specify windows, microsoft, or other variants of these words in the metadata.name parameter of the install-config.yaml file. Specifying one of these words for the cluster name causes the installation program to generate an error message like the following example message:
The resource name 'windows-xxxx-identity' or a part of the name is a trademarked or reserved word.
Additionally, specifying login at the beginning of the name in the metadata.name parameter of the install-config.yaml file also generates an error message. You can specify login in the middle or at the end of the name.
-
You have the OpenShift Container Platform installation program and the pull secret for your cluster.
-
You have an Azure subscription ID and tenant ID.
-
If you are installing the cluster using a service principal, you have its application ID and password.
-
If you are installing the cluster using a system-assigned managed identity, you have enabled it on the virtual machine that you will run the installation program from.
-
If you are installing the cluster using a user-assigned managed identity, you have met these prerequisites:
-
You have its client ID.
-
You have assigned it to the virtual machine that you will run the installation program from.
-
-
Optional: If you have run the installation program on this computer before, and want to use an alternative service principal or managed identity, go to the ~/.azure/ directory and delete the osServicePrincipal.json configuration file.
Deleting this file prevents the installation program from automatically reusing subscription and authentication values from a previous installation.
-
Create the install-config.yaml file.
-
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory>
-
<installation_directory>: For <installation_directory>, specify the directory name to store the files that the installation program creates.
When specifying the directory:
-
Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
-
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
-
-
At the prompts, provide the configuration details for your cloud:
-
Optional: Select an SSH key to use to access your cluster machines.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
-
Select azure as the platform to target.
If the installation program cannot locate the osServicePrincipal.json configuration file from a previous installation, you are prompted for Azure subscription and authentication values.
-
Enter the following Azure parameter values for your subscription:
-
azure subscription id: Enter the subscription ID to use for the cluster.
-
azure tenant id: Enter the tenant ID.
-
-
Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client id:
-
If you are using a service principal, enter its application ID.
-
If you are using a system-assigned managed identity, leave this value blank.
-
If you are using a user-assigned managed identity, specify its client ID.
-
-
Depending on the Azure identity you are using to deploy the cluster, do one of the following when prompted for the azure service principal client secret:
-
If you are using a service principal, enter its password.
-
If you are using a system-assigned managed identity, leave this value blank.
-
If you are using a user-assigned managed identity, leave this value blank.
-
-
Select the region to deploy the cluster to.
-
Select the base domain to deploy the cluster to. The base domain corresponds to the Azure DNS Zone that you created for your cluster.
-
Enter a descriptive name for your cluster.
Important
All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation.
-
-
-
Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
Note
If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0. This ensures that the cluster's control plane nodes are schedulable. For more information, see "Installing a three-node cluster on Azure".
-
Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
If it was not previously detected, the installation program creates an osServicePrincipal.json configuration file and stores this file in the ~/.azure/ directory on your computer. This ensures that the installation program can load the profile when it is creating an OpenShift Container Platform cluster on the target platform.
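For example, a minimal way to back up the file before it is consumed, assuming that <installation_directory> is your installation directory and that the .bak file name is arbitrary:
$ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.bak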
Minimum resource requirements for cluster installation
Each created cluster must meet minimum requirements so that the cluster runs as expected.
| Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS) [2] |
|---|---|---|---|---|---|
| Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Control plane | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Compute | RHCOS | 2 | 8 GB | 100 GB | 300 |
-
One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
-
OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
-
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
Note
For OpenShift Container Platform version 4.19, RHCOS is based on RHEL version 9.6, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires:
-
x86-64 architecture requires x86-64-v2 ISA
-
ARM64 architecture requires ARMv8.0-A ISA
-
IBM Power architecture requires Power 9 ISA
-
s390x architecture requires z14 ISA
For more information, see Architectures (RHEL documentation).
Important
You are required to use Azure virtual machines that have the premiumIO parameter set to true.
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform.
Tested instance types for Azure
The following Microsoft Azure instance types have been tested with OpenShift Container Platform.
Machine types based on 64-bit x86 architecture
Tested instance types for Azure on 64-bit ARM infrastructures
The following Microsoft Azure ARM64 instance types have been tested with OpenShift Container Platform.
Machine types based on 64-bit ARM architecture
Enabling trusted launch for Azure VMs
You can enable two trusted launch features when installing your cluster on Azure: secure boot and virtualized Trusted Platform Modules.
For more information about the sizes of virtual machines that support the trusted launch features, see Virtual machine sizes.
Important
Trusted launch is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
-
You have created an install-config.yaml file.
-
Edit the install-config.yaml file before deploying your cluster:
-
Enable trusted launch only on control plane nodes by adding the following stanza:
controlPlane:
  platform:
    azure:
      settings:
        securityType: TrustedLaunch
        trustedLaunch:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
-
Enable trusted launch only on compute nodes by adding the following stanza:
compute:
  platform:
    azure:
      settings:
        securityType: TrustedLaunch
        trustedLaunch:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
-
Enable trusted launch on all nodes by adding the following stanza:
platform:
  azure:
    settings:
      securityType: TrustedLaunch
      trustedLaunch:
        uefiSettings:
          secureBoot: Enabled
          virtualizedTrustedPlatformModule: Enabled
-
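After the cluster is deployed, you can spot-check that trusted launch was applied to a machine. The following is a minimal sketch, assuming that the Azure CLI is logged in and that <resource_group> and <vm_name> are placeholders for your cluster resource group and one of its virtual machines; the securityProfile property reports the security type and UEFI settings:
$ az vm show --resource-group <resource_group> --name <vm_name> --query securityProfile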
Enabling confidential VMs
You can enable confidential VMs when installing your cluster. You can enable confidential VMs for compute nodes, control plane nodes, or all nodes.
You can use confidential VMs with the following VM sizes:
-
DCasv5-series
-
DCadsv5-series
-
ECasv5-series
-
ECadsv5-series
-
DCesv5-series
-
DCedsv5-series
-
ECesv5-series
-
ECedsv5-series
-
NCCads_H100_v5
Important
Confidential VMs are currently not supported on 64-bit ARM architectures.
-
You have created an install-config.yaml file.
-
Edit the install-config.yaml file before deploying your cluster:
-
Enable confidential VMs only on control plane nodes by adding the following stanza:
controlPlane:
  platform:
    azure:
      settings:
        securityType: ConfidentialVM
        confidentialVM:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
      osDisk:
        securityProfile:
          securityEncryptionType: VMGuestStateOnly
-
Enable confidential VMs only on compute nodes by adding the following stanza:
compute:
  platform:
    azure:
      settings:
        securityType: ConfidentialVM
        confidentialVM:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
      osDisk:
        securityProfile:
          securityEncryptionType: VMGuestStateOnly
-
Enable confidential VMs on all nodes by adding the following stanza:
platform:
  azure:
    defaultMachinePlatform:
      settings:
        securityType: ConfidentialVM
        confidentialVM:
          uefiSettings:
            secureBoot: Enabled
            virtualizedTrustedPlatformModule: Enabled
      osDisk:
        securityProfile:
          securityEncryptionType: VMGuestStateOnly
-
Configuring a dedicated disk for etcd
You can install your OpenShift Container Platform cluster on Microsoft Azure with a dedicated data disk for etcd. This configuration attaches a separate managed disk to each control plane node and uses it only for etcd data, which can improve cluster performance and stability.
Important
Dedicated disk for etcd is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
-
You have created an install-config.yaml file.
-
To configure a dedicated etcd disk, edit the install-config.yaml file and add the diskSetup and dataDisks parameters to the controlPlane stanza:
# ...
controlPlane:
  architecture: amd64
  hyperthreading: Enabled
  name: master
  platform:
    azure:
      type: Standard_D4s_v5
      dataDisks:
      - nameSuffix: etcddisk
        cachingType: None
        diskSizeGB: 20
        lun: 0
      diskSetup:
      - type: etcd
        etcd:
          platformDiskID: etcddisk
  replicas: 3
# ...
- Specify the same value you defined for platformDiskID.
- Specify None. Other caching requirements are not currently supported.
- Specify a disk size in GB. This value can be any integer greater than 0.
Note
A minimum of 20 GB ensures enough space is available for defragmentation operations.
- Specify a logical unit number (LUN). This can be any integer from 0 through 63 that is not used by another disk.
- Specify etcd. This identifies etcd as the node component type to receive a dedicated disk.
- Specify a name to identify the disk. This value must not exceed 12 characters.
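After installation, one way to confirm that the dedicated disk is attached to a control plane node is to list its block devices from a debug shell. This is a minimal sketch, assuming that <control_plane_node> is a placeholder for one of your control plane node names:
$ oc debug node/<control_plane_node> -- chroot /host lsblk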
Enabling a user-managed DNS
You can install a cluster with a domain name server (DNS) solution that you manage instead of the default cluster-provisioned DNS solution. As a result, you can manage the API and Ingress DNS records in your own system rather than adding the records to the DNS of the cloud. For example, your organization’s security policies might not allow the use of public DNS services such as Microsoft Azure. In such scenarios, you can use your own DNS service to bypass the public DNS service and manage your own DNS for the IP addresses of the API and Ingress services.
If you enable user-managed DNS during installation, the installation program provisions DNS records for the API and Ingress services only within the cluster. To ensure access from outside the cluster, you must provision the DNS records in an external DNS service of your choice for the API and Ingress services after installation.
Important
User-provisioned DNS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
-
You installed the jq package.
-
Before you deploy your cluster, use a text editor to open the install-config.yaml file and add the following stanza:
-
To enable user-managed DNS:
featureSet: CustomNoUpgrade
featureGates: ["AzureClusterHostedDNSInstall=true"]
# ...
platform:
  azure:
    userProvisionedDNS: Enabled
where:
userProvisionedDNS
-
Enables user-provisioned DNS management.
-
For information about provisioning your DNS records for the API server and the Ingress services, see "Provisioning your own DNS records".
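After the installation completes, you can retrieve the values to publish in your external DNS service. For example, the following sketch assumes the default Ingress Controller service, router-default in the openshift-ingress namespace, and prints the external IP address that the wildcard *.apps record must point to:
$ oc -n openshift-ingress get service router-default -o jsonpath='{.status.loadBalancer.ingress[0].ip}'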
Sample customized install-config.yaml file for Azure
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
Important
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
apiVersion: v1
baseDomain: example.com
controlPlane:
hyperthreading: Enabled
name: master
platform:
azure:
encryptionAtHost: true
ultraSSDCapability: Enabled
osDisk:
diskSizeGB: 1024
diskType: Premium_LRS
diskEncryptionSet:
resourceGroup: disk_encryption_set_resource_group
name: disk_encryption_set_name
subscriptionId: secondary_subscription_id
osImage:
publisher: example_publisher_name
offer: example_image_offer
sku: example_offer_sku
version: example_image_version
type: Standard_D8s_v3
replicas: 3
compute:
- hyperthreading: Enabled
name: worker
platform:
azure:
ultraSSDCapability: Enabled
type: Standard_D2s_v3
encryptionAtHost: true
osDisk:
diskSizeGB: 512
diskType: Standard_LRS
diskEncryptionSet:
resourceGroup: disk_encryption_set_resource_group
name: disk_encryption_set_name
subscriptionId: secondary_subscription_id
osImage:
publisher: example_publisher_name
offer: example_image_offer
sku: example_offer_sku
version: example_image_version
zones:
- "1"
- "2"
- "3"
replicas: 5
metadata:
name: test-cluster
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
machineNetwork:
- cidr: 10.0.0.0/16
networkType: OVNKubernetes
serviceNetwork:
- 172.30.0.0/16
platform:
azure:
defaultMachinePlatform:
osImage:
publisher: example_publisher_name
offer: example_image_offer
sku: example_offer_sku
version: example_image_version
ultraSSDCapability: Enabled
baseDomainResourceGroupName: resource_group
region: centralus
resourceGroupName: existing_resource_group
outboundType: Loadbalancer
cloudName: AzurePublicCloud
pullSecret: '{"auths": ...}'
fips: false
sshKey: ssh-ed25519 AAAA...
- Required. The installation program prompts you for this value.
- If you do not provide these parameters and values, the installation program provides the default value.
- The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- Whether to enable or disable simultaneous multithreading, or hyperthreading. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled. If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines.
Important
If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Use larger virtual machine types, such as Standard_D8s_v3, for your machines if you disable simultaneous multithreading.
- You can specify the size of the disk to use in GB. The minimum recommendation for control plane nodes is 1024 GB.
- Specify a list of zones to deploy your machines to. For high availability, specify at least two zones.
- The cluster network plugin to install. The default value OVNKubernetes is the only supported value.
- Optional: A custom Red Hat Enterprise Linux CoreOS (RHCOS) image that should be used to boot control plane and compute machines. The publisher, offer, sku, and version parameters under platform.azure.defaultMachinePlatform.osImage apply to both control plane and compute machines. If the parameters under controlPlane.platform.azure.osImage or compute.platform.azure.osImage are set, they override the platform.azure.defaultMachinePlatform.osImage parameters.
- Specify the name of the resource group that contains the DNS zone for your base domain.
- Specify the name of an already existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important
To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode.
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 validation on only the x86_64, ppc64le, and s390x architectures.
- You can optionally provide the sshKey value that you use to access the machines in your cluster.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
Configuring the cluster-wide proxy during installation
To enable internet access in environments that deny direct connections, configure a cluster-wide proxy in the install-config.yaml file. This configuration ensures that the new OpenShift Container Platform cluster routes traffic through the specified HTTP or HTTPS proxy.
-
You have an existing install-config.yaml file.
-
You have reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud, Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
-
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: https://<username>:<pswd>@<ip>:<port>
  noProxy: example.com
additionalTrustBundle: |
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle>
# ...
where:
proxy.httpProxy
-
Specifies a proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
proxy.httpsProxy
-
Specifies a proxy URL to use for creating HTTPS connections outside the cluster.
proxy.noProxy
-
Specifies a comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations.
additionalTrustBundle
-
If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
additionalTrustBundlePolicy
-
Specifies the policy that determines the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when an http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly. Optional parameter.
Note
The installation program does not support the proxy readinessEndpoints field.
Note
If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
-
Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Note
Only the Proxy object named cluster is supported, and no additional proxies can be created.
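After the cluster is installed, you can review the resulting configuration of the Proxy object named cluster with a read-only command:
$ oc get proxy cluster -o yaml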
-
For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs.
Network configuration phases
There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration.
- Phase 1
-
You can customize the following network-related fields in the install-config.yaml file before you create the manifest files:
-
networking.networkType
-
networking.clusterNetwork
-
networking.serviceNetwork
-
networking.machineNetwork
-
nodeNetworking
For more information, see "Installation configuration parameters".
Note
Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located.
Important
The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster.
-
- Phase 2
-
After creating the manifest files by running openshift-install create manifests, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration.
During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2.
Specifying advanced network configuration
You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment.
You can specify advanced network configuration only before you install the cluster.
Important
Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.
-
You have created the install-config.yaml file and completed any modifications to it.
-
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory>
<installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster.
-
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
-
Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example:
Enable IPsec for the OVN-Kubernetes network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      ipsecConfig:
        mode: Full
-
Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files.
-
Remove the Kubernetes manifest files that define the control plane machines and compute MachineSets:
$ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml
Because you create and manage these resources yourself, you do not have to initialize them.
-
You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment.
-
Cluster Network Operator configuration
To manage cluster networking, configure the Cluster Network Operator (CNO) Network custom resource (CR) named cluster so that the cluster uses the correct IP ranges and network plugin settings for reliable pod and service connectivity. Some settings and fields are inherited at installation time or from the default Network.type plugin, OVN-Kubernetes.
The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group:
clusterNetwork
-
IP address pools from which pod IP addresses are allocated.
serviceNetwork
-
IP address pool for services.
defaultNetwork.type
-
Cluster network plugin.
OVNKubernetes is the only supported plugin during installation.
You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.
Cluster Network Operator configuration object
The fields for the Cluster Network Operator (CNO) are described in the following table:
| Field | Type | Description |
|---|---|---|
|
|
The name of the CNO object. This name is always |
|
|
A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example:
|
|
|
A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. For example:
You can customize this field only in the |
|
|
Configures the network plugin for the cluster network. |
|
|
This setting enables a dynamic routing provider. The FRR routing capability provider is required for the route advertisement feature. The only supported value is
|
Important
For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes.
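For example, the following minimal install-config.yaml sketch keeps the hostPrefix value consistent across two cluster network blocks; the CIDR values are illustrative only:
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  - cidr: 10.132.0.0/14
    hostPrefix: 23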
defaultNetwork object configuration
The values for the defaultNetwork object are defined in the following table:
| Field | Type | Description |
|---|---|---|
|
|
Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. |
|
|
This object is only valid for the OVN-Kubernetes network plugin. |
Configuration for the OVN-Kubernetes network plugin
The following table describes the configuration fields for the OVN-Kubernetes network plugin:
| Field | Type | Description |
|---|---|---|
|
|
The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to |
|
|
The port to use for all Geneve packets. The default value is |
|
|
Specify a configuration object for customizing the IPsec configuration. |
|
|
Specifies a configuration object for IPv4 settings. |
|
|
Specifies a configuration object for IPv6 settings. |
|
|
Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. |
|
|
Specifies whether to advertise cluster network routes. The default value is
|
|
|
Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Valid values are Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. |
| Field | Type | Description |
|---|---|---|
|
string |
If your existing network infrastructure overlaps with the The default value is |
|
string |
If your existing network infrastructure overlaps with the The default value is |
| Field | Type | Description |
|---|---|---|
|
string |
If your existing network infrastructure overlaps with the The default value is |
|
string |
If your existing network infrastructure overlaps with the The default value is |
| Field | Type | Description |
|---|---|---|
|
integer |
The maximum number of messages to generate every second per node. The default value is |
|
integer |
The maximum size for the audit log in bytes. The default value is |
|
integer |
The maximum number of log files that are retained. |
|
string |
One of the following additional audit log targets:
|
|
string |
The syslog facility, such as |
| Field | Type | Description |
|---|---|---|
|
|
Set this field to This field has an interaction with the Open vSwitch hardware offloading feature.
If you set this field to |
|
|
You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the Note The default value of |
|
|
Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. |
|
|
Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. |
| Field | Type | Description |
|---|---|---|
|
|
The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is Important For OpenShift Container Platform 4.17 and later versions, clusters use |
| Field | Type | Description |
|---|---|---|
|
|
The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is Important For OpenShift Container Platform 4.17 and later versions, clusters use |
| Field | Type | Description |
|---|---|---|
|
|
Specifies the behavior of the IPsec implementation. Must be one of the following values:
|
defaultNetwork:
type: OVNKubernetes
ovnKubernetesConfig:
mtu: 1400
genevePort: 6081
ipsecConfig:
mode: Full
Configuring hybrid networking with OVN-Kubernetes
You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations.
Note
This configuration is necessary to run both Linux and Windows nodes in the same cluster.
-
You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.
-
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory>
where:
<installation_directory>
-
Specifies the name of the directory that contains the install-config.yaml file for your cluster.
-
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
$ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
EOF
where:
<installation_directory>
-
Specifies the directory name that contains the manifests/ directory for your cluster.
-
Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example:
Specify a hybrid networking configuration
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork:
        - cidr: 10.132.0.0/14
          hostPrefix: 23
        hybridOverlayVXLANPort: 9898
- Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR.
- Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 6081 port. For more information on this requirement, see Pod-to-pod connectivity between hosts is broken in the Microsoft documentation.
Note
Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port.
-
Save the cluster-network-03-config.yml file and quit the text editor.
-
Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.
Note
For more information about using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads.
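After the cluster is installed, you can confirm that the hybrid overlay settings were applied by inspecting the Cluster Network Operator configuration. This is a read-only check:
$ oc get networks.operator.openshift.io cluster -o yaml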
-
For more details about Accelerated Networking, see Accelerated Networking for Microsoft Azure VMs.
Configuring user-defined tags for Azure
In OpenShift Container Platform, you can use tags for grouping resources and for managing resource access and cost. Tags are applied only to the resources created by the OpenShift Container Platform installation program and its core Operators, such as the Machine API Operator, the Cluster Ingress Operator, and the Cluster Image Registry Operator. OpenShift Container Platform consists of the following types of tags:
- OpenShift Container Platform tags
-
By default, the OpenShift Container Platform installation program attaches the OpenShift Container Platform tags to the Azure resources. These OpenShift Container Platform tags are not accessible to the users. The format of the OpenShift Container Platform tags is kubernetes.io_cluster.<cluster_id>: owned, where <cluster_id> is the value of .status.infrastructureName in the infrastructure resource for the cluster.
-
User-defined tags are manually created in the install-config.yaml file during installation. When creating the user-defined tags, you must consider the following points:
-
User-defined tags on Azure resources can only be defined during OpenShift Container Platform cluster creation, and cannot be modified after the cluster is created.
-
Support for user-defined tags is available only for the resources created in the Azure Public Cloud.
-
User-defined tags are not supported for the OpenShift Container Platform clusters upgraded to OpenShift Container Platform 4.19.
-
Creating user-defined tags for Azure
To define the list of user-defined tags, edit the .platform.azure.userTags field in the install-config.yaml file.
-
Specify the .platform.azure.userTags field as shown in the following install-config.yaml file:
apiVersion: v1
baseDomain: example.com
#...
platform:
  azure:
    userTags:
      <key>: <value>
#...
- Defines the additional keys and values that the installation program adds as tags to all Azure resources that it creates.
- Specify the key and value. You can configure a maximum of 10 tags for the resource group and resources. Tag keys are case-insensitive. For more information on requirements for specifying user-defined tags, see the "User-defined tags requirements" section.
Example install-config.yaml file
apiVersion: v1
baseDomain: example.com
#...
platform:
  azure:
    userTags:
      createdBy: user
      environment: dev
#...
-
Access the list of created user-defined tags for the Azure resources by running the following command:
$ oc get infrastructures.config.openshift.io cluster -o=jsonpath-as-json='{.status.platformStatus.azure.resourceTags}'
Example output
[
    [
        {
            "key": "createdBy",
            "value": "user"
        },
        {
            "key": "environment",
            "value": "dev"
        }
    ]
]
User-defined tags requirements
The user-defined tags have the following requirements:
-
A tag key must have a maximum of 128 characters.
-
A tag key must begin with a letter.
-
A tag key must end with a letter, number or underscore.
-
A tag key must contain only letters, numbers, underscores (_), periods (.), and hyphens (-).
-
A tag key must not be specified as name.
-
A tag key must not have the following prefixes:
-
kubernetes.io
-
openshift.io
-
microsoft
-
azure
-
windows
-
-
A tag value must have a maximum of 256 characters.
For more information about Azure tags, see Azure user-defined tags.
Alternatives to storing administrator-level secrets in the kube-system project
By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual, you must use one of the following alternatives:
-
To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials.
-
To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an Azure cluster to use short-term credentials.
Manually creating long-term credentials
The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace.
-
If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files.
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \
  --to=<path_to_directory_for_credentials_requests>
- The --included parameter includes only the manifests that your specific cluster configuration requires.
- Specify the location of the install-config.yaml file.
- Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...
-
Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.
Sample CredentialsRequest object with secrets
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>
  ...
Sample Secret object
apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  azure_subscription_id: <base64_encoded_azure_subscription_id>
  azure_client_id: <base64_encoded_azure_client_id>
  azure_client_secret: <base64_encoded_azure_client_secret>
  azure_tenant_id: <base64_encoded_azure_tenant_id>
  azure_resource_prefix: <base64_encoded_azure_resource_prefix>
  azure_resourcegroup: <base64_encoded_azure_resourcegroup>
  azure_region: <base64_encoded_azure_region>
Important
Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.
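One way to check that state, assuming the oc CLI is logged in to the cluster, is to inspect the Upgradeable condition on the cloud-credential cluster Operator:
$ oc get clusteroperator cloud-credential -o jsonpath='{.status.conditions[?(@.type=="Upgradeable")].status}'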
Configuring an Azure cluster to use short-term credentials
To install a cluster that uses Microsoft Entra Workload ID, you must configure the Cloud Credential Operator utility and create the required Azure resources for your cluster.
Configuring the Cloud Credential Operator utility
To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility (ccoctl) binary.
Note
The ccoctl utility is a Linux binary that must run in a Linux environment.
-
You have access to an OpenShift Container Platform account with cluster administrator access.
-
You have installed the OpenShift CLI (oc).
-
You have created a global Azure account for the ccoctl utility to use with the following permissions:
-
Microsoft.Resources/subscriptions/resourceGroups/read -
Microsoft.Resources/subscriptions/resourceGroups/write -
Microsoft.Resources/subscriptions/resourceGroups/delete -
Microsoft.Authorization/roleAssignments/read -
Microsoft.Authorization/roleAssignments/delete -
Microsoft.Authorization/roleAssignments/write -
Microsoft.Authorization/roleDefinitions/read -
Microsoft.Authorization/roleDefinitions/write -
Microsoft.Authorization/roleDefinitions/delete -
Microsoft.Storage/storageAccounts/listkeys/action -
Microsoft.Storage/storageAccounts/delete -
Microsoft.Storage/storageAccounts/read -
Microsoft.Storage/storageAccounts/write -
Microsoft.Storage/storageAccounts/blobServices/containers/delete -
Microsoft.Storage/storageAccounts/blobServices/containers/read -
Microsoft.Storage/storageAccounts/blobServices/containers/write -
Microsoft.ManagedIdentity/userAssignedIdentities/delete -
Microsoft.ManagedIdentity/userAssignedIdentities/read -
Microsoft.ManagedIdentity/userAssignedIdentities/write -
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/read -
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/write -
Microsoft.ManagedIdentity/userAssignedIdentities/federatedIdentityCredentials/delete -
Microsoft.Storage/register/action -
Microsoft.ManagedIdentity/register/action
-
-
Set a variable for the OpenShift Container Platform release image by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}') -
Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:
$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
Note
Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command:
$ oc image extract $CCO_IMAGE \
  --file="/usr/bin/ccoctl.<rhel_version>" \
  -a ~/.pull-secret
- For <rhel_version>, specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid:
-
rhel8: Specify this value for hosts that use RHEL 8.
-
rhel9: Specify this value for hosts that use RHEL 9.
-
Note
The ccoctl binary is created in the directory from where you executed the command, not in /usr/bin/. You must rename or move the ccoctl.<rhel_version> binary to ccoctl.
-
Change the permissions to make ccoctl executable by running the following command:
$ chmod 775 ccoctl
-
To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example:
$ ./ccoctl
Example output
OpenShift credentials provisioning tool

Usage:
  ccoctl [command]

Available Commands:
  aws          Manage credentials objects for AWS cloud
  azure        Manage credentials objects for Azure
  gcp          Manage credentials objects for Google cloud
  help         Help about any command
  ibmcloud     Manage credentials objects for IBM Cloud
  nutanix      Manage credentials objects for Nutanix

Flags:
  -h, --help   help for ccoctl

Use "ccoctl [command] --help" for more information about a command.
Creating Azure resources with the Cloud Credential Operator utility
You can use the ccoctl azure create-all command to automate the creation of Azure resources.
Note
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.
You must have:
-
Extracted and prepared the ccoctl binary.
-
Access to your Microsoft Azure account by using the Azure CLI.
-
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \
  --to=<path_to_directory_for_credentials_requests>
- The --included parameter includes only the manifests that your specific cluster configuration requires.
- Specify the location of the install-config.yaml file.
- Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
Note
This command might take a few moments to run.
-
To enable the ccoctl utility to detect your Azure credentials automatically, log in to the Azure CLI by running the following command:
$ az login
Use the
ccoctltool to process allCredentialsRequestobjects by running the following command:$ ccoctl azure create-all \ --name=<azure_infra_name> \ --output-dir=<ccoctl_output_dir> \ --region=<azure_region> \ --subscription-id=<azure_subscription_id> \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ --dnszone-resource-group-name=<azure_dns_zone_resource_group_name> \ --tenant-id=<azure_tenant_id> \ --network-resource-group-name <azure_resource_group> \ --preserve-existing-roles- Specify the user-defined name for all created Azure resources used for tracking.
- Optional: Specify the directory in which you want the
ccoctlutility to create objects. By default, the utility creates objects in the directory in which the commands are run. - Specify the Azure region in which cloud resources will be created.
- Specify the Azure subscription ID to use.
- Specify the directory containing the files for the component
CredentialsRequestobjects. - Specify the name of the resource group containing the cluster’s base domain Azure DNS zone.
- Specify the Azure tenant ID to use.
- Optional: Specify the virtual network resource group if it is different from the cluster resource group.
- Optional: Specify this flag to ensure that any custom role assignments you define on managed identities are not removed during OpenShift Container Platform updates.
Note
If your cluster uses Technology Preview features that are enabled by the
TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. To see additional optional parameters and explanations of how to use them, run the
azure create-all --helpcommand.
-
To verify that the OpenShift Container Platform secrets are created, list the files in the
<path_to_ccoctl_output_dir>/manifestsdirectory:$ ls <path_to_ccoctl_output_dir>/manifestsExample outputazure-ad-pod-identity-webhook-config.yaml cluster-authentication-02-config.yaml openshift-cloud-controller-manager-azure-cloud-credentials-credentials.yaml openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml openshift-cluster-api-capz-manager-bootstrap-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-disk-credentials-credentials.yaml openshift-cluster-csi-drivers-azure-file-credentials-credentials.yaml openshift-image-registry-installer-cloud-credentials-credentials.yaml openshift-ingress-operator-cloud-credentials-credentials.yaml openshift-machine-api-azure-cloud-credentials-credentials.yamlYou can verify that the Microsoft Entra ID service accounts are created by querying Azure. For more information, refer to Azure documentation on listing Entra ID service accounts.
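One way to perform that check from the Azure CLI is sketched below; it assumes that the display names of the created identities begin with the value that you passed to the --name argument, so adjust the filter to match your naming:
$ az ad sp list --display-name <azure_infra_name> --query "[].displayName" -o table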
Incorporating the Cloud Credential Operator utility manifests
To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility (ccoctl) created to the correct directories for the installation program.
-
You have configured an account with the cloud platform that hosts your cluster.
-
You have configured the Cloud Credential Operator utility (
ccoctl). -
You have created the cloud provider resources that are required for your cluster with the
ccoctlutility.
-
If you did not set the
credentialsModeparameter in theinstall-config.yamlconfiguration file toManual, modify the value as shown:Sample configuration file snippetapiVersion: v1 baseDomain: example.com credentialsMode: Manual # ... -
If you used the
ccoctlutility to create a new Azure resource group instead of using an existing resource group, modify theresourceGroupNameparameter in theinstall-config.yamlas shown:Sample configuration file snippetapiVersion: v1 baseDomain: example.com # ... platform: azure: resourceGroupName: <azure_infra_name> # ...- This value must match the user-defined name for Azure resources that was specified with the
--nameargument of theccoctl azure create-allcommand.
-
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>where
<installation_directory>is the directory in which the installation program creates files. -
Copy the manifests that the
ccoctlutility generated to themanifestsdirectory that the installation program created by running the following command:$ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/ -
Copy the
tlsdirectory that contains the private key to the installation directory:$ cp -a /<path_to_ccoctl_output_dir>/tls .
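Optionally, you can confirm that the copied files are in place before you deploy. The following sketch assumes that you ran the previous commands from the installation directory:
$ ls ./manifests/ | grep -- '-credentials.yaml'
$ ls ./tls/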
Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
Important
You can run the create cluster command of the installation program only once, during initial installation.
-
You have configured an account with the cloud platform that hosts your cluster.
-
You have the OpenShift Container Platform installation program and the pull secret for your cluster.
-
You have an Azure subscription ID and tenant ID.
-
In the directory that contains the installation program, initialize the cluster deployment by running the following command:
$ ./openshift-install create cluster --dir <installation_directory> \ --log-level=info- For
<installation_directory>, specify the location of your customized./install-config.yamlfile. - To view different installation details, specify
warn,debug, orerrorinstead ofinfo.
When the cluster deployment completes successfully:
-
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the
kubeadminuser. -
Credential information also outputs to
<installation_directory>/.openshift_install.log.
Important
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
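If your terminal session is interrupted before the installer finishes, you can re-attach to the completion wait and print the access details again. The following sketch assumes the same installation directory:
$ ./openshift-install wait-for install-complete --dir <installation_directory> --log-level=info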
Important
-
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending
node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates; a minimal sketch of approving pending CSRs follows this note. See the documentation for Recovering from expired control plane certificates for more information.
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
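The following is a minimal sketch of manually approving pending CSRs with the oc client. The first command lists the requests; the second approves all listed requests, and the exact CSR names vary per cluster:
$ oc get csr
$ oc get csr -o name | xargs oc adm certificate approve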
Provisioning your own DNS records
Use the IP address of the API server to provision your own DNS record for the api.<cluster_name>.<base_domain>. hostname, and use the IP address of the Ingress service to provision your own DNS record for the *.apps.<cluster_name>.<base_domain>. hostname, where <cluster_name> is your cluster name and <base_domain> is your base cluster domain.
Important
User-provisioned DNS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
-
You have installed the Azure CLI client
(az).
-
Add the
userProvisionedDNSparameter to theinstall-config.yamlfile and enable the parameter. For more information, see "Enabling a user-managed DNS". -
Install your cluster.
-
If you are installing a private cluster, set the
lb_namevariable by running the following command:$ lb_name="${infra_id}-internal"-
Set the
frontendipconfig_idvariable by running the following command:$ frontendipconfig_id=$(az network lb show -n ${lb_name} -g ${cluster_resource_group_name} -ojson | jq -r ".loadBalancingRules[] | select(.frontendPort == 6443) | .frontendIPConfiguration.id") -
Set the
frontendipconfig_namevariable by running the following command:$ frontendipconfig_name=${frontendipconfig_id##*/} -
To retrieve the IP address of the API service, run the following command:
$ az network lb frontend-ip show -n ${frontendipconfig_name} --lb-name ${lb_name} -g ${cluster_resource_group_name} --query "privateIPAddress" -otsv
-
-
If you are installing a public cluster, set the
lb_namevariable by running the following command:$ lb_name="${infra_id}"-
Set the
frontendipconfig_idvariable by running the following command:$ frontendipconfig_id=$(az network lb show -n ${lb_name} -g ${cluster_resource_group_name} -ojson | jq -r ".loadBalancingRules[] | select(.frontendPort == 6443) | .frontendIPConfiguration.id") -
Set the
frontendipconfig_namevariable by running the following command:$ frontendipconfig_name=${frontendipconfig_id##*/} -
Set the
frontendpublicip_idvariable by running the following command:$ frontendpublicip_id=$(az network lb frontend-ip show -n ${frontendipconfig_name} --lb-name ${lb_name} -g ${cluster_resource_group_name} --query "publicIPAddress.id" -otsv) -
To retrieve the IP address of the API service, run the following command:
$ az network public-ip show --ids ${frontendpublicip_id} --query 'ipAddress' -otsv
-
-
Use the IP address and your cluster name and base cluster domain to configure your own DNS record with the
api.<cluster_name>.<base_domain>. hostname, for example by creating an A record in an Azure DNS zone as shown in the sketch after this procedure.
If you are installing a private cluster, set the
lb_namevariable by running the following command:$ lb_name="${infra_id}-internal"-
Set the
frontendipconfig_idvariable by running the following command:$ frontendipconfig_id=$(az network lb show -n ${lb_name} -g ${cluster_resource_group_name} -ojson | jq -r ".loadBalancingRules[] | select(.frontendPort == 443) | .frontendIPConfiguration.id") -
Set the
frontendipconfig_namevariable by running the following command:$ frontendipconfig_name=${frontendipconfig_id##*/} -
To retrieve the IP address of the Ingress service, run the following command:
$ az network lb frontend-ip show -n ${frontendipconfig_name} --lb-name ${lb_name} -g ${cluster_resource_group_name} --query "privateIPAddress" -otsv
-
-
If you are installing a public cluster, set the
lb_namevariable by running the following command:$ lb_name="${infra_id}"-
Set the
frontendipconfig_idvariable by running the following command:$ frontendipconfig_id=$(az network lb show -n ${lb_name} -g ${cluster_resource_group_name} -ojson | jq -r ".loadBalancingRules[] | select(.frontendPort == 443) | .frontendIPConfiguration.id") -
Set the
frontendipconfig_namevariable by running the following command:$ frontendipconfig_name=${frontendipconfig_id##*/} -
Set the
frontendpublicip_idvariable by running the following command:$ frontendpublicip_id=$(az network lb frontend-ip show -n ${frontendipconfig_name} --lb-name ${lb_name} -g ${cluster_resource_group_name} --query "publicIPAddress.id" -otsv) -
To retrieve the IP address of the Ingress service, run the following command:
$ az network public-ip show --ids ${frontendpublicip_id} --query 'ipAddress' -otsv
-
-
Use the IP address and your cluster name and base cluster domain to configure your own DNS record with the
*.apps.<cluster_name>.<base_domain>.hostname.
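For example, if your base domain is delegated to an Azure DNS zone, you can create both records with the Azure CLI as sketched below. The ${base_domain_resource_group}, ${api_ip}, and ${ingress_ip} variables are hypothetical placeholders for your DNS zone resource group and the IP addresses that you retrieved in the previous steps; for a private cluster, use a private DNS zone and the az network private-dns record-set a add-record command instead:
$ az network dns record-set a add-record -g ${base_domain_resource_group} -z <base_domain> -n api.<cluster_name> -a ${api_ip}
$ az network dns record-set a add-record -g ${base_domain_resource_group} -z <base_domain> -n "*.apps.<cluster_name>" -a ${ingress_ip}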
Logging in to the cluster by using the CLI
To log in to your cluster as the default system user, export the kubeconfig file. This configuration enables the CLI to authenticate and connect to the specific API server created during OpenShift Container Platform installation.
The kubeconfig file is specific to a cluster and is created during OpenShift Container Platform installation.
-
You deployed an OpenShift Container Platform cluster.
-
You installed the OpenShift CLI (
oc).
-
Export the
kubeadmincredentials by running the following command:$ export KUBECONFIG=<installation_directory>/auth/kubeconfigwhere:
<installation_directory>-
Specifies the path to the directory that stores the installation files.
-
Verify you can run
occommands successfully using the exported configuration by running the following command:$ oc whoamiExample outputsystem:admin
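As an alternative to exporting the variable for the whole session, you can pass the kubeconfig path on individual commands, for example:
$ oc get nodes --kubeconfig <installation_directory>/auth/kubeconfig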
Next steps
-
If necessary, you can opt out of remote health reporting.