Installing a cluster on Azure Stack Hub with network customizations
In OpenShift Container Platform version 4.19, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Azure Stack Hub. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations.
Note
While you can select azure when using the installation program to deploy a cluster using installer-provisioned infrastructure, this option is only supported for the Azure Public Cloud.
Prerequisites
-
You reviewed details about the OpenShift Container Platform installation and update processes.
-
You read the documentation on selecting a cluster installation method and preparing it for users.
-
You have installed Azure Stack Hub version 2008 or later.
-
You configured an Azure Stack Hub account to host the cluster.
-
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
-
You verified that you have approximately 16 GB of local disk space. Installing the cluster requires that you download the RHCOS virtual hard drive (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Decompressing the VHD files requires this amount of local disk space.
Uploading the RHCOS cluster image
You must download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment.
-
Generate the Ignition config files for your cluster.
-
Obtain the RHCOS VHD cluster image:
-
Export the URL of the RHCOS VHD to an environment variable.
$ export COMPRESSED_VHD_URL=$(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location')
-
Download the compressed RHCOS VHD file locally.
$ curl -O -L ${COMPRESSED_VHD_URL}
-
Decompress the VHD file.
Note
The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. The VHD file can be deleted once you have uploaded it.
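For example, assuming the compressed image keeps the file name from the download step (the exact file name is a placeholder here), you can decompress it with gzip:
$ gzip -d rhcos-<version>-azurestack.x86_64.vhd.gz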
-
Upload the local VHD to the Azure Stack Hub environment, making sure that the blob is publicly available. For example, you can upload the VHD to a blob by using the az CLI or the web portal.
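A minimal sketch with the az CLI, assuming you are already authenticated against your Azure Stack Hub environment; the storage account, container, and file names are placeholders:
$ az storage container create --account-name <storage_account> --name vhd --public-access blob
$ az storage blob upload --account-name <storage_account> --container-name vhd --type page --name rhcos.vhd --file ./rhcos-<version>-azurestack.x86_64.vhd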
Manually creating the installation configuration file
To customize your OpenShift Container Platform deployment and meet specific network requirements, manually create the installation configuration file. This ensures that the installation program uses your tailored settings rather than default values during the setup process.
-
You have an SSH public key on your local machine for use with the installation program. You can use the key for SSH authentication onto your cluster nodes for debugging and disaster recovery.
-
You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
-
Create an installation directory to store your required installation assets in:
$ mkdir <installation_directory>
Important
You must create a directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
-
Customize the provided sample install-config.yaml file template and save the file in the <installation_directory>.
Note
You must name this configuration file install-config.yaml.
Make the following modifications:
-
Specify the required installation parameters.
-
Update the platform.azure section to specify the parameters that are specific to Azure Stack Hub.
-
Optional: Update one or more of the default configuration parameters to customize the installation.
For more information about the parameters, see "Installation configuration parameters".
-
Back up the install-config.yaml file so that you can use it to install many clusters.
Important
Back up the install-config.yaml file now, because the installation process consumes the file in the next step.
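For example, you might copy the file to a separate name before you run the installation program; the destination path here is only illustrative:
$ cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.backup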
Sample customized install-config.yaml file for Azure Stack Hub
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
Important
This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually.
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
controlPlane:
  name: master
  platform:
    azure:
      osDisk:
        diskSizeGB: 1024
        diskType: premium_LRS
  replicas: 3
compute:
- name: worker
  platform:
    azure:
      osDisk:
        diskSizeGB: 512
        diskType: premium_LRS
  replicas: 3
metadata:
  name: test-cluster
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  machineNetwork:
  - cidr: 10.0.0.0/16
  networkType: OVNKubernetes
  serviceNetwork:
  - 172.30.0.0/16
platform:
  azure:
    armEndpoint: azurestack_arm_endpoint
    baseDomainResourceGroupName: resource_group
    region: azure_stack_local_region
    resourceGroupName: existing_resource_group
    outboundType: Loadbalancer
    cloudName: AzureStackCloud
    clusterOSImage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd
pullSecret: '{"auths": ...}'
fips: false
sshKey: ssh-ed25519 AAAA...
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
- Required.
- If you do not provide these parameters and values, the installation program provides the default value.
- The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used.
- You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB.
- The name of the cluster.
- The cluster network plugin to install. The default value OVNKubernetes is the only supported value.
- The Azure Resource Manager endpoint that your Azure Stack Hub operator provides.
- The name of the resource group that contains the DNS zone for your base domain.
- The name of your Azure Stack Hub local region.
- The name of an existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster.
- The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD.
- The pull secret required to authenticate your cluster.
- Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead.
Important
When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures.
- You can optionally provide the sshKey value that you use to access the machines in your cluster.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
- If the Azure Stack Hub environment is using an internal Certificate Authority (CA), adding the CA certificate is required.
Manually managing cloud credentials
The Cloud Credential Operator (CCO) only supports your cloud provider in manual mode. As a result, you must specify the identity and access management (IAM) secrets for your cloud provider.
-
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files.
-
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
-
Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \
  --to=<path_to_directory_for_credentials_requests>
- The --included parameter includes only the manifests that your specific cluster configuration requires.
- Specify the location of the install-config.yaml file.
- Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...
-
Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.
Sample CredentialsRequest object with secrets
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AzureProviderSpec
    roleBindings:
    - role: Contributor
  ...
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>
  ...
Sample Secret object
apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  azure_subscription_id: <base64_encoded_azure_subscription_id>
  azure_client_id: <base64_encoded_azure_client_id>
  azure_client_secret: <base64_encoded_azure_client_secret>
  azure_tenant_id: <base64_encoded_azure_tenant_id>
  azure_resource_prefix: <base64_encoded_azure_resource_prefix>
  azure_resourcegroup: <base64_encoded_azure_resourcegroup>
  azure_region: <base64_encoded_azure_region>
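The values in the Secret data stanza must be base64 encoded. As a sketch, assuming you have the plain-text values available, you can encode each one on Linux with the base64 utility, for example:
$ echo -n '<azure_subscription_id>' | base64 -w0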
Important
Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.
Configuring the cluster to use an internal CA
If the Azure Stack Hub environment is using an internal Certificate Authority (CA), update the cluster-proxy-01-config.yaml file to configure the cluster to use the internal CA.
-
Create the install-config.yaml file and specify the certificate trust bundle in .pem format.
-
Create the cluster manifests.
-
From the directory in which the installation program creates files, go to the manifests directory.
-
Add user-ca-bundle to the spec.trustedCA.name field.
Example cluster-proxy-01-config.yaml file
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  creationTimestamp: null
  name: cluster
spec:
  trustedCA:
    name: user-ca-bundle
status: {}
-
Optional: Back up the manifests/cluster-proxy-01-config.yaml file. The installation program consumes the manifests/ directory when you deploy the cluster.
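Before you continue, you can optionally confirm that the manifest contains the expected trust bundle reference, for example with a quick grep; the path assumes the default manifests location:
$ grep -A 2 'trustedCA' <installation_directory>/manifests/cluster-proxy-01-config.yaml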
Network configuration phases
There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration.
- Phase 1
-
You can customize the following network-related fields in the install-config.yaml file before you create the manifest files:
- networking.networkType
- networking.clusterNetwork
- networking.serviceNetwork
- networking.machineNetwork
- nodeNetworking
For more information, see "Installation configuration parameters".
Note
Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) range where the preferred subnet is located.
Important
The CIDR range 172.17.0.0/16 is reserved by libvirt. You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster.
- Phase 2
-
After creating the manifest files by running openshift-install create manifests, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration.
During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2.
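As an illustration of the phase 1 fields, a networking stanza in the install-config.yaml file might look like the following; the values repeat the sample file shown earlier in this section:
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16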
Specifying advanced network configuration
You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment.
You can specify advanced network configuration only before you install the cluster.
Important
Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.
-
You have created the install-config.yaml file and completed any modifications to it.
-
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory>
where <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster.
-
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
-
Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example:
Enable IPsec for the OVN-Kubernetes network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      ipsecConfig:
        mode: Full
-
Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files.
-
Remove the Kubernetes manifest files that define the control plane machines and compute MachineSets:
$ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml
Because you create and manage these resources yourself, you do not have to initialize them.
-
You can preserve the MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment.
Cluster Network Operator configuration
To manage cluster networking, configure the Cluster Network Operator (CNO) Network custom resource (CR) named cluster so that the cluster uses the correct IP ranges and network plugin settings for reliable pod and service connectivity. Some settings and fields are inherited at installation time or come from the default Network.type plugin, OVN-Kubernetes.
The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group:
clusterNetwork
IP address pools from which pod IP addresses are allocated.
serviceNetwork
IP address pool for services.
defaultNetwork.type
Cluster network plugin. OVNKubernetes is the only supported plugin during installation.
You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.
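As an illustrative, minimal Network operator CR, the inherited fields sit directly under spec; the values here mirror the sample install-config.yaml shown earlier:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OVNKubernetes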
Cluster Network Operator configuration object
The fields for the Cluster Network Operator (CNO) are described in the following table:
| Field | Type | Description |
|---|---|---|
| metadata.name | string | The name of the CNO object. This name is always cluster. |
| spec.clusterNetwork | array | A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. |
| spec.serviceNetwork | array | A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. |
| spec.defaultNetwork | object | Configures the network plugin for the cluster network. |
| spec.additionalRoutingCapabilities | object | This setting enables a dynamic routing provider. The FRR routing capability provider is required for the route advertisement feature. The only supported value is FRR. |
Important
For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes.
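For example, with two cluster network blocks defined, both entries would carry the same hostPrefix; the CIDR values here are illustrative only:
clusterNetwork:
- cidr: 10.128.0.0/14
  hostPrefix: 23
- cidr: 10.64.0.0/14
  hostPrefix: 23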
defaultNetwork object configuration
The values for the defaultNetwork object are defined in the following table:
| Field | Type | Description |
|---|---|---|
| type | string | The cluster network plugin. Note: OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. |
| ovnKubernetesConfig | object | This object is only valid for the OVN-Kubernetes network plugin. |
Configuration for the OVN-Kubernetes network plugin
The following table describes the configuration fields for the OVN-Kubernetes network plugin:
| Field | Type | Description |
|---|---|---|
| mtu | integer | The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. |
| genevePort | integer | The port to use for all Geneve packets. The default value is 6081. |
| ipsecConfig | object | Specify a configuration object for customizing the IPsec configuration. |
| ipv4 | object | Specifies a configuration object for IPv4 settings. |
| ipv6 | object | Specifies a configuration object for IPv6 settings. |
| policyAuditConfig | object | Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. |
| routeAdvertisements | string | Specifies whether to advertise cluster network routes. The default value is Disabled. |
| gatewayConfig | object | Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note: While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. |
ovnKubernetesConfig.ipv4 object
| Field | Type | Description |
|---|---|---|
| internalTransitSwitchSubnet | string | If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The default value is 100.88.0.0/16. |
| internalJoinSubnet | string | If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The default value is 100.64.0.0/16. |
ovnKubernetesConfig.ipv6 object
| Field | Type | Description |
|---|---|---|
| internalTransitSwitchSubnet | string | If your existing network infrastructure overlaps with the fd97::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The default value is fd97::/64. |
| internalJoinSubnet | string | If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The default value is fd98::/64. |
policyAuditConfig object
| Field | Type | Description |
|---|---|---|
| rateLimit | integer | The maximum number of messages to generate every second per node. The default value is 20 messages per second. |
| maxFileSize | integer | The maximum size for the audit log in bytes. The default value is 50000000, or 50 MB. |
| maxLogFiles | integer | The maximum number of log files that are retained. |
| destination | string | One of the following additional audit log targets: libc (the libc syslog() function of the journald process on the host), udp:<host>:<port> (a syslog server), unix:<file> (a Unix Domain Socket file), or null (do not send the audit logs to any additional target). |
| syslogFacility | string | The syslog facility, such as kern, as defined by RFC5424. The default value is local0. |
gatewayConfig object
| Field | Type | Description |
|---|---|---|
| routingViaHost | boolean | Set this field to true to send egress traffic from pods to the host networking stack. The default value is false. This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true, you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. |
| ipForwarding | object | You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to allow IP forwarding only for Kubernetes-related traffic, or Global to allow forwarding of all IP traffic. Note: The default value is Restricted. |
| ipv4 | object | Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. |
| ipv6 | object | Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. |
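As an illustrative sketch, routing egress traffic through the host networking stack would be configured in the defaultNetwork stanza as follows:
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    gatewayConfig:
      routingViaHost: true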
gatewayConfig.ipv4 object
| Field | Type | Description |
|---|---|---|
| internalMasqueradeSubnet | string | The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29. Important: For OpenShift Container Platform 4.17 and later versions, clusters use 169.254.0.0/17 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. |
gatewayConfig.ipv6 object
| Field | Type | Description |
|---|---|---|
| internalMasqueradeSubnet | string | The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125. Important: For OpenShift Container Platform 4.17 and later versions, clusters use fd69::/112 as the default masquerade subnet. For upgraded clusters, there is no change to the default masquerade subnet. |
ipsecConfig object
| Field | Type | Description |
|---|---|---|
| mode | string | Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled (IPsec is not enabled on cluster nodes), External (IPsec is enabled for network traffic with external hosts), or Full (IPsec is enabled for pod-to-pod traffic and for network traffic with external hosts). |
Example OVN-Kubernetes configuration with IPsec enabled
defaultNetwork:
  type: OVNKubernetes
  ovnKubernetesConfig:
    mtu: 1400
    genevePort: 6081
    ipsecConfig:
      mode: Full
Configuring hybrid networking with OVN-Kubernetes
You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations.
Note
This configuration is necessary to run both Linux and Windows nodes in the same cluster.
-
You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.
-
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory>
where:
<installation_directory>
Specifies the name of the directory that contains the install-config.yaml file for your cluster.
-
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
$ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
EOF
where:
<installation_directory>
Specifies the directory name that contains the manifests/ directory for your cluster.
-
Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example:
Specify a hybrid networking configuration
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork:
        - cidr: 10.132.0.0/14
          hostPrefix: 23
        hybridOverlayVXLANPort: 9898
- Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR.
- Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 6081 port. For more information on this requirement, see Pod-to-pod connectivity between hosts is broken in the Microsoft documentation.
Note
Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port.
-
Save the cluster-network-03-config.yml file and quit the text editor.
-
Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.
Note
For more information about using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads.
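After the cluster is running, you can confirm that the hybrid overlay settings were applied by inspecting the live Network operator configuration; this is an optional check, not part of the documented procedure:
$ oc get network.operator.openshift.io cluster -o jsonpath='{.spec.defaultNetwork.ovnKubernetesConfig.hybridOverlayConfig}'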
Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
Important
You can run the create cluster command of the installation program only once, during initial installation.
-
You have configured an account with the cloud platform that hosts your cluster.
-
You have the OpenShift Container Platform installation program and the pull secret for your cluster.
-
You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
-
In the directory that contains the installation program, initialize the cluster deployment by running the following command:
$ ./openshift-install create cluster --dir <installation_directory> \
  --log-level=info
- For <installation_directory>, specify the location of your customized ./install-config.yaml file.
- To view different installation details, specify warn, debug, or error instead of info.
When the cluster deployment completes successfully:
-
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
-
Credential information also outputs to <installation_directory>/.openshift_install.log.
Important
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
Important
-
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
-
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
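If you do need to approve pending node-bootstrapper CSRs manually, the standard oc workflow looks like the following; the CSR name is a placeholder:
$ oc get csr
$ oc adm certificate approve <csr_name>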
Logging in to the cluster by using the CLI
To log in to your cluster as the default system user, export the kubeconfig file. This configuration enables the CLI to authenticate and connect to the specific API server created during OpenShift Container Platform installation.
The kubeconfig file is specific to a cluster and is created during OpenShift Container Platform installation.
-
You deployed an OpenShift Container Platform cluster.
-
You installed the OpenShift CLI (oc).
-
Export the kubeadmin credentials by running the following command:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
where:
<installation_directory>
Specifies the path to the directory that stores the installation files.
-
Verify you can run oc commands successfully using the exported configuration by running the following command:
$ oc whoami
Example output
system:admin
Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
-
You have access to the installation host.
-
You completed a cluster installation and all cluster Operators are available.
-
Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:
$ cat <installation_directory>/auth/kubeadmin-password
Note
Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.
-
List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'
Note
Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.
Example output
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
-
Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.