Installing a cluster on vSphere in a disconnected environment
In OpenShift Container Platform 4.19, you can install a cluster on VMware vSphere infrastructure in a restricted network by creating an internal mirror of the installation release content.
Prerequisites
-
You have completed the tasks in Preparing to install a cluster using installer-provisioned infrastructure.
-
You reviewed your VMware platform licenses. Red Hat does not place any restrictions on your VMware licenses, but some VMware infrastructure components require licensing.
-
You reviewed details about the OpenShift Container Platform installation and update processes.
-
You read the documentation on selecting a cluster installation method and preparing it for users.
-
You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform.
Important
Because the installation media is on the mirror host, you can use that computer to complete all installation steps.
-
You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide the ReadWriteMany access mode.
-
The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible.
-
If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed.
-
If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to.
Note
If you are configuring a proxy, be sure to also review this site list.
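As a quick, optional connectivity check before you begin, you can confirm from the machine that runs the installation program that vCenter answers on port 443. The host name in this sketch is a placeholder:

$ curl -vk https://<vcenter_fqdn>:443/sdk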
About installations in restricted networks
In OpenShift Container Platform 4.19, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.
If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Services' Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or VMware vSphere.
To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions.
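For illustration, mirroring the release payload to such a registry with the oc CLI might look like the following sketch; the registry host, repository name, and version are placeholders, and your mirroring procedure might use different tooling, such as oc mirror:

$ oc adm release mirror -a pull-secret.json \
    --from=quay.io/openshift-release-dev/ocp-release:<version>-x86_64 \
    --to=<mirror_host_name>:5000/<repo_name>/release \
    --to-release-image=<mirror_host_name>:5000/<repo_name>/release:<version>

The command output includes an imageContentSources snippet that you can record for the install-config.yaml file.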
Additional limits
Clusters in restricted networks have the following additional limitations and restrictions:
-
The ClusterVersion status includes an Unable to retrieve available updates error.
-
By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.
Internet access for OpenShift Container Platform
In OpenShift Container Platform 4.19, you require access to the internet to obtain the images that are necessary to install your cluster.
You must have internet access to perform the following actions:
-
Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster.
-
Access Quay.io to obtain the packages that are required to install your cluster.
-
Obtain the packages that are required to perform cluster updates.
Creating the RHCOS image for restricted network installations
Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network VMware vSphere environment.
-
Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host.
-
Log in to the Red Hat Customer Portal’s Product Downloads page.
-
Under Version, select the most recent release of OpenShift Container Platform 4.19 for RHEL 8.
Important
The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available.
-
Download the Red Hat Enterprise Linux CoreOS (RHCOS) - vSphere image.
-
Upload the image you downloaded to a location that is accessible from the bastion server.
The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment.
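For example, if the bastion serves images over HTTP, staging the download might look like the following sketch; the file name, user, host, and web root are placeholders:

# Record the SHA-256 digest, which you can append to the image URL later
$ sha256sum rhcos-vmware.x86_64.ova

# Copy the image to the web server root on the bastion
$ scp rhcos-vmware.x86_64.ova user@bastion.example.com:/var/www/html/images/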
VMware vSphere region and zone enablement
You can deploy an OpenShift Container Platform cluster to multiple vSphere data centers. Each data center can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster.
Important
The VMware vSphere region and zone enablement feature requires the vSphere Container Storage Interface (CSI) driver as the default storage driver in the cluster. As a result, the feature is only available on a newly installed cluster.
For a cluster that was upgraded from a previous release, you must enable CSI automatic migration for the cluster. You can then configure multiple regions and zones for the upgraded cluster.
The default installation configuration deploys a cluster to a single vSphere data center. If you want to deploy a cluster to multiple vSphere data centers, you must create an installation configuration file that enables the region and zone feature.
The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere data centers and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of a single data center.
The following list describes terms associated with defining zones and regions for your cluster:
-
Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes.
-
Region: Specifies a vCenter data center. You define a region by using a tag from the openshift-region tag category.
-
Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category.
Note
If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file.
You must create a vCenter tag for each vCenter data center, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a data center, which represents a zone. After you create the tags, you must attach each tag to its respective data center or cluster.
The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere data centers running in a single VMware vCenter.
| Data center (region) | Cluster (zone) | Tags |
|---|---|---|
| us-east | us-east-1 | us-east-1a |
| | | us-east-1b |
| | us-east-2 | us-east-2a |
| | | us-east-2b |
| us-west | us-west-1 | us-west-1a |
| | | us-west-1b |
| | us-west-2 | us-west-2a |
| | | us-west-2b |
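As a sketch of how the us-east row of this table maps to govc commands, assuming the data center and cluster inventory paths match the names in the table:

$ govc tags.category.create -d "OpenShift region" openshift-region
$ govc tags.category.create -d "OpenShift zone" openshift-zone
$ govc tags.create -c openshift-region us-east
$ govc tags.create -c openshift-zone us-east-1a
$ govc tags.attach -c openshift-region us-east /us-east
$ govc tags.attach -c openshift-zone us-east-1a /us-east/host/us-east-1

The full procedure appears in "Configuring regions and zones for a VMware vCenter" later in this document.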
VMware vSphere host group enablement
Important
OpenShift zones support for vSphere host groups is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
When deploying an OpenShift Container Platform cluster to VMware vSphere, you can map your vSphere host groups onto OpenShift Container Platform failure domains. This is useful if you are using a stretched cluster configuration, where ESXi hosts are grouped into host groups by physical location.
To enable this feature, you must meet the following requirements:
-
You must arrange your ESXi hosts into host groups.
-
You must create a vCenter tag in the openshift-region tag category for your cluster. After you create the tag, you must attach the tag to the cluster.
-
You must create a vCenter tag in the openshift-zone tag category for each host group and then attach the correct tag to each ESXi host.
-
You must define multiple failure domains for your OpenShift Container Platform cluster in the install-config.yaml file.
-
You must grant the Host.Inventory.EditCluster privilege on the vSphere vCenter cluster object.
-
You must include the following parameters in your install-config.yaml file to enable this Technology Preview feature:

featureSet: TechPreviewNoUpgrade
featureGates:
- "VSphereHostVMGroupZonal=true"

Note
For further information on feature gates, see "Enabling features using feature gates".
Review the following key terms, which correspond to parameters in your install-config.yaml file that you must configure to enable this feature:
-
Failure domain: Specifies the relationships between regions and zones in OpenShift Container Platform, and clusters and host groups in vSphere. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes.
-
Region: Specifies a vCenter cluster. You define a region by using a tag from the openshift-region tag category.
-
Zone: Specifies a vCenter host group. You define a zone by using a tag from the openshift-zone tag category.
-
Region type: Specifies the ComputeCluster region type to enable this feature.
-
Zone type: Specifies the HostGroup zone type to enable this feature.
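These terms map directly to fields of a failure domain entry in the install-config.yaml file. A minimal sketch with placeholder names, abridged from the full sample shown in "Configuring host groups for a VMware vCenter":

failureDomains:
- name: <host_group_zone>
  region: <cluster_region_tag>
  zone: <host_group_zone_tag>
  regionType: "ComputeCluster"
  zoneType: "HostGroup"
  topology:
    hostGroup: <host_group_name>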
Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on VMware vSphere.
-
You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host.
-
You have the imageContentSources values that were generated during mirror registry creation.
-
You have obtained the contents of the certificate for your mirror registry.
-
You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location.
-
Create the install-config.yaml file.
-
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory>

For <installation_directory>, specify the directory name to store the files that the installation program creates.
When specifying the directory:
-
Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
-
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
-
-
At the prompts, provide the configuration details for your cloud:
-
Optional: Select an SSH key to use to access your cluster machines.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your
ssh-agent process uses.
-
Select vsphere as the platform to target.
-
Specify the name of your vCenter instance.
-
Specify the user name and password for the vCenter account that has the required permissions to create the cluster.
The installation program connects to your vCenter instance.
-
Select the data center in your vCenter instance to connect to.
Note
After you create the installation configuration file, you can modify the file to create a multiple vSphere data center environment. This means that you can deploy an OpenShift Container Platform cluster to multiple vSphere data centers. For more information about creating this environment, see the section named VMware vSphere region and zone enablement.
-
Select the default vCenter datastore to use.
Warning
You can specify the path of any datastore that exists in a datastore cluster. By default, Storage Distributed Resource Scheduler (SDRS), which uses Storage vMotion, is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage DRS to avoid data loss issues for your OpenShift Container Platform cluster.
You cannot specify more than one datastore path. If you must specify VMs across multiple datastores, use a
datastore object to specify a failure domain in your cluster’s install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement".
-
Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool.
-
Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured.
-
Enter the virtual IP address that you configured for control plane API access.
-
Enter the virtual IP address that you configured for cluster ingress.
-
Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured.
-
Enter a descriptive name for your cluster.
The cluster name you enter must match the cluster name you specified when configuring the DNS records.
-
-
-
Choose one of the following methods to specify an RHCOS image for your cluster that runs in a VMware vSphere vCenter environment.
-
The
clusterOSImage parameter method: In the install-config.yaml file, set the value of platform.vsphere.clusterOSImage to the image location or name. For example:

platform:
  vsphere:
    clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-vmware.x86_64.ova?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d
The
topology.template parameter method:
-
Download the Red Hat Enterprise Linux CoreOS (RHCOS) - vSphere image in Open Virtual Appliance (OVA) format to your local system. For more information, see "Creating the RHCOS image for restricted network installations".
-
From the Hosts and Clusters tab on the vSphere Client, right-click your cluster name and select Deploy OVF Template.
-
On the Select an OVF tab, specify the name of the RHCOS OVA file that you downloaded.
-
On the Select a name and folder tab, set a Virtual machine name for your template, such as
Template-RHCOS.
-
Click the name of your vSphere cluster and select the folder you created in the previous step.
-
On the Select a compute resource tab, click the name of your vSphere cluster.
-
On the Select storage tab, configure the storage options for your VM.
-
When creating the OVF template, do not specify values on the Customize template tab or configure the template any further.
-
In the
install-config.yaml file, set the value of topology.template to the path where you imported the image to your vSphere vCenter instance.
-
-
-
Edit the
install-config.yaml file to give the additional information that is required for an installation in a restricted network.
-
Update the
pullSecret value to contain the authentication information for your registry:

pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "you@example.com"}}}'

For <mirror_host_name>, specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials>, specify the base64-encoded user name and password for your mirror registry. For one way to generate the encoded credentials, see the sketch after this procedure.
-
Add the
additionalTrustBundle parameter and value.

additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----

The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry.
-
Add the image content resources, which resemble the following YAML excerpt:
imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: registry.redhat.io/ocp/release

For these values, use the imageContentSources that you recorded during mirror registry creation.
-
Set the publishing strategy to
Internal:

publish: Internal

By setting this option, you create an internal Ingress Controller and a private load balancer.
-
-
Make any other modifications to the
install-config.yaml file that you require.
For more information about the parameters, see "Installation configuration parameters".
-
Back up the
install-config.yaml file so that you can use it to install multiple clusters.
Important
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
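The following is a minimal sketch of generating the base64-encoded credentials for the pullSecret value, as referenced earlier in this procedure; the user name and password are placeholders:

$ echo -n '<user>:<password>' | base64 -w0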
Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster’s platform or change the values of the required parameters.
Note
The sample install-config.yaml file shows the clusterOSImage parameter to specify the URL for the Red Hat Enterprise Linux CoreOS (RHCOS) image. As an alternative to this configuration, you can use the topology.template parameter to point to the path in your vCenter environment that includes an RHCOS image in Open Virtual Appliance (OVA) format.
apiVersion: v1
baseDomain: example.com
compute:
- architecture: amd64
  name: <worker_node>
  platform: {}
  replicas: 3
controlPlane:
  architecture: amd64
  name: <parent_node>
  platform: {}
  replicas: 3
metadata:
  creationTimestamp: null
  name: test
platform:
  vsphere:
    apiVIPs:
    - 10.0.0.1
    failureDomains:
    - name: <failure_domain_name>
      region: <default_region_name>
      server: <fully_qualified_domain_name>
      topology:
        computeCluster: "/<data_center>/host/<cluster>"
        datacenter: <data_center>
        datastore: "/<data_center>/datastore/<datastore>"
        networks:
        - <VM_Network_name>
        resourcePool: "/<data_center>/host/<cluster>/Resources/<resourcePool>"
        folder: "/<data_center_name>/vm/<folder_name>/<subfolder_name>"
        tagIDs:
        - <tag_id>
      zone: <default_zone_name>
    ingressVIPs:
    - 10.0.0.2
    vcenters:
    - datacenters:
      - <data_center>
      password: <password>
      port: 443
      server: <fully_qualified_domain_name>
      user: administrator@vsphere.local
    diskType: thin
    clusterOSImage: http://mirror.example.com/images/rhcos-47.83.202103221318-0-vmware.x86_64.ova
fips: false
pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "you@example.com"}}}'
sshKey: 'ssh-ed25519 AAAA...'
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
imageContentSources:
- mirrors:
  - <mirror_host_name>:<mirror_port>/<repo_name>/release
  source: <source_image_1>
- mirrors:
  - <mirror_host_name>:<mirror_port>/<repo_name>/release-images
  source: <source_image_2>
- The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name.
- The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, -, and the first line of the controlPlane section must not. Only one control plane pool is used.
- The cluster name that you specified in your DNS records.
- Optional: Provides additional configuration for the machine pool parameters for the compute and control plane machines.
Important
The VIPs,
apiVIP and ingressVIP, must come from the same networking.machineNetwork segment. For apiVIP and for ingressVIP, if the networking.machineNetwork is 10.0.0.0/16, then the API VIPs and Ingress VIPs must be in one of the 10.0.0.0/16 machine networks.
- Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a
datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes.
- The path to the vSphere datastore that holds virtual machine files, templates, and ISO images.
Important
You can specify the path of any datastore that exists in a datastore cluster. By default, Storage vMotion is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage vMotion to avoid data loss issues for your OpenShift Container Platform cluster.
If you must specify VMs across multiple datastores, use a
datastore object to specify a failure domain in your cluster’s install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement".
- Optional: Provides an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster.
- Optional: Each VM created by OpenShift Container Platform is assigned a unique tag that is specific to the cluster. The assigned tag enables the installation program to identify and remove the associated VMs when a cluster is decommissioned. You can list up to ten additional tag IDs to be attached to the VMs provisioned by the installation program.
- The ID of the tag to be associated by the installation program. For example,
urn:vmomi:InventoryServiceTag:208e713c-cae3-4b7f-918e-4051ca7d1f97:GLOBAL. For more information about determining the tag ID, see the vSphere Tags and Attributes documentation.
- The vSphere disk provisioning method.
- The location of the Red Hat Enterprise Linux CoreOS (RHCOS) image that is accessible from the bastion server.
- For
<local_registry>, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000. For <credentials>, specify the base64-encoded user name and password for your mirror registry.
- Provide the contents of the certificate file that you used for your mirror registry.
- Provide the imageContentSources section from the output of the command to mirror the repository.
Note
In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings.
Configuring the cluster-wide proxy during installation
To enable internet access in environments that deny direct connections, configure a cluster-wide proxy in the install-config.yaml file. This configuration ensures that the new OpenShift Container Platform cluster routes traffic through the specified HTTP or HTTPS proxy.
-
You have an existing
install-config.yaml file.
-
You have reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the
Proxy object’s spec.noProxy field to bypass the proxy if necessary.
Note
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud, Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
-
Edit your
install-config.yaml file and add the proxy settings. For example:

apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: https://<username>:<pswd>@<ip>:<port>
  noProxy: example.com
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle>
# ...

where:
proxy.httpProxy
-
Specifies a proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
proxy.httpsProxy
-
Specifies a proxy URL to use for creating HTTPS connections outside the cluster.
proxy.noProxy
-
Specifies a comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. You must include vCenter’s IP address and the IP range that you use for its machines.
additionalTrustBundle
-
If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.
additionalTrustBundlePolicy
-
Specifies the policy that determines the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when an http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly. Optional parameter.
Note
The installation program does not support the proxy readinessEndpoints field.
Note
If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

$ ./openshift-install wait-for install-complete --log-level debug
-
Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named
cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Note
Only the
Proxy object named cluster is supported, and no additional proxies can be created.
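After installation, you can confirm the applied proxy settings by inspecting the cluster Proxy object; a quick sketch of one way to do this:

$ oc get proxy/cluster -o yaml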
Configuring regions and zones for a VMware vCenter
You can modify the default installation configuration file, so that you can deploy an OpenShift Container Platform cluster to multiple vSphere data centers.
The default install-config.yaml file configuration from the previous release of OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the installation program will display a warning message that indicates the use of deprecated fields in the configuration file.
-
You have an existing
install-config.yaml installation configuration file.
Important
You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision data center objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different data centers, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster.
-
You have installed the
govc command line tool.
Important
The example uses the
govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website.
-
Create the
openshift-region and openshift-zone vCenter tag categories by running the following commands:
Important
If you specify different names for the
openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails.

$ govc tags.category.create -d "OpenShift region" openshift-region

$ govc tags.category.create -d "OpenShift zone" openshift-zone
For each region where you want to deploy your cluster, create a region tag by running the following command:
$ govc tags.create -c <region_tag_category> <region_tag> -
For each zone where you want to deploy your cluster, create a zone tag by running the following command:
$ govc tags.create -c <zone_tag_category> <zone_tag> -
Attach region tags to each vCenter data center object by running the following command:
$ govc tags.attach -c <region_tag_category> <region_tag_1> /<data_center_1> -
Attach the zone tags to each vCenter cluster object by running the following command:
$ govc tags.attach -c <zone_tag_category> <zone_tag_1> /<data_center_1>/host/<cluster1> -
Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements.
Sample install-config.yaml file with multiple data centers defined in a vSphere vCenter

# ...
compute:
# ...
  vsphere:
    zones:
    - "<machine_pool_zone_1>"
    - "<machine_pool_zone_2>"
# ...
controlPlane:
# ...
  vsphere:
    zones:
    - "<machine_pool_zone_1>"
    - "<machine_pool_zone_2>"
# ...
platform:
  vsphere:
    vcenters:
# ...
      datacenters:
      - <data_center_1_name>
      - <data_center_2_name>
    failureDomains:
    - name: <machine_pool_zone_1>
      region: <region_tag_1>
      zone: <zone_tag_1>
      server: <fully_qualified_domain_name>
      topology:
        datacenter: <data_center_1>
        computeCluster: "/<data_center_1>/host/<cluster1>"
        networks:
        - <VM_Network1_name>
        datastore: "/<data_center_1>/datastore/<datastore1>"
        resourcePool: "/<data_center_1>/host/<cluster1>/Resources/<resourcePool1>"
        folder: "/<data_center_1>/vm/<folder1>"
    - name: <machine_pool_zone_2>
      region: <region_tag_2>
      zone: <zone_tag_2>
      server: <fully_qualified_domain_name>
      topology:
        datacenter: <data_center_2>
        computeCluster: "/<data_center_2>/host/<cluster2>"
        networks:
        - <VM_Network2_name>
        datastore: "/<data_center_2>/datastore/<datastore2>"
        resourcePool: "/<data_center_2>/host/<cluster2>/Resources/<resourcePool2>"
        folder: "/<data_center_2>/vm/<folder2>"
# ...
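As an optional check before you initialize the deployment, you can confirm that the region and zone tags exist. A sketch; output varies by govc version:

$ govc tags.ls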
Configuring host groups for a VMware vCenter
Important
OpenShift zones support for vSphere host groups is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can modify the default installation configuration file to deploy an OpenShift Container Platform cluster on a VMware vSphere stretched cluster, where ESXi hosts are grouped into host groups by physical location.
The default install-config.yaml file configuration from previous releases of OpenShift Container Platform is deprecated. Though you can still use it, the OpenShift Container Platform installer will display a warning message that indicates the use of deprecated fields in the configuration file.
-
You have an existing
install-config.yaml installation configuration file.
-
You have arranged your ESXi hosts into host groups.
-
You have granted the
Host.Inventory.EditCluster privilege on the vSphere vCenter cluster object.
-
You have downloaded and installed the
govc command line tool. Instructions can be found on the VMware documentation website. Note that govc is an open-source tool that is not maintained by the Red Hat support team.
-
You have enabled the
TechPreviewNoUpgrade feature set. For more information, see "Enabling features using feature gates".
Important
To enable host group support, you must define multiple failure domains for your OpenShift Container Platform cluster.
-
Create the
openshift-region and openshift-zone vCenter tag categories by running the following commands:
Important
If you specify different names for the
openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails.

$ govc tags.category.create -d "OpenShift region" openshift-region

$ govc tags.category.create -d "OpenShift zone" openshift-zone
Create a region tag for the vSphere cluster where you want to deploy your OpenShift Container Platform cluster by entering the following command:
$ govc tags.create -c <region_tag_category> <region_tag> -
Create a zone tag for each host group by entering the following command as needed:
$ govc tags.create -c <zone_tag_category> <zone_tag> -
Attach the region tag to the vCenter cluster object by entering the following command:
$ govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>/host/<cluster_1> -
Use zone tags to associate each ESXi host with its host group, by entering the following command for each ESXi host:
$ govc tags.attach -c <zone_tag_category> <zone_tag_for_host_group_1> /<datacenter_1>/host/<cluster_1>/<esxi_host_in_host_group_1> -
Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements.
Sample install-config.yaml file with multiple host groups

featureSet: TechPreviewNoUpgrade
featureGates:
- "VSphereHostVMGroupZonal=true"
# ...
platform:
  vsphere:
    vcenters:
# ...
      datacenters:
      - <data_center_1_name>
    failureDomains:
    - name: <host_group_1>
      region: <cluster_1_region_tag>
      zone: <host_group_1_zone_tag>
      regionType: "ComputeCluster"
      zoneType: "HostGroup"
      server: <fully_qualified_domain_name>
      topology:
        datacenter: <data_center_1>
        computeCluster: "/<data_center_1>/host/<cluster_1>"
        networks:
        - <VM_Network1_name>
        hostGroup: <host_group_1_name>
        datastore: "/<data_center_1>/datastore/<datastore_1>"
        resourcePool: "/<data_center_1>/host/<cluster_1>/Resources/<resourcePool_1>"
        folder: "/<data_center_1>/vm/<folder_1>"
    - name: <host_group_2>
      region: <cluster_1_region_tag>
      zone: <host_group_2_zone_tag>
      regionType: "ComputeCluster"
      zoneType: "HostGroup"
      server: <fully_qualified_domain_name>
      topology:
        datacenter: <data_center_1>
        computeCluster: "/<data_center_1>/host/<cluster_1>"
        networks:
        - <VM_Network1_name>
        hostGroup: <host_group_2_name>
        datastore: "/<data_center_1>/datastore/<datastore_1>"
        resourcePool: "/<data_center_1>/host/<cluster_1>/Resources/<resourcePool_1>"
        folder: "/<data_center_1>/vm/<folder_1>"
Services for a user-managed load balancer
To integrate your infrastructure with existing network standards or gain more control over traffic management in OpenShift Container Platform, configure services for a user-managed load balancer.
Important
Configuring a user-managed load balancer depends on your vendor’s load balancer.
The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor’s load balancer.
Red Hat supports the following services for a user-managed load balancer:
-
Ingress Controller
-
OpenShift API
-
OpenShift MachineConfig API
You can choose whether you want to configure one or all of these services for a user-managed load balancer. Configuring only the Ingress Controller service is a common configuration option.
The following configuration options are supported for user-managed load balancers:
-
Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration.
-
Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a
/27 or /28, you can simplify your load balancer targets.
Tip
You can list all IP addresses that exist in a network by checking the machine config pool’s resources.
Before you configure a user-managed load balancer for your OpenShift Container Platform cluster, consider the following information:
-
For a front-end IP address, you can use the same IP address for the Ingress Controller’s load balancer and the API load balancer. Check the vendor’s documentation for this capability.
-
For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the user-managed load balancer. You can achieve this by completing one of the following actions:
-
Assign a static IP address to each control plane node.
-
Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment.
-
-
Manually define each node that runs the Ingress Controller in the user-managed load balancer for the Ingress Controller back-end service. Otherwise, if the Ingress Controller moves to an undefined node, a connection outage can occur.
Configuring a user-managed load balancer
To integrate your infrastructure with existing network standards or gain more control over traffic management in OpenShift Container Platform, use a user-managed load balancer in place of the default load balancer.
Important
Before you configure a user-managed load balancer, ensure that you read the "Services for a user-managed load balancer" section.
Read the following prerequisites that apply to the service that you want to configure for your user-managed load balancer.
Note
MetalLB, which runs on a cluster, functions as a user-managed load balancer.
The following list details OpenShift API prerequisites:
-
You defined a front-end IP address.
-
TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items:
-
Port 6443 provides access to the OpenShift API service.
-
Port 22623 can provide Ignition startup configurations to nodes.
-
-
The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster.
-
The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes.
-
The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623.
The following list details Ingress Controller prerequisites:
-
You defined a front-end IP address.
-
TCP port 443 and port 80 are exposed on the front-end IP address of your load balancer.
-
The front-end IP address, port 80 and port 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster.
-
The front-end IP address, port 80 and port 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster.
-
The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936.
The following details apply to health check URL specifications:
You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.
The following example shows a Kubernetes API health check specification for a backend service:
Path: HTTPS:6443/readyz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
The following example shows a Machine Config API health check specification for a backend service:
Path: HTTPS:22623/healthz
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 10
Interval: 10
The following example shows an Ingress Controller health check specification for a backend service:
Path: HTTP:1936/healthz/ready
Healthy threshold: 2
Unhealthy threshold: 2
Timeout: 5
Interval: 10
-
Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 22623, 443, and 80. Depending on your needs, you can specify the IP address of a single subnet or IP addresses from multiple subnets in your HAProxy configuration.
Example HAProxy configuration with one listed subnet

# ...
listen my-cluster-api-6443
  bind 192.168.1.100:6443
  mode tcp
  balance roundrobin
  option httpchk
  http-check connect
  http-check send meth GET uri /readyz
  http-check expect status 200
  server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2
  server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2
  server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2

listen my-cluster-machine-config-api-22623
  bind 192.168.1.100:22623
  mode tcp
  balance roundrobin
  option httpchk
  http-check connect
  http-check send meth GET uri /healthz
  http-check expect status 200
  server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2
  server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2
  server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2

listen my-cluster-apps-443
  bind 192.168.1.100:443
  mode tcp
  balance roundrobin
  option httpchk
  http-check connect
  http-check send meth GET uri /healthz/ready
  http-check expect status 200
  server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2
  server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2
  server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2

listen my-cluster-apps-80
  bind 192.168.1.100:80
  mode tcp
  balance roundrobin
  option httpchk
  http-check connect
  http-check send meth GET uri /healthz/ready
  http-check expect status 200
  server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2
  server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2
  server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2
# ...

Example HAProxy configuration with multiple listed subnets

# ...
listen api-server-6443
  bind *:6443
  mode tcp
  server master-00 192.168.83.89:6443 check inter 1s
  server master-01 192.168.84.90:6443 check inter 1s
  server master-02 192.168.85.99:6443 check inter 1s
  server bootstrap 192.168.80.89:6443 check inter 1s

listen machine-config-server-22623
  bind *:22623
  mode tcp
  server master-00 192.168.83.89:22623 check inter 1s
  server master-01 192.168.84.90:22623 check inter 1s
  server master-02 192.168.85.99:22623 check inter 1s
  server bootstrap 192.168.80.89:22623 check inter 1s

listen ingress-router-80
  bind *:80
  mode tcp
  balance source
  server worker-00 192.168.83.100:80 check inter 1s
  server worker-01 192.168.83.101:80 check inter 1s

listen ingress-router-443
  bind *:443
  mode tcp
  balance source
  server worker-00 192.168.83.100:443 check inter 1s
  server worker-01 192.168.83.101:443 check inter 1s

listen ironic-api-6385
  bind *:6385
  mode tcp
  balance source
  server master-00 192.168.83.89:6385 check inter 1s
  server master-01 192.168.84.90:6385 check inter 1s
  server master-02 192.168.85.99:6385 check inter 1s
  server bootstrap 192.168.80.89:6385 check inter 1s

listen inspector-api-5050
  bind *:5050
  mode tcp
  balance source
  server master-00 192.168.83.89:5050 check inter 1s
  server master-01 192.168.84.90:5050 check inter 1s
  server master-02 192.168.85.99:5050 check inter 1s
  server bootstrap 192.168.80.89:5050 check inter 1s
# ...
Use the
curl CLI command to verify that the user-managed load balancer and its resources are operational:
-
Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response:
$ curl https://<loadbalancer_ip_address>:6443/version --insecure

If the configuration is correct, you receive a JSON object in response:

{
  "major": "1",
  "minor": "11+",
  "gitVersion": "v1.11.0+ad103ed",
  "gitCommit": "ad103ed",
  "gitTreeState": "clean",
  "buildDate": "2019-01-09T06:44:10Z",
  "goVersion": "go1.10.3",
  "compiler": "gc",
  "platform": "linux/amd64"
}
Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output:
$ curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure

If the configuration is correct, the output from the command shows the following response:

HTTP/1.1 200 OK
Content-Length: 0
Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output:
$ curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address>

If the configuration is correct, the output from the command shows the following response:

HTTP/1.1 302 Found
content-length: 0
location: https://console-openshift-console.apps.ocp4.private.opequon.net/
cache-control: no-cache
Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output:
$ curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>

If the configuration is correct, the output from the command shows the following response:

HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Wed, 04 Oct 2023 16:29:38 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
-
-
Configure the DNS records for your cluster to target the front-end IP addresses of the user-managed load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. The following examples show modified DNS records:

<load_balancer_ip_address> A api.<cluster_name>.<base_domain>
A record pointing to Load Balancer Front End

<load_balancer_ip_address> A apps.<cluster_name>.<base_domain>
A record pointing to Load Balancer Front End

Important
DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record.
-
For your OpenShift Container Platform cluster to use the user-managed load balancer, you must specify the following configuration in your cluster’s
install-config.yaml file:

# ...
platform:
  vsphere:
    loadBalancer:
      type: UserManaged
    apiVIPs:
    - <api_ip>
    ingressVIPs:
    - <ingress_ip>
# ...

where:
loadBalancer.type
-
Set UserManaged for the type parameter to specify a user-managed load balancer for your cluster. The parameter defaults to OpenShiftManagedDefault, which denotes the default internal load balancer. For services defined in an openshift-kni-infra namespace, a user-managed load balancer can deploy the coredns service to pods in your cluster but ignores keepalived and haproxy services.
loadBalancer.<api_ip>
-
Specifies a user-managed load balancer. Specify the user-managed load balancer’s public IP address, so that the Kubernetes API can communicate with the user-managed load balancer. Mandatory parameter.
loadBalancer.<ingress_ip>
-
Specifies a user-managed load balancer. Specify the user-managed load balancer’s public IP address, so that the user-managed load balancer can manage ingress traffic for your cluster. Mandatory parameter.
-
Use the
curl CLI command to verify that the user-managed load balancer and DNS record configuration are operational:
-
Verify that you can access the cluster API, by running the following command and observing the output:
$ curl https://api.<cluster_name>.<base_domain>:6443/version --insecure

If the configuration is correct, you receive a JSON object in response:

{
  "major": "1",
  "minor": "11+",
  "gitVersion": "v1.11.0+ad103ed",
  "gitCommit": "ad103ed",
  "gitTreeState": "clean",
  "buildDate": "2019-01-09T06:44:10Z",
  "goVersion": "go1.10.3",
  "compiler": "gc",
  "platform": "linux/amd64"
}
Verify that you can access the cluster machine configuration, by running the following command and observing the output:
$ curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure

If the configuration is correct, the output from the command shows the following response:

HTTP/1.1 200 OK
Content-Length: 0
Verify that you can access each cluster application on port 80, by running the following command and observing the output:
$ curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure

If the configuration is correct, the output from the command shows the following response:

HTTP/1.1 302 Found
content-length: 0
location: https://console-openshift-console.apps.<cluster-name>.<base domain>/
cache-control: no-cache

HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Tue, 17 Nov 2020 08:42:10 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
Verify that you can access each cluster application on port 443, by running the following command and observing the output:
$ curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure

If the configuration is correct, the output from the command shows the following response:

HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Wed, 04 Oct 2023 16:29:38 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None
cache-control: private
-
Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
Important
You can run the create cluster command of the installation program only once, during initial installation.
-
You have the OpenShift Container Platform installation program and the pull secret for your cluster.
-
You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
-
Optional: Before you create the cluster, you configured an external load balancer in place of the default load balancer.
Important
You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring a user-managed load balancer".
-
In the directory that contains the installation program, initialize the cluster deployment by running the following command:
$ ./openshift-install create cluster --dir <installation_directory> \
    --log-level=info

- For <installation_directory>, specify the location of your customized ./install-config.yaml file.
- To view different installation details, specify warn, debug, or error instead of info.
When the cluster deployment completes successfully:
-
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the
kubeadmin user.
-
Credential information also outputs to
<installation_directory>/.openshift_install.log.
Important
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
Important
-
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending
node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
-
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Logging in to the cluster by using the CLI
To log in to your cluster as the default system user, export the kubeconfig file. This configuration enables the CLI to authenticate and connect to the specific API server created during OpenShift Container Platform installation.
The kubeconfig file is specific to a cluster and is created during OpenShift Container Platform installation.
-
You deployed an OpenShift Container Platform cluster.
-
You installed the OpenShift CLI (oc).
-
Export the
kubeadmin credentials by running the following command:

$ export KUBECONFIG=<installation_directory>/auth/kubeconfig

where:
<installation_directory>-
Specifies the path to the directory that stores the installation files.
-
Verify you can run
oc commands successfully using the exported configuration by running the following command:

$ oc whoami

Example output

system:admin
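As an additional check that is not part of the documented procedure, listing the cluster nodes also confirms that the exported kubeconfig works:

$ oc get nodes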
Disabling the default software catalog sources
Operator catalogs that source content provided by Red Hat and community projects are configured for the software catalog by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.
-
Disable the sources for the default catalogs by adding
disableAllDefaultSources: true to the OperatorHub object:

$ oc patch OperatorHub cluster --type json \
    -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
Tip
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
Creating registry storage
After you install the cluster, you must create storage for the Registry Operator.
Image registry removed during installation
On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed. This allows openshift-installer to complete installations on these platform types.
After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed. When this has completed, you must configure storage.
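For example, you can switch the management state with a merge patch; this sketch assumes the default cluster resource name:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
    --patch '{"spec":{"managementState":"Managed"}}'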
Image registry storage configuration
The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available.
Configure a persistent volume, which is required for production clusters. Where applicable, you can configure an empty directory as the storage location for non-production clusters.
You can also allow the image registry to use block storage types by using the Recreate rollout strategy during upgrades.
Configuring registry storage for VMware vSphere
As a cluster administrator, following installation you must configure your registry to use storage.
-
Cluster administrator permissions.
-
A cluster on VMware vSphere.
-
Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation.
Important
OpenShift Container Platform supports
ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required.
-
Must have "100Gi" capacity.
Important
Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended.
Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components.
-
Change the
spec.storage.pvc field in the configs.imageregistry/cluster resource. For a non-interactive way to set this field, see the patch sketch at the end of this procedure.
Note
When you use shared storage, review your security settings to prevent outside access.
-
Verify that you do not have a registry pod by running the following command:
$ oc get pod -n openshift-image-registry -l docker-registry=default

Example output

No resources found in openshift-image-registry namespace

Note
If you do have a registry pod in your output, you do not need to continue with this procedure.
-
Check the registry configuration by running the following command:
$ oc edit configs.imageregistry.operator.openshift.io

Example output

storage:
  pvc:
    claim:

Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica.
-
Check the
clusteroperator status by running the following command:

$ oc get clusteroperator image-registry

Example output

NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
image-registry   4.7       True        False         False      6h50m
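The following is the patch sketch referenced in the first step of this procedure: a non-interactive alternative to oc edit for setting the claim field. The empty claim value triggers the automatic PVC creation described above:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge \
    --patch '{"spec":{"storage":{"pvc":{"claim":""}}}}'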
Telemetry access for OpenShift Container Platform
To provide metrics about cluster health and the success of updates, the Telemetry service requires internet access. When connected, this service runs automatically by default and registers your cluster to OpenShift Cluster Manager.
After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. For more information about subscription watch, see "Data Gathered and Used by Red Hat’s subscription services" in the Additional resources section.
-
See About remote health monitoring for more information about the Telemetry service.
Next steps
-
If necessary, you can opt out of remote health reporting.
-
If necessary, see Registering your disconnected cluster.