Installing a cluster on AWS in a disconnected environment
You can install a cluster on Amazon Web Services (AWS) in a restricted network by creating an internal mirror of the installation release content on an existing AWS Virtual Private Cloud (VPC). By using this configuration, you can deploy a cluster in an environment with limited internet connectivity to help ensure compliance with security policies.
Prerequisites
-
You reviewed details about the OpenShift Container Platform installation and update processes.
-
You read the documentation on selecting a cluster installation method and preparing it for users.
-
You mirrored the images for a disconnected installation to your registry and obtained the imageContentSources data for your version of OpenShift Container Platform.
Important
Because the installation media is on the mirror host, you can use that computer to complete all installation steps.
-
You have an existing VPC in AWS. When installing to a restricted network using installer-provisioned infrastructure, you cannot use the installer-provisioned VPC. You must use a user-provisioned VPC that satisfies one of the following requirements:
-
Contains the mirror registry
-
Has firewall rules or a peering connection to access the mirror registry hosted elsewhere
-
-
You configured an AWS account to host the cluster.
Important
If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
-
You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation.
-
If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to.
Note
If you are configuring a proxy, be sure to also review this site list.
About installations in restricted networks
In OpenShift Container Platform 4.19, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.
If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like the Amazon Web Services (AWS) Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or VMware vSphere.
To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions.
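For illustration only, the following sketch shows one way to mirror the release content from a mirror host that can reach both the internet and your registry. The registry host name, repository, version, and pull secret path are placeholders; the complete mirroring procedure is covered in the documentation referenced in the prerequisites:
$ oc adm release mirror \
  -a <pull_secret_file> \
  --from=quay.io/openshift-release-dev/ocp-release:<version>-x86_64 \
  --to=<mirror_host_name>:5000/<repo_name>/release \
  --to-release-image=<mirror_host_name>:5000/<repo_name>/release:<version>
The output of this command typically includes the imageContentSources entries that you later add to the install-config.yaml file.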
Additional limits
Clusters in restricted networks have the following additional limitations and restrictions:
-
The ClusterVersion status includes an Unable to retrieve available updates error.
-
By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.
About using a custom VPC
In OpenShift Container Platform 4.19, you can deploy a cluster into existing subnets in an existing Amazon Virtual Private Cloud (VPC) in Amazon Web Services (AWS). By deploying OpenShift Container Platform into an existing AWS VPC, you might be able to avoid limit constraints in new accounts or more easily abide by the operational constraints that your company’s guidelines set. If you cannot obtain the infrastructure creation permissions that are required to create the VPC yourself, use this installation option.
Because the installation program cannot know what other components are also in your existing subnets, it cannot choose subnet CIDRs or similar settings on your behalf. You must configure networking for the subnets that you install your cluster into yourself.
Requirements for using your VPC
When you use an existing VPC, the installation program does not create the following components:
-
Internet gateways
-
NAT gateways
-
Subnets
-
Route tables
-
VPCs
-
VPC DHCP options
-
VPC endpoints
Note
The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail.
If you use a custom VPC, you must correctly configure it and its subnets for the installation program and the cluster to use. See Create a VPC in the Amazon Web Services documentation for more information about AWS VPC console wizard configurations and creating and managing an AWS VPC.
The installation program cannot:
-
Subdivide network ranges for the cluster to use.
-
Set route tables for the subnets.
-
Set VPC options like DHCP.
You must complete these tasks before you install the cluster. See VPC networking components and Route tables for your VPC for more information on configuring networking in an AWS VPC.
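For illustration only, the following AWS CLI sketch shows one way to create a route table and associate it with a subnet; the IDs are placeholders, and the routes you add depend on your network design:
$ aws ec2 create-route-table --vpc-id <vpc_id>
$ aws ec2 associate-route-table --route-table-id <route_table_id> --subnet-id <subnet_id>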
Your VPC must meet the following characteristics:
-
The VPC must not use the kubernetes.io/cluster/.*: owned, Name, and openshift.io/cluster tags.
The installation program modifies your subnets to add the kubernetes.io/cluster/.*: shared tag, so your subnets must have at least one free tag slot available for it. See Tag Restrictions in the AWS documentation to confirm that the installation program can add a tag to each subnet that you specify. You cannot use a Name tag, because it overlaps with the EC2 Name field and the installation fails.
-
If you want to extend your OpenShift Container Platform cluster into an AWS Outpost and have an existing Outpost subnet, the existing subnet must use the kubernetes.io/cluster/unmanaged: true tag. If you do not apply this tag, the installation might fail due to the Cloud Controller Manager creating a service load balancer in the Outpost subnet, which is an unsupported configuration.
-
You must enable the enableDnsSupport and enableDnsHostnames attributes in your VPC, so that the cluster can use the Route 53 zones that are attached to the VPC to resolve the cluster's internal DNS records. See DNS Support in Your VPC in the AWS documentation. You can verify these attributes as shown in the example that follows this list.
If you prefer to use your own Route 53 private hosted zone, you must associate the existing hosted zone with your VPC before you install a cluster. You can define your hosted zone by using the platform.aws.hostedZone and platform.aws.hostedZoneRole fields in the install-config.yaml file. You can use a private hosted zone from another account by sharing it with the account where you install the cluster. If you use a private hosted zone from another account, you must use the Passthrough or Manual credentials mode.
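For example, you can confirm both DNS attributes with the AWS CLI; the VPC ID is a placeholder:
$ aws ec2 describe-vpc-attribute --vpc-id <vpc_id> --attribute enableDnsSupport
$ aws ec2 describe-vpc-attribute --vpc-id <vpc_id> --attribute enableDnsHostnames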
If you are working in a disconnected environment, you are unable to reach the public IP addresses for EC2, ELB, and S3 endpoints. Depending on the level to which you want to restrict internet traffic during the installation, the following configuration options are available:
Option 1: Create VPC endpoints
Create VPC endpoints and attach them to the subnets that the clusters are using. Name the endpoints as follows:
-
ec2.<aws_region>.amazonaws.com
-
elasticloadbalancing.<aws_region>.amazonaws.com
-
s3.<aws_region>.amazonaws.com
With this option, network traffic remains private between your VPC and the required AWS services.
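For illustration only, the following AWS CLI commands show one way to create a gateway endpoint for S3 and an interface endpoint for EC2; the IDs are placeholders, and you create the elasticloadbalancing endpoint the same way as the EC2 interface endpoint:
$ aws ec2 create-vpc-endpoint \
  --vpc-id <vpc_id> \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.<aws_region>.s3 \
  --route-table-ids <route_table_id>
$ aws ec2 create-vpc-endpoint \
  --vpc-id <vpc_id> \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.<aws_region>.ec2 \
  --subnet-ids <subnet_id_1> <subnet_id_2> \
  --security-group-ids <security_group_id>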
Option 2: Create a proxy without VPC endpoints
As part of the installation process, you can configure an HTTP or HTTPS proxy. With this option, internet traffic goes through the proxy to reach the required AWS services.
Option 3: Create a proxy with VPC endpoints
As part of the installation process, you can configure an HTTP or HTTPS proxy with VPC endpoints. Create VPC endpoints and attach them to the subnets that the clusters are using. Name the endpoints as follows:
-
ec2.<aws_region>.amazonaws.com
-
elasticloadbalancing.<aws_region>.amazonaws.com
-
s3.<aws_region>.amazonaws.com
When configuring the proxy in the install-config.yaml file, add these endpoints to the noProxy field. With this option, the proxy prevents the cluster from accessing the internet directly. However, network traffic remains private between your VPC and the required AWS services.
You must provide a suitable VPC and subnets that allow communication to your machines.
| Component | AWS type | Description |
|---|---|---|
| VPC | | You must provide a public VPC for the cluster to use. The VPC uses an endpoint that references the route tables for each subnet to improve communication with the registry that is hosted in S3. |
| Public subnets | | Your VPC must have public subnets for between 1 and 3 availability zones and associate them with appropriate Ingress rules. |
| Internet gateway | | You must have a public internet gateway, with public routes, attached to the VPC. In the provided templates, each public subnet has a NAT gateway with an EIP address. These NAT gateways allow cluster resources, like private subnet instances, to reach the internet and are not required for some restricted network or proxy scenarios. |
| Network access control | | You must allow the VPC to access the following ports: 80 (inbound HTTP traffic), 443 (inbound HTTPS traffic), 22 (inbound SSH traffic), 1024 - 65535 (inbound ephemeral traffic), and 1024 - 65535 (outbound ephemeral traffic). |
| Private subnets | | Your VPC can have private subnets. The provided CloudFormation templates can create private subnets for between 1 and 3 availability zones. If you use private subnets, you must provide appropriate routes and tables for them. |
VPC validation
To ensure that the subnets that you provide are suitable, the installation program confirms the following data:
-
All the subnets that you specify exist.
-
You provide private subnets.
-
The subnet CIDRs belong to the machine CIDR that you specified.
-
You provide subnets for each availability zone. Each availability zone contains no more than one public and one private subnet. If you use a private cluster, provide only a private subnet for each availability zone. Otherwise, provide exactly one public and private subnet for each availability zone.
-
You provide a public subnet for each private subnet availability zone. Machines are not provisioned in availability zones that you do not provide private subnets for.
If you destroy a cluster that uses an existing VPC, the VPC is not deleted. When you remove the OpenShift Container Platform cluster from a VPC, the kubernetes.io/cluster/.*: shared tag is removed from the subnets that it used.
Division of permissions
Starting with OpenShift Container Platform 4.3, you do not need all of the permissions that are required for an installation program-provisioned infrastructure cluster to deploy a cluster. This change mimics the division of permissions that you might have at your company: some individuals can create different resources in your clouds than others. For example, you might be able to create application-specific items, like instances, buckets, and load balancers, but not networking-related components such as VPCs, subnets, or ingress rules.
The AWS credentials that you use when you create your cluster do not need the networking permissions that are required to make VPCs and core networking components within the VPC, such as subnets, routing tables, internet gateways, NAT, and VPN. You still need permission to make the application resources that the machines within the cluster require, such as ELBs, security groups, S3 buckets, and nodes.
Isolation between clusters
If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways:
-
You can install multiple OpenShift Container Platform clusters in the same VPC.
-
ICMP ingress is allowed from the entire network.
-
TCP 22 ingress (SSH) is allowed to the entire network.
-
Control plane TCP 6443 ingress (Kubernetes API) is allowed to the entire network.
-
Control plane TCP 22623 ingress (MCS) is allowed to the entire network.
Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).
-
You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host.
-
You have the imageContentSources values that were generated during mirror registry creation.
-
You have obtained the contents of the certificate for your mirror registry.
-
Create the install-config.yaml file.
-
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory>
-
For <installation_directory>, specify the directory name to store the files that the installation program creates. When specifying the directory:
-
Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
-
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
-
-
At the prompts, provide the configuration details for your cloud:
-
Optional: Select an SSH key to use to access your cluster machines.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
-
Select AWS as the platform to target.
-
If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
-
Select the AWS region to deploy the cluster to.
-
Select the base domain for the Route 53 service that you configured for your cluster.
-
Enter a descriptive name for your cluster.
-
-
-
Edit the install-config.yaml file to provide the additional information that is required for an installation in a restricted network.
-
Update the pullSecret value to contain the authentication information for your registry:
pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "you@example.com"}}}'
For <mirror_host_name>, specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials>, specify the base64-encoded user name and password for your mirror registry.
Add the additionalTrustBundle parameter and value:
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
  -----END CERTIFICATE-----
The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry.
-
Define the subnets for the VPC to install the cluster in, as in the following example:
platform:
  aws:
    vpc:
      subnets:
      - id: subnet-<id_1>
      - id: subnet-<id_2>
      - id: subnet-<id_3>
-
Add the image content resources, which resemble the following YAML excerpt:
imageContentSources:
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: quay.io/openshift-release-dev/ocp-release
- mirrors:
  - <mirror_host_name>:5000/<repo_name>/release
  source: registry.redhat.io/ocp/release
For these values, use the imageContentSources that you recorded during mirror registry creation.
-
Set the publishing strategy to Internal:
publish: Internal
By setting this option, you create an internal Ingress Controller and a private load balancer.
-
-
Make any other modifications to the install-config.yaml file that you require.
For more information about the parameters, see "Installation configuration parameters".
-
Back up the install-config.yaml file so that you can use it to install multiple clusters.
Important
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
Minimum resource requirements for cluster installation
Each created cluster must meet minimum requirements so that the cluster runs as expected.
| Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS) [2] |
|---|---|---|---|---|---|
| Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Control plane | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Compute | RHCOS | 2 | 8 GB | 100 GB | 300 |
-
One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core × cores) × sockets = vCPUs.
-
OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes, which requires a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate the storage volume to obtain sufficient performance.
-
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
Note
For OpenShift Container Platform version 4.19, RHCOS is based on RHEL version 9.6, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires:
-
x86-64 architecture requires x86-64-v2 ISA
-
ARM64 architecture requires ARMv8.0-A ISA
-
IBM Power architecture requires Power 9 ISA
-
s390x architecture requires z14 ISA
For more information, see Architectures (RHEL documentation).
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform.
Enabling a user-managed DNS
You can install a cluster with a domain name server (DNS) solution that you manage instead of the default cluster-provisioned DNS solution that uses the Route 53 service for Amazon Web Services (AWS).
For example, your organization’s security policies might not allow the use of public DNS services such as Amazon Web Services DNS. In such scenarios, you can use your own DNS service to bypass the public DNS service and manage your own DNS for the IP addresses of the API and Ingress services.
If you enable user-managed DNS during installation, the installation program provisions DNS records for the API and Ingress services only within the cluster. To ensure access from outside the cluster, you must provision the DNS records in an external DNS service of your choice for the API and Ingress services after installation.
Important
User-provisioned DNS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
-
Before you deploy your cluster, use a text editor to open the install-config.yaml file and add the following stanza:
-
To enable user-managed DNS:
featureSet: CustomNoUpgrade
featureGates: ["AWSClusterHostedDNSInstall=true"]
# ...
platform:
  aws:
    userProvisionedDNS: Enabled
where:
userProvisionedDNS-
Enables user-provisioned DNS management.
-
For information about provisioning your DNS records for the API server and the Ingress services, see "Provisioning your own DNS records".
Sample customized install-config.yaml file for AWS
You can customize the installation configuration file (install-config.yaml) to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
Important
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
For a full list and description of all installation configuration parameters, see Installation configuration parameters for AWS.
install-config.yaml file for AWS
apiVersion: v1
baseDomain: example.com
sshKey: ssh-ed25519 AAAA...
pullSecret: '{"auths": ...}'
metadata:
name: example-cluster
controlPlane:
name: master
platform:
aws:
type: m6i.xlarge
replicas: 3
compute:
- name: worker
platform:
aws:
type: c5.4xlarge
replicas: 3
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
platform:
aws:
region: us-west-2
- Parameters at the first level of indentation apply to the cluster globally.
- The controlPlane stanza applies to control plane machines.
- The compute stanza applies to compute machines.
- The networking stanza applies to the cluster networking configuration. If you do not provide networking values, the installation program provides default values.
- The platform stanza applies to the infrastructure platform that hosts the cluster.
Configuring the cluster-wide proxy during installation
To enable internet access in environments that deny direct connections, configure a cluster-wide proxy in the install-config.yaml file. This configuration ensures that the new OpenShift Container Platform cluster routes traffic through the specified HTTP or HTTPS proxy.
-
You have an existing install-config.yaml file.
-
You have reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud, Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
-
Edit your install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: https://<username>:<pswd>@<ip>:<port>
  noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle>
# ...
where:
proxy.httpProxy-
Specifies a proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
proxy.httpsProxy-
Specifies a proxy URL to use for creating HTTPS connections outside the cluster.
proxy.noProxy-
Specifies a comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field.
additionalTrustBundle-
If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
additionalTrustBundlePolicy-
Specifies the policy that determines the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when an http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly. This parameter is optional.
Note
The installation program does not support the proxy readinessEndpoints field.
Note
If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
-
Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Note
Only the Proxy object named cluster is supported, and no additional proxies can be created.
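After the cluster is installed, you can inspect the resulting configuration. For example, the following command displays the cluster Proxy object:
$ oc get proxy/cluster -o yaml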
Alternatives to storing administrator-level secrets in the kube-system project
By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual, you must use one of the following alternatives:
-
To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials.
-
To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials.
Manually creating long-term credentials
The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace.
-
If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
-
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files.
-
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
-
Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \
  --to=<path_to_directory_for_credentials_requests>
- The --included parameter includes only the manifests that your specific cluster configuration requires.
- Specify the location of the install-config.yaml file.
- Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - iam:GetUser
      - iam:GetUserPolicy
      - iam:ListAccessKeys
      resource: "*"
  ...
-
Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.
Sample CredentialsRequest object with secrets
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - s3:CreateBucket
      - s3:DeleteBucket
      resource: "*"
  ...
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>
  ...
Sample Secret object
apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  aws_access_key_id: <base64_encoded_aws_access_key_id>
  aws_secret_access_key: <base64_encoded_aws_secret_access_key>
Important
Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.
Configuring an AWS cluster to use short-term credentials
To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster.
Configuring the Cloud Credential Operator utility
To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility (ccoctl) binary.
Note
The ccoctl utility is a Linux binary that must run in a Linux environment.
-
You have access to an OpenShift Container Platform account with cluster administrator access.
-
You have installed the OpenShift CLI (oc).
-
You have created an AWS account for the ccoctl utility to use with the following permissions:
Required iam permissions
-
iam:CreateOpenIDConnectProvider -
iam:CreateRole -
iam:DeleteOpenIDConnectProvider -
iam:DeleteRole -
iam:DeleteRolePolicy -
iam:GetOpenIDConnectProvider -
iam:GetRole -
iam:GetUser -
iam:ListOpenIDConnectProviders -
iam:ListRolePolicies -
iam:ListRoles -
iam:PutRolePolicy -
iam:TagOpenIDConnectProvider -
iam:TagRole
Required s3 permissions
-
s3:CreateBucket -
s3:DeleteBucket -
s3:DeleteObject -
s3:GetBucketAcl -
s3:GetBucketTagging -
s3:GetObject -
s3:GetObjectAcl -
s3:GetObjectTagging -
s3:ListBucket -
s3:PutBucketAcl -
s3:PutBucketPolicy -
s3:PutBucketPublicAccessBlock -
s3:PutBucketTagging -
s3:PutObject -
s3:PutObjectAcl -
s3:PutObjectTagging
Required cloudfront permissions
-
cloudfront:ListCloudFrontOriginAccessIdentities -
cloudfront:ListDistributions -
cloudfront:ListTagsForResource
-
-
If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the ccoctl utility requires the following additional permissions:
cloudfront:CreateCloudFrontOriginAccessIdentity -
cloudfront:CreateDistribution -
cloudfront:DeleteCloudFrontOriginAccessIdentity -
cloudfront:DeleteDistribution -
cloudfront:GetCloudFrontOriginAccessIdentity -
cloudfront:GetCloudFrontOriginAccessIdentityConfig -
cloudfront:GetDistribution -
cloudfront:TagResource -
cloudfront:UpdateDistribution
Note
These additional permissions support the use of the --create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command.
-
-
Set a variable for the OpenShift Container Platform release image by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
-
Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:
$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
Note
Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.
-
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command:
$ oc image extract $CCO_IMAGE \
  --file="/usr/bin/ccoctl.<rhel_version>" \
  -a ~/.pull-secret
- For <rhel_version>, specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid:
-
rhel8: Specify this value for hosts that use RHEL 8.
-
rhel9: Specify this value for hosts that use RHEL 9.
-
Note
The ccoctl binary is created in the directory from which you ran the command, not in /usr/bin/. You must rename or move the ccoctl.<rhel_version> binary to ccoctl.
-
Change the permissions to make ccoctl executable by running the following command:
$ chmod 775 ccoctl
-
To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example:
$ ./ccoctl
Example output
OpenShift credentials provisioning tool

Usage:
  ccoctl [command]

Available Commands:
  aws          Manage credentials objects for AWS cloud
  azure        Manage credentials objects for Azure
  gcp          Manage credentials objects for Google cloud
  help         Help about any command
  ibmcloud     Manage credentials objects for IBM Cloud
  nutanix      Manage credentials objects for Nutanix

Flags:
  -h, --help   help for ccoctl

Use "ccoctl [command] --help" for more information about a command.
Creating AWS resources with the Cloud Credential Operator utility
You have the following options when creating AWS resources:
-
You can use the ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command.
-
If you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually.
Creating AWS resources with a single command
If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources.
Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually".
Note
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.
You must have:
-
Extracted and prepared the ccoctl binary.
-
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
-
Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \
  --to=<path_to_directory_for_credentials_requests>
- The --included parameter includes only the manifests that your specific cluster configuration requires.
- Specify the location of the install-config.yaml file.
- Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
Note
This command might take a few moments to run.
-
Use the ccoctl tool to process all CredentialsRequest objects by running the following command:
$ ccoctl aws create-all \
  --name=<name> \
  --region=<aws_region> \
  --credentials-requests-dir=<path_to_credentials_requests_directory> \
  --output-dir=<path_to_ccoctl_output_dir> \
  --create-private-s3-bucket \
  --permissions-boundary-arn=<policy_arn>
- Specify the name used to tag any cloud resources that are created for tracking.
- Specify the AWS region in which cloud resources will be created.
- Specify the directory containing the files for the component CredentialsRequest objects.
- Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
- Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter.
- Optional: Specify the Amazon Resource Name (ARN) of the AWS IAM policy to use as the permissions boundary for the IAM roles created by the ccoctl utility.
Note
If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.
-
To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:
$ ls <path_to_ccoctl_output_dir>/manifests
Example output
cluster-authentication-02-config.yaml
openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml
openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-aws-cloud-credentials-credentials.yaml
You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles.
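For example, assuming <name> is the prefix that you passed to ccoctl, a query similar to the following lists the roles that were created:
$ aws iam list-roles --query 'Roles[?starts_with(RoleName, `<name>`)].RoleName' --output text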
Creating AWS resources individually
You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments.
Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command".
Note
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.
Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters.
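For example, after you review a generated JSON file, you might apply it with the AWS CLI; the command and file name here are placeholders and depend on which resource the file describes:
$ aws iam create-role --cli-input-json file://<path_to_json_file>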
-
Extract and prepare the ccoctl binary.
-
Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command:
$ ccoctl aws create-key-pair
Example output
2021/04/13 11:01:02 Generating RSA keypair
2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private
2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public
2021/04/13 11:01:03 Copying signing key for use by installer
where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files.
This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key.
-
Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command:
$ ccoctl aws create-identity-provider \
  --name=<name> \
  --region=<aws_region> \
  --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public
<name> is the name used to tag any cloud resources that are created for tracking.
<aws_region> is the AWS region in which cloud resources will be created.
<path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated.
Example output
2021/04/13 11:16:09 Bucket <name>-oidc created
2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated
2021/04/13 11:16:10 Reading public key
2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated
2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com
where openid-configuration is a discovery document and keys.json is a JSON web key set file.
This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml. This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens.
-
Create IAM roles for each component in the cluster:
-
Set a $RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
-
Extract the list of CredentialsRequest objects from the OpenShift Container Platform release image:
$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \
  --to=<path_to_directory_for_credentials_requests>
- The --included parameter includes only the manifests that your specific cluster configuration requires.
- Specify the location of the install-config.yaml file.
- Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
-
Use the ccoctl tool to process all CredentialsRequest objects by running the following command:
$ ccoctl aws create-iam-roles \
  --name=<name> \
  --region=<aws_region> \
  --credentials-requests-dir=<path_to_credentials_requests_directory> \
  --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com
Note
For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter.
If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.
For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image.
-
-
To verify that the OpenShift Container Platform secrets are created, list the files in the <path_to_ccoctl_output_dir>/manifests directory:
$ ls <path_to_ccoctl_output_dir>/manifests
Example output
cluster-authentication-02-config.yaml
openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml
openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-aws-cloud-credentials-credentials.yaml
You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles.
Incorporating the Cloud Credential Operator utility manifests
To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility (ccoctl) created to the correct directories for the installation program.
-
You have configured an account with the cloud platform that hosts your cluster.
-
You have configured the Cloud Credential Operator utility (ccoctl).
-
You have created the cloud provider resources that are required for your cluster with the ccoctl utility.
-
If you did not set the credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
-
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files.
-
Copy the manifests that the ccoctl utility generated to the manifests directory that the installation program created by running the following command:
$ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/
-
Copy the tls directory that contains the private key to the installation directory:
$ cp -a /<path_to_ccoctl_output_dir>/tls .
Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
Important
You can run the create cluster command of the installation program only once, during initial installation.
-
You have configured an account with the cloud platform that hosts your cluster.
-
You have the OpenShift Container Platform installation program and the pull secret for your cluster.
-
You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
-
In the directory that contains the installation program, initialize the cluster deployment by running the following command:
$ ./openshift-install create cluster --dir <installation_directory> \
  --log-level=info
- For <installation_directory>, specify the location of your customized ./install-config.yaml file.
- To view different installation details, specify warn, debug, or error instead of info.
-
Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.
Note
The elevated permissions provided by the AdministratorAccess policy are required only during installation.
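For example, if the policy was attached directly to an IAM user, a command similar to the following detaches it; the user name is a placeholder:
$ aws iam detach-user-policy \
  --user-name <user_name> \
  --policy-arn arn:aws:iam::aws:policy/AdministratorAccess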
When the cluster deployment completes successfully:
-
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
-
Credential information also outputs to <installation_directory>/.openshift_install.log.
Important
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
Important
-
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending
node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
-
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Provisioning your own DNS records
Use your cluster name and base cluster domain to configure a CNAME record that maps the API service hostname api.<cluster_name>.<base_domain>. to the API load balancer DNS name. Similarly, use the load balancer DNS name of the Ingress service to provision a CNAME record for the *.apps.<cluster_name>.<base_domain>. hostname.
Important
User-provisioned DNS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
-
You have installed the AWS CLI.
-
Add the userProvisionedDNS parameter to the install-config.yaml file and enable the parameter. For more information, see "Enabling a user-managed DNS".
-
Install your cluster.
-
If you are installing a private cluster, set the api_lb_name variable by running the following command:
$ api_lb_name="${INFRA_ID}-int"
-
If you are installing a public cluster, set the api_lb_name variable by running the following command:
$ api_lb_name="${INFRA_ID}-ext"
-
To retrieve the DNS name of the API service, run the following command:
$ aws --region ${REGION} elbv2 describe-load-balancers --names ${api_lb_name} --query 'LoadBalancers[*].DNSName' --output text
-
Use the DNS name and your cluster name and base cluster domain to configure your own DNS record with the
api.<cluster_name>.<base_domain>. hostname.
-
To retrieve the DNS name of the Ingress service, run the following command:
$ ingress_lb_name=$(aws --region ${REGION} resourcegroupstaggingapi get-resources --resource-type-filters elasticloadbalancing:loadbalancer --tag-filters Key=kubernetes.io/cluster/${INFRA_ID},Values=owned Key=kubernetes.io/service-name,Values=openshift-ingress/router-default --query 'ResourceTagMappingList[*].ResourceARN | [0]' --output text | awk -F'/' '{print $2}')
-
Run the following command, which uses the variable ingress_lb_name generated from the previous command:
$ aws --region ${REGION} elb describe-load-balancers --load-balancer-names ${ingress_lb_name} --query 'LoadBalancerDescriptions[].DNSName' --output text
-
Use the DNS name and your cluster name and base cluster domain to configure your own DNS record with the
*.apps.<cluster_name>.<base_domain>. hostname.
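Optionally, after you create the records in your DNS service, you can confirm that the hostnames resolve to the load balancer addresses, for example by using a tool such as dig; the hostnames shown here are placeholders:
$ dig +short api.<cluster_name>.<base_domain>
$ dig +short test.apps.<cluster_name>.<base_domain>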
Logging in to the cluster by using the CLI
To log in to your cluster as the default system user, export the kubeconfig file. This configuration enables the CLI to authenticate and connect to the specific API server created during OpenShift Container Platform installation.
The kubeconfig file is specific to a cluster and is created during OpenShift Container Platform installation.
-
You deployed an OpenShift Container Platform cluster.
-
You installed the OpenShift CLI (oc).
-
Export the kubeadmin credentials by running the following command:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
where:
<installation_directory>-
Specifies the path to the directory that stores the installation files.
-
Verify you can run oc commands successfully using the exported configuration by running the following command:
$ oc whoami
Example output
system:admin
Disabling the default software catalog sources
Operator catalogs that source content provided by Red Hat and community projects are configured for the software catalog by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.
-
Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:
$ oc patch OperatorHub cluster --type json \
  -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
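To confirm the change, you can review the OperatorHub object and verify that the default sources report as disabled. For example:
$ oc get operatorhub cluster -o yaml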
Tip
Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
Next steps
-
Configure image streams for the Cluster Samples Operator and the must-gather tool.
-
Learn how to use Operator Lifecycle Manager in disconnected environments.
-
If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores.
-
If necessary, you can opt out of remote health reporting.