Installing a cluster on AWS with customizations
In OpenShift Container Platform version 4.19, you can install a cluster on Amazon Web Services (AWS) by using installer-provisioned infrastructure with customizations, including network configuration options. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. You must set most of the network configuration parameters during installation, and you can modify only kubeProxy configuration parameters in a running cluster.
Note
The scope of the OpenShift Container Platform installation configurations is intentionally narrow. It is designed for simplicity and to help ensure success. You can complete many more OpenShift Container Platform configuration tasks after an installation completes.
Prerequisites
-
You reviewed details about the OpenShift Container Platform installation and update processes.
-
You read the documentation on selecting a cluster installation method and preparing it for users.
-
You configured an AWS account to host the cluster.
Important
If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
-
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
Obtaining an AWS Marketplace image
If you are deploying an OpenShift Container Platform cluster using an AWS Marketplace image, you must first subscribe through AWS. Subscribing to the offer provides you with the AMI ID that the installation program uses to deploy compute nodes.
Note
You should only modify the RHCOS image for compute machines to use an AWS Marketplace image. Control plane machines and infrastructure nodes do not require an OpenShift Container Platform subscription and use the public RHCOS default image by default, which does not incur subscription costs on your AWS bill. Therefore, you should not modify the cluster default boot image or the control plane boot images. Applying the AWS Marketplace image to them will incur additional licensing costs that cannot be recovered.
-
You have an AWS account to purchase the offer. This account does not have to be the same account that is used to install the cluster.
-
Complete the OpenShift Container Platform subscription from the AWS Marketplace.
-
Record the AMI ID for your specific AWS Region. As part of the installation process, you must update the install-config.yaml file with this value before deploying the cluster.
Sample install-config.yaml file with AWS Marketplace compute nodes
apiVersion: v1
baseDomain: example.com
compute:
- hyperthreading: Enabled
  name: worker
  platform:
    aws:
      amiID: ami-06c4d345f7c207239
      type: m5.4xlarge
  replicas: 3
metadata:
  name: test-cluster
platform:
  aws:
    region: us-east-2
sshKey: ssh-ed25519 AAAA...
pullSecret: '{"auths": ...}'
- The AMI ID from your AWS Marketplace subscription.
- Your AMI ID is associated with a specific AWS Region. When creating the installation configuration file, ensure that you select the same AWS Region that you specified when configuring your subscription.
Network configuration phases
There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration.
- Phase 1
-
You can customize the following network-related fields in the
install-config.yaml file before you create the manifest files:
- networking.networkType
- networking.clusterNetwork
- networking.serviceNetwork
- networking.machineNetwork
- nodeNetworking
For more information, see "Installation configuration parameters". A minimal sample networking stanza that sets these fields appears at the end of this section.
Note
Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located.
Important
The CIDR range 172.17.0.0/16 is reserved by libVirt. You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster.
-
- Phase 2
-
After creating the manifest files by running
openshift-install create manifests, you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration.
During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2.
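For reference, a minimal sketch of a phase 1 networking stanza in the install-config.yaml file follows. The CIDR values shown are the common defaults and are illustrative assumptions, not requirements for your environment:
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16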
Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on Amazon Web Services (AWS).
-
You have the OpenShift Container Platform installation program and the pull secret for your cluster.
-
Create the
install-config.yaml file.
-
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory>
- <installation_directory>: For <installation_directory>, specify the directory name to store the files that the installation program creates.
When specifying the directory:
-
Verify that the directory has the
execute permission. This permission is required to run Terraform binaries under the installation directory.
-
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
-
-
At the prompts, provide the configuration details for your cloud:
-
Optional: Select an SSH key to use to access your cluster machines.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your
ssh-agent process uses.
-
Select AWS as the platform to target.
-
If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
-
Select the AWS region to deploy the cluster to.
-
Select the base domain for the Route 53 service that you configured for your cluster.
-
Enter a descriptive name for your cluster.
-
-
-
Modify the
install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section.
Note
If you are installing a three-node cluster, be sure to set the
compute.replicas parameter to 0. This ensures that the cluster's control plane nodes are schedulable. For more information, see "Installing a three-node cluster on AWS". A minimal compute stanza for this case appears at the end of this procedure.
Back up the
install-config.yaml file so that you can use it to install multiple clusters.
Important
The
install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
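As a reference for the three-node case mentioned in the note above, a minimal sketch of the compute stanza with zero compute replicas follows; the rest of the install-config.yaml file stays as the installation program generated it:
compute:
- hyperthreading: Enabled
  name: worker
  platform: {}
  replicas: 0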
Minimum resource requirements for cluster installation
Each created cluster must meet minimum requirements so that the cluster runs as expected.
| Machine | Operating System | vCPU [1] | Virtual RAM | Storage | Input/Output Per Second (IOPS)[2] |
|---|---|---|---|---|---|
| Bootstrap | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Control plane | RHCOS | 4 | 16 GB | 100 GB | 300 |
| Compute | RHCOS | 2 | 8 GB | 100 GB | 300 |
-
One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core Ă— cores) Ă— sockets = vCPUs.
-
OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes which require a 10 ms p99 fsync duration. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance.
-
As with all user-provisioned installations, if you choose to use RHEL compute machines in your cluster, you take responsibility for all operating system life cycle management and maintenance, including performing system updates, applying patches, and completing all other required tasks. Use of RHEL 7 compute machines is deprecated and has been removed in OpenShift Container Platform 4.10 and later.
Note
For OpenShift Container Platform version 4.19, RHCOS is based on RHEL version 9.6, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires:
-
x86-64 architecture requires x86-64-v2 ISA
-
ARM64 architecture requires ARMv8.0-A ISA
-
IBM Power architecture requires Power 9 ISA
-
s390x architecture requires z14 ISA
For more information, see Architectures (RHEL documentation).
If an instance type for your platform meets the minimum requirements for cluster machines, it is supported for use in OpenShift Container Platform.
Tested instance types for AWS
The following Amazon Web Services (AWS) instance types have been tested with OpenShift Container Platform.
Note
Use the machine types included in the following charts for your AWS instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in the section named "Minimum resource requirements for cluster installation".
Machine types based on 64-bit x86 architecture
Tested instance types for AWS on 64-bit ARM infrastructures
The following Amazon Web Services (AWS) 64-bit ARM instance types have been tested with OpenShift Container Platform.
Note
Use the machine types included in the following charts for your AWS ARM instances. If you use an instance type that is not listed in the chart, ensure that the instance size you use matches the minimum resource requirements that are listed in "Minimum resource requirements for cluster installation".
Machine types based on 64-bit ARM architecture
Enabling a user-managed DNS
You can install a cluster with a domain name server (DNS) solution that you manage instead of the default cluster-provisioned DNS solution that uses the Route 53 service for Amazon Web Services (AWS).
For example, your organization’s security policies might not allow the use of public DNS services such as Amazon Web Services DNS. In such scenarios, you can use your own DNS service to bypass the public DNS service and manage your own DNS for the IP addresses of the API and Ingress services.
If you enable user-managed DNS during installation, the installation program provisions DNS records for the API and Ingress services only within the cluster. To ensure access from outside the cluster, you must provision the DNS records in an external DNS service of your choice for the API and Ingress services after installation.
Important
User-provisioned DNS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
-
Before you deploy your cluster, use a text editor to open the
install-config.yaml file and add the following stanza:
-
To enable user-managed DNS:
featureSet: CustomNoUpgrade
featureGates: ["AWSClusterHostedDNSInstall=true"]
# ...
platform:
  aws:
    userProvisionedDNS: Enabled
where:
userProvisionedDNS-
Enables user-provisioned DNS management.
-
For information about provisioning your DNS records for the API server and the Ingress services, see "Provisioning your own DNS records".
Sample customized install-config.yaml file for AWS
You can customize the installation configuration file (install-config.yaml) to specify more details about your OpenShift Container Platform cluster’s platform or modify the values of the required parameters.
Important
This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it.
For a full list and description of all installation configuration parameters, see Installation configuration parameters for AWS.
install-config.yaml file for AWS
apiVersion: v1
baseDomain: example.com
sshKey: ssh-ed25519 AAAA...
pullSecret: '{"auths": ...}'
metadata:
name: example-cluster
controlPlane:
name: master
platform:
aws:
type: m6i.xlarge
replicas: 3
compute:
- name: worker
platform:
aws:
type: c5.4xlarge
replicas: 3
networking:
clusterNetwork:
- cidr: 10.128.0.0/14
hostPrefix: 23
platform:
aws:
region: us-west-2
- Parameters at the first level of indentation apply to the cluster globally.
- The controlPlane stanza applies to control plane machines.
- The compute stanza applies to compute machines.
- The networking stanza applies to the cluster networking configuration. If you do not provide networking values, the installation program provides default values.
- The platform stanza applies to the infrastructure platform that hosts the cluster.
Configuring the cluster-wide proxy during installation
To enable internet access in environments that deny direct connections, configure a cluster-wide proxy in the install-config.yaml file. This configuration ensures that the new OpenShift Container Platform cluster routes traffic through the specified HTTP or HTTPS proxy.
-
You have an existing
install-config.yaml file.
You have reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the
Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note
The
Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud, Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the
Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
-
Edit your
install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: https://<username>:<pswd>@<ip>:<port>
  noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com
additionalTrustBundle: |
  -----BEGIN CERTIFICATE-----
  <MY_TRUSTED_CA_CERT>
  -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle>
# ...
where:
proxy.httpProxy-
Specifies a proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be
http. proxy.httpsProxy-
Specifies a proxy URL to use for creating HTTPS connections outside the cluster.
proxy.noProxy-
Specifies a comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with
. to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field.
additionalTrustBundle-
If provided, the installation program generates a config map that is named
user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
additionalTrustBundlePolicy-
Specifies the policy that determines the configuration of the
Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly. Optional parameter.
Note
The installation program does not support the proxy
readinessEndpoints field.
Note
If the installer times out, restart and then complete the deployment by using the
wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
-
Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named
cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Note
Only the
Proxy object named cluster is supported, and no additional proxies can be created.
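For orientation only, the resulting Proxy object resembles the following sketch. The values are placeholders that mirror the install-config.yaml settings shown earlier, not output captured from a real cluster:
apiVersion: config.openshift.io/v1
kind: Proxy
metadata:
  name: cluster
spec:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: https://<username>:<pswd>@<ip>:<port>
  noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com
  trustedCA:
    name: user-ca-bundle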
Alternatives to storing administrator-level secrets in the kube-system project
By default, administrator secrets are stored in the kube-system project. If you configured the credentialsMode parameter in the install-config.yaml file to Manual, you must use one of the following alternatives:
-
To manage long-term cloud credentials manually, follow the procedure in Manually creating long-term credentials.
-
To implement short-term credentials that are managed outside the cluster for individual components, follow the procedures in Configuring an AWS cluster to use short-term credentials.
Manually creating long-term credentials
The Cloud Credential Operator (CCO) can be put into manual mode prior to installation in environments where the cloud identity and access management (IAM) APIs are not reachable, or the administrator prefers not to store an administrator-level credential secret in the cluster kube-system namespace.
-
If you did not set the
credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files.
Set a
$RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of
CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \
  --to=<path_to_directory_for_credentials_requests>
- The --included parameter includes only the manifests that your specific cluster configuration requires.
- Specify the location of the install-config.yaml file.
- Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
This command creates a YAML file for each CredentialsRequest object.
Sample CredentialsRequest object
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - iam:GetUser
      - iam:GetUserPolicy
      - iam:ListAccessKeys
      resource: "*"
  ...
-
Create YAML files for secrets in the
openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object.
Sample CredentialsRequest object with secrets
apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: <component_credentials_request>
  namespace: openshift-cloud-credential-operator
  ...
spec:
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - s3:CreateBucket
      - s3:DeleteBucket
      resource: "*"
  ...
  secretRef:
    name: <component_secret>
    namespace: <component_namespace>
  ...
Sample Secret object
apiVersion: v1
kind: Secret
metadata:
  name: <component_secret>
  namespace: <component_namespace>
data:
  aws_access_key_id: <base64_encoded_aws_access_key_id>
  aws_secret_access_key: <base64_encoded_aws_secret_access_key>
Important
Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state.
Configuring an AWS cluster to use short-term credentials
To install a cluster that is configured to use the AWS Security Token Service (STS), you must configure the CCO utility and create the required AWS resources for your cluster.
Configuring the Cloud Credential Operator utility
To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility (ccoctl) binary.
Note
The ccoctl utility is a Linux binary that must run in a Linux environment.
-
You have access to an OpenShift Container Platform account with cluster administrator access.
-
You have installed the OpenShift CLI (
oc).
-
You have created an AWS account for the
ccoctl utility to use with the following permissions:
Required iam permissions
-
iam:CreateOpenIDConnectProvider -
iam:CreateRole -
iam:DeleteOpenIDConnectProvider -
iam:DeleteRole -
iam:DeleteRolePolicy -
iam:GetOpenIDConnectProvider -
iam:GetRole -
iam:GetUser -
iam:ListOpenIDConnectProviders -
iam:ListRolePolicies -
iam:ListRoles -
iam:PutRolePolicy -
iam:TagOpenIDConnectProvider -
iam:TagRole
Required
s3 permissions
-
s3:CreateBucket -
s3:DeleteBucket -
s3:DeleteObject -
s3:GetBucketAcl -
s3:GetBucketTagging -
s3:GetObject -
s3:GetObjectAcl -
s3:GetObjectTagging -
s3:ListBucket -
s3:PutBucketAcl -
s3:PutBucketPolicy -
s3:PutBucketPublicAccessBlock -
s3:PutBucketTagging -
s3:PutObject -
s3:PutObjectAcl -
s3:PutObjectTagging
Required
cloudfront permissions
-
cloudfront:ListCloudFrontOriginAccessIdentities -
cloudfront:ListDistributions -
cloudfront:ListTagsForResource
-
-
If you plan to store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL, the AWS account that runs the
ccoctl utility requires the following additional permissions:
-
cloudfront:CreateCloudFrontOriginAccessIdentity -
cloudfront:CreateDistribution -
cloudfront:DeleteCloudFrontOriginAccessIdentity -
cloudfront:DeleteDistribution -
cloudfront:GetCloudFrontOriginAccessIdentity -
cloudfront:GetCloudFrontOriginAccessIdentityConfig -
cloudfront:GetDistribution -
cloudfront:TagResource -
cloudfront:UpdateDistribution
Note
These additional permissions support the use of the
--create-private-s3-bucket option when processing credentials requests with the ccoctl aws create-all command.
-
Set a variable for the OpenShift Container Platform release image by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}') -
Obtain the CCO container image from the OpenShift Container Platform release image by running the following command:
$ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret)
Note
Ensure that the architecture of the
$RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool.
Extract the
ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command:
$ oc image extract $CCO_IMAGE \
  --file="/usr/bin/ccoctl.<rhel_version>" \
  -a ~/.pull-secret
- For <rhel_version>, specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid:
  - rhel8: Specify this value for hosts that use RHEL 8.
  - rhel9: Specify this value for hosts that use RHEL 9.
-
Note
The
ccoctl binary is created in the directory from where you executed the command, not in /usr/bin/. You must rename or move the ccoctl.<rhel_version> binary to ccoctl.
-
Change the permissions to make
ccoctl executable by running the following command:
$ chmod 775 ccoctl
-
To verify that
ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example:
$ ./ccoctl
Example output
OpenShift credentials provisioning tool

Usage:
  ccoctl [command]

Available Commands:
  aws          Manage credentials objects for AWS cloud
  azure        Manage credentials objects for Azure
  gcp          Manage credentials objects for Google cloud
  help         Help about any command
  ibmcloud     Manage credentials objects for IBM Cloud
  nutanix      Manage credentials objects for Nutanix

Flags:
  -h, --help   help for ccoctl

Use "ccoctl [command] --help" for more information about a command.
Creating AWS resources with the Cloud Credential Operator utility
You have the following options when creating AWS resources:
-
You can use the
ccoctl aws create-all command to create the AWS resources automatically. This is the quickest way to create the resources. See Creating AWS resources with a single command.
If you need to review the JSON files that the
ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization, you can create the AWS resources individually. See Creating AWS resources individually.
Creating AWS resources with a single command
If the process the ccoctl tool uses to create AWS resources automatically meets the requirements of your organization, you can use the ccoctl aws create-all command to automate the creation of AWS resources.
Otherwise, you can create the AWS resources individually. For more information, see "Creating AWS resources individually".
Note
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.
You must have:
-
Extracted and prepared the
ccoctl binary.
-
Set a
$RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of
CredentialsRequest objects from the OpenShift Container Platform release image by running the following command:
$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \
  --to=<path_to_directory_for_credentials_requests>
- The --included parameter includes only the manifests that your specific cluster configuration requires.
- Specify the location of the install-config.yaml file.
- Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
Note
This command might take a few moments to run.
-
Use the
ccoctl tool to process all CredentialsRequest objects by running the following command:
$ ccoctl aws create-all \
  --name=<name> \
  --region=<aws_region> \
  --credentials-requests-dir=<path_to_credentials_requests_directory> \
  --output-dir=<path_to_ccoctl_output_dir> \
  --create-private-s3-bucket \
  --permissions-boundary-arn=<policy_arn>
- Specify the name used to tag any cloud resources that are created for tracking.
- Specify the AWS region in which cloud resources will be created.
- Specify the directory containing the files for the component CredentialsRequest objects.
- Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run.
- Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter.
- Optional: Specify the Amazon Resource Name (ARN) of the AWS IAM policy to use as the permissions boundary for the IAM roles created by the ccoctl utility.
Note
If your cluster uses Technology Preview features that are enabled by the
TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.
-
To verify that the OpenShift Container Platform secrets are created, list the files in the
<path_to_ccoctl_output_dir>/manifests directory:
$ ls <path_to_ccoctl_output_dir>/manifests
Example output
cluster-authentication-02-config.yaml
openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml
openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-aws-cloud-credentials-credentials.yaml
You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles.
Creating AWS resources individually
You can use the ccoctl tool to create AWS resources individually. This option might be useful for an organization that shares the responsibility for creating these resources among different users or departments.
Otherwise, you can use the ccoctl aws create-all command to create the AWS resources automatically. For more information, see "Creating AWS resources with a single command".
Note
By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory.
Some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters.
-
Extract and prepare the
ccoctl binary.
-
Generate the public and private RSA key files that are used to set up the OpenID Connect provider for the cluster by running the following command:
$ ccoctl aws create-key-pair
Example output
2021/04/13 11:01:02 Generating RSA keypair
2021/04/13 11:01:03 Writing private key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.private
2021/04/13 11:01:03 Writing public key to /<path_to_ccoctl_output_dir>/serviceaccount-signer.public
2021/04/13 11:01:03 Copying signing key for use by installer
where serviceaccount-signer.private and serviceaccount-signer.public are the generated key files.
This command also creates a private key that the cluster requires during installation in /<path_to_ccoctl_output_dir>/tls/bound-service-account-signing-key.key.
Create an OpenID Connect identity provider and S3 bucket on AWS by running the following command:
$ ccoctl aws create-identity-provider \
  --name=<name> \
  --region=<aws_region> \
  --public-key-file=<path_to_ccoctl_output_dir>/serviceaccount-signer.public
where <name> is the name used to tag any cloud resources that are created for tracking, <aws_region> is the AWS region in which cloud resources will be created, and <path_to_ccoctl_output_dir> is the path to the public key file that the ccoctl aws create-key-pair command generated.
Example output
2021/04/13 11:16:09 Bucket <name>-oidc created
2021/04/13 11:16:10 OpenID Connect discovery document in the S3 bucket <name>-oidc at .well-known/openid-configuration updated
2021/04/13 11:16:10 Reading public key
2021/04/13 11:16:10 JSON web key set (JWKS) in the S3 bucket <name>-oidc at keys.json updated
2021/04/13 11:16:18 Identity Provider created with ARN: arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com
where openid-configuration is a discovery document and keys.json is a JSON web key set file.
This command also creates a YAML configuration file in /<path_to_ccoctl_output_dir>/manifests/cluster-authentication-02-config.yaml. This file sets the issuer URL field for the service account tokens that the cluster generates, so that the AWS IAM identity provider trusts the tokens.
-
Create IAM roles for each component in the cluster:
-
Set a
$RELEASE_IMAGE variable with the release image from your installation file by running the following command:
$ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}')
Extract the list of
CredentialsRequest objects from the OpenShift Container Platform release image:
$ oc adm release extract \
  --from=$RELEASE_IMAGE \
  --credentials-requests \
  --included \
  --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \
  --to=<path_to_directory_for_credentials_requests>
- The --included parameter includes only the manifests that your specific cluster configuration requires.
- Specify the location of the install-config.yaml file.
- Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it.
-
Use the
ccoctl tool to process all CredentialsRequest objects by running the following command:
$ ccoctl aws create-iam-roles \
  --name=<name> \
  --region=<aws_region> \
  --credentials-requests-dir=<path_to_credentials_requests_directory> \
  --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com
Note
For AWS environments that use alternative IAM API endpoints, such as GovCloud, you must also specify your region with the --region parameter.
If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter.
For each CredentialsRequest object, ccoctl creates an IAM role with a trust policy that is tied to the specified OIDC identity provider, and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image.
-
-
To verify that the OpenShift Container Platform secrets are created, list the files in the
<path_to_ccoctl_output_dir>/manifests directory:
$ ls <path_to_ccoctl_output_dir>/manifests
Example output
cluster-authentication-02-config.yaml
openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml
openshift-cloud-network-config-controller-cloud-credentials-credentials.yaml
openshift-cluster-api-capa-manager-bootstrap-credentials-credentials.yaml
openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml
openshift-image-registry-installer-cloud-credentials-credentials.yaml
openshift-ingress-operator-cloud-credentials-credentials.yaml
openshift-machine-api-aws-cloud-credentials-credentials.yaml
You can verify that the IAM roles are created by querying AWS. For more information, refer to AWS documentation on listing IAM roles.
Incorporating the Cloud Credential Operator utility manifests
To implement short-term security credentials managed outside the cluster for individual components, you must move the manifest files that the Cloud Credential Operator utility (ccoctl) created to the correct directories for the installation program.
-
You have configured an account with the cloud platform that hosts your cluster.
-
You have configured the Cloud Credential Operator utility (
ccoctl). -
You have created the cloud provider resources that are required for your cluster with the
ccoctl utility.
-
If you did not set the
credentialsMode parameter in the install-config.yaml configuration file to Manual, modify the value as shown:
Sample configuration file snippet
apiVersion: v1
baseDomain: example.com
credentialsMode: Manual
# ...
If you have not previously created installation manifest files, do so by running the following command:
$ openshift-install create manifests --dir <installation_directory>
where <installation_directory> is the directory in which the installation program creates files.
Copy the manifests that the
ccoctl utility generated to the manifests directory that the installation program created by running the following command:
$ cp /<path_to_ccoctl_output_dir>/manifests/* ./manifests/
Copy the
tls directory that contains the private key to the installation directory:
$ cp -a /<path_to_ccoctl_output_dir>/tls .
Cluster Network Operator configuration
To manage cluster networking, configure the Cluster Network Operator (CNO) Network custom resource (CR) named cluster so that the cluster uses the correct IP ranges and network plugin settings for reliable pod and service connectivity. Some settings and fields are inherited at installation time or from the default Network.type plugin, OVN-Kubernetes.
The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group:
clusterNetwork-
IP address pools from which pod IP addresses are allocated.
serviceNetwork-
IP address pool for services.
defaultNetwork.type-
Cluster network plugin.
OVNKubernetes is the only supported plugin during installation.
You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster.
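For example, a minimal sketch of the CNO object named cluster with the inherited fields might look like the following; the address ranges are the common defaults and are shown only as an illustration:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  defaultNetwork:
    type: OVNKubernetes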
Cluster Network Operator configuration object
The fields for the Cluster Network Operator (CNO) are described in the following table:
| Field | Type | Description |
|---|---|---|
|
|
The name of the CNO object. This name is always |
|
|
A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example:
|
|
|
A block of IP addresses for services. The OVN-Kubernetes network plugin supports only a single IP address block for the service network. For example:
You can customize this field only in the |
|
|
Configures the network plugin for the cluster network. |
|
|
This setting enables a dynamic routing provider. The FRR routing capability provider is required for the route advertisement feature. The only supported value is
|
Important
For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes.
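For example, an install-config.yaml networking stanza that defines two cluster networks and satisfies this requirement sets the same hostPrefix for both entries; the CIDR ranges are illustrative only:
networking:
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  - cidr: 10.132.0.0/14
    hostPrefix: 23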
defaultNetwork object configuration
The values for the defaultNetwork object are defined in the following table:
| Field | Type | Description |
|---|---|---|
|
|
Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. |
|
|
This object is only valid for the OVN-Kubernetes network plugin. |
Configuration for the OVN-Kubernetes network plugin
The following table describes the configuration fields for the OVN-Kubernetes network plugin:
| Field | Type | Description |
|---|---|---|
|
|
The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to |
|
|
The port to use for all Geneve packets. The default value is 6081. |
|
|
Specify a configuration object for customizing the IPsec configuration. |
|
|
Specifies a configuration object for IPv4 settings. |
|
|
Specifies a configuration object for IPv6 settings. |
|
|
Specify a configuration object for customizing network policy audit logging. If unset, the default audit log settings are used. |
|
|
Specifies whether to advertise cluster network routes. The default value is
|
|
|
Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Valid values are Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. |
| Field | Type | Description |
|---|---|---|
|
string |
If your existing network infrastructure overlaps with the The default value is |
|
string |
If your existing network infrastructure overlaps with the The default value is |
| Field | Type | Description |
|---|---|---|
|
string |
If your existing network infrastructure overlaps with the The default value is |
|
string |
If your existing network infrastructure overlaps with the The default value is |
| Field | Type | Description |
|---|---|---|
|
integer |
The maximum number of messages to generate every second per node. The default value is |
|
integer |
The maximum size for the audit log in bytes. The default value is |
|
integer |
The maximum number of log files that are retained. |
|
string |
One of the following additional audit log targets:
|
|
string |
The syslog facility, such as |
| Field | Type | Description |
|---|---|---|
|
|
Set this field to This field has an interaction with the Open vSwitch hardware offloading feature.
If you set this field to |
|
|
You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the Note The default value of |
|
|
Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. |
|
|
Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. |
| Field | Type | Description |
|---|---|---|
|
|
The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is Important For OpenShift Container Platform 4.17 and later versions, clusters use |
| Field | Type | Description |
|---|---|---|
|
|
The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is Important For OpenShift Container Platform 4.17 and later versions, clusters use |
| Field | Type | Description |
|---|---|---|
|
|
Specifies the behavior of the IPsec implementation. Must be one of the following values:
|
defaultNetwork:
type: OVNKubernetes
ovnKubernetesConfig:
mtu: 1400
genevePort: 6081
ipsecConfig:
mode: Full
Specifying advanced network configuration
You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment.
You can specify advanced network configuration only before you install the cluster.
Important
Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported.
-
You have created the
install-config.yaml file and completed any modifications to it.
-
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory>
where <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster.
-
Create a stub manifest file for the advanced network configuration that is named
cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
Specify the advanced network configuration for your cluster in the
cluster-network-03-config.yml file, such as in the following example:
Enable IPsec for the OVN-Kubernetes network provider
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      ipsecConfig:
        mode: Full
Optional: Back up the
manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files.
Remove the Kubernetes manifest files that define the control plane machines and compute
MachineSets:
$ rm -f openshift/99_openshift-cluster-api_master-machines-*.yaml openshift/99_openshift-cluster-api_worker-machineset-*.yaml
Because you create and manage these resources yourself, you do not have to initialize them.
-
You can preserve the
MachineSet files to create compute machines by using the machine API, but you must update references to them to match your environment.
-
Note
For more information on using a Network Load Balancer (NLB) on AWS, see Configuring Ingress cluster traffic on AWS using a Network Load Balancer.
Configuring an Ingress Controller Network Load Balancer on a new AWS cluster
You can create an Ingress Controller backed by an Amazon Web Services Network Load Balancer (NLB) on a new cluster in situations where you need more transparent networking capabilities.
-
Create and edit the
install-config.yaml file. For instructions, see "Creating the installation configuration file" in the Additional resources section.
-
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory>
-
For
<installation_directory>, specify the name of the directory that contains the install-config.yaml file for your cluster.
-
-
Create a file that is named
cluster-ingress-default-ingresscontroller.yaml in the <installation_directory>/manifests/ directory:
$ touch <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml
where <installation_directory> specifies the directory name that contains the manifests/ directory for your cluster.
-
Verify that the file was created in the manifests/ directory by entering the following command:
$ ls <installation_directory>/manifests/cluster-ingress-default-ingresscontroller.yaml
Example output
cluster-ingress-default-ingresscontroller.yaml
Open the
cluster-ingress-default-ingresscontroller.yaml file in an editor and enter a custom resource (CR) that describes the Operator configuration you want:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  creationTimestamp: null
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    loadBalancer:
      scope: External
      providerParameters:
        type: AWS
        aws:
          type: NLB
    type: LoadBalancerService
Save the
cluster-ingress-default-ingresscontroller.yaml file and quit the text editor.
Optional: Back up the
manifests/cluster-ingress-default-ingresscontroller.yaml file because the installation program deletes the manifests/ directory during cluster creation.
Configuring hybrid networking with OVN-Kubernetes
You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations.
Note
This configuration is necessary to run both Linux and Windows nodes in the same cluster.
-
You defined
OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information.
-
Change to the directory that contains the installation program and create the manifests:
$ ./openshift-install create manifests --dir <installation_directory>
where:
<installation_directory>-
Specifies the name of the directory that contains the
install-config.yaml file for your cluster.
-
Create a stub manifest file for the advanced network configuration that is named
cluster-network-03-config.yml in the <installation_directory>/manifests/ directory:
$ cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
EOF
where:
<installation_directory>-
Specifies the directory name that contains the
manifests/ directory for your cluster.
-
Open the
cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example:
Specify a hybrid networking configuration
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    ovnKubernetesConfig:
      hybridOverlayConfig:
        hybridClusterNetwork:
        - cidr: 10.132.0.0/14
          hostPrefix: 23
        hybridOverlayVXLANPort: 9898
- Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR.
- Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 6081 port. For more information on this requirement, see Pod-to-pod connectivity between hosts is broken in the Microsoft documentation.
Note
Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom
hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port.
-
Save the
cluster-network-03-config.yml file and quit the text editor.
Optional: Back up the
manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster.
Note
For more information about using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads.
Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
Important
You can run the create cluster command of the installation program only once, during initial installation.
-
You have configured an account with the cloud platform that hosts your cluster.
-
You have the OpenShift Container Platform installation program and the pull secret for your cluster.
-
You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
-
In the directory that contains the installation program, initialize the cluster deployment by running the following command:
$ ./openshift-install create cluster --dir <installation_directory> \
  --log-level=info
- For <installation_directory>, specify the location of your customized ./install-config.yaml file.
- To view different installation details, specify warn, debug, or error instead of info.
- For
-
Optional: Remove or disable the
AdministratorAccess policy from the IAM account that you used to install the cluster.
Note
The elevated permissions provided by the
AdministratorAccess policy are required only during installation.
When the cluster deployment completes successfully:
-
The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the
kubeadmin user.
Credential information also outputs to
<installation_directory>/.openshift_install.log.
Important
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
Important
-
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending
node-bootstrappercertificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. -
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
Provisioning your own DNS records
Use your cluster name and base cluster domain to configure a CNAME record for the api.<cluster_name>.<base_domain>. hostname of the API service that points to the API load balancer DNS name. Similarly, use the load balancer DNS name of the Ingress service to provision a CNAME record for the *.apps.<cluster_name>.<base_domain>. hostname.
Important
User-provisioned DNS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
-
You have installed the AWS CLI.
-
Add the
userProvisionedDNS parameter to the install-config.yaml file and enable the parameter. For more information, see "Enabling a user-managed DNS".
Install your cluster.
-
If you are installing a private cluster, set the
api_lb_name variable by running the following command:
$ api_lb_name="${INFRA_ID}-int"
If you are installing a public cluster, set the
api_lb_name variable by running the following command:
$ api_lb_name="${INFRA_ID}-ext"
To retrieve the DNS name of the API service, run the following command:
$ aws --region ${REGION} elbv2 describe-load-balancers --names ${api_lb_name} --query 'LoadBalancers[*].DNSName' --output text -
Use the DNS name and your cluster name and base cluster domain to configure your own DNS record with the
api.<cluster_name>.<base_domain>. hostname.
To retrieve the DNS name of the Ingress service, run the following command:
$ ingress_lb_name=$(aws --region ${REGION} resourcegroupstaggingapi get-resources --resource-type-filters elasticloadbalancing:loadbalancer --tag-filters Key=kubernetes.io/cluster/${INFRA_ID},Values=owned Key=kubernetes.io/service-name,Values=openshift-ingress/router-default --query 'ResourceTagMappingList[*].ResourceARN | [0]' --output text | awk -F'/' '{print $2}') -
Run the following command, which uses the variable
ingress_lb_name generated from the previous command:
$ aws --region ${REGION} elb describe-load-balancers --load-balancer-names ${ingress_lb_name} --query 'LoadBalancerDescriptions[].DNSName' --output text
Use the DNS name and your cluster name and base cluster domain to configure your own DNS record with the
*.apps.<cluster_name>.<base_domain>. hostname.
Logging in to the cluster by using the CLI
To log in to your cluster as the default system user, export the kubeconfig file. This configuration enables the CLI to authenticate and connect to the specific API server created during OpenShift Container Platform installation.
The kubeconfig file is specific to a cluster and is created during OpenShift Container Platform installation.
-
You deployed an OpenShift Container Platform cluster.
-
You installed the OpenShift CLI (
oc).
-
Export the
kubeadmin credentials by running the following command:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
where:
<installation_directory>-
Specifies the path to the directory that stores the installation files.
-
Verify you can run
oc commands successfully using the exported configuration by running the following command:
$ oc whoami
Example output
system:admin
Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
-
You have access to the installation host.
-
You completed a cluster installation and all cluster Operators are available.
-
Obtain the password for the
kubeadmin user from the kubeadmin-password file on the installation host:
$ cat <installation_directory>/auth/kubeadmin-password
Note
Alternatively, you can obtain the
kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.
List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'
Note
Alternatively, you can obtain the OpenShift Container Platform route from the
<installation_directory>/.openshift_install.log log file on the installation host.
Example output
console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None
Navigate to the route detailed in the output of the preceding command in a web browser and log in as the
kubeadmin user.
Next steps
-
If necessary, you can opt out of remote health reporting.
-
If necessary, you can remove cloud provider credentials.