Installing a cluster on user-provisioned infrastructure in AWS by using CloudFormation templates
In OpenShift Container Platform version 4.19, you can install a cluster on Amazon Web Services (AWS) that uses infrastructure that you provide.
One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company’s policies.
Important
The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.
Prerequisites
-
You reviewed details about the OpenShift Container Platform installation and update processes.
-
You read the documentation on selecting a cluster installation method and preparing it for users.
-
You configured an AWS account to host the cluster.
Important
If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program.
-
You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or UNIX) in the AWS documentation.
-
If you use a firewall, you configured it to allow the sites that your cluster requires access to.
Note
Be sure to also review this site list if you are configuring a proxy.
-
If the cloud identity and access management (IAM) APIs are not accessible in your environment, or if you do not want to store an administrator-level credential secret in the
kube-system namespace, you can manually create and maintain long-term credentials.
Creating the installation files for AWS
To install OpenShift Container Platform on Amazon Web Services using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation.
Optional: Creating a separate /var partition
It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.
OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:
-
/var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.
-
/var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.
-
/var: Holds data that you might want to keep separate for purposes such as auditing.
Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems.
Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.
Important
If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section.
-
Create a directory to hold the OpenShift Container Platform installation files:
$ mkdir $HOME/clusterconfig
-
Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted:
$ openshift-install create manifests --dir $HOME/clusterconfig
Example output
? SSH Public Key ...
INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"
INFO Consuming Install Config from target directory
INFO Manifests created in: $HOME/clusterconfig/manifests and $HOME/clusterconfig/openshift
-
Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory:
$ ls $HOME/clusterconfig/openshift/
Example output
99_kubeadmin-password-secret.yaml
99_openshift-cluster-api_master-machines-0.yaml
99_openshift-cluster-api_master-machines-1.yaml
99_openshift-cluster-api_master-machines-2.yaml
...
-
Create a Butane config that configures the additional partition. For example, name the file $HOME/clusterconfig/98-var-partition.bu, change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition:
variant: openshift
version: 4.19.0
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 98-var-partition
storage:
  disks:
  - device: /dev/disk/by-id/<device_name>
    partitions:
    - label: var
      start_mib: <partition_start_offset>
      size_mib: <partition_size>
      number: 5
  filesystems:
  - device: /dev/disk/by-partlabel/var
    path: /var
    format: xfs
    mount_options: [defaults, prjquota]
    with_mount_unit: true
- The storage device name of the disk that you want to partition.
- When adding a data partition to the boot disk, a minimum value of 25000 MiB (mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.
- The size of the data partition in mebibytes.
- The prjquota mount option must be enabled for filesystems used for container storage.
Note
When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name.
-
Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command:
$ butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshift/98-var-partition.yaml
-
Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories:
$ openshift-install create ignition-configs --dir $HOME/clusterconfig
$ ls $HOME/clusterconfig/
auth  bootstrap.ign  master.ign  metadata.json  worker.ign
You can now use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.
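After the cluster nodes boot later in the installation, you can optionally confirm that the separate partition was applied. A minimal sketch, assuming cluster-admin access with the oc client and an example worker node name taken from oc get nodes:
$ oc debug node/<worker_node_name> -- chroot /host lsblk
$ oc debug node/<worker_node_name> -- chroot /host df -h /var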
Creating the installation configuration file
Generate and customize the installation configuration file that the installation program needs to deploy your cluster.
-
You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster.
-
You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the
install-config.yaml file manually.
-
Create the install-config.yaml file.
-
Change to the directory that contains the installation program and run the following command:
$ ./openshift-install create install-config --dir <installation_directory>
For <installation_directory>, specify the directory name to store the files that the installation program creates.
Important
Specify an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
-
At the prompts, provide the configuration details for your cloud:
-
Optional: Select an SSH key to use to access your cluster machines.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your
ssh-agent process uses.
-
Select aws as the platform to target.
-
If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.
Note
The AWS access key ID and secret access key are stored in
~/.aws/credentials in the home directory of the current user on the installation host. The installation program prompts you for the credentials if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.
-
Select the AWS Region to deploy the cluster to.
-
Select the base domain for the Route 53 service that you configured for your cluster.
-
Enter a descriptive name for your cluster.
-
Paste the pull secret from Red Hat OpenShift Cluster Manager.
-
-
-
If you are installing a three-node cluster, modify the
install-config.yaml file by setting the compute.replicas parameter to 0. This ensures that the cluster's control plane machines are schedulable. For more information, see "Installing a three-node cluster on AWS".
-
Optional: Back up the
install-config.yaml file.
Important
The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
-
See Configuration and credential file settings in the AWS documentation for more information about AWS profile and credential configuration.
Configuring the cluster-wide proxy during installation
To enable internet access in environments that deny direct connections, configure a cluster-wide proxy in the install-config.yaml file. This configuration ensures that the new OpenShift Container Platform cluster routes traffic through the specified HTTP or HTTPS proxy.
-
You have an existing
install-config.yaml file.
-
You have reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the
Proxy object's spec.noProxy field to bypass the proxy if necessary.
Note
The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.
For installations on Amazon Web Services (AWS), Google Cloud, Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).
-
Edit your
install-config.yaml file and add the proxy settings. For example:
apiVersion: v1
baseDomain: my.domain.com
proxy:
  httpProxy: http://<username>:<pswd>@<ip>:<port>
  httpsProxy: https://<username>:<pswd>@<ip>:<port>
  noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com
additionalTrustBundle: |
    -----BEGIN CERTIFICATE-----
    <MY_TRUSTED_CA_CERT>
    -----END CERTIFICATE-----
additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle>
# ...
where:
proxy.httpProxy
Specifies a proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.
proxy.httpsProxy
Specifies a proxy URL to use for creating HTTPS connections outside the cluster.
proxy.noProxy
Specifies a comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field.
additionalTrustBundle
If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle.
additionalTrustBundlePolicy
Specifies the policy that determines the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when an http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly. Optional parameter.
Note
The installation program does not support the proxy readinessEndpoints field.
Note
If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:
$ ./openshift-install wait-for install-complete --log-level debug
-
Save the file and reference it when installing OpenShift Container Platform.
The installation program creates a cluster-wide proxy that is named
cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.
Note
Only the Proxy object named cluster is supported, and no additional proxies can be created.
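After the cluster is installed, you can optionally confirm the resulting proxy configuration. A minimal sketch, assuming the oc client is configured with the new cluster's kubeconfig:
$ oc get proxy cluster -o yaml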
Creating the Kubernetes manifest and Ignition config files
To customize cluster definitions and manually start machines, generate the Kubernetes manifest and Ignition config files. These assets provide the necessary instructions to configure the cluster infrastructure according to your specific deployment requirements.
The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines.
Important
-
The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending
node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.
-
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
-
You obtained the OpenShift Container Platform installation program.
-
You created the
install-config.yaml installation configuration file.
-
Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster:
$ ./openshift-install create manifests --dir <installation_directory>
where:
<installation_directory>
Specifies the installation directory that contains the install-config.yaml file that you created.
-
Remove the Kubernetes manifest files that define the control plane machines:
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml
By removing these files, you prevent the cluster from automatically generating control plane machines.
-
Remove the Kubernetes manifest files that define the control plane machine set:
$ rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml
-
Remove the Kubernetes manifest files that define the worker machines:
$ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml
Important
If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install.
Because you create and manage the worker machines yourself, you do not need to initialize these machines.
Warning
If you are installing a three-node cluster, skip the following step to allow the control plane nodes to be schedulable.
Important
When you configure control plane nodes from the default unschedulable to schedulable, additional subscriptions are required. This is because control plane nodes then become compute nodes.
-
Check that the
mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:
-
Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.
-
Locate the mastersSchedulable parameter and ensure that it is set to false.
-
Save and exit the file.
-
-
Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the
privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:
apiVersion: config.openshift.io/v1
kind: DNS
metadata:
  creationTimestamp: null
  name: cluster
spec:
  baseDomain: example.openshift.com
  privateZone:
    id: mycluster-100419-private-zone
  publicZone:
    id: example.openshift.com
status: {}
Remove the spec.privateZone and spec.publicZone sections completely.
If you do so, you must add ingress DNS records manually in a later step.
-
To create the Ignition configuration files, run the following command from the directory that contains the installation program:
$ ./openshift-install create ignition-configs --dir <installation_directory>
where:
<installation_directory>
Specifies the same installation directory.
Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The
kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory:
.
├── auth
│   ├── kubeadmin-password
│   └── kubeconfig
├── bootstrap.ign
├── master.ign
├── metadata.json
└── worker.ign
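Optionally, before serving these files, you can sanity-check that they are well-formed Ignition configs with jq. A rough sketch; the field path assumes the Ignition v3 config structure that these files use:
$ jq -r '.ignition.version' <installation_directory>/bootstrap.ign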
Extracting the infrastructure name
The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services. The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it.
-
You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.
-
You generated the Ignition config files for your cluster.
-
You installed the
jq package.
-
To extract and view the infrastructure name from the Ignition config file metadata, run the following command:
$ jq -r .infraID <installation_directory>/metadata.json
For <installation_directory>, specify the path to the directory that you stored the installation files in.
Example output
openshift-vw9j6
The output of this command is your cluster name and a random string.
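Because several of the CloudFormation parameter files that follow reference this value, you might find it convenient to capture it in a shell variable. A minimal sketch; the INFRA_ID variable name is arbitrary:
$ export INFRA_ID=$(jq -r .infraID <installation_directory>/metadata.json)
$ echo ${INFRA_ID}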
Creating a VPC in AWS
You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.
Note
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
-
You added your AWS keys and region to your local AWS profile by running
aws configure.
-
Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "VpcCidr", "ParameterValue": "10.0.0.0/16" }, { "ParameterKey": "AvailabilityZoneCount", "ParameterValue": "1" }, { "ParameterKey": "SubnetBits", "ParameterValue": "12" } ]- The CIDR block for the VPC.
- Specify a CIDR block in the format
x.x.x.x/16-24. - The number of availability zones to deploy the VPC in.
- Specify an integer between
1and3. - The size of each subnet in each availability zone.
- Specify an integer between
5and13, where5is/27and13is/19.
-
Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires.
-
Launch the CloudFormation template to create a stack of AWS resources that represent the VPC:
Important
You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> --template-body file://<template>.yaml --parameters file://<parameters>.json
<name> is the name for the CloudFormation stack, such as cluster-vpc. You need the name of this stack if you remove the cluster.
<template> is the relative path to and name of the CloudFormation template YAML file that you saved.
<parameters> is the relative path to and name of the CloudFormation parameters JSON file.
Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f
-
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:
VpcId
The ID of your VPC.
PublicSubnetIds
The IDs of the new public subnets.
PrivateSubnetIds
The IDs of the new private subnets.
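Instead of re-running describe-stacks manually, you can optionally wait for the stack to finish and then read only its outputs. A rough sketch using standard AWS CLI options; cluster-vpc is an example stack name:
$ aws cloudformation wait stack-create-complete --stack-name cluster-vpc
$ aws cloudformation describe-stacks --stack-name cluster-vpc --query 'Stacks[0].Outputs' --output table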
CloudFormation template for the VPC
You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.
CloudFormation template for the VPC
link:https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/01_vpc.yaml[role=include]
-
You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.
Creating networking and load balancing components in AWS
You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags.
You can run the template multiple times within a single Virtual Private Cloud (VPC).
Note
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
-
You created and configured a VPC and associated subnets in AWS.
-
Obtain the hosted zone ID for the Route 53 base domain that you specified in the
install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command:
$ aws route53 list-hosted-zones-by-name --dns-name <route53_domain>
For the <route53_domain>, specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster.
Example output
mycluster.example.com. False 100 HOSTEDZONES 65F8F38E-2268-B835-E15C-AB55336FCBFA /hostedzone/Z21IXYZABCZ2A4 mycluster.example.com. 10
In the example output, the hosted zone ID is Z21IXYZABCZ2A4.
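If you prefer a scriptable form, you can extract just the hosted zone ID with a --query expression. A rough sketch; the domain is an example value and the returned ID carries a /hostedzone/ prefix that you can strip:
$ aws route53 list-hosted-zones-by-name --dns-name mycluster.example.com --query 'HostedZones[0].Id' --output text
Example output
/hostedzone/Z21IXYZABCZ2A4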
-
Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "ClusterName", "ParameterValue": "mycluster" }, { "ParameterKey": "InfrastructureName", "ParameterValue": "mycluster-<random_string>" }, { "ParameterKey": "HostedZoneId", "ParameterValue": "<random_string>" }, { "ParameterKey": "HostedZoneName", "ParameterValue": "example.com" }, { "ParameterKey": "PublicSubnets", "ParameterValue": "subnet-<random_string>" }, { "ParameterKey": "PrivateSubnets", "ParameterValue": "subnet-<random_string>" }, { "ParameterKey": "VpcId", "ParameterValue": "vpc-<random_string>" } ]- A short, representative cluster name to use for hostnames, etc.
- Specify the cluster name that you used when you generated the
install-config.yamlfile for the cluster. - The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- Specify the infrastructure name that you extracted from the Ignition config
file metadata, which has the format
<cluster-name>-<random-string>. - The Route 53 public zone ID to register the targets with.
- Specify the Route 53 public zone ID, which as a format similar to
Z21IXYZABCZ2A4. You can obtain this value from the AWS console. - The Route 53 zone to register the targets with.
- Specify the Route 53 base domain that you used when you generated the
install-config.yamlfile for the cluster. Do not include the trailing period (.) that is displayed in the AWS console. - The public subnets that you created for your VPC.
- Specify the
PublicSubnetIdsvalue from the output of the CloudFormation template for the VPC. - The private subnets that you created for your VPC.
- Specify the
PrivateSubnetIdsvalue from the output of the CloudFormation template for the VPC. - The VPC that you created for the cluster.
- Specify the
VpcIdvalue from the output of the CloudFormation template for the VPC.
-
Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires.
Important
If you are deploying your cluster to an AWS government or secret region, you must update the
InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.
-
Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components:
Important
You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> --template-body file://<template>.yaml --parameters file://<parameters>.json --capabilities CAPABILITY_NAMED_IAM
<name> is the name for the CloudFormation stack, such as cluster-dns. You need the name of this stack if you remove the cluster.
<template> is the relative path to and name of the CloudFormation template YAML file that you saved.
<parameters> is the relative path to and name of the CloudFormation parameters JSON file.
You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources.
Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183
-
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:
PrivateHostedZoneId
Hosted zone ID for the private DNS.
ExternalApiLoadBalancerName
Full name of the external API load balancer.
InternalApiLoadBalancerName
Full name of the internal API load balancer.
ApiServerDnsName
Full hostname of the API server.
RegisterNlbIpTargetsLambda
Lambda ARN useful to help register/deregister IP targets for these load balancers.
ExternalApiTargetGroupArn
ARN of the external API target group.
InternalApiTargetGroupArn
ARN of the internal API target group.
InternalServiceTargetGroupArn
ARN of the internal service target group.
CloudFormation template for the network and load balancers
You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster.
CloudFormation template for the network and load balancers
link:https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/02_cluster_infra.yaml[role=include]
Important
If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example:
Type: CNAME
TTL: 10
ResourceRecords:
- !GetAtt IntApiElb.DNSName
-
You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.
-
You can view details about your hosted zones by navigating to the AWS Route 53 console.
Creating security group and roles in AWS
You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires.
Note
If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
-
Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "InfrastructureName", "ParameterValue": "mycluster-<random_string>" }, { "ParameterKey": "VpcCidr", "ParameterValue": "10.0.0.0/16" }, { "ParameterKey": "PrivateSubnets", "ParameterValue": "subnet-<random_string>" }, { "ParameterKey": "VpcId", "ParameterValue": "vpc-<random_string>" } ]- The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- Specify the infrastructure name that you extracted from the Ignition config
file metadata, which has the format
<cluster-name>-<random-string>. - The CIDR block for the VPC.
- Specify the CIDR block parameter that you used for the VPC that you defined
in the form
x.x.x.x/16-24. - The private subnets that you created for your VPC.
- Specify the
PrivateSubnetIdsvalue from the output of the CloudFormation template for the VPC. - The VPC that you created for the cluster.
- Specify the
VpcIdvalue from the output of the CloudFormation template for the VPC.
-
Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires.
-
Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles:
Important
You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> --template-body file://<template>.yaml --parameters file://<parameters>.json --capabilities CAPABILITY_NAMED_IAM
<name> is the name for the CloudFormation stack, such as cluster-sec. You need the name of this stack if you remove the cluster.
<template> is the relative path to and name of the CloudFormation template YAML file that you saved.
<parameters> is the relative path to and name of the CloudFormation parameters JSON file.
You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.
Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db
-
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:
MasterSecurityGroupId
Master Security Group ID
WorkerSecurityGroupId
Worker Security Group ID
MasterInstanceProfile
Master IAM Instance Profile
WorkerInstanceProfile
Worker IAM Instance Profile
CloudFormation template for security objects
You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster.
CloudFormation template for security objects
link:https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/03_cluster_security.yaml[role=include]
-
You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.
Accessing RHCOS AMIs with stream metadata
In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation.
You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format.
For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI.
To parse the stream metadata, use one of the following methods:
-
From a Go program, use the official
stream-metadata-go library at https://github.com/coreos/stream-metadata-go. You can also view example code in the library.
-
From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language.
-
From a command-line utility that handles JSON data, such as
jq:
-
Print the current x86_64 or aarch64 AMI for an AWS region, such as us-west-1:
For x86_64
$ openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image'
Example output
ami-0d3e625f84626bbda
For aarch64
$ openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions["us-west-1"].image'
Example output
ami-0af1d3b7fa5be2131
The output of this command is the AWS AMI ID for your designated architecture and the us-west-1 region. The AMI must belong to the same region as the cluster.
-
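You can also list every AWS region that has a published boot image, which is useful when scripting AMI lookups. A rough sketch that relies on the same stream metadata structure shown above:
$ openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions | keys[]'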
RHCOS AMIs for the AWS infrastructure
Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions and instance architectures that you can manually specify for your OpenShift Container Platform nodes.
Note
By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI.
[Two tables with the columns "AWS zone" and "AWS AMI" listed the published RHCOS AMI IDs for the supported AWS zones and instance architectures. The AMI ID values were not preserved in this copy; use the openshift-install coreos print-stream-json command described in the previous section to look up the current AMI for your region and architecture.]
AWS regions without a published RHCOS AMI
You can deploy an OpenShift Container Platform cluster to Amazon Web Services (AWS) regions without native support for a Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) or the AWS software development kit (SDK). If a published AMI is not available for an AWS region, you can upload a custom AMI prior to installing the cluster.
If you are deploying to a region not supported by the AWS SDK
and you do not specify a custom AMI, the installation program
copies the us-east-1 AMI to the user account automatically. Then the
installation program creates the control plane machines with encrypted EBS
volumes using the default or user-specified Key Management Service (KMS) key.
This allows the AMI to follow the same process workflow as published RHCOS
AMIs.
A region without native support for an RHCOS AMI is not available to
select from the terminal during cluster creation because it is not published.
However, you can install to this region by configuring the custom AMI in the
install-config.yaml file.
Uploading a custom RHCOS AMI in AWS
If you are deploying to a custom Amazon Web Services (AWS) region, you must upload a custom Red Hat Enterprise Linux CoreOS (RHCOS) Amazon Machine Image (AMI) that belongs to that region.
-
You configured an AWS account.
-
You created an Amazon S3 bucket with the required IAM service role.
-
You uploaded your RHCOS VMDK file to Amazon S3.
-
You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer.
-
Export your AWS profile as an environment variable:
$ export AWS_PROFILE=<aws_profile>
The AWS profile name that holds your AWS credentials, like govcloud or beijingadmin.
-
Export the region to associate with your custom AMI as an environment variable:
$ export AWS_DEFAULT_REGION=<aws_region>
The AWS region, like us-gov-east-1 or cn-north-1.
-
Export the version of RHCOS you uploaded to Amazon S3 as an environment variable:
$ export RHCOS_VERSION=<version>
The RHCOS VMDK version, like 4.19.0.
-
Export the Amazon S3 bucket name as an environment variable:
$ export VMIMPORT_BUCKET_NAME=<s3_bucket_name> -
Create the
containers.json file and define your RHCOS VMDK file:
$ cat <<EOF > containers.json
{
   "Description": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64",
   "Format": "vmdk",
   "UserBucket": {
      "S3Bucket": "${VMIMPORT_BUCKET_NAME}",
      "S3Key": "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.vmdk"
   }
}
EOF
-
Import the RHCOS disk as an Amazon EBS snapshot:
$ aws ec2 import-snapshot --region ${AWS_DEFAULT_REGION} \
     --description "<description>" \
     --disk-container "file://<file_path>/containers.json"
- The description of your RHCOS disk being imported, like rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64.
- The file path to the JSON file describing your RHCOS disk. The JSON file should contain your Amazon S3 bucket name and key.
-
Check the status of the image import:
$ watch -n 5 aws ec2 describe-import-snapshot-tasks --region ${AWS_DEFAULT_REGION}
Example output
{ "ImportSnapshotTasks": [ { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "ImportTaskId": "import-snap-fh6i8uil", "SnapshotTaskDetail": { "Description": "rhcos-4.7.0-x86_64-aws.x86_64", "DiskImageSize": 819056640.0, "Format": "VMDK", "SnapshotId": "snap-06331325870076318", "Status": "completed", "UserBucket": { "S3Bucket": "external-images", "S3Key": "rhcos-4.7.0-x86_64-aws.x86_64.vmdk" } } } ] }
Copy the SnapshotId to register the image.
-
Create a custom RHCOS AMI from the RHCOS snapshot:
$ aws ec2 register-image \
   --region ${AWS_DEFAULT_REGION} \
   --architecture x86_64 \
   --description "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \
   --ena-support \
   --name "rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \
   --virtualization-type hvm \
   --root-device-name '/dev/xvda' \
   --block-device-mappings 'DeviceName=/dev/xvda,Ebs={DeleteOnTermination=true,SnapshotId=<snapshot_ID>}'
- The RHCOS VMDK architecture type, like x86_64, aarch64, s390x, or ppc64le.
- The Description from the imported snapshot.
- The name of the RHCOS AMI.
- The SnapshotID from the imported snapshot.
To learn more about these APIs, see the AWS documentation for importing snapshots and creating EBS-backed AMIs.
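You can optionally confirm that the new AMI is available in the target region before you continue. A minimal sketch using a standard AWS CLI call; the name filter reuses the --name value from the register-image command:
$ aws ec2 describe-images --region ${AWS_DEFAULT_REGION} --owners self \
      --filters "Name=name,Values=rhcos-${RHCOS_VERSION}-x86_64-aws.x86_64" \
      --query 'Images[0].[ImageId,State]' --output text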
Creating the bootstrap node in AWS
You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. You do this by:
-
Providing a location to serve the
bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. The provided CloudFormation template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates.
-
Using the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires.
Note
If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
-
You created and configured DNS, load balancers, and listeners in AWS.
-
You created the security groups and roles required for your cluster in AWS.
-
Create the bucket by running the following command:
$ aws s3 mb s3://<cluster-name>-infra
<cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster.
You must use a presigned URL for your S3 bucket, instead of the s3:// schema, if you are:
- Deploying to a region that has endpoints that differ from the AWS SDK.
- Deploying a proxy.
- Providing your own custom endpoints.
-
Upload the
bootstrap.ign Ignition config file to the bucket by running the following command:
$ aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign
For <installation_directory>, specify the path to the directory that you stored the installation files in.
-
Verify that the file uploaded by running the following command:
$ aws s3 ls s3://<cluster-name>-infra/
Example output
2019-04-03 16:15:16     314878 bootstrap.ign
Note
The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach.
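If your deployment requires a presigned URL instead of the s3:// schema, as noted earlier, you can generate one with the AWS CLI. A rough sketch; the 3600-second expiry is an example value, and the URL must remain valid until the bootstrap machine fetches the file:
$ aws s3 presign s3://<cluster-name>-infra/bootstrap.ign --expires-in 3600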
-
Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "InfrastructureName", "ParameterValue": "mycluster-<random_string>" }, { "ParameterKey": "RhcosAmi", "ParameterValue": "ami-<random_string>" }, { "ParameterKey": "AllowedBootstrapSshCidr", "ParameterValue": "0.0.0.0/0" }, { "ParameterKey": "PublicSubnet", "ParameterValue": "subnet-<random_string>" }, { "ParameterKey": "MasterSecurityGroupId", "ParameterValue": "sg-<random_string>" }, { "ParameterKey": "VpcId", "ParameterValue": "vpc-<random_string>" }, { "ParameterKey": "BootstrapIgnitionLocation", "ParameterValue": "s3://<bucket_name>/bootstrap.ign" }, { "ParameterKey": "AutoRegisterELB", "ParameterValue": "yes" }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" }, { "ParameterKey": "ExternalApiTargetGroupArn", "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" }, { "ParameterKey": "InternalApiTargetGroupArn", "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" }, { "ParameterKey": "InternalServiceTargetGroupArn", "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" } ]- The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
- Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node based on your selected architecture.
- Specify a valid AWS::EC2::Image::Id value.
- CIDR block to allow SSH access to the bootstrap node.
- Specify a CIDR block in the format x.x.x.x/16-24.
- The public subnet that is associated with your VPC to launch the bootstrap node into.
- Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.
- The master security group ID (for registering temporary rules).
- Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
- The VPC that the created resources will belong to.
- Specify the VpcId value from the output of the CloudFormation template for the VPC.
- Location to fetch the bootstrap Ignition config file from.
- Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign.
- Whether or not to register a network load balancer (NLB).
- Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.
- The ARN for the NLB IP target registration lambda group.
- Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
- The ARN for the external API load balancer target group.
- Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
- The ARN for the internal API load balancer target group.
- Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
- The ARN for the internal service load balancer target group.
- Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
-
Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires.
-
Optional: If you are deploying the cluster with a proxy, you must update the ignition in the template to add the
ignition.config.proxy fields. Additionally, if you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field.
-
Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node:
Important
You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> --template-body file://<template>.yaml --parameters file://<parameters>.json --capabilities CAPABILITY_NAMED_IAM
<name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need the name of this stack if you remove the cluster.
<template> is the relative path to and name of the CloudFormation template YAML file that you saved.
<parameters> is the relative path to and name of the CloudFormation parameters JSON file.
You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.
Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83
-
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:
BootstrapInstanceId
The bootstrap Instance ID.
BootstrapPublicIp
The bootstrap node public IP address.
BootstrapPrivateIp
The bootstrap node private IP address.
CloudFormation template for the bootstrap machine
You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster.
CloudFormation template for the bootstrap machine
link:https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/04_cluster_bootstrap.yaml[role=include]
-
You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.
Creating the control plane machines in AWS
You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes.
Important
The CloudFormation template creates a stack that represents three control plane nodes.
Note
If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
-
You created the bootstrap machine.
-
Create a JSON file that contains the parameter values that the template requires:
[ { "ParameterKey": "InfrastructureName", "ParameterValue": "mycluster-<random_string>" }, { "ParameterKey": "RhcosAmi", "ParameterValue": "ami-<random_string>" }, { "ParameterKey": "AutoRegisterDNS", "ParameterValue": "yes" }, { "ParameterKey": "PrivateHostedZoneId", "ParameterValue": "<random_string>" }, { "ParameterKey": "PrivateHostedZoneName", "ParameterValue": "mycluster.example.com" }, { "ParameterKey": "Master0Subnet", "ParameterValue": "subnet-<random_string>" }, { "ParameterKey": "Master1Subnet", "ParameterValue": "subnet-<random_string>" }, { "ParameterKey": "Master2Subnet", "ParameterValue": "subnet-<random_string>" }, { "ParameterKey": "MasterSecurityGroupId", "ParameterValue": "sg-<random_string>" }, { "ParameterKey": "IgnitionLocation", "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" }, { "ParameterKey": "CertificateAuthorities", "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" }, { "ParameterKey": "MasterInstanceProfileName", "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" }, { "ParameterKey": "MasterInstanceType", "ParameterValue": "" }, { "ParameterKey": "AutoRegisterELB", "ParameterValue": "yes" }, { "ParameterKey": "RegisterNlbIpTargetsLambdaArn", "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" }, { "ParameterKey": "ExternalApiTargetGroupArn", "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" }, { "ParameterKey": "InternalApiTargetGroupArn", "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" }, { "ParameterKey": "InternalServiceTargetGroupArn", "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" } ]- The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
- Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines based on your selected architecture.
- Specify an AWS::EC2::Image::Id value.
- Whether or not to perform DNS etcd registration.
- Specify yes or no. If you specify yes, you must provide hosted zone information.
- The Route 53 private zone ID to register the etcd targets with.
- Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing.
- The Route 53 zone to register the targets with.
- Specify <cluster_name>.<domain_name>, where <domain_name> is the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.
- A subnet, preferably private, to launch the control plane machines on.
- Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.
- The master security group ID to associate with control plane nodes.
- Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
- The location to fetch the control plane Ignition config file from.
- Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master.
- The base64 encoded certificate authority string to use.
- Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC...xYz==.
- The IAM profile to associate with control plane nodes.
- Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.
- The type of AWS instance to use for the control plane machines based on your selected architecture.
- The instance type value corresponds to the minimum resource requirements for control plane machines. For example, m6i.xlarge is a type for AMD64 and m6g.xlarge is a type for ARM64.
- Whether or not to register a network load balancer (NLB).
- Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.
- The ARN for the NLB IP target registration lambda group.
- Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
- The ARN for the external API load balancer target group.
- Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
- The ARN for the internal API load balancer target group.
- Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
- The ARN for the internal service load balancer target group.
- Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
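To avoid copying the long certificate authority string by hand, you can extract it from the master.ign file with jq. A minimal sketch; the field path assumes the Ignition v3 config structure used by these files, and the output is the data:text/plain;charset=utf-8;base64,... string that the CertificateAuthorities parameter expects:
$ jq -r '.ignition.security.tls.certificateAuthorities[0].source' <installation_directory>/master.ign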
-
Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires.
-
If you specified an
m5 instance type as the value for MasterInstanceType, add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.
-
Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes:
Important
You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> --template-body file://<template>.yaml --parameters file://<parameters>.json
<name> is the name for the CloudFormation stack, such as cluster-control-plane. You need the name of this stack if you remove the cluster.
<template> is the relative path to and name of the CloudFormation template YAML file that you saved.
<parameters> is the relative path to and name of the CloudFormation parameters JSON file.
Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b
Note
The CloudFormation template creates a stack that represents three control plane nodes.
-
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name>
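If you prefer not to poll describe-stacks manually, the AWS CLI can block until stack creation finishes. This is a sketch; <name> is the stack name that you chose above:
$ aws cloudformation wait stack-create-complete --stack-name <name>   # returns when the stack reaches CREATE_COMPLETE, exits non-zero on failure
$ aws cloudformation describe-stacks --stack-name <name> --query 'Stacks[0].StackStatus' --output text   # should print CREATE_COMPLETE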
CloudFormation template for control plane machines
You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster.
CloudFormation template for control plane machines
link:https://raw.githubusercontent.com/openshift/installer/release-4.19/upi/aws/cloudformation/05_cluster_master_nodes.yaml[role=include]
-
You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.
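Instead of copying the template text from this topic, you can also download the file linked above directly. The following example assumes that curl is available and that the branch in the URL matches your OpenShift Container Platform version:
$ curl -L -o 05_cluster_master_nodes.yaml https://raw.githubusercontent.com/openshift/installer/release-4.19/upi/aws/cloudformation/05_cluster_master_nodes.yaml   # saves the control plane template locally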
Creating the worker nodes in AWS
You can create worker nodes in Amazon Web Services (AWS) for your cluster to use.
Note
If you are installing a three-node cluster, skip this step. A three-node cluster consists of three control plane machines, which also act as compute machines.
You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node.
Important
The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node.
Note
If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.
-
You created the control plane machines.
-
Create a JSON file that contains the parameter values that the CloudFormation template requires:
[ { "ParameterKey": "InfrastructureName", "ParameterValue": "mycluster-<random_string>" }, { "ParameterKey": "RhcosAmi", "ParameterValue": "ami-<random_string>" }, { "ParameterKey": "Subnet", "ParameterValue": "subnet-<random_string>" }, { "ParameterKey": "WorkerSecurityGroupId", "ParameterValue": "sg-<random_string>" }, { "ParameterKey": "IgnitionLocation", "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" }, { "ParameterKey": "CertificateAuthorities", "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" }, { "ParameterKey": "WorkerInstanceProfileName", "ParameterValue": "<roles_stack>-WorkerInstanceProfile-<random_string>" }, { "ParameterKey": "WorkerInstanceType", "ParameterValue": "" } ]- The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
- Specify the infrastructure name that you extracted from the Ignition config
file metadata, which has the format
<cluster-name>-<random-string>. - Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes based on your selected architecture.
- Specify an
AWS::EC2::Image::Idvalue. - A subnet, preferably private, to start the worker nodes on.
- Specify a subnet from the
PrivateSubnetsvalue from the output of the CloudFormation template for DNS and load balancing. - The worker security group ID to associate with worker nodes.
- Specify the
WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles. - The location to fetch the worker Ignition config file from.
- Specify the generated Ignition config location,
https://api-int.<cluster_name>.<domain_name>:22623/config/worker. - Base64 encoded certificate authority string to use.
- Specify the value from the
worker.ignfile that is in the installation directory. This value is the long string with the formatdata:text/plain;charset=utf-8;base64,ABC…xYz==. - The IAM profile to associate with worker nodes.
- Specify the
WorkerInstanceProfileparameter value from the output of the CloudFormation template for the security group and roles. - The type of AWS instance to use for the compute machines based on your selected architecture.
- The instance type value corresponds to the minimum resource requirements
for compute machines. For example, m6i.large is a type for AMD64 and m6g.large is a type for ARM64.
-
Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the compute machines that your cluster requires.
-
Optional: If you specified an
m5instance type as the value forWorkerInstanceType, add that instance type to theWorkerInstanceType.AllowedValuesparameter in the CloudFormation template. -
Optional: If you are deploying with an AWS Marketplace image, update the
Worker0.type.properties.ImageIDparameter with the AMI ID that you obtained from your subscription. -
Use the CloudFormation template to create a stack of AWS resources that represent a worker node:
Important
You must enter the command on a single line.
$ aws cloudformation create-stack --stack-name <name> --template-body file://<template>.yaml \ --parameters file://<parameters>.json<name>is the name for the CloudFormation stack, such ascluster-worker-1. You need the name of this stack if you remove the cluster.<template>is the relative path to and name of the CloudFormation template YAML file that you saved.<parameters>is the relative path to and name of the CloudFormation parameters JSON file.Example outputarn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59Note
The CloudFormation template creates a stack that represents one worker node.
-
Confirm that the template components exist:
$ aws cloudformation describe-stacks --stack-name <name> -
Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name.
Important
You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template.
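Because each stack provisions exactly one worker machine, a small shell loop can save repetition when you create several stacks from the same template and parameter files. This is only a sketch; the stack name prefix, the worker count, and the file names are placeholders that you should adjust:
$ for i in 0 1 2; do aws cloudformation create-stack --stack-name "cluster-worker-${i}" --template-body file://06_cluster_worker_node.yaml --parameters file://worker-parameters.json; done   # creates one stack, and therefore one worker, per iteration
Confirm each stack with aws cloudformation describe-stacks as shown above before you rely on the machines.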
CloudFormation template for compute machines
You can deploy the compute machines that you need for your OpenShift Container Platform cluster by using the following CloudFormation template.
CloudFormation template for compute machines
link:https://raw.githubusercontent.com/openshift/installer/release-4.19/upi/aws/cloudformation/06_cluster_worker_node.yaml[role=include]
-
You can view details about the CloudFormation stacks that you create by navigating to the AWS CloudFormation console.
Creating the CloudFormation stack for compute machines
You can create a stack of AWS resources for the compute machines by using the CloudFormation template from the previous section.
Important
When you use the CloudFormation template for the control plane machines, the template provisions all three control plane machines with a single stack. However, when you deploy the compute machines, you must create one stack for each compute machine that you defined in the install-config.yaml file. Each stack provisions exactly one machine, so to provision an additional compute machine, you must create another stack with a different name.
-
To create the CloudFormation stack for compute machines, run the following command:
$ aws cloudformation create-stack --stack-name <name> \ --template-body file://<template>.yaml \ --parameters file://<parameters>.json- Specify the
<name>with the name for the CloudFormation stack, such ascluster-worker-1. You need the name of this stack if you remove the cluster. - Specify the relative path and the name of the CloudFormation template YAML file that you saved.
- Specify the relative path and the name of the JSON file for the CloudFormation parameters.
Example output
arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59
- Specify the
Initializing the bootstrap sequence on AWS with user-provisioned infrastructure
After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane.
-
You created the worker nodes.
-
Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane:
$ ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ --log-level=info- For
<installation_directory>, specify the path to the directory that you stored the installation files in. - To view different installation details, specify
warn,debug, orerrorinstead ofinfo.Example outputINFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443... INFO API v1.34.2 up INFO Waiting up to 30m0s for bootstrapping to complete... INFO It is now safe to remove the bootstrap resources INFO Time elapsed: 1sIf the command exits without a
FATALwarning, your OpenShift Container Platform control plane has initialized.Note
After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators.
- For
-
See Monitoring installation progress for details about monitoring the installation, bootstrap, and control plane logs as an OpenShift Container Platform installation progresses.
-
See Gathering bootstrap node diagnostic data for information about troubleshooting issues related to the bootstrap process.
-
You can view details about the running instances that are created by using the AWS EC2 console.
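If the bootstrap process stalls, you can collect diagnostic data from the bootstrap and control plane machines before you remove them. The following command is a sketch; on user-provisioned infrastructure you might also need to pass the machine addresses with the --bootstrap and --master options, and SSH access to the machines must be configured:
$ ./openshift-install gather bootstrap --dir <installation_directory>   # bundles bootstrap and control plane logs into a local archive for troubleshooting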
Logging in to the cluster by using the CLI
To log in to your cluster as the default system user, export the kubeconfig file. This configuration enables the CLI to authenticate and connect to the specific API server created during OpenShift Container Platform installation.
The kubeconfig file is specific to a cluster and is created during OpenShift Container Platform installation.
-
You deployed an OpenShift Container Platform cluster.
-
You installed the OpenShift CLI (
oc).
-
Export the
kubeadmincredentials by running the following command:$ export KUBECONFIG=<installation_directory>/auth/kubeconfigwhere:
<installation_directory>-
Specifies the path to the directory that stores the installation files.
-
Verify you can run
occommands successfully using the exported configuration by running the following command:$ oc whoamiExample outputsystem:admin
Approving the certificate signing requests for your machines
To add machines to a cluster, verify the status of the certificate signing requests (CSRs) generated for each machine. If manual approval is required, approve the client requests first, followed by the server requests.
-
You added machines to your cluster.
-
Confirm that the cluster recognizes the machines:
$ oc get nodesExample outputNAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.34.2 master-1 Ready master 63m v1.34.2 master-2 Ready master 64m v1.34.2The output lists all of the machines that you created.
Note
The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.
-
Review the pending CSRs and ensure that you see the client requests with the
PendingorApprovedstatus for each machine that you added to the cluster:$ oc get csrExample outputNAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ...In this example, two machines are joining the cluster. You might see more approved CSRs in the list.
-
If the CSRs were not approved, after all of the pending CSRs for the machines you added are in
Pendingstatus, approve the CSRs for your cluster machines:Note
Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the
machine-approverif the Kubelet requests a new certificate with identical parameters.Note
For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the
oc exec,oc rsh, andoc logscommands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by thenode-bootstrapperservice account in thesystem:nodeorsystem:admingroups, and confirm the identity of the node.-
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name>where:
<csr_name>-
Specifies the name of a CSR from the list of current CSRs.
-
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approveNote
Some Operators might not become available until some CSRs are approved.
-
-
Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:
$ oc get csrExample outputNAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... -
If the remaining CSRs are not approved, and are in the
Pendingstatus, approve the CSRs for your cluster machines:-
To approve them individually, run the following command for each valid CSR:
$ oc adm certificate approve <csr_name>where:
<csr_name>-
Specifies the name of a CSR from the list of current CSRs.
-
To approve all pending CSRs, run the following command:
$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
-
-
After all client and server CSRs have been approved, the machines have the
Readystatus. Verify this by running the following command:$ oc get nodesExample outputNAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.34.2 master-1 Ready master 73m v1.34.2 master-2 Ready master 74m v1.34.2 worker-0 Ready worker 11m v1.34.2 worker-1 Ready worker 11m v1.34.2Note
It can take a few minutes after approval of the server CSRs for the machines to transition to the
Readystatus.
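If some machines are still joining, you might prefer to keep approving pending CSRs rather than rerunning the commands above. The following polling loop is only a rough sketch for an interactive session; stop it with Ctrl+C after all nodes report Ready, and do not treat it as a substitute for the automated approval method described in the earlier note:
$ while true; do oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve; sleep 60; done   # approves any pending CSRs once per minute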
Initial Operator configuration
To ensure all Operators become available, configure the required Operators immediately after the control plane initializes. This configuration is essential for stabilizing the cluster environment following the installation.
-
Your control plane has initialized.
-
Watch the cluster components come online:
$ watch -n5 oc get clusteroperatorsExample outputNAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE authentication 4.19.0 True False False 19m baremetal 4.19.0 True False False 37m cloud-credential 4.19.0 True False False 40m cluster-autoscaler 4.19.0 True False False 37m config-operator 4.19.0 True False False 38m console 4.19.0 True False False 26m csi-snapshot-controller 4.19.0 True False False 37m dns 4.19.0 True False False 37m etcd 4.19.0 True False False 36m image-registry 4.19.0 True False False 31m ingress 4.19.0 True False False 30m insights 4.19.0 True False False 31m kube-apiserver 4.19.0 True False False 26m kube-controller-manager 4.19.0 True False False 36m kube-scheduler 4.19.0 True False False 36m kube-storage-version-migrator 4.19.0 True False False 37m machine-api 4.19.0 True False False 29m machine-approver 4.19.0 True False False 37m machine-config 4.19.0 True False False 36m marketplace 4.19.0 True False False 37m monitoring 4.19.0 True False False 29m network 4.19.0 True False False 38m node-tuning 4.19.0 True False False 37m openshift-apiserver 4.19.0 True False False 32m openshift-controller-manager 4.19.0 True False False 30m openshift-samples 4.19.0 True False False 32m operator-lifecycle-manager 4.19.0 True False False 37m operator-lifecycle-manager-catalog 4.19.0 True False False 37m operator-lifecycle-manager-packageserver 4.19.0 True False False 32m service-ca 4.19.0 True False False 38m storage 4.19.0 True False False 37m -
Configure the Operators that are not available.
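Instead of watching the table, you can block until every cluster Operator reports Available. The following command is a sketch; adjust the timeout to your environment, and note that it waits only for the Available condition, so an Operator can still be Progressing when the command returns:
$ oc wait clusteroperators --all --for=condition=Available=True --timeout=30m   # exits when all cluster Operators report Available=True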
Image registry storage configuration
Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage.
Configure a persistent volume, which is required for production clusters. Where applicable, you can configure an empty directory as the storage location for non-production clusters.
You can also allow the image registry to use block storage types by using the Recreate rollout strategy during upgrades.
You can configure registry storage for user-provisioned infrastructure in AWS to deploy OpenShift Container Platform to hidden regions. See Configuring the registry for AWS user-provisioned infrastructure for more information.
Configuring registry storage for AWS with user-provisioned infrastructure
During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage.
If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure.
Warning
To secure your registry images in AWS, block public access to the S3 bucket.
-
You have a cluster on AWS with user-provisioned infrastructure.
-
For Amazon S3 storage, the secret is expected to contain two keys:
-
REGISTRY_STORAGE_S3_ACCESSKEY -
REGISTRY_STORAGE_S3_SECRETKEY
-
-
Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old.
-
Fill in the storage configuration in
configs.imageregistry.operator.openshift.io/cluster:$ oc edit configs.imageregistry.operator.openshift.io/clusterExample configurationapiVersion: imageregistry.operator.openshift.io/v1 kind: Config metadata: name: cluster spec: storage: s3: bucket: <bucket_name> region: <region_name>
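If you need to create the S3 bucket yourself, the following AWS CLI commands sketch the steps called out above: create the bucket, block public access as the warning recommends, and add a lifecycle rule that aborts incomplete multipart uploads after one day. The bucket name, region, and rule ID are placeholders:
$ aws s3api create-bucket --bucket <bucket_name> --region <region_name> --create-bucket-configuration LocationConstraint=<region_name>   # omit --create-bucket-configuration when the region is us-east-1
$ aws s3api put-public-access-block --bucket <bucket_name> --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true   # blocks all public access to the registry bucket
$ aws s3api put-bucket-lifecycle-configuration --bucket <bucket_name> --lifecycle-configuration '{"Rules":[{"ID":"abort-incomplete-uploads","Status":"Enabled","Filter":{"Prefix":""},"AbortIncompleteMultipartUpload":{"DaysAfterInitiation":1}}]}'   # aborts incomplete multipart uploads that are one day old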
Configuring storage for the image registry in non-production clusters
You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.
-
To set the image registry storage to an empty directory:
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'Warning
Configure this option only for non-production clusters.
If you run this command before the Image Registry Operator initializes its components, the
oc patchcommand fails with the following error:Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not foundWait a few minutes and run the command again.
Deleting the bootstrap resources
After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS).
-
You completed the initial Operator configuration for your cluster.
-
Delete the bootstrap resources. If you used the CloudFormation template, delete its stack:
-
Delete the stack by using the AWS CLI:
$ aws cloudformation delete-stack --stack-name <name><name>is the name of your bootstrap stack.
-
Delete the stack by using the AWS CloudFormation console.
-
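If you deleted the stack by using the AWS CLI and want the command line to block until the bootstrap resources are gone, you can wait on the deletion. This is a sketch; <name> is the name of your bootstrap stack:
$ aws cloudformation wait stack-delete-complete --stack-name <name>   # returns when the stack no longer exists, exits non-zero if deletion fails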
Creating the Ingress DNS Records
If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias.
-
You deployed an OpenShift Container Platform cluster on Amazon Web Services (AWS) that uses infrastructure that you provisioned.
-
You installed the OpenShift CLI (
oc). -
You installed the
jqpackage. -
You downloaded the AWS CLI and installed it on your computer. See Install the AWS CLI Using the Bundled Installer (Linux, macOS, or Unix).
-
Determine the routes to create.
-
To create a wildcard record, use
*.apps.<cluster_name>.<domain_name>, where<cluster_name>is your cluster name, and<domain_name>is the Route 53 base domain for your OpenShift Container Platform cluster. -
To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:
$ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routesExample outputoauth-openshift.apps.<cluster_name>.<domain_name> console-openshift-console.apps.<cluster_name>.<domain_name> downloads-openshift-console.apps.<cluster_name>.<domain_name> alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>
-
-
Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the
EXTERNAL-IPcolumn:$ oc -n openshift-ingress get service router-defaultExample outputNAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.62.215 ab3...28.us-east-2.elb.amazonaws.com 80:31499/TCP,443:30693/TCP 5m -
Locate the hosted zone ID for the load balancer:
$ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID'- For
<external_ip>, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.Example outputZ3AADJGX6KTTL2
The output of this command is the load balancer hosted zone ID.
- For
-
Obtain the public hosted zone ID for your cluster’s domain:
$ aws route53 list-hosted-zones-by-name \ --dns-name "<domain_name>" \ --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' --output text- For
<domain_name>, specify the Route 53 base domain for your OpenShift Container Platform cluster.Example output/hostedzone/Z3URY6TWQ91KVVThe public hosted zone ID for your domain is shown in the command output. In this example, it is
Z3URY6TWQ91KVV.
- For
-
Add the alias records to your private zone:
$ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", > "DNSName": "<external_ip>.", > "EvaluateTargetHealth": false > } > } > } > ] > }'- For
<private_hosted_zone_id>, specify the value from the output of the CloudFormation template for DNS and load balancing. - For
<cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster. - For
<hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained. - For
<external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.
- For
-
Add the records to your public zone:
$ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>"" --change-batch '{ > "Changes": [ > { > "Action": "CREATE", > "ResourceRecordSet": { > "Name": "\\052.apps.<cluster_domain>", > "Type": "A", > "AliasTarget":{ > "HostedZoneId": "<hosted_zone_id>", > "DNSName": "<external_ip>.", > "EvaluateTargetHealth": false > } > } > } > ] > }'- For
<public_hosted_zone_id>, specify the public hosted zone for your domain. - For
<cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster. - For
<hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained. - For
<external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.
- For
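After the records propagate, you can confirm that the wildcard resolves before you continue. For example, you can query one of the routes listed earlier with dig; the host name is a placeholder built from your cluster name and base domain:
$ dig +short console-openshift-console.apps.<cluster_name>.<domain_name>   # should return the addresses of the Ingress load balancer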
Completing an AWS installation on user-provisioned infrastructure
After you start the OpenShift Container Platform installation on Amazon Web Services (AWS) user-provisioned infrastructure, monitor the deployment to completion.
-
You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure.
-
You installed the
ocCLI.
-
From the directory that contains the installation program, complete the cluster installation:
$ ./openshift-install --dir <installation_directory> wait-for install-complete- For
<installation_directory>, specify the path to the directory that you stored the installation files in.Example outputINFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize... INFO Waiting up to 10m0s for the openshift-console route to be created... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 1sImportant
-
The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending
node-bootstrappercertificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. -
It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
-
- For
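After the installer reports Install complete, a quick way to confirm overall cluster health, which is not part of the procedure above, is to check the ClusterVersion resource:
$ oc get clusterversion   # the AVAILABLE column should show True and PROGRESSING should show False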
Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
-
You have access to the installation host.
-
You completed a cluster installation and all cluster Operators are available.
-
Obtain the password for the
kubeadminuser from thekubeadmin-passwordfile on the installation host:$ cat <installation_directory>/auth/kubeadmin-passwordNote
Alternatively, you can obtain the
kubeadminpassword from the<installation_directory>/.openshift_install.loglog file on the installation host. -
List the OpenShift Container Platform web console route:
$ oc get routes -n openshift-console | grep 'console-openshift'Note
Alternatively, you can obtain the OpenShift Container Platform route from the
<installation_directory>/.openshift_install.loglog file on the installation host.Example outputconsole console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None -
Navigate to the route detailed in the output of the preceding command in a web browser and log in as the
kubeadminuser.
Next steps
-
If necessary, you can opt out of remote health reporting.
-
If necessary, you can remove cloud provider credentials.