Installing a cluster on AWS in a disconnected environment with user-provisioned infrastructure

In OpenShift Container Platform version 4.19, you can install a cluster on Amazon Web Services (AWS) using infrastructure that you provide and an internal mirror of the installation release content.

Important

While you can install an OpenShift Container Platform cluster by using mirrored installation release content, your cluster still requires internet access to use the AWS APIs.

One way to create this infrastructure is to use the provided CloudFormation templates. You can modify the templates to customize your infrastructure or use the information that they contain to create AWS objects according to your company’s policies.

Important

The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several CloudFormation templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example.

About installations in restricted networks

In OpenShift Container Platform 4.19, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster.

If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, such as the Amazon Web Services Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere.

To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions.
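
For reference, the following is a minimal sketch of mirroring the release images to a local registry with the oc CLI. The placeholder values and the release tag are illustrative assumptions; follow the full procedure in the documentation for mirroring images for a disconnected installation:

    $ oc adm release mirror -a <pull_secret_file> \
         --from=quay.io/openshift-release-dev/ocp-release:<release_tag>-x86_64 \
         --to=<local_registry>/<local_repository_name> \
         --to-release-image=<local_registry>/<local_repository_name>/release:<release_tag>-x86_64

The command output includes an imageContentSources snippet that you use later when you customize the install-config.yaml file.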

Important

Because of the complexity of the configuration for user-provisioned installations, consider completing a standard user-provisioned infrastructure installation before you attempt a restricted network installation using user-provisioned infrastructure. Completing this test installation might make it easier to isolate and troubleshoot any issues that might arise during your installation in a restricted network.

Additional limits

Clusters in restricted networks have the following additional limitations and restrictions:

  • The ClusterVersion status includes an Unable to retrieve available updates error.

  • By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags.

Creating the installation files for AWS

To install OpenShift Container Platform on Amazon Web Services using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You generate and customize the install-config.yaml file, Kubernetes manifests, and Ignition config files. You also have the option to first set up a separate var partition during the preparation phases of installation.

Optional: Creating a separate /var partition

It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow.

OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var. For example:

  • /var/lib/containers: Holds container-related content that can grow as more images and containers are added to a system.

  • /var/lib/etcd: Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage.

  • /var: Holds data that you might want to keep separate for purposes such as auditing.

Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and to reinstall OpenShift Container Platform at a later date while keeping that data intact. With this method, you do not have to pull all of your containers again, nor do you have to copy massive log files when you update systems.

Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation.

Important

If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section.

Procedure
  1. Create a directory to hold the OpenShift Container Platform installation files:

    $ mkdir $HOME/clusterconfig
  2. Run openshift-install to create a set of files in the manifests and openshift subdirectories. Answer the system questions as you are prompted:

    $ openshift-install create manifests --dir $HOME/clusterconfig
    Example output
    ? SSH Public Key ...
    INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials"
    INFO Consuming Install Config from target directory
    INFO Manifests created in: $HOME/clusterconfig/manifests and $HOME/clusterconfig/openshift
  3. Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory:

    $ ls $HOME/clusterconfig/openshift/
    Example output
    99_kubeadmin-password-secret.yaml
    99_openshift-cluster-api_master-machines-0.yaml
    99_openshift-cluster-api_master-machines-1.yaml
    99_openshift-cluster-api_master-machines-2.yaml
    ...
  4. Create a Butane config that configures the additional partition. For example, name the file $HOME/clusterconfig/98-var-partition.bu, change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition:

    variant: openshift
    version: 4.19.0
    metadata:
      labels:
        machineconfiguration.openshift.io/role: worker
      name: 98-var-partition
    storage:
      disks:
      - device: /dev/disk/by-id/<device_name> 
        partitions:
        - label: var
          start_mib: <partition_start_offset> 
          size_mib: <partition_size> 
          number: 5
      filesystems:
        - device: /dev/disk/by-partlabel/var
          path: /var
          format: xfs
          mount_options: [defaults, prjquota] 
          with_mount_unit: true
    1. The storage device name of the disk that you want to partition.
    2. When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition.
    3. The size of the data partition in mebibytes.
    4. The prjquota mount option must be enabled for filesystems used for container storage.

      Note

      When creating a separate /var partition, you cannot use different instance types for worker nodes if those instance types do not have the same device name.

  5. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. For example, run the following command:

    $ butane $HOME/clusterconfig/98-var-partition.bu -o $HOME/clusterconfig/openshift/98-var-partition.yaml
  6. Run openshift-install again to create Ignition configs from the set of files in the manifests and openshift subdirectories:

    $ openshift-install create ignition-configs --dir $HOME/clusterconfig
    $ ls $HOME/clusterconfig/
    auth  bootstrap.ign  master.ign  metadata.json  worker.ign

    You can now use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems.

Creating the installation configuration file

Generate and customize the installation configuration file that the installation program needs to deploy your cluster.

Prerequisites
  • You obtained the OpenShift Container Platform installation program for user-provisioned infrastructure and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host.

  • You checked that you are deploying your cluster to an AWS Region with an accompanying Red Hat Enterprise Linux CoreOS (RHCOS) AMI published by Red Hat. If you are deploying to an AWS Region that requires a custom AMI, such as an AWS GovCloud Region, you must create the install-config.yaml file manually.

Procedure
  1. Create the install-config.yaml file.

    1. Change to the directory that contains the installation program and run the following command:

      $ ./openshift-install create install-config --dir <installation_directory> 
      1. For <installation_directory>, specify the directory name to store the files that the installation program creates.

        Important

        Specify an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.

    2. At the prompts, provide the configuration details for your cloud:

      1. Optional: Select an SSH key to use to access your cluster machines.

        Note

        For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.

      2. Select aws as the platform to target.

      3. If you do not have an AWS profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

        Note

        The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. You are prompted for the credentials by the installation program if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.

      4. Select the AWS Region to deploy the cluster to.

      5. Select the base domain for the Route 53 service that you configured for your cluster.

      6. Enter a descriptive name for your cluster.

      7. Paste the pull secret from Red Hat OpenShift Cluster Manager.

  2. Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network.

    1. Update the pullSecret value to contain the authentication information for your registry:

      pullSecret: '{"auths":{"<local_registry>": {"auth": "<credentials>","email": "you@example.com"}}}'

      For <local_registry>, specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:5000. For <credentials>, specify the base64-encoded user name and password for your mirror registry; a minimal sketch of generating this value follows this procedure.

    2. Add the additionalTrustBundle parameter and value. The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority or the self-signed certificate that you generated for the mirror registry.

      additionalTrustBundle: |
        -----BEGIN CERTIFICATE-----
        ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ
        -----END CERTIFICATE-----
    3. Add the image content resources:

      imageContentSources:
      - mirrors:
        - <local_registry>/<local_repository_name>/release
        source: quay.io/openshift-release-dev/ocp-release
      - mirrors:
        - <local_registry>/<local_repository_name>/release
        source: quay.io/openshift-release-dev/ocp-v4.0-art-dev

      Use the imageContentSources section from the output of the command to mirror the repository or the values that you used when you mirrored the content from the media that you brought into your restricted network.

    4. Optional: Set the publishing strategy to Internal:

      publish: Internal

      By setting this option, you create an internal Ingress Controller and a private load balancer.

  3. Optional: Back up the install-config.yaml file.

    Important

    The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now.
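
The <credentials> value that you set in the pullSecret field is the base64 encoding of the user name and password for your mirror registry. The following is a minimal sketch of producing that value; the myuser name is a placeholder, and the -w0 flag assumes GNU coreutils base64, which suppresses line wrapping:

    $ echo -n 'myuser:<password>' | base64 -w0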

Additional resources

Configuring the cluster-wide proxy during installation

To enable internet access in environments that deny direct connections, configure a cluster-wide proxy in the install-config.yaml file. This configuration ensures that the new OpenShift Container Platform cluster routes traffic through the specified HTTP or HTTPS proxy.

Prerequisites
  • You have an existing install-config.yaml file.

  • You have reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object’s spec.noProxy field to bypass the proxy if necessary.

    Note

    The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr, networking.clusterNetwork[].cidr, and networking.serviceNetwork[] fields from your installation configuration.

    For installations on Amazon Web Services (AWS), Google Cloud, Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint (169.254.169.254).

Procedure
  1. Edit your install-config.yaml file and add the proxy settings. For example:

    apiVersion: v1
    baseDomain: my.domain.com
    proxy:
      httpProxy: http://<username>:<pswd>@<ip>:<port>
      httpsProxy: https://<username>:<pswd>@<ip>:<port>
      noProxy: ec2.<aws_region>.amazonaws.com,elasticloadbalancing.<aws_region>.amazonaws.com,s3.<aws_region>.amazonaws.com
    additionalTrustBundle: |
        -----BEGIN CERTIFICATE-----
        <MY_TRUSTED_CA_CERT>
        -----END CERTIFICATE-----
    additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle>
    # ...

    where:

    proxy.httpProxy

    Specifies a proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http.

    proxy.httpsProxy

    Specifies a proxy URL to use for creating HTTPS connections outside the cluster.

    proxy.noProxy

    Specifies a comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com, but not y.com. Use * to bypass the proxy for all destinations. If you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field.

    additionalTrustBundle

    If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace to hold the additional CA certificates. If you provide additionalTrustBundle and at least one proxy setting, the Proxy object is configured to reference the user-ca-bundle config map in the trustedCA field. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges the contents specified for the trustedCA parameter with the RHCOS trust bundle. The additionalTrustBundle field is required unless the proxy’s identity certificate is signed by an authority from the RHCOS trust bundle.

    additionalTrustBundlePolicy

    Specifies the policy that determines the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always. Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly. Optional parameter.

    Note

    The installation program does not support the proxy readinessEndpoints field.

    Note

    If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example:

    $ ./openshift-install wait-for install-complete --log-level debug
  2. Save the file and reference it when installing OpenShift Container Platform.

    The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec.

    Note

    Only the Proxy object named cluster is supported, and no additional proxies can be created.
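
For illustration only, the resulting cluster-scoped Proxy object is shaped roughly as follows. This is a hand-written sketch based on the fields described above, not generated installer output:

    apiVersion: config.openshift.io/v1
    kind: Proxy
    metadata:
      name: cluster
    spec:
      httpProxy: http://<username>:<pswd>@<ip>:<port>
      httpsProxy: https://<username>:<pswd>@<ip>:<port>
      noProxy: <excluded_destinations>
      trustedCA:
        name: user-ca-bundle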

Creating the Kubernetes manifest and Ignition config files

To customize cluster definitions and manually start machines, generate the Kubernetes manifest and Ignition config files. These assets provide the necessary instructions to configure the cluster infrastructure according to your specific deployment requirements.

The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines.

Important

  • The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

  • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

Prerequisites
  • You obtained the OpenShift Container Platform installation program. For a restricted network installation, these files are on your mirror host.

  • You created the install-config.yaml installation configuration file.

Procedure
  1. Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster:

    $ ./openshift-install create manifests --dir <installation_directory>

    where

    <installation_directory>

    Specifies the installation directory that contains the install-config.yaml file you created.

  2. Remove the Kubernetes manifest files that define the control plane machines:

    $ rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml

    By removing these files, you prevent the cluster from automatically generating control plane machines.

  3. Remove the Kubernetes manifest files that define the control plane machine set:

    $ rm -f <installation_directory>/openshift/99_openshift-machine-api_master-control-plane-machine-set.yaml
  4. Remove the Kubernetes manifest files that define the worker machines:

    $ rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml

    Important

    If you disabled the MachineAPI capability when installing a cluster on user-provisioned infrastructure, you must remove the Kubernetes manifest files that define the worker machines. Otherwise, your cluster fails to install.

    Because you create and manage the worker machines yourself, you do not need to initialize these machines.

  5. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false. This setting prevents pods from being scheduled on the control plane machines:

    1. Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file.

    2. Locate the mastersSchedulable parameter and ensure that it is set to false.

    3. Save and exit the file.

  6. Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file:

    apiVersion: config.openshift.io/v1
    kind: DNS
    metadata:
      creationTimestamp: null
      name: cluster
    spec:
      baseDomain: example.openshift.com
      privateZone:
        id: mycluster-100419-private-zone
      publicZone: 
        id: example.openshift.com
    status: {}

    spec.privateZone and spec.publicZone: Remove these sections completely.

    If you do so, you must add ingress DNS records manually in a later step.

  7. To create the Ignition configuration files, run the following command from the directory that contains the installation program:

    $ ./openshift-install create ignition-configs --dir <installation_directory>

    where:

    <installation_directory>

    Specifies the same installation directory.

    Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory:

    .
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    ├── bootstrap.ign
    ├── master.ign
    ├── metadata.json
    └── worker.ign
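
As an optional sanity check, assuming that the jq package is installed, you can confirm that a generated Ignition file is well-formed JSON before you use it:

    $ jq -e . <installation_directory>/bootstrap.ign > /dev/null && echo "bootstrap.ign is valid JSON"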

Extracting the infrastructure name

The Ignition config files contain a unique cluster identifier that you can use to uniquely identify your cluster in Amazon Web Services. The infrastructure name is also used to locate the appropriate AWS resources during an OpenShift Container Platform installation. The provided CloudFormation templates contain references to this infrastructure name, so you must extract it.

Prerequisites
  • You obtained the OpenShift Container Platform installation program and the pull secret for your cluster.

  • You generated the Ignition config files for your cluster.

  • You installed the jq package.

Procedure
  • To extract and view the infrastructure name from the Ignition config file metadata, run the following command:

    $ jq -r .infraID <installation_directory>/metadata.json 
    1. For <installation_directory>, specify the path to the directory that you stored the installation files in.
      Example output
      openshift-vw9j6 
    2. The output of this command is your cluster name and a random string.
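
Because several of the CloudFormation parameter files that you create later reference this value, you might find it convenient to capture it in a shell variable. The INFRA_ID variable name is only an illustrative choice:

    $ export INFRA_ID=$(jq -r .infraID <installation_directory>/metadata.json)
    $ echo $INFRA_ID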

Creating a VPC in AWS

You must create a Virtual Private Cloud (VPC) in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use. You can customize the VPC to meet your requirements, including VPN and route tables.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the VPC.

Note

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • You added your AWS keys and region to your local AWS profile by running aws configure.

Procedure
  1. Create a JSON file that contains the parameter values that the template requires:

    [
      {
        "ParameterKey": "VpcCidr", 
        "ParameterValue": "10.0.0.0/16" 
      },
      {
        "ParameterKey": "AvailabilityZoneCount", 
        "ParameterValue": "1" 
      },
      {
        "ParameterKey": "SubnetBits", 
        "ParameterValue": "12" 
      }
    ]
    1. The CIDR block for the VPC.
    2. Specify a CIDR block in the format x.x.x.x/16-24.
    3. The number of availability zones to deploy the VPC in.
    4. Specify an integer between 1 and 3.
    5. The size of each subnet in each availability zone.
    6. Specify an integer between 5 and 13, where 5 is /27 and 13 is /19.
  2. Copy the template from the CloudFormation template for the VPC section of this topic and save it as a YAML file on your computer. This template describes the VPC that your cluster requires.

  3. Launch the CloudFormation template to create a stack of AWS resources that represent the VPC:

    Important

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> 
         --template-body file://<template>.yaml 
         --parameters file://<parameters>.json 
    1. <name> is the name for the CloudFormation stack, such as cluster-vpc. You need the name of this stack if you remove the cluster.
    2. <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3. <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
      Example output
      arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-vpc/dbedae40-2fd3-11eb-820e-12a48460849f
  4. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>

    After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

    VpcId

    The ID of your VPC.

    PublicSubnetIds

    The IDs of the new public subnets.

    PrivateSubnetIds

    The IDs of the new private subnets.
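
As an optional convenience, you can block until the stack finishes creating and then list its output values in one pass. The following sketch uses the wait and describe-stacks subcommands of the AWS CLI with a JMESPath query; substitute your own stack name:

    $ aws cloudformation wait stack-create-complete --stack-name <name>
    $ aws cloudformation describe-stacks --stack-name <name> \
         --query 'Stacks[0].Outputs' --output table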

CloudFormation template for the VPC

You can use the following CloudFormation template to deploy the VPC that you need for your OpenShift Container Platform cluster.

The template is available at: https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/01_vpc.yaml

Creating networking and load balancing components in AWS

You must configure networking and classic or network load balancing in Amazon Web Services (AWS) that your OpenShift Container Platform cluster can use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the networking and load balancing components that your OpenShift Container Platform cluster requires. The template also creates a hosted zone and subnet tags.

You can run the template multiple times within a single Virtual Private Cloud (VPC).

Note

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • You created and configured a VPC and associated subnets in AWS.

Procedure
  1. Obtain the hosted zone ID for the Route 53 base domain that you specified in the install-config.yaml file for your cluster. You can obtain details about your hosted zone by running the following command:

    $ aws route53 list-hosted-zones-by-name --dns-name <route53_domain> 
    1. For the <route53_domain>, specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster.
      Example output
      mycluster.example.com.	False	100
      HOSTEDZONES	65F8F38E-2268-B835-E15C-AB55336FCBFA	/hostedzone/Z21IXYZABCZ2A4	mycluster.example.com.	10

      In the example output, the hosted zone ID is Z21IXYZABCZ2A4.

  2. Create a JSON file that contains the parameter values that the template requires:

    [
      {
        "ParameterKey": "ClusterName", 
        "ParameterValue": "mycluster" 
      },
      {
        "ParameterKey": "InfrastructureName", 
        "ParameterValue": "mycluster-<random_string>" 
      },
      {
        "ParameterKey": "HostedZoneId", 
        "ParameterValue": "<random_string>" 
      },
      {
        "ParameterKey": "HostedZoneName", 
        "ParameterValue": "example.com" 
      },
      {
        "ParameterKey": "PublicSubnets", 
        "ParameterValue": "subnet-<random_string>" 
      },
      {
        "ParameterKey": "PrivateSubnets", 
        "ParameterValue": "subnet-<random_string>" 
      },
      {
        "ParameterKey": "VpcId", 
        "ParameterValue": "vpc-<random_string>" 
      }
    ]
    1. A short, representative cluster name to use for hostnames, etc.
    2. Specify the cluster name that you used when you generated the install-config.yaml file for the cluster.
    3. The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
    4. Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
    5. The Route 53 public zone ID to register the targets with.
    6. Specify the Route 53 public zone ID, which has a format similar to Z21IXYZABCZ2A4. You can obtain this value from the AWS console.
    7. The Route 53 zone to register the targets with.
    8. Specify the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.
    9. The public subnets that you created for your VPC.
    10. Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.
    11. The private subnets that you created for your VPC.
    12. Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.
    13. The VPC that you created for the cluster.
    14. Specify the VpcId value from the output of the CloudFormation template for the VPC.
  3. Copy the template from the CloudFormation template for the network and load balancers section of this topic and save it as a YAML file on your computer. This template describes the networking and load balancing objects that your cluster requires.

    Important

    If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord in the CloudFormation template to use CNAME records. Records of type ALIAS are not supported for AWS government regions.

  4. Launch the CloudFormation template to create a stack of AWS resources that provide the networking and load balancing components:

    Important

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> 
         --template-body file://<template>.yaml 
         --parameters file://<parameters>.json 
         --capabilities CAPABILITY_NAMED_IAM 
    1. <name> is the name for the CloudFormation stack, such as cluster-dns. You need the name of this stack if you remove the cluster.
    2. <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3. <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
    4. You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role resources.
      Example output
      arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-dns/cd3e5de0-2fd4-11eb-5cf0-12be5c33a183
  5. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>

    After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

    PrivateHostedZoneId

    Hosted zone ID for the private DNS.

    ExternalApiLoadBalancerName

    Full name of the external API load balancer.

    InternalApiLoadBalancerName

    Full name of the internal API load balancer.

    ApiServerDnsName

    Full hostname of the API server.

    RegisterNlbIpTargetsLambda

    Lambda ARN useful to help register/deregister IP targets for these load balancers.

    ExternalApiTargetGroupArn

    ARN of external API target group.

    InternalApiTargetGroupArn

    ARN of internal API target group.

    InternalServiceTargetGroupArn

    ARN of internal service target group.
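
When you populate the parameter files for later stacks, you can pull an individual output value directly instead of copying it from the full listing. The following is a minimal sketch that uses a JMESPath filter in the AWS CLI, with PrivateHostedZoneId as an example key:

    $ aws cloudformation describe-stacks --stack-name <name> \
         --query "Stacks[0].Outputs[?OutputKey=='PrivateHostedZoneId'].OutputValue" --output text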

CloudFormation template for the network and load balancers

You can use the following CloudFormation template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster.

The template is available at: https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/02_cluster_infra.yaml

Important

If you are deploying your cluster to an AWS government or secret region, you must update the InternalApiServerRecord to use CNAME records. Records of type ALIAS are not supported for AWS government regions. For example:

Type: CNAME
TTL: 10
ResourceRecords:
- !GetAtt IntApiElb.DNSName

Creating security group and roles in AWS

You must create security groups and roles in Amazon Web Services (AWS) for your OpenShift Container Platform cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the security groups and roles that your OpenShift Container Platform cluster requires.

Note

If you do not use the provided CloudFormation template to create your AWS infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Procedure
  1. Create a JSON file that contains the parameter values that the template requires:

    [
      {
        "ParameterKey": "InfrastructureName", 
        "ParameterValue": "mycluster-<random_string>" 
      },
      {
        "ParameterKey": "VpcCidr", 
        "ParameterValue": "10.0.0.0/16" 
      },
      {
        "ParameterKey": "PrivateSubnets", 
        "ParameterValue": "subnet-<random_string>" 
      },
      {
        "ParameterKey": "VpcId", 
        "ParameterValue": "vpc-<random_string>" 
      }
    ]
    1. The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
    2. Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
    3. The CIDR block for the VPC.
    4. Specify the CIDR block parameter that you used for the VPC that you defined in the form x.x.x.x/16-24.
    5. The private subnets that you created for your VPC.
    6. Specify the PrivateSubnetIds value from the output of the CloudFormation template for the VPC.
    7. The VPC that you created for the cluster.
    8. Specify the VpcId value from the output of the CloudFormation template for the VPC.
  2. Copy the template from the CloudFormation template for security objects section of this topic and save it as a YAML file on your computer. This template describes the security groups and roles that your cluster requires.

  3. Launch the CloudFormation template to create a stack of AWS resources that represent the security groups and roles:

    Important

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> 
         --template-body file://<template>.yaml 
         --parameters file://<parameters>.json 
         --capabilities CAPABILITY_NAMED_IAM 
    1. <name> is the name for the CloudFormation stack, such as cluster-sec. You need the name of this stack if you remove the cluster.
    2. <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3. <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
    4. You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.
      Example output
      arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-sec/03bd4210-2ed7-11eb-6d7a-13fc0b61e9db
  4. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>

    After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

    MasterSecurityGroupId

    Master Security Group ID

    WorkerSecurityGroupId

    Worker Security Group ID

    MasterInstanceProfile

    Master IAM Instance Profile

    WorkerInstanceProfile

    Worker IAM Instance Profile

CloudFormation template for security objects

You can use the following CloudFormation template to deploy the security objects that you need for your OpenShift Container Platform cluster.

The template is available at: https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/03_cluster_security.yaml

Accessing RHCOS AMIs with stream metadata

In OpenShift Container Platform, stream metadata provides standardized metadata about RHCOS in the JSON format and injects the metadata into the cluster. Stream metadata is a stable format that supports multiple architectures and is intended to be self-documenting for maintaining automation.

You can use the coreos print-stream-json sub-command of openshift-install to access information about the boot images in the stream metadata format. This command provides a method for printing stream metadata in a scriptable, machine-readable format.

For user-provisioned installations, the openshift-install binary contains references to the version of RHCOS boot images that are tested for use with OpenShift Container Platform, such as the AWS AMI.

Procedure

To parse the stream metadata, use one of the following methods:

  • From a Go program, use the official stream-metadata-go library at https://github.com/coreos/stream-metadata-go. You can also view example code in the library.

  • From another programming language, such as Python or Ruby, use the JSON library of your preferred programming language.

  • From a command-line utility that handles JSON data, such as jq:

    • Print the current x86_64 or aarch64 AMI for an AWS region, such as us-west-1:

      For x86_64
      $ openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image'
      Example output
      ami-0d3e625f84626bbda
      For aarch64
      $ openshift-install coreos print-stream-json | jq -r '.architectures.aarch64.images.aws.regions["us-west-1"].image'
      Example output
      ami-0af1d3b7fa5be2131

      The output of this command is the AWS AMI ID for your designated architecture and the us-west-1 region. The AMI must belong to the same region as the cluster.
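
Because the AMI ID feeds the RhcosAmi parameter of the CloudFormation templates later in this process, you might want to capture it in a shell variable. The RHCOS_AMI variable name is only an illustrative choice; this sketch assumes the x86_64 architecture and the us-west-1 region:

    $ RHCOS_AMI=$(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image')
    $ echo $RHCOS_AMI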

RHCOS AMIs for the AWS infrastructure

Red Hat provides Red Hat Enterprise Linux CoreOS (RHCOS) AMIs that are valid for the various AWS regions and instance architectures that you can manually specify for your OpenShift Container Platform nodes.

Note

By importing your own AMI, you can also install to regions that do not have a published RHCOS AMI.

Table 1. x86_64 RHCOS AMIs

AWS zone            AWS AMI
af-south-1          ami-0a3b22174319ad66e
ap-east-1           ami-09cde51703738523d
ap-east-2           ami-05478426f756db81c
ap-northeast-1      ami-0d6f0c1a044b6848f
ap-northeast-2      ami-0d19d79e52ad365c8
ap-northeast-3      ami-0a0391de700b812ae
ap-south-1          ami-097ffb6f644b7bad1
ap-south-2          ami-08f1c0c6caafcf2c5
ap-southeast-1      ami-09b223a7c699ecde8
ap-southeast-2      ami-0a44bb6d4903a93a1
ap-southeast-3      ami-01469b817e364700f
ap-southeast-4      ami-086cc002b6d450301
ap-southeast-5      ami-0b5b9be3ea6fc17de
ap-southeast-6      ami-03a6ddee59246ab62
ap-southeast-7      ami-03ce4d4bb4f67e777
ca-central-1        ami-0b23054e68ef5ec3b
ca-west-1           ami-0541a60892c677593
eu-central-1        ami-006a33223c87af648
eu-central-2        ami-05ddf59e283155ea1
eu-north-1          ami-054f384036093db98
eu-south-1          ami-0a1cc6a65238669f3
eu-south-2          ami-0fe02b801fea5edf6
eu-west-1           ami-0632ffa330e30aea1
eu-west-2           ami-05829d8d5c031e4a8
eu-west-3           ami-00be64f508df27900
il-central-1        ami-0eeea15b1c070e051
me-central-1        ami-090084b481adf23e9
me-south-1          ami-0569abe19529c8b10
mx-central-1        ami-05ab0cf33b0e946a7
sa-east-1           ami-04dd4c56b43a23fb5
us-east-1           ami-04018496b0a1da2d2
us-east-2           ami-0b264801b0e00009c
us-gov-east-1       ami-042feba6717887157
us-gov-west-1       ami-0b05862564ac8353d
us-west-1           ami-08f548f5be577ce2d
us-west-2           ami-04941543f3e575579

Table 2. aarch64 RHCOS AMIs

AWS zone            AWS AMI
af-south-1          ami-0265ec024c31e7603
ap-east-1           ami-01d0891e1fa7c28fe
ap-east-2           ami-0de88496066d7428e
ap-northeast-1      ami-0d7c8859dc8f2ac02
ap-northeast-2      ami-00377959ff7cf0ed6
ap-northeast-3      ami-095386042a7673286
ap-south-1          ami-09f0f012b2c626f59
ap-south-2          ami-01c8221c9a56e0c74
ap-southeast-1      ami-0c27520bfa297e78a
ap-southeast-2      ami-0af72833671d615f1
ap-southeast-3      ami-085607af8f322e956
ap-southeast-4      ami-03d8bfd58de713367
ap-southeast-5      ami-0f6479fb82d8108d1
ap-southeast-6      ami-0a6a284e74f2e9afd
ap-southeast-7      ami-054eb3c4286f4dbca
ca-central-1        ami-032e08fde31f0a6cc
ca-west-1           ami-07d83e72beff3eb6c
eu-central-1        ami-0a62c879da82e8f99
eu-central-2        ami-09c91e670678ce2c1
eu-north-1          ami-0e896b8e4e7de42be
eu-south-1          ami-01718000bd7650956
eu-south-2          ami-02c48b6c8488542b2
eu-west-1           ami-06ea845fe728a8891
eu-west-2           ami-0e6c67a8674179e1b
eu-west-3           ami-0c4cb83cc7e4ec057
il-central-1        ami-00a88a6e634ac6676
me-central-1        ami-09aff5ac8bd25e9b8
me-south-1          ami-043e9d5894e68426d
mx-central-1        ami-00c0f1b2fc7ab92d4
sa-east-1           ami-0af3988f6b0fbee7f
us-east-1           ami-02ffd1d5ed7351ceb
us-east-2           ami-0d08ccc7bcf77433d
us-gov-east-1       ami-0b183fafd7eeae965
us-gov-west-1       ami-0411a4dd8cbf726db
us-west-1           ami-06149aec55f3e1d6c
us-west-2           ami-0c2bafedf2fc1dfc3

Creating the bootstrap node in AWS

You must create the bootstrap node in Amazon Web Services (AWS) to use during OpenShift Container Platform cluster initialization. You do this by:

  • Providing a location to serve the bootstrap.ign Ignition config file to your cluster. This file is located in your installation directory. The provided CloudFormation Template assumes that the Ignition config files for your cluster are served from an S3 bucket. If you choose to serve the files from another location, you must modify the templates.

  • Using the provided CloudFormation template and a custom parameter file to create a stack of AWS resources. The stack represents the bootstrap node that your OpenShift Container Platform installation requires.

Note

If you do not use the provided CloudFormation template to create your bootstrap node, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • You created and configured DNS, load balancers, and listeners in AWS.

  • You created the security groups and roles required for your cluster in AWS.

Procedure
  1. Create the bucket by running the following command:

    $ aws s3 mb s3://<cluster-name>-infra 
    1. <cluster-name>-infra is the bucket name. When creating the install-config.yaml file, replace <cluster-name> with the name specified for the cluster.

      You must use a presigned URL for your S3 bucket, instead of the s3:// schema, if you are:

      • Deploying to a region that has endpoints that differ from the AWS SDK.

      • Deploying a proxy.

      • Providing your own custom endpoints.

      A minimal sketch of generating a presigned URL follows this procedure.

  2. Upload the bootstrap.ign Ignition config file to the bucket by running the following command:

    $ aws s3 cp <installation_directory>/bootstrap.ign s3://<cluster-name>-infra/bootstrap.ign 
    1. For <installation_directory>, specify the path to the directory that you stored the installation files in.
  3. Verify that the file uploaded by running the following command:

    $ aws s3 ls s3://<cluster-name>-infra/
    Example output
    2019-04-03 16:15:16     314878 bootstrap.ign

    Note

    The bootstrap Ignition config file does contain secrets, like X.509 keys. The following steps provide basic security for the S3 bucket. To provide additional security, you can enable an S3 bucket policy to allow only certain users, such as the OpenShift IAM user, to access objects that the bucket contains. You can avoid S3 entirely and serve your bootstrap Ignition config file from any address that the bootstrap machine can reach.

  4. Create a JSON file that contains the parameter values that the template requires:

    [
      {
        "ParameterKey": "InfrastructureName", 
        "ParameterValue": "mycluster-<random_string>" 
      },
      {
        "ParameterKey": "RhcosAmi", 
        "ParameterValue": "ami-<random_string>" 
      },
      {
        "ParameterKey": "AllowedBootstrapSshCidr", 
        "ParameterValue": "0.0.0.0/0" 
      },
      {
        "ParameterKey": "PublicSubnet", 
        "ParameterValue": "subnet-<random_string>" 
      },
      {
        "ParameterKey": "MasterSecurityGroupId", 
        "ParameterValue": "sg-<random_string>" 
      },
      {
        "ParameterKey": "VpcId", 
        "ParameterValue": "vpc-<random_string>" 
      },
      {
        "ParameterKey": "BootstrapIgnitionLocation", 
        "ParameterValue": "s3://<bucket_name>/bootstrap.ign" 
      },
      {
        "ParameterKey": "AutoRegisterELB", 
        "ParameterValue": "yes" 
      },
      {
        "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 
        "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 
      },
      {
        "ParameterKey": "ExternalApiTargetGroupArn", 
        "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 
      },
      {
        "ParameterKey": "InternalApiTargetGroupArn", 
        "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 
      },
      {
        "ParameterKey": "InternalServiceTargetGroupArn", 
        "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 
      }
    ]
    1. The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
    2. Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
    3. Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the bootstrap node based on your selected architecture.
    4. Specify a valid AWS::EC2::Image::Id value.
    5. CIDR block to allow SSH access to the bootstrap node.
    6. Specify a CIDR block in the format x.x.x.x/16-24.
    7. The public subnet that is associated with your VPC to launch the bootstrap node into.
    8. Specify the PublicSubnetIds value from the output of the CloudFormation template for the VPC.
    9. The master security group ID (for registering temporary rules)
    10. Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
    11. The VPC that the created resources will belong to.
    12. Specify the VpcId value from the output of the CloudFormation template for the VPC.
    13. Location to fetch bootstrap Ignition config file from.
    14. Specify the S3 bucket and file name in the form s3://<bucket_name>/bootstrap.ign.
    15. Whether or not to register a network load balancer (NLB).
    16. Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.
    17. The ARN for NLB IP target registration lambda group.
    18. Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
    19. The ARN for external API load balancer target group.
    20. Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
    21. The ARN for internal API load balancer target group.
    22. Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
    23. The ARN for internal service load balancer target group.
    24. Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
  5. Copy the template from the CloudFormation template for the bootstrap machine section of this topic and save it as a YAML file on your computer. This template describes the bootstrap machine that your cluster requires.

  6. Optional: If you are deploying the cluster with a proxy, you must update the Ignition configuration in the template to add the ignition.config.proxy fields. Additionally, if you have added the Amazon EC2, Elastic Load Balancing, and S3 VPC endpoints to your VPC, you must add these endpoints to the noProxy field.

  7. Launch the CloudFormation template to create a stack of AWS resources that represent the bootstrap node:

    Important

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> 
         --template-body file://<template>.yaml 
         --parameters file://<parameters>.json 
         --capabilities CAPABILITY_NAMED_IAM 
    1. <name> is the name for the CloudFormation stack, such as cluster-bootstrap. You need the name of this stack if you remove the cluster.
    2. <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3. <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
    4. You must explicitly declare the CAPABILITY_NAMED_IAM capability because the provided template creates some AWS::IAM::Role and AWS::IAM::InstanceProfile resources.
      Example output
      arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-bootstrap/12944486-2add-11eb-9dee-12dace8e3a83
  8. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>

    After the StackStatus displays CREATE_COMPLETE, the output displays values for the following parameters. You must provide these parameter values to the other CloudFormation templates that you run to create your cluster:

    BootstrapInstanceId

    The bootstrap Instance ID.

    BootstrapPublicIp

    The bootstrap node public IP address.

    BootstrapPrivateIp

    The bootstrap node private IP address.
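
If any of the conditions described in step 1 apply to your environment and you must serve bootstrap.ign through a presigned URL instead of an s3:// location, you can generate one with the aws s3 presign subcommand. The following is a minimal sketch that assumes a one-hour expiry; substitute the resulting URL for the BootstrapIgnitionLocation parameter value:

    $ aws s3 presign s3://<cluster-name>-infra/bootstrap.ign --expires-in 3600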

CloudFormation template for the bootstrap machine

You can use the following CloudFormation template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster.

The template is available at: https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/04_cluster_bootstrap.yaml

Creating the control plane machines in AWS

You must create the control plane machines in Amazon Web Services (AWS) that your cluster will use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent the control plane nodes.

Important

The CloudFormation template creates a stack that represents three control plane nodes.

Note

If you do not use the provided CloudFormation template to create your control plane nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • You created the bootstrap machine.

Procedure
  1. Create a JSON file that contains the parameter values that the template requires:

    [
      {
        "ParameterKey": "InfrastructureName", 
        "ParameterValue": "mycluster-<random_string>" 
      },
      {
        "ParameterKey": "RhcosAmi", 
        "ParameterValue": "ami-<random_string>" 
      },
      {
        "ParameterKey": "AutoRegisterDNS", 
        "ParameterValue": "yes" 
      },
      {
        "ParameterKey": "PrivateHostedZoneId", 
        "ParameterValue": "<random_string>" 
      },
      {
        "ParameterKey": "PrivateHostedZoneName", 
        "ParameterValue": "mycluster.example.com" 
      },
      {
        "ParameterKey": "Master0Subnet", 
        "ParameterValue": "subnet-<random_string>" 
      },
      {
        "ParameterKey": "Master1Subnet", 
        "ParameterValue": "subnet-<random_string>" 
      },
      {
        "ParameterKey": "Master2Subnet", 
        "ParameterValue": "subnet-<random_string>" 
      },
      {
        "ParameterKey": "MasterSecurityGroupId", 
        "ParameterValue": "sg-<random_string>" 
      },
      {
        "ParameterKey": "IgnitionLocation", 
        "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/master" 
      },
      {
        "ParameterKey": "CertificateAuthorities", 
        "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 
      },
      {
        "ParameterKey": "MasterInstanceProfileName", 
        "ParameterValue": "<roles_stack>-MasterInstanceProfile-<random_string>" 
      },
      {
        "ParameterKey": "MasterInstanceType", 
        "ParameterValue": "" 
      },
      {
        "ParameterKey": "AutoRegisterELB", 
        "ParameterValue": "yes" 
      },
      {
        "ParameterKey": "RegisterNlbIpTargetsLambdaArn", 
        "ParameterValue": "arn:aws:lambda:<aws_region>:<account_number>:function:<dns_stack_name>-RegisterNlbIpTargets-<random_string>" 
      },
      {
        "ParameterKey": "ExternalApiTargetGroupArn", 
        "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Exter-<random_string>" 
      },
      {
        "ParameterKey": "InternalApiTargetGroupArn", 
        "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 
      },
      {
        "ParameterKey": "InternalServiceTargetGroupArn", 
        "ParameterValue": "arn:aws:elasticloadbalancing:<aws_region>:<account_number>:targetgroup/<dns_stack_name>-Inter-<random_string>" 
      }
    ]
    1. The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
    2. Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
    3. Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the control plane machines based on your selected architecture.
    4. Specify an AWS::EC2::Image::Id value.
    5. Whether or not to perform DNS etcd registration.
    6. Specify yes or no. If you specify yes, you must provide hosted zone information.
    7. The Route 53 private zone ID to register the etcd targets with.
    8. Specify the PrivateHostedZoneId value from the output of the CloudFormation template for DNS and load balancing.
    9. The Route 53 zone to register the targets with.
    10. Specify <cluster_name>.<domain_name> where <domain_name> is the Route 53 base domain that you used when you generated the install-config.yaml file for the cluster. Do not include the trailing period (.) that is displayed in the AWS console.
    11. A subnet, preferably private, to launch the control plane machines on.
    12. Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.
    13. The master security group ID to associate with control plane nodes.
    14. Specify the MasterSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
    15. The location to fetch control plane Ignition config file from.
    16. Specify the generated Ignition config file location, https://api-int.<cluster_name>.<domain_name>:22623/config/master.
    17. The base64 encoded certificate authority string to use.
    18. Specify the value from the master.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC…​xYz==.
    19. The IAM profile to associate with control plane nodes.
    20. Specify the MasterInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.
    21. The type of AWS instance to use for the control plane machines based on your selected architecture.
    22. The instance type value corresponds to the minimum resource requirements for control plane machines. For example, m6i.xlarge is a type for AMD64 and m6g.xlarge is a type for ARM64.
    23. Whether or not to register a network load balancer (NLB).
    24. Specify yes or no. If you specify yes, you must provide a Lambda Amazon Resource Name (ARN) value.
    25. The ARN for NLB IP target registration lambda group.
    26. Specify the RegisterNlbIpTargetsLambda value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
    27. The ARN for external API load balancer target group.
    28. Specify the ExternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
    29. The ARN for internal API load balancer target group.
    30. Specify the InternalApiTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
    31. The ARN for internal service load balancer target group.
    32. Specify the InternalServiceTargetGroupArn value from the output of the CloudFormation template for DNS and load balancing. Use arn:aws-us-gov if deploying the cluster to an AWS GovCloud region.
  2. Copy the template from the CloudFormation template for control plane machines section of this topic and save it as a YAML file on your computer. This template describes the control plane machines that your cluster requires.

  3. If you specified an m5 instance type as the value for MasterInstanceType, add that instance type to the MasterInstanceType.AllowedValues parameter in the CloudFormation template.

  4. Launch the CloudFormation template to create a stack of AWS resources that represent the control plane nodes:

    Important

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> \
         --template-body file://<template>.yaml \
         --parameters file://<parameters>.json 
    1. <name> is the name for the CloudFormation stack, such as cluster-control-plane. You need the name of this stack if you remove the cluster.
    2. <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3. <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
      Example output
      arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-control-plane/21c7e2b0-2ee2-11eb-c6f6-0aa34627df4b

      Note

      The CloudFormation template creates a stack that represents three control plane nodes.

  5. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>
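
    If you only want the stack status, or want to block until creation finishes, the following standard AWS CLI calls are sufficient. They are shown as optional examples rather than required steps:

    $ aws cloudformation describe-stacks --stack-name <name> \
         --query 'Stacks[0].StackStatus' --output text
    $ aws cloudformation wait stack-create-complete --stack-name <name>

    You can also read some of the parameter values for this procedure from the installation directory instead of assembling them by hand. The following sketch assumes that jq is installed and that the generated master.ign embeds the certificate authority under the ignition.security.tls.certificateAuthorities path, which is typical of the generated Ignition configs. The first command prints the InfrastructureName value, and the second prints the CertificateAuthorities data URL:

    $ jq -r .infraID <installation_directory>/metadata.json
    $ jq -r '.ignition.security.tls.certificateAuthorities[0].source' <installation_directory>/master.ign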

CloudFormation template for control plane machines

You can use the following CloudFormation template to deploy the control plane machines that you need for your OpenShift Container Platform cluster.

CloudFormation template for control plane machines
https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/05_cluster_master_nodes.yaml

Creating the worker nodes in AWS

You can create worker nodes in Amazon Web Services (AWS) for your cluster to use.

You can use the provided CloudFormation template and a custom parameter file to create a stack of AWS resources that represent a worker node.

Important

The CloudFormation template creates a stack that represents one worker node. You must create a stack for each worker node.

Note

If you do not use the provided CloudFormation template to create your worker nodes, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs.

Prerequisites
  • You created the control plane machines.

Procedure
  1. Create a JSON file that contains the parameter values that the CloudFormation template requires:

    [
      {
        "ParameterKey": "InfrastructureName", 
        "ParameterValue": "mycluster-<random_string>" 
      },
      {
        "ParameterKey": "RhcosAmi", 
        "ParameterValue": "ami-<random_string>" 
      },
      {
        "ParameterKey": "Subnet", 
        "ParameterValue": "subnet-<random_string>" 
      },
      {
        "ParameterKey": "WorkerSecurityGroupId", 
        "ParameterValue": "sg-<random_string>" 
      },
      {
        "ParameterKey": "IgnitionLocation", 
        "ParameterValue": "https://api-int.<cluster_name>.<domain_name>:22623/config/worker" 
      },
      {
        "ParameterKey": "CertificateAuthorities", 
        "ParameterValue": "data:text/plain;charset=utf-8;base64,ABC...xYz==" 
      },
      {
        "ParameterKey": "WorkerInstanceProfileName", 
        "ParameterValue": "<roles_stack>-WorkerInstanceProfile-<random_string>" 
      },
      {
        "ParameterKey": "WorkerInstanceType", 
        "ParameterValue": "" 
      }
    ]
    1. The name for your cluster infrastructure that is encoded in your Ignition config files for the cluster.
    2. Specify the infrastructure name that you extracted from the Ignition config file metadata, which has the format <cluster-name>-<random-string>.
    3. Current Red Hat Enterprise Linux CoreOS (RHCOS) AMI to use for the worker nodes based on your selected architecture.
    4. Specify an AWS::EC2::Image::Id value.
    5. A subnet, preferably private, to launch the worker nodes on.
    6. Specify a subnet from the PrivateSubnets value from the output of the CloudFormation template for DNS and load balancing.
    7. The worker security group ID to associate with worker nodes.
    8. Specify the WorkerSecurityGroupId value from the output of the CloudFormation template for the security group and roles.
    9. The location to fetch the worker Ignition config file from.
    10. Specify the generated Ignition config location, https://api-int.<cluster_name>.<domain_name>:22623/config/worker.
    11. Base64 encoded certificate authority string to use.
    12. Specify the value from the worker.ign file that is in the installation directory. This value is the long string with the format data:text/plain;charset=utf-8;base64,ABC…​xYz==.
    13. The IAM profile to associate with worker nodes.
    14. Specify the WorkerInstanceProfile parameter value from the output of the CloudFormation template for the security group and roles.
    15. The type of AWS instance to use for the compute machines based on your selected architecture.
    16. The instance type value corresponds to the minimum resource requirements for compute machines. For example, m6i.large is a type for AMD64 and m6g.large is a type for ARM64.
  2. Copy the template from the CloudFormation template for worker machines section of this topic and save it as a YAML file on your computer. This template describes the worker machines that your cluster requires.

  3. Optional: If you specified an m5 instance type as the value for WorkerInstanceType, add that instance type to the WorkerInstanceType.AllowedValues parameter in the CloudFormation template.

  4. Optional: If you are deploying with an AWS Marketplace image, update the Worker0.type.properties.ImageID parameter with the AMI ID that you obtained from your subscription.

  5. Use the CloudFormation template to create a stack of AWS resources that represent a worker node:

    Important

    You must enter the command on a single line.

    $ aws cloudformation create-stack --stack-name <name> \
         --template-body file://<template>.yaml \
         --parameters file://<parameters>.json 
    1. <name> is the name for the CloudFormation stack, such as cluster-worker-1. You need the name of this stack if you remove the cluster.
    2. <template> is the relative path to and name of the CloudFormation template YAML file that you saved.
    3. <parameters> is the relative path to and name of the CloudFormation parameters JSON file.
      Example output
      arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59

      Note

      The CloudFormation template creates a stack that represents one worker node.

  6. Confirm that the template components exist:

    $ aws cloudformation describe-stacks --stack-name <name>
  7. Continue to create worker stacks until you have created enough worker machines for your cluster. You can create additional worker stacks by referencing the same template and parameter files and specifying a different stack name, as shown in the example after this procedure.

    Important

    You must create at least two worker machines, so you must create at least two stacks that use this CloudFormation template.
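
For example, the following loop is a minimal sketch of creating three worker stacks from one template. It assumes hypothetical per-node parameter files named worker-0.json, worker-1.json, and worker-2.json that differ only in their Subnet values:

    $ for i in 0 1 2; do
        aws cloudformation create-stack --stack-name cluster-worker-${i} \
             --template-body file://<template>.yaml \
             --parameters file://worker-${i}.json
      done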

CloudFormation template for compute machines

You can deploy the compute machines that you need for your OpenShift Container Platform cluster by using the following CloudFormation template.

CloudFormation template for compute machines
https://raw.githubusercontent.com/openshift/installer/release-4.21/upi/aws/cloudformation/06_cluster_worker_node.yaml

Creating the CloudFormation stack for compute machines

You can create a stack of AWS resources for the compute machines by using the CloudFormation template that was previously shared.

Important

The CloudFormation template for the control plane machines provisions all three control plane machines in a single stack. In contrast, the CloudFormation template for compute machines provisions only one machine per stack, so you must create as many stacks as the number of compute machines that you defined in the install-config.yaml file. To provision each additional compute machine, specify a different stack name.

Procedure
  • To create the CloudFormation stack for compute machines, run the following command:

    $ aws cloudformation create-stack --stack-name <name> \
         --template-body file://<template>.yaml \
         --parameters file://<parameters>.json 
    1. For <name>, specify the name for the CloudFormation stack, such as cluster-worker-1. You need the name of this stack if you remove the cluster.
    2. Specify the relative path and the name of the CloudFormation template YAML file that you saved.
    3. Specify the relative path and the name of the JSON file for the CloudFormation parameters.
      Example output
      arn:aws:cloudformation:us-east-1:269333783861:stack/cluster-worker-1/729ee301-1c2a-11eb-348f-sd9888c65b59

Initializing the bootstrap sequence on AWS with user-provisioned infrastructure

After you create all of the required infrastructure in Amazon Web Services (AWS), you can start the bootstrap sequence that initializes the OpenShift Container Platform control plane.

Prerequisites
  • You created the worker nodes.

Procedure
  1. Change to the directory that contains the installation program and start the bootstrap process that initializes the OpenShift Container Platform control plane:

    $ ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 
        --log-level=info 
    1. For <installation_directory>, specify the path to the directory that you stored the installation files in.
    2. To view different installation details, specify warn, debug, or error instead of info.
      Example output
      INFO Waiting up to 20m0s for the Kubernetes API at https://api.mycluster.example.com:6443...
      INFO API v1.34.2 up
      INFO Waiting up to 30m0s for bootstrapping to complete...
      INFO It is now safe to remove the bootstrap resources
      INFO Time elapsed: 1s

      If the command exits without a FATAL warning, your OpenShift Container Platform control plane has initialized.

      Note

      After the control plane initializes, it sets up the compute nodes and installs additional services in the form of Operators.
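
If the bootstrap process stalls instead of completing, you can collect diagnostic logs with the installation program's gather subcommand before you contact Red Hat support. The following invocation is an example only; it assumes that you have SSH access to the nodes with the key that you configured in the install-config.yaml file:

    $ ./openshift-install gather bootstrap --dir <installation_directory> \
        --bootstrap <bootstrap_node_public_ip> \
        --master <control_plane_node_private_ip>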

Additional resources

Approving the certificate signing requests for your machines

To add machines to a cluster, verify the status of the certificate signing requests (CSRs) generated for each machine. If manual approval is required, approve the client requests first, followed by the server requests.

Prerequisites
  • You added machines to your cluster.

Procedure
  1. Confirm that the cluster recognizes the machines:

    $ oc get nodes
    Example output
    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  63m  v1.34.2
    master-1  Ready     master  63m  v1.34.2
    master-2  Ready     master  64m  v1.34.2

    The output lists all of the machines that you created.

    Note

    The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved.

  2. Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster:

    $ oc get csr
    Example output
    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-8b2br   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    csr-8vnps   15m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Pending
    ...

    In this example, two machines are joining the cluster. You might see more approved CSRs in the list.

  3. If the CSRs were not approved automatically, wait until all of the pending CSRs for the machines that you added are in the Pending status, and then approve the CSRs for your cluster machines:

    Note

    Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters.

    Note

    For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec, oc rsh, and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. A rough, illustrative approval loop that is not suitable for production use appears after this procedure.

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name>

      where:

      <csr_name>

      Specifies the name of a CSR from the list of current CSRs.

    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve

      Note

      Some Operators might not become available until some CSRs are approved.

  4. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster:

    $ oc get csr
    Example output
    NAME        AGE     REQUESTOR                                                                   CONDITION
    csr-bfd72   5m26s   system:node:ip-10-0-50-126.us-east-2.compute.internal                       Pending
    csr-c57lv   5m26s   system:node:ip-10-0-95-157.us-east-2.compute.internal                       Pending
    ...
  5. If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines:

    • To approve them individually, run the following command for each valid CSR:

      $ oc adm certificate approve <csr_name>

      where:

      <csr_name>

      Specifies the name of a CSR from the list of current CSRs.

    • To approve all pending CSRs, run the following command:

      $ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve
  6. After all client and server CSRs have been approved, the machines have the Ready status. Verify this by running the following command:

    $ oc get nodes
    Example output
    NAME      STATUS    ROLES   AGE  VERSION
    master-0  Ready     master  73m  v1.34.2
    master-1  Ready     master  73m  v1.34.2
    master-2  Ready     master  74m  v1.34.2
    worker-0  Ready     worker  11m  v1.34.2
    worker-1  Ready     worker  11m  v1.34.2

    Note

    It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status.
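
The requirement described in the earlier note about automatically approving kubelet serving CSRs can be prototyped with a simple loop. The following is only a rough, non-production sketch: it approves every pending CSR on a fixed interval and does not verify the identity of the requesting node, so it does not satisfy the verification requirements in that note:

    $ while true; do
        oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
          | xargs --no-run-if-empty oc adm certificate approve
        sleep 60
      done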

Initial Operator configuration

To ensure all Operators become available, configure the required Operators immediately after the control plane initializes. This configuration is essential for stabilizing the cluster environment following the installation.

Prerequisites
  • Your control plane has initialized.

Procedure
  1. Watch the cluster components come online:

    $ watch -n5 oc get clusteroperators
    Example output
    NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
    authentication                             4.19.0    True        False         False      19m
    baremetal                                  4.19.0    True        False         False      37m
    cloud-credential                           4.19.0    True        False         False      40m
    cluster-autoscaler                         4.19.0    True        False         False      37m
    config-operator                            4.19.0    True        False         False      38m
    console                                    4.19.0    True        False         False      26m
    csi-snapshot-controller                    4.19.0    True        False         False      37m
    dns                                        4.19.0    True        False         False      37m
    etcd                                       4.19.0    True        False         False      36m
    image-registry                             4.19.0    True        False         False      31m
    ingress                                    4.19.0    True        False         False      30m
    insights                                   4.19.0    True        False         False      31m
    kube-apiserver                             4.19.0    True        False         False      26m
    kube-controller-manager                    4.19.0    True        False         False      36m
    kube-scheduler                             4.19.0    True        False         False      36m
    kube-storage-version-migrator              4.19.0    True        False         False      37m
    machine-api                                4.19.0    True        False         False      29m
    machine-approver                           4.19.0    True        False         False      37m
    machine-config                             4.19.0    True        False         False      36m
    marketplace                                4.19.0    True        False         False      37m
    monitoring                                 4.19.0    True        False         False      29m
    network                                    4.19.0    True        False         False      38m
    node-tuning                                4.19.0    True        False         False      37m
    openshift-apiserver                        4.19.0    True        False         False      32m
    openshift-controller-manager               4.19.0    True        False         False      30m
    openshift-samples                          4.19.0    True        False         False      32m
    operator-lifecycle-manager                 4.19.0    True        False         False      37m
    operator-lifecycle-manager-catalog         4.19.0    True        False         False      37m
    operator-lifecycle-manager-packageserver   4.19.0    True        False         False      32m
    service-ca                                 4.19.0    True        False         False      38m
    storage                                    4.19.0    True        False         False      37m
  2. Configure the Operators that are not available.
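
To see which Operators still need attention, you can filter for cluster Operators that do not yet report Available=True. The following command is an illustrative example that assumes jq is installed:

    $ oc get clusteroperators -o json \
        | jq -r '.items[] | select(any(.status.conditions[]; .type == "Available" and .status != "True")) | .metadata.name'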

Disabling the default software catalog sources

Operator catalogs that source content provided by Red Hat and community projects are configured for the software catalog by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator.

Procedure
  • Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:

    $ oc patch OperatorHub cluster --type json \
        -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'

Tip

Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources.
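
To verify the change, you can list the catalog sources that remain in the openshift-marketplace namespace. After you disable the defaults, only sources that you created yourself, for example a catalog for your mirror registry, should appear:

    $ oc get catalogsources -n openshift-marketplace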

Image registry storage configuration

Amazon Web Services provides default storage, which means the Image Registry Operator is available after installation. However, if the Registry Operator cannot create an S3 bucket and automatically configure storage, you must manually configure registry storage.

Configure a persistent volume, which is required for production clusters. Where applicable, you can configure an empty directory as the storage location for non-production clusters.

You can also allow the image registry to use block storage types by using the Recreate rollout strategy during upgrades.

Configuring registry storage for AWS with user-provisioned infrastructure

During installation, your cloud credentials are sufficient to create an Amazon S3 bucket and the Registry Operator will automatically configure storage.

If the Registry Operator cannot create an S3 bucket and automatically configure storage, you can create an S3 bucket and configure storage with the following procedure.

Warning

To secure your registry images in AWS, block public access to the S3 bucket.

Prerequisites
  • You have a cluster on AWS with user-provisioned infrastructure.

  • For Amazon S3 storage, the secret is expected to contain two keys:

    • REGISTRY_STORAGE_S3_ACCESSKEY

    • REGISTRY_STORAGE_S3_SECRETKEY

Procedure
  1. Set up a Bucket Lifecycle Policy to abort incomplete multipart uploads that are one day old. An illustrative AWS CLI example follows this procedure.

  2. Fill in the storage configuration in configs.imageregistry.operator.openshift.io/cluster:

    $ oc edit configs.imageregistry.operator.openshift.io/cluster
    Example configuration
    apiVersion: imageregistry.operator.openshift.io/v1
    kind: Config
    metadata:
      name: cluster
    spec:
      storage:
        s3:
          bucket: <bucket_name>
          region: <region_name>
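
The bucket hardening mentioned in this section, blocking public access and aborting stale multipart uploads, can be applied with the AWS CLI. The following calls are examples only; adjust the rule ID and scope to your own policies:

    $ aws s3api put-public-access-block --bucket <bucket_name> \
         --public-access-block-configuration \
         BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
    $ aws s3api put-bucket-lifecycle-configuration --bucket <bucket_name> \
         --lifecycle-configuration '{"Rules":[{"ID":"abort-incomplete-multipart-uploads","Status":"Enabled","Filter":{"Prefix":""},"AbortIncompleteMultipartUpload":{"DaysAfterInitiation":1}}]}'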

Configuring storage for the image registry in non-production clusters

You must configure storage for the Image Registry Operator. For non-production clusters, you can set the image registry to an empty directory. If you do so, all images are lost if you restart the registry.

Procedure
  • To set the image registry storage to an empty directory:

    $ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

    Warning

    Configure this option only for non-production clusters.

    If you run this command before the Image Registry Operator initializes its components, the oc patch command fails with the following error:

    Error from server (NotFound): configs.imageregistry.operator.openshift.io "cluster" not found

    Wait a few minutes and run the command again.

Deleting the bootstrap resources

After you complete the initial Operator configuration for the cluster, remove the bootstrap resources from Amazon Web Services (AWS).

Prerequisites
  • You completed the initial Operator configuration for your cluster.

Procedure
  1. Delete the bootstrap resources. If you used the CloudFormation template, delete its stack:

    • Delete the stack by using the AWS CLI:

      $ aws cloudformation delete-stack --stack-name <name> 
      1. <name> is the name of your bootstrap stack.
    • Delete the stack by using the AWS CloudFormation console.
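
If you delete the stack from the CLI, you can optionally block until the deletion finishes before continuing. This uses a standard AWS CLI waiter:

    $ aws cloudformation wait stack-delete-complete --stack-name <name>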

Creating the Ingress DNS Records

If you removed the DNS Zone configuration, manually create DNS records that point to the Ingress load balancer. You can create either a wildcard record or specific records. While the following procedure uses A records, you can use other record types that you require, such as CNAME or alias.

Prerequisites
Procedure
  1. Determine the routes to create.

    • To create a wildcard record, use *.apps.<cluster_name>.<domain_name>, where <cluster_name> is your cluster name, and <domain_name> is the Route 53 base domain for your OpenShift Container Platform cluster.

    • To create specific records, you must create a record for each route that your cluster uses, as shown in the output of the following command:

      $ oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes
      Example output
      oauth-openshift.apps.<cluster_name>.<domain_name>
      console-openshift-console.apps.<cluster_name>.<domain_name>
      downloads-openshift-console.apps.<cluster_name>.<domain_name>
      alertmanager-main-openshift-monitoring.apps.<cluster_name>.<domain_name>
      prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<domain_name>
  2. Retrieve the Ingress Operator load balancer status and note the value of the external IP address that it uses, which is shown in the EXTERNAL-IP column:

    $ oc -n openshift-ingress get service router-default
    Example output
    NAME             TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)                      AGE
    router-default   LoadBalancer   172.30.62.215   ab3...28.us-east-2.elb.amazonaws.com   80:31499/TCP,443:30693/TCP   5m
  3. Locate the hosted zone ID for the load balancer:

    $ aws elb describe-load-balancers | jq -r '.LoadBalancerDescriptions[] | select(.DNSName == "<external_ip>").CanonicalHostedZoneNameID' 
    1. For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer that you obtained.
      Example output
      Z3AADJGX6KTTL2

    The output of this command is the load balancer hosted zone ID.

  4. Obtain the public hosted zone ID for your cluster’s domain:

    $ aws route53 list-hosted-zones-by-name \
                --dns-name "<domain_name>" \ 
                --query 'HostedZones[? Config.PrivateZone != `true` && Name == `<domain_name>.`].Id' \
                --output text
    1. For <domain_name>, specify the Route 53 base domain for your OpenShift Container Platform cluster.
      Example output
      /hostedzone/Z3URY6TWQ91KVV

      The public hosted zone ID for your domain is shown in the command output. In this example, it is Z3URY6TWQ91KVV.

  5. Add the alias records to your private zone:

    $ aws route53 change-resource-record-sets --hosted-zone-id "<private_hosted_zone_id>" --change-batch '{ 
    >   "Changes": [
    >     {
    >       "Action": "CREATE",
    >       "ResourceRecordSet": {
    >         "Name": "\\052.apps.<cluster_domain>", 
    >         "Type": "A",
    >         "AliasTarget":{
    >           "HostedZoneId": "<hosted_zone_id>", 
    >           "DNSName": "<external_ip>.", 
    >           "EvaluateTargetHealth": false
    >         }
    >       }
    >     }
    >   ]
    > }'
    1. For <private_hosted_zone_id>, specify the value from the output of the CloudFormation template for DNS and load balancing.
    2. For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.
    3. For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.
    4. For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.
  6. Add the records to your public zone:

    $ aws route53 change-resource-record-sets --hosted-zone-id "<public_hosted_zone_id>" --change-batch '{ 
    >   "Changes": [
    >     {
    >       "Action": "CREATE",
    >       "ResourceRecordSet": {
    >         "Name": "\\052.apps.<cluster_domain>", 
    >         "Type": "A",
    >         "AliasTarget":{
    >           "HostedZoneId": "<hosted_zone_id>", 
    >           "DNSName": "<external_ip>.", 
    >           "EvaluateTargetHealth": false
    >         }
    >       }
    >     }
    >   ]
    > }'
    1. For <public_hosted_zone_id>, specify the public hosted zone for your domain.
    2. For <cluster_domain>, specify the domain or subdomain that you use with your OpenShift Container Platform cluster.
    3. For <hosted_zone_id>, specify the public hosted zone ID for the load balancer that you obtained.
    4. For <external_ip>, specify the value of the external IP address of the Ingress Operator load balancer. Ensure that you include the trailing period (.) in this parameter value.
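
After the record changes propagate, you can spot-check name resolution from a host that can resolve your zones. For example, assuming the console route exists, a lookup such as the following should return the address of the Ingress load balancer (illustrative only):

    $ dig +short console-openshift-console.apps.<cluster_name>.<domain_name>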

Completing an AWS installation on user-provisioned infrastructure

After you start the OpenShift Container Platform installation on Amazon Web Service (AWS) user-provisioned infrastructure, monitor the deployment to completion.

Prerequisites
  • You removed the bootstrap node for an OpenShift Container Platform cluster on user-provisioned AWS infrastructure.

  • You installed the oc CLI.

Procedure
  1. From the directory that contains the installation program, complete the cluster installation:

    $ ./openshift-install --dir <installation_directory> wait-for install-complete 
    1. For <installation_directory>, specify the path to the directory that you stored the installation files in.
      Example output
      INFO Waiting up to 40m0s for the cluster at https://api.mycluster.example.com:6443 to initialize...
      INFO Waiting up to 10m0s for the openshift-console route to be created...
      INFO Install complete!
      INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
      INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
      INFO Login to the console with user: "kubeadmin", and password: "password"
      INFO Time elapsed: 1s

      Important

      • The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information.

      • It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.

  2. Register your cluster on the Cluster registration page.

Logging in to the cluster by using the CLI

To log in to your cluster as the default system user, export the kubeconfig file. This configuration enables the CLI to authenticate and connect to the specific API server created during OpenShift Container Platform installation.

The kubeconfig file is specific to a cluster and is created during OpenShift Container Platform installation.

Prerequisites
  • You deployed an OpenShift Container Platform cluster.

  • You installed the OpenShift CLI (oc).

Procedure
  1. Export the kubeadmin credentials by running the following command:

    $ export KUBECONFIG=<installation_directory>/auth/kubeconfig

    where:

    <installation_directory>

    Specifies the path to the directory that stores the installation files.

  2. Verify you can run oc commands successfully using the exported configuration by running the following command:

    $ oc whoami
    Example output
    system:admin

Logging in to the cluster by using the web console

The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.

Prerequisites
  • You have access to the installation host.

  • You completed a cluster installation and all cluster Operators are available.

Procedure
  1. Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

    $ cat <installation_directory>/auth/kubeadmin-password

    Note

    Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

  2. List the OpenShift Container Platform web console route:

    $ oc get routes -n openshift-console | grep 'console-openshift'

    Note

    Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

    Example output
    console     console-openshift-console.apps.<cluster_name>.<base_domain>            console     https   reencrypt/Redirect   None
  3. Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.

Additional resources

Next steps