ROSA STS AWS prerequisites

OpenShift Container Platform provides a model that allows Red Hat to deploy clusters into a customer’s existing Amazon Web Services (AWS) account.

Ensure that the following prerequisites are met before installing your cluster.

AWS account

To deploy an OpenShift Container Platform cluster, your AWS account must meet the following requirements.

  • Your AWS account must allow sufficient quota to deploy your cluster.

  • If your organization applies and enforces service control policies (SCPs), these policies must not be more restrictive than the roles and policies required by the cluster.

  • You can deploy native AWS services within the same AWS account.

  • Your account must have a service-linked role to allow the installation program to configure Elastic Load Balancing (ELB). See "Creating the Elastic Load Balancing (ELB) service-linked role" for more information.

Support requirements

To receive Red Hat support, your AWS account must have a qualifying support plan and the required permissions.

  • Red Hat recommends that the customer have at least Business Support from AWS.

  • Red Hat may have permission from the customer to request AWS support on their behalf.

  • Red Hat may have permission from the customer to request AWS resource limit increases on the customer’s account.

  • Red Hat manages the restrictions, limitations, expectations, and defaults for all OpenShift Container Platform clusters in the same manner, unless otherwise specified in this requirements section.

Security requirements

Before deploying your cluster, ensure that your network allows the required ingress and egress access to certain domains and IP addresses.

  • Red Hat must have ingress access to EC2 hosts and the API server from allow-listed IP addresses.

  • Red Hat must have egress allowed to the domains documented in the "AWS Firewall prerequisites" section.

Requirements for using OpenShift Cluster Manager

The following configuration details are required only if you use OpenShift Cluster Manager to manage your clusters. If you use the CLI tools exclusively, then you can disregard these requirements.

AWS account association

When you provision OpenShift Container Platform using OpenShift Cluster Manager (console.redhat.com), you must associate the ocm-role and user-role IAM roles with your AWS account using your Amazon Resource Name (ARN). This association process is also known as account linking.

The ocm-role ARN is stored as a label in your Red Hat organization while the user-role ARN is stored as a label inside your Red Hat user account. Red Hat uses these ARN labels to confirm that the user is a valid account holder and that the correct permissions are available to perform provisioning tasks in the AWS account.

Associating your AWS account with IAM roles

You can associate or link your AWS account with existing IAM roles by using the ROSA command-line interface (CLI) (rosa).

Prerequisites
  • You have an AWS account.

  • You have the permissions required to install AWS account-wide roles. See the "Additional resources" of this section for more information.

  • You have installed and configured the latest AWS CLI (aws) and ROSA CLI on your installation host.

  • You have created the ocm-role and user-role IAM roles, but have not yet linked them to your AWS account. You can check whether your IAM roles are already linked by running the following commands:

    $ rosa list ocm-role
    $ rosa list user-role

    If Yes is displayed in the Linked column for both roles, you have already linked the roles to an AWS account.

Procedure
  1. In the ROSA CLI, link your ocm-role resource to your Red Hat organization by using your Amazon Resource Name (ARN):

    Note

    You must have Red Hat Organization Administrator privileges to run the rosa link command. After you link the ocm-role resource with your AWS account, it takes effect and is visible to all users in the organization.

    $ rosa link ocm-role --role-arn <arn>

    For example:

    I: Linking OCM role
    ? Link the '<AWS ACCOUNT ID>' role with organization '<ORG ID>'? Yes
    I: Successfully linked role-arn '<AWS ACCOUNT ID>' with organization account '<ORG ID>'
  2. In the ROSA CLI, link your user-role resource to your Red Hat user account by using your Amazon Resource Name (ARN):

    $ rosa link user-role --role-arn <arn>

    For example:

    I: Linking User role
    ? Link the 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' role with organization '<AWS ID>'? Yes
    I: Successfully linked role-arn 'arn:aws:iam::<ARN>:role/ManagedOpenShift-User-Role-125' with organization account '<AWS ID>'

Associating multiple AWS accounts with your Red Hat organization

You can associate multiple AWS accounts with your Red Hat organization. Associating multiple accounts lets you create OpenShift Container Platform clusters on any of the associated AWS accounts from your Red Hat organization.

With this capability, you can create clusters on different AWS profiles according to characteristics that make sense for your business, for example, by using one AWS profile for each region to create region-bound environments.

Prerequisites
  • You have an AWS account.

  • You are using OpenShift Cluster Manager to create clusters.

  • You have the permissions required to install AWS account-wide roles.

  • You have installed and configured the latest AWS CLI (aws) and ROSA command-line interface (CLI) (rosa) on your installation host.

  • You have created the ocm-role and user-role IAM roles for OpenShift Container Platform.

Procedure
  • To specify an AWS account profile when creating an OpenShift Cluster Manager role:

    $ rosa create --profile <aws_profile> ocm-role
  • To specify an AWS account profile when creating a user role:

    $ rosa create --profile <aws_profile> user-role
  • To specify an AWS account profile when creating the account roles:

    $ rosa create --profile <aws_profile> account-roles

    Note

    If you do not specify a profile, the default AWS profile and its associated AWS region are used.
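The `--profile` flag in the commands above refers to a named profile in your AWS CLI configuration. A minimal sketch of such a configuration, assuming two region-bound profiles (the profile names and regions are example values):

```ini
# ~/.aws/config -- profile names and regions are example values
[profile us-east-prod]
region = us-east-1

[profile eu-west-prod]
region = eu-west-1
```

The matching credentials are read from `~/.aws/credentials` under the same profile names, so `rosa create ocm-role --profile us-east-prod` uses the credentials and region of the `us-east-prod` profile.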

Requirements for deploying a cluster in an opt-in region

An AWS opt-in region is a region that is not enabled in your AWS account by default. If you want to deploy an OpenShift Container Platform cluster that uses the AWS Security Token Service (STS) in an opt-in region, you must meet the following requirements:

  • The region must be enabled in your AWS account. For more information about enabling opt-in regions, see Managing AWS Regions in the AWS documentation.

  • The security token version in your AWS account must be set to version 2. You cannot use version 1 security tokens for opt-in regions.

    Important

    Updating to security token version 2 can impact the systems that store the tokens, due to the increased token length. For more information, see the AWS documentation on setting STS preferences.

Setting the AWS security token version

If you want to create an OpenShift Container Platform cluster with the AWS Security Token Service (STS) in an AWS opt-in region, you must set the security token version to version 2 in your AWS account.

Prerequisites
  • You have installed and configured the latest AWS CLI on your installation host.

Procedure
  1. List the ID of the AWS account that is defined in your AWS CLI configuration:

    $ aws sts get-caller-identity --query Account --output json

    Ensure that the output matches the ID of the relevant AWS account.

  2. List the security token version that is set in your AWS account:

    $ aws iam get-account-summary --query SummaryMap.GlobalEndpointTokenVersion --output json

    For example:

    1
  3. To update the security token version to version 2 for all regions in your AWS account, run the following command:

    $ aws iam set-security-token-service-preferences --global-endpoint-token-version v2Token

    Important

    Updating to security token version 2 can impact the systems that store the tokens, due to the increased token length. For more information, see the AWS documentation on setting STS preferences.
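The check-and-update steps above can be sketched as a single shell function. This is an illustrative wrapper, not part of the ROSA tooling; it assumes the aws CLI is installed and configured for the relevant account.

```shell
# ensure_token_v2 is a hypothetical helper: it reads the current global
# endpoint token version and updates it to version 2 only when needed.
ensure_token_v2() {
  current=$(aws iam get-account-summary \
    --query SummaryMap.GlobalEndpointTokenVersion --output text)
  if [ "$current" != "2" ]; then
    aws iam set-security-token-service-preferences \
      --global-endpoint-token-version v2Token
  fi
}
```

Running the function a second time is a no-op, because the version check skips the update once the account already reports version 2.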

Red Hat managed IAM references for AWS

When you use STS as your cluster credential method, Red Hat is not responsible for creating and managing Amazon Web Services (AWS) IAM policies, IAM users, or IAM roles. For information on creating these roles and policies, see the following sections on IAM roles.

  • To use the ocm CLI, you must have an ocm-role and user-role resource.

Provisioned AWS Infrastructure

This is an overview of the provisioned Amazon Web Services (AWS) components on a deployed OpenShift Container Platform cluster.

EC2 instances

AWS EC2 instances are required to deploy the control plane and data plane functions for OpenShift Container Platform. Instance types can vary for control plane and infrastructure nodes, depending on the worker node count.

At a minimum, the following EC2 instances are deployed:

  • Three m5.2xlarge control plane nodes

  • Two r5.xlarge infrastructure nodes

  • Two m5.xlarge worker nodes

The instance type shown for worker nodes is the default value, but you can customize the instance type for worker nodes according to the needs of your workload.
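For example, the worker instance type can be overridden at creation time with the `--compute-machine-type` flag of the ROSA CLI. The cluster name and `m5.4xlarge` instance type below are example values, and the helper function is hypothetical, shown only to illustrate the flag:

```shell
# create_cluster_with_workers is a hypothetical wrapper around
# "rosa create cluster"; arguments are the cluster name and the
# desired worker (compute) instance type.
create_cluster_with_workers() {
  rosa create cluster --sts \
    --cluster-name "$1" \
    --compute-machine-type "$2"
}
```

Example usage: `create_cluster_with_workers my-cluster m5.4xlarge`.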

Amazon Elastic Block Store storage

Amazon Elastic Block Store (Amazon EBS) block storage is used for both local node storage and persistent volume storage. By default, the following storage is provisioned for each EC2 instance:

  • Control Plane Volume

    • Size: 350 GiB

    • Type: gp3

    • Input/Output Operations Per Second: 1000

  • Infrastructure Volume

    • Size: 300 GiB

    • Type: gp3

    • Input/Output Operations Per Second: 900

  • Worker Volume

    • Default size: 300 GiB (adjustable at creation time)

    • Minimum size: 128 GiB

    • Type: gp3

    • Input/Output Operations Per Second: 900

Note

Clusters deployed before the release of OpenShift Container Platform 4.11 use gp2 type storage by default.

Elastic Load Balancing

Each cluster can use up to two Classic Load Balancers for the application router and up to two Network Load Balancers for the API. For more information, see the ELB documentation for AWS.

S3 storage

The image registry is backed by AWS S3 storage. Resources are pruned regularly to optimize S3 usage and cluster performance.

Note

Two buckets are required with a typical size of 2TB each.

VPC

Configure your VPC according to the following requirements:

  • Subnets: Every cluster requires a minimum of one private subnet for every availability zone. For example, 1 private subnet is required for a single-zone cluster, and 3 private subnets are required for a cluster with 3 availability zones.

    If your cluster needs direct access to a network that is external to the cluster, including the public internet, you require at least one public subnet.

    Red Hat strongly recommends using unique subnets for each cluster. Sharing subnets between multiple clusters is not recommended.

    Note

    A public subnet connects directly to the internet through an internet gateway.

    A private subnet connects to the internet through a network address translation (NAT) gateway.

  • Route tables: One route table per private subnet, and one additional table per cluster.

  • Internet gateways: One internet gateway per cluster.

  • NAT gateways: One NAT gateway per public subnet.

Figure 1. Sample VPC Architecture

Security groups

AWS security groups provide security at the protocol and port access level; they are associated with EC2 instances and Elastic Load Balancing (ELB) load balancers. Each security group contains a set of rules that filter traffic coming in and out of one or more EC2 instances.

Ensure that the ports required for cluster installation and operation are open on your network and configured to allow access between hosts. The requirements for the default security groups are listed in Required ports for default security groups.

Table 1. Required ports for default security groups
Group                    Type                      IP Protocol   Port range

MasterSecurityGroup      AWS::EC2::SecurityGroup   icmp          0
                                                   tcp           22
                                                   tcp           6443
                                                   tcp           22623

WorkerSecurityGroup      AWS::EC2::SecurityGroup   icmp          0
                                                   tcp           22

BootstrapSecurityGroup   AWS::EC2::SecurityGroup   tcp           22
                                                   tcp           19531

Additional custom security groups

When you create a cluster using an existing non-managed VPC, you can add additional custom security groups during cluster creation. Custom security groups are subject to the following limitations:

  • You must create the custom security groups in AWS before you create the cluster. For more information, see Amazon EC2 security groups for Linux instances.

  • You must associate the custom security groups with the VPC that the cluster will be installed into. Your custom security groups cannot be associated with another VPC.

  • You might need to request additional quota for your VPC if you are adding additional custom security groups. For information on AWS quota requirements for OpenShift Container Platform see Required AWS service quotas in Prepare your environment. For information on requesting an AWS quota increase, see Requesting a quota increase.
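Creating such a custom security group ahead of cluster creation can be sketched with the AWS CLI. The group name, description, and VPC ID are placeholder values, and the helper function is hypothetical:

```shell
# create_custom_sg is a hypothetical helper that creates a security
# group in the target VPC; it must run before cluster creation, and
# the VPC must be the one the cluster will be installed into.
create_custom_sg() {
  sg_name="$1"   # e.g. my-custom-sg
  vpc_id="$2"    # e.g. vpc-0abc123
  aws ec2 create-security-group \
    --group-name "$sg_name" \
    --description "Custom security group for ROSA nodes" \
    --vpc-id "$vpc_id"
}
```

The returned security group ID can then be supplied during cluster creation when you select additional custom security groups.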

Networking prerequisites

The following sections detail the requirements to create your cluster.

Minimum bandwidth

During cluster deployment, OpenShift Container Platform requires a minimum bandwidth of 120 Mbps between cluster infrastructure and the public internet or private network locations that provide deployment artifacts and resources. When network connectivity is slower than 120 Mbps (for example, when connecting through a proxy) the cluster installation process times out and deployment fails.

After cluster deployment, network requirements are determined by your workload. However, a minimum bandwidth of 120 Mbps helps to ensure timely cluster and operator upgrades.

If you are using a firewall to control egress traffic from your OpenShift Container Platform cluster, you must configure your firewall to grant access to the domain and port combinations documented in the "AWS Firewall prerequisites" section. OpenShift Container Platform requires this access to provide a fully managed OpenShift service.

You must also configure an Amazon S3 gateway endpoint in your AWS Virtual Private Cloud (VPC). This endpoint is required to complete requests from the cluster to the Amazon S3 service.
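Creating the required S3 gateway endpoint can be sketched with the AWS CLI. The VPC ID, route table ID, and region are placeholder values, and the helper function is hypothetical:

```shell
# create_s3_gateway_endpoint is a hypothetical helper that attaches an
# Amazon S3 gateway endpoint to the given VPC and route table.
create_s3_gateway_endpoint() {
  vpc_id="$1"                 # e.g. vpc-0abc123
  route_table_id="$2"         # e.g. rtb-0abc123
  region="${3:-us-east-1}"    # example region
  aws ec2 create-vpc-endpoint \
    --vpc-id "$vpc_id" \
    --vpc-endpoint-type Gateway \
    --service-name "com.amazonaws.${region}.s3" \
    --route-table-ids "$route_table_id"
}
```

A gateway endpoint routes S3 traffic through the VPC route table rather than the public internet, which is why the route table ID used by the cluster subnets must be supplied.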