Architecture models

OpenShift Container Platform uses a classic architecture cluster topology, meaning that the control plane and the worker nodes are deployed in the customer’s AWS account.

Comparing Red Hat OpenShift Service on AWS and Red Hat OpenShift Service on AWS (classic architecture)

Table 1. Red Hat OpenShift Service on AWS and Red Hat OpenShift Service on AWS (classic architecture) architectures comparison table

Control plane hosting

  • Hosted Control Plane (HCP): Control plane components, such as the API server and the etcd database, are hosted in a Red Hat-owned AWS account.

  • Classic: Control plane components, such as the API server and the etcd database, are hosted in a customer-owned AWS account.

Virtual Private Cloud (VPC)

  • HCP: Worker nodes communicate with the control plane over AWS PrivateLink.

  • Classic: Worker nodes and control plane nodes are deployed in the customer’s VPC.

Multi-zone deployment

  • HCP: The control plane is always deployed across multiple availability zones (AZs).

  • Classic: The control plane can be deployed within a single AZ or across multiple AZs.

Machine pools

  • HCP: Each machine pool is deployed in a single AZ (private subnet).

  • Classic: Machine pools can be deployed in a single AZ or across multiple AZs.

Infrastructure nodes

  • HCP: Does not use dedicated infrastructure nodes to host platform components, such as ingress and the image registry.

  • Classic: Uses 2 (single-AZ) or 3 (multi-AZ) dedicated infrastructure nodes to host platform components.

OpenShift capabilities

  • HCP: Platform monitoring, the image registry, and the ingress controller are deployed on the worker nodes.

  • Classic: Platform monitoring, the image registry, and the ingress controller are deployed on the dedicated infrastructure nodes.

Cluster upgrades

  • HCP: The control plane and each machine pool can be upgraded separately.

  • Classic: The entire cluster must be upgraded at the same time.

Minimum EC2 footprint

  • HCP: 2 EC2 instances are needed to create a cluster.

  • Classic: 7 (single-AZ) or 9 (multi-AZ) EC2 instances are needed to create a cluster.
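The two topologies above are selected at cluster creation time. As a hedged sketch, the rosa CLI distinguishes them with the --hosted-cp flag; the cluster names and subnet IDs below are placeholders, and the exact flags available depend on your rosa CLI version (check rosa create cluster --help):

```shell
# Classic architecture: control plane and worker nodes are created in your
# AWS account. Placeholder cluster name; --sts uses AWS STS credentials.
rosa create cluster --cluster-name my-classic-cluster --sts --multi-az

# Hosted Control Plane (HCP): control plane components run in a Red Hat-owned
# AWS account. HCP clusters require an existing VPC, so the (placeholder)
# private subnet IDs are passed explicitly.
rosa create cluster --cluster-name my-hcp-cluster --sts --hosted-cp \
  --subnet-ids subnet-aaaa1111,subnet-bbbb2222
```

Note that the minimum EC2 footprint in the table counts only instances in the customer account, which is why the HCP figure is lower: its control plane instances live in the Red Hat-owned account.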

AWS PrivateLink architecture

The Red Hat managed infrastructure that creates AWS PrivateLink clusters is hosted on private subnets. The connection between Red Hat and the customer-provided infrastructure is created through AWS PrivateLink VPC endpoints.

Note

AWS PrivateLink is supported on existing VPCs only.
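For ROSA PrivateLink clusters, Red Hat provisions the endpoint service and the VPC endpoints on your behalf. Purely as an illustration of the underlying AWS mechanism, an interface endpoint is created in a VPC like this; every ID and the service name below are placeholders:

```shell
# Illustrative only: creating an AWS PrivateLink interface endpoint with the
# AWS CLI. For ROSA clusters Red Hat manages this connection; do not create
# it manually. All IDs and the service name are placeholders.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0123456789abcdef0 \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.vpce.us-east-1.vpce-svc-placeholder \
  --subnet-ids subnet-aaaa1111 subnet-bbbb2222 subnet-cccc3333 \
  --security-group-ids sg-0123456789abcdef0
```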

The following diagram shows network connectivity of a PrivateLink cluster.

Figure 1. Multi-AZ AWS PrivateLink cluster deployed on private subnets

AWS reference architectures

AWS provides multiple reference architectures that can be useful to customers when planning how to set up a configuration that uses AWS PrivateLink. Here are three examples:

Note

A public subnet connects directly to the internet through an internet gateway. A private subnet connects to the internet through a network address translation (NAT) gateway.

  • VPC with a private subnet and AWS Site-to-Site VPN access

    This configuration enables you to extend your network into the cloud without exposing your network to the internet.

    To enable communication with your network over an Internet Protocol Security (IPsec) VPN tunnel, this configuration contains a virtual private cloud (VPC) with a single private subnet and a virtual private gateway. Communication over the internet does not use an internet gateway.

    For more information, see VPC with a private subnet only and AWS Site-to-Site VPN access in the AWS documentation.

  • VPC with public and private subnets (NAT)

    This configuration enables you to isolate your network so that the public subnet is reachable from the internet but the private subnet is not.

    Only the public subnet can send outbound traffic directly to the internet. The private subnet can access the internet by using a network address translation (NAT) gateway that resides in the public subnet. This allows database servers to connect to the internet for software updates using the NAT gateway, but does not allow connections to be made directly from the internet to the database servers.

    For more information, see VPC with public and private subnets (NAT) in the AWS documentation.

  • VPC with public and private subnets and AWS Site-to-Site VPN access

    This configuration enables you to extend your network into the cloud and to directly access the internet from your VPC.

    You can run a multi-tiered application with a scalable web front end in a public subnet, and house your data in a private subnet that is connected to your network by an IPsec AWS Site-to-Site VPN connection.

    For more information, see VPC with public and private subnets and AWS Site-to-Site VPN access in the AWS documentation.
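The second reference architecture above (public and private subnets with NAT) can be sketched with the AWS CLI. This is a minimal, hedged outline rather than a production setup; the CIDR ranges are examples, and a real deployment would also wait for the NAT gateway to become available before adding its route:

```shell
# Sketch of the "VPC with public and private subnets (NAT)" reference
# architecture. Example CIDR ranges; single AZ for brevity.
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query Vpc.VpcId --output text)

PUB_SUBNET=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
  --cidr-block 10.0.0.0/24 --query Subnet.SubnetId --output text)
PRIV_SUBNET=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
  --cidr-block 10.0.1.0/24 --query Subnet.SubnetId --output text)

# The internet gateway gives the public subnet a direct route to the internet.
IGW_ID=$(aws ec2 create-internet-gateway \
  --query InternetGateway.InternetGatewayId --output text)
aws ec2 attach-internet-gateway --internet-gateway-id "$IGW_ID" --vpc-id "$VPC_ID"

# A NAT gateway in the public subnet gives the private subnet outbound-only
# access; inbound connections from the internet cannot reach private hosts.
EIP_ID=$(aws ec2 allocate-address --domain vpc --query AllocationId --output text)
NAT_ID=$(aws ec2 create-nat-gateway --subnet-id "$PUB_SUBNET" \
  --allocation-id "$EIP_ID" --query NatGateway.NatGatewayId --output text)

# Route tables: public subnet routes 0.0.0.0/0 to the IGW, private to the NAT.
PUB_RT=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query RouteTable.RouteTableId --output text)
aws ec2 create-route --route-table-id "$PUB_RT" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$PUB_RT" --subnet-id "$PUB_SUBNET"

PRIV_RT=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query RouteTable.RouteTableId --output text)
aws ec2 create-route --route-table-id "$PRIV_RT" \
  --destination-cidr-block 0.0.0.0/0 --nat-gateway-id "$NAT_ID"
aws ec2 associate-route-table --route-table-id "$PRIV_RT" --subnet-id "$PRIV_SUBNET"
```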

OpenShift Container Platform with Local Zones

OpenShift Container Platform supports the use of AWS Local Zones, which are metropolitan-area extensions of AWS Regions where customers can place latency-sensitive application workloads within a VPC. Local Zones are not enabled by default. When Local Zones are enabled and configured, traffic can be extended into them for greater flexibility and lower latency. For more information, see "Configuring machine pools in Local Zones".
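Extending traffic into a Local Zone means placing a machine pool on a subnet that resides in an opted-in Local Zone within the cluster's VPC. As a rough sketch, assuming your rosa CLI version supports a per-machine-pool subnet flag (the cluster name and subnet ID are placeholders):

```shell
# Illustrative: create a machine pool whose nodes run in a Local Zone subnet.
# The subnet must already exist in an opted-in Local Zone in the cluster VPC.
rosa create machinepool --cluster my-cluster \
  --name local-zone-pool \
  --subnet subnet-localzone-placeholder \
  --replicas 2
```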

The following diagram displays an OpenShift Container Platform cluster without traffic routed into a Local Zone.

Figure 2. OpenShift Container Platform cluster without traffic routed into Local Zones

The following diagram displays an OpenShift Container Platform cluster with traffic routed into a Local Zone.

Figure 3. OpenShift Container Platform cluster with traffic routed into Local Zones