OpenShift Container Platform service definition
This documentation outlines the service definition for the OpenShift Container Platform managed service.
Account management
This section provides information about the service definition for OpenShift Container Platform account management.
Billing and pricing
OpenShift Container Platform is billed directly to your Amazon Web Services (AWS) account. ROSA pricing is consumption based, with annual commitments or three-year commitments for greater discounting. The total cost of ROSA consists of two components:
- ROSA service fees
- AWS infrastructure fees
Visit the OpenShift Container Platform Pricing page on the AWS website for more details.
Cluster self-service
Customers can self-service their clusters, including, but not limited to:
- Create a cluster
- Delete a cluster
- Add or remove an identity provider
- Add or remove a user from an elevated group
- Configure cluster privacy
- Add or remove machine pools and configure autoscaling
- Define upgrade policies
You can perform these self-service tasks using the OpenShift Container Platform (ROSA) CLI, rosa.
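For example, the following ROSA CLI commands sketch a few of these self-service tasks. The cluster and machine pool names are placeholders:
$ rosa create cluster --cluster-name=my-cluster
$ rosa create machinepool --cluster=my-cluster --name=db-pool --replicas=3
$ rosa delete cluster --cluster=my-cluster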
Instance types
Single availability zone clusters require a minimum of 3 control plane nodes, 2 infrastructure nodes, and 2 worker nodes deployed to a single availability zone.
Multiple availability zone clusters require a minimum of 3 control plane nodes, 3 infrastructure nodes, and 3 worker nodes.
Consider the following limitations when deploying and managing workloads:
- You must deploy workloads on worker nodes that exist in the cluster by using OpenShift Container Platform machine pools.
- Do not run workloads that you consider essential on the control plane and infrastructure nodes as daemonsets.
- You must ensure that any workloads running on these nodes are secure, scalable, and compatible with a version of OpenShift Container Platform, so that the Service Level Agreement (SLA) for API server availability is not impacted.
Red Hat might notify you and resize the control plane or infrastructure nodes if the OpenShift Container Platform components are impacted.
Control plane and infrastructure nodes are deployed and managed by Red Hat. These nodes are automatically resized based on the resource use. If you need to resize these nodes to meet cluster demands, open a support case.
Warning
Shutting down the underlying infrastructure through the cloud provider console is unsupported and can lead to data loss.
See the following Red Hat Operator support section for more information about Red Hat workloads that must be deployed on worker nodes.
Note
Approximately one vCPU core and 1 GiB of memory are reserved on each worker node and removed from allocatable resources. This reservation of resources is necessary to run processes required by the underlying platform. These processes include system daemons such as udev, kubelet, and container runtime among others. The reserved resources also account for kernel reservations.
OpenShift/ROSA core systems such as audit log aggregation, metrics collection, DNS, image registry, CNI/OVN-Kubernetes, and others might consume additional allocatable resources to maintain the stability and maintainability of the cluster. The additional resources consumed might vary based on usage.
For additional information, see the Kubernetes documentation.
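As a quick check of this reservation, you can compare a node's total capacity with its allocatable resources by using oc. The node name is a placeholder:
$ oc get node <node_name> -o jsonpath='{.status.capacity}{"\n"}{.status.allocatable}{"\n"}'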
Regions and availability zones
The following AWS regions are currently available for Red Hat OpenShift 4 and are supported for OpenShift Container Platform.
Note
Regions in China are not supported, regardless of their support on OpenShift 4.
Note
For GovCloud (US) regions, you must submit an Access request for Red Hat OpenShift Service on AWS (ROSA) FedRAMP.
The following AWS GovCloud regions are supported:
- us-gov-west-1
- us-gov-east-1
For more information about AWS GovCloud regions, see the AWS GovCloud (US) User Guide.
| Region | Location | Minimum ROSA version required | AWS opt-in required |
|---|---|---|---|
| us-east-1 | N. Virginia | 4.14 | No |
| us-east-2 | Ohio | 4.14 | No |
| us-west-1 | N. California | 4.14 | No |
| us-west-2 | Oregon | 4.14 | No |
| af-south-1 | Cape Town | 4.14 | Yes |
| ap-east-1 | Hong Kong | 4.14 | Yes |
| ap-south-2 | Hyderabad | 4.14 | Yes |
| ap-southeast-3 | Jakarta | 4.14 | Yes |
| ap-southeast-4 | Melbourne | 4.14 | Yes |
| ap-south-1 | Mumbai | 4.14 | No |
| ap-northeast-3 | Osaka | 4.14 | No |
| ap-northeast-2 | Seoul | 4.14 | No |
| ap-southeast-1 | Singapore | 4.14 | No |
| ap-southeast-2 | Sydney | 4.14 | No |
| ap-northeast-1 | Tokyo | 4.14 | No |
| ca-central-1 | Central Canada | 4.14 | No |
| eu-central-1 | Frankfurt | 4.14 | No |
| eu-north-1 | Stockholm | 4.14 | No |
| eu-west-1 | Ireland | 4.14 | No |
| eu-west-2 | London | 4.14 | No |
| eu-south-1 | Milan | 4.14 | Yes |
| eu-west-3 | Paris | 4.14 | No |
| eu-south-2 | Spain | 4.14 | Yes |
| eu-central-2 | Zurich | 4.14 | Yes |
| me-south-1 | Bahrain | 4.14 | Yes |
| me-central-1 | UAE | 4.14 | Yes |
| sa-east-1 | São Paulo | 4.14 | No |
| il-central-1 | Tel Aviv | 4.15 | Yes |
| ca-west-1 | Calgary | 4.14 | Yes |
| us-gov-east-1 | AWS GovCloud - US-East | 4.14 | No |
| us-gov-west-1 | AWS GovCloud - US-West | 4.14 | No |
Multiple availability zone clusters can only be deployed in regions with at least 3 availability zones. For more information, see the Regions and Availability Zones section in the AWS documentation.
Each new OpenShift Container Platform cluster is installed within an installer-created or preexisting Virtual Private Cloud (VPC) in a single region, with the option to deploy into a single availability zone (Single-AZ) or across multiple availability zones (Multi-AZ). This provides cluster-level network and resource isolation, and enables cloud-provider VPC settings, such as VPN connections and VPC Peering. Persistent volumes (PVs) are backed by Amazon Elastic Block Store (Amazon EBS) and are specific to the availability zone in which they are provisioned. Persistent volume claims (PVCs) do not bind to a volume until the associated pod resource is assigned to a specific availability zone, which prevents unschedulable pods. Availability zone-specific resources are only usable by resources in the same availability zone.
Warning
The region and the choice of single or multiple availability zone cannot be changed after a cluster has been deployed.
Local Zones
OpenShift Container Platform supports the use of AWS Local Zones, which are metropolis-centralized availability zones where customers can place latency-sensitive application workloads. Local Zones are extensions of AWS Regions that have their own internet connection. For more information about AWS Local Zones, see the AWS documentation How Local Zones work.
Service Level Agreement (SLA)
Any SLAs for the service itself are defined in Appendix 4 of the Red Hat Enterprise Agreement (Online Subscription Services).
Limited support status
When a cluster transitions to a Limited Support status, Red Hat no longer proactively monitors the cluster, the SLA is no longer applicable, and credits requested against the SLA are denied. It does not mean that you no longer have product support. In some cases, the cluster can return to a fully-supported status if you remediate the violating factors. However, in other cases, you might have to delete and recreate the cluster.
A cluster might move to a Limited Support status for many reasons, including the following scenarios:
- If you do not upgrade a cluster to a supported version before the end-of-life date
Red Hat does not make any runtime or SLA guarantees for versions after their end-of-life date. To receive continued support, upgrade the cluster to a supported version prior to the end-of-life date. If you do not upgrade the cluster prior to the end-of-life date, the cluster transitions to a Limited Support status until it is upgraded to a supported version.
Red Hat provides commercially reasonable support to upgrade from an unsupported version to a supported version. However, if a supported upgrade path is no longer available, you might have to create a new cluster and migrate your workloads.
- If you remove or replace any native OpenShift Container Platform components or any other component that is installed and managed by Red Hat
If cluster administrator permissions were used, Red Hat is not responsible for any of your or your authorized users’ actions, including those that affect infrastructure services, service availability, or data loss. If Red Hat detects any such actions, the cluster might transition to a Limited Support status. Red Hat notifies you of the status change and you should either revert the action or create a support case to explore remediation steps that might require you to delete and recreate the cluster.
If you have questions about a specific action that might cause a cluster to move to a Limited Support status or need further assistance, open a support ticket.
Support
OpenShift Container Platform includes Red Hat Premium Support, which can be accessed by using the Red Hat Customer Portal.
See the Red Hat Production Support Terms of Service for support response times.
AWS support is subject to a customer’s existing support contract with AWS.
Logging
OpenShift Container Platform provides optional integrated log forwarding to Amazon (AWS) CloudWatch.
Cluster audit logging
Cluster audit logs are available through AWS CloudWatch, if the integration is enabled. If the integration is not enabled, you can request the audit logs by opening a support case.
Application logging
Application logs sent to STDOUT are collected by Fluentd and forwarded to AWS CloudWatch through the cluster logging stack, if it is installed.
Monitoring
This section provides information about the service definition for OpenShift Container Platform monitoring.
Cluster metrics
OpenShift Container Platform clusters come with an integrated Prometheus stack for cluster monitoring including CPU, memory, and network-based metrics. This is accessible through the web console. These metrics also allow for horizontal pod autoscaling based on CPU or memory metrics provided by a ROSA user.
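For example, assuming a deployment named frontend exists in the current project, a horizontal pod autoscaler driven by these CPU metrics can be created with:
$ oc autoscale deployment/frontend --min=2 --max=10 --cpu-percent=75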
Cluster notifications
Cluster notifications (sometimes referred to as service logs) are messages about the status, health, or performance of your cluster.
Cluster notifications are the primary way that Red Hat Site Reliability Engineering (SRE) communicates with you about the health of your managed cluster. Red Hat SRE may also use cluster notifications to prompt you to perform an action in order to resolve or prevent an issue with your cluster.
Cluster owners and administrators must regularly review and action cluster notifications to ensure clusters remain healthy and supported.
You can view cluster notifications in the Red Hat Hybrid Cloud Console, in the Cluster history tab for your cluster. By default, only the cluster owner receives cluster notifications as emails. If other users need to receive cluster notification emails, add each user as a notification contact for your cluster.
Networking
This section provides information about the service definition for ROSA networking.
Custom domains for applications
Warning
Starting with OpenShift Container Platform 4.14, the Custom Domain Operator is deprecated. To manage Ingress in ROSA 4.14 or later, use the Ingress Operator.
To use a custom hostname for a route, you must update your DNS provider by creating a canonical name (CNAME) record. Your CNAME record should map the OpenShift canonical router hostname to your custom domain. The OpenShift canonical router hostname is shown on the Route Details page after a route is created. Alternatively, a wildcard CNAME record can be created once to route all subdomains for a given hostname to the cluster’s router.
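For illustration, a wildcard CNAME record in a DNS zone file might look like the following. Both the custom domain and the canonical router hostname are hypothetical placeholders; use the value shown on your Route Details page:
*.apps.example.com.    IN    CNAME    router-default.apps.my-cluster.abcd.p1.openshiftapps.com.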
Domain validated certificates
ROSA includes TLS security certificates needed for both internal and external services on the cluster. For external routes, there are two separate TLS wildcard certificates that are provided and installed on each cluster: one is for the web console and route default hostnames, and the other is for the API endpoint. Let’s Encrypt is the certificate authority used for certificates. Routes within the cluster, such as the internal API endpoint, use TLS certificates signed by the cluster’s built-in certificate authority and require the CA bundle available in every pod for trusting the TLS certificate.
Custom certificate authorities for builds
ROSA supports the use of custom certificate authorities to be trusted by builds when pulling images from an image registry.
Load balancers
OpenShift Container Platform uses up to five different load balancers:
- An internal control plane load balancer that is internal to the cluster and used to balance traffic for internal cluster communications.
- An external control plane load balancer that is used for accessing the OpenShift and Kubernetes APIs. This load balancer can be disabled in OpenShift Cluster Manager. If this load balancer is disabled, Red Hat reconfigures the API DNS to point to the internal control plane load balancer.
- An external control plane load balancer for Red Hat that is reserved for cluster management by Red Hat. Access is strictly controlled, and communication is only possible from whitelisted bastion hosts.
- A default external router/ingress load balancer that is the default application load balancer, denoted by apps in the URL. The default load balancer can be configured in OpenShift Cluster Manager to be either publicly accessible over the Internet or only privately accessible over a pre-existing private connection. All application routes on the cluster are exposed on this default router load balancer, including cluster services such as the logging UI, metrics API, and registry.
- Optional: A secondary router/ingress load balancer that is a secondary application load balancer, denoted by apps2 in the URL. The secondary load balancer can be configured in OpenShift Cluster Manager to be either publicly accessible over the Internet or only privately accessible over a pre-existing private connection. If a Label match is configured for this router load balancer, then only application routes matching this label are exposed on this router load balancer; otherwise, all application routes are also exposed on this router load balancer.
- Optional: Load balancers for services. These load balancers can be mapped to a service running on OpenShift Container Platform to enable advanced ingress features, such as non-HTTP/SNI traffic or the use of non-standard ports (see the example following this list). Each AWS account has a quota that limits the number of Classic Load Balancers that can be used within each cluster.
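As an illustrative sketch, a Service of type LoadBalancer can expose a non-HTTP workload on a non-standard port. The service name, selector, and port are hypothetical:
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: tcp-echo
spec:
  type: LoadBalancer
  selector:
    app: tcp-echo
  ports:
  - protocol: TCP
    port: 2222
    targetPort: 2222
EOF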
Cluster ingress
Project administrators can add route annotations for many different purposes, including ingress control through IP allow-listing.
Ingress policies can also be changed by using NetworkPolicy objects, which leverage the ovs-networkpolicy plugin. This allows for full control over the ingress network policy down to the pod level, including between pods on the same cluster and even in the same namespace.
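For example, the following NetworkPolicy object restricts ingress to traffic from pods in the same namespace; the policy name is a placeholder:
$ cat <<EOF | oc apply -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}
EOF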
All cluster ingress traffic will go through the defined load balancers. Direct access to all nodes is blocked by cloud configuration.
Cluster egress
Pod egress traffic control through EgressNetworkPolicy objects can be used to prevent or limit outbound traffic in OpenShift Container Platform.
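As a sketch, assuming the cluster network plugin supports EgressNetworkPolicy objects, the following example denies all egress from the current project to external destinations; the policy name is a placeholder:
$ cat <<EOF | oc apply -f -
apiVersion: network.openshift.io/v1
kind: EgressNetworkPolicy
metadata:
  name: default-deny-external
spec:
  egress:
  - type: Deny
    to:
      cidrSelector: 0.0.0.0/0
EOF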
Public outbound traffic from the control plane and infrastructure nodes is required and necessary to maintain cluster image security and cluster monitoring. This requires that the 0.0.0.0/0 route belongs only to the Internet gateway; it is not possible to route this range over private connections.
OpenShift 4 clusters use NAT gateways to present a public, static IP for any public outbound traffic leaving the cluster. Each availability zone a cluster is deployed into receives a distinct NAT gateway, therefore up to 3 unique static IP addresses can exist for cluster egress traffic. Any traffic that remains inside the cluster, or that does not go out to the public Internet, will not pass through the NAT gateway and will have a source IP address belonging to the node that the traffic originated from. Node IP addresses are dynamic; therefore, a customer must not rely on whitelisting individual IP addresses when accessing private resources.
Customers can determine their public static IP addresses by running a pod on the cluster and then querying an external service. For example:
$ oc run ip-lookup --image=busybox -i -t --restart=Never --rm -- /bin/sh -c "/bin/nslookup -type=a myip.opendns.com resolver1.opendns.com | grep -E 'Address: [0-9.]+'"
Cloud network configuration
OpenShift Container Platform allows for the configuration of a private network connection through AWS-managed technologies, such as:
- VPN connections
- VPC peering
- Transit Gateway
- Direct Connect
Important
Red Hat site reliability engineers (SREs) do not monitor private network connections. Monitoring of these connections is the responsibility of the customer.
DNS forwarding
For ROSA clusters that have a private cloud network configuration, a customer can specify internal DNS servers available on that private connection that should be queried for explicitly provided domains.
Network verification
Network verification checks run automatically when you deploy a ROSA cluster into an existing Virtual Private Cloud (VPC) or create an additional machine pool with a subnet that is new to your cluster. The checks validate your network configuration and highlight errors, enabling you to resolve configuration issues prior to deployment.
You can also run the network verification checks manually to validate the configuration for an existing cluster.
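For example, assuming your version of the ROSA CLI provides the network verifier, the checks can be triggered manually for an existing cluster; the cluster name is a placeholder:
$ rosa verify network --cluster=my-cluster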
Storage
This section provides information about the service definition for OpenShift Container Platform storage.
Encrypted-at-rest OS and node storage
Control plane, infrastructure, and worker nodes use encrypted-at-rest Amazon Elastic Block Store (Amazon EBS) storage.
Encrypted-at-rest PV
EBS volumes that are used for PVs are encrypted-at-rest by default.
Block storage (RWO)
Persistent volumes (PVs) are backed by Amazon Elastic Block Store (Amazon EBS), which is Read-Write-Once.
PVs can be attached only to a single node at a time and are specific to the availability zone in which they were provisioned. However, PVs can be attached to any node in the availability zone.
Each cloud provider has its own limits for how many PVs can be attached to a single node. See AWS instance type limits for details.
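As a minimal sketch, the following claim requests a ReadWriteOnce EBS-backed volume from the cluster's default storage class; the claim name and size are placeholders:
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF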
Shared Storage (RWX)
The AWS EFS CSI Driver can be used to provide RWX support for OpenShift Container Platform. A community Operator is provided to simplify setup. See Amazon Elastic File Storage Setup for Red Hat OpenShift Service on AWS for details.
Platform
This section provides information about the service definition for the OpenShift Container Platform (ROSA) platform.
Autoscaling
Node autoscaling is available on OpenShift Container Platform. You can configure the autoscaler option to automatically scale the number of machines in a cluster.
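For example, autoscaling can be enabled when creating a machine pool with the ROSA CLI; the cluster name, pool name, and replica bounds are placeholders:
$ rosa create machinepool --cluster=my-cluster --name=autoscale-pool --enable-autoscaling --min-replicas=2 --max-replicas=6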
Multiple availability zone
In a multiple availability zone cluster, control plane nodes are distributed across availability zones and at least one worker node is required in each availability zone.
Node labels
Custom node labels are created by Red Hat during node creation and cannot be changed on OpenShift Container Platform clusters at this time. However, custom labels are supported when creating new machine pools.
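For example, custom labels can be applied to the nodes of a new machine pool with the ROSA CLI; the cluster name, pool name, and label are placeholders:
$ rosa create machinepool --cluster=my-cluster --name=batch-pool --replicas=3 --labels=workload-type=batch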
Node lifecycle
Worker nodes are not guaranteed longevity, and may be replaced at any time as part of the normal operation and management of OpenShift.
A worker node might be replaced in the following circumstances:
- Machine health checks are deployed and configured to replace worker nodes that have a NotReady status, ensuring smooth operation of the cluster.
- AWS EC2 instances may be terminated when AWS detects irreparable failure of the underlying hardware that hosts the instance.
For all containerized workloads running on a Kubernetes-based system, it is best practice to configure applications to be resilient to node replacements.
Cluster backup policy
Red Hat recommends object-level backup solutions for ROSA clusters. OpenShift API for Data Protection (OADP) is included in OpenShift but not enabled by default. Customers can configure OADP on their clusters to achieve object-level backup and restore capabilities.
Red Hat does not back up customer applications or application data. Customers are solely responsible for applications and their data, and must put their own backup and restore capabilities in place.
Warning
Customers are solely responsible for backing up and restoring their applications and application data. For more information about customer responsibilities, see "Shared responsibility matrix".
OpenShift version
Upgrade scheduling to the latest OpenShift Container Platform version is available.
Upgrades
Upgrades can be scheduled using the ROSA CLI, rosa, or through OpenShift Cluster Manager.
See the OpenShift Container Platform Life Cycle for more information on the upgrade policy and procedures.
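For example, an upgrade can be scheduled with the ROSA CLI; the cluster name, target version, and schedule values are placeholders:
$ rosa upgrade cluster --cluster=my-cluster --version=4.14.10 --schedule-date=2025-06-15 --schedule-time=06:00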
Windows Containers
Red Hat OpenShift support for Windows Containers is not available on OpenShift Container Platform at this time.
Container engine
OpenShift Container Platform runs on OpenShift 4 and uses CRI-O as the only available container engine.
Operating system
OpenShift Container Platform runs on OpenShift 4 and uses Red Hat Enterprise Linux CoreOS (RHCOS) as the operating system for all cluster nodes.
Red Hat Operator support
Red Hat workloads typically refer to Red Hat-provided Operators made available through Operator Hub. Red Hat workloads are not managed by the Red Hat SRE team, and must be deployed on worker nodes. These Operators may require additional Red Hat subscriptions, and may incur additional cloud infrastructure costs. Examples of these Red Hat-provided Operators are:
- Red Hat Quay
- Red Hat Advanced Cluster Management
- Red Hat Advanced Cluster Security
- Red Hat OpenShift Service Mesh
- OpenShift Serverless
- Red Hat OpenShift Logging
- Red Hat OpenShift Pipelines
- OpenShift Virtualization
Kubernetes Operator support
All Operators listed in the software catalog marketplace should be available for installation. These Operators are considered customer workloads, and are not monitored or managed by Red Hat SRE. Operators authored by Red Hat are supported by Red Hat.
Security
This section provides information about the service definition for OpenShift Container Platform security.
Authentication provider
Authentication for the cluster can be configured using either OpenShift Cluster Manager, the cluster creation process, or the ROSA CLI, rosa (see the example after the list below). ROSA is not an identity provider, and all access to the cluster must be managed by the customer as part of their integrated solution. The use of multiple identity providers provisioned at the same time is supported. The following identity providers are supported:
- GitHub or GitHub Enterprise
- GitLab
- Google
- LDAP
- OpenID Connect
- htpasswd
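For example, a GitHub identity provider can be added with the ROSA CLI, which prompts interactively for the remaining provider details; the cluster name is a placeholder:
$ rosa create idp --cluster=my-cluster --type=github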
Privileged containers
Privileged containers are available for users with the cluster-admin role. Usage of privileged containers as cluster-admin is subject to the responsibilities and exclusion notes in the Red Hat Enterprise Agreement Appendix 4 (Online Subscription Services).
Customer administrator user
In addition to normal users, OpenShift Container Platform provides access to a ROSA-specific group called dedicated-admin. Any users on the cluster that are members of the dedicated-admin group:
- Have administrator access to all customer-created projects on the cluster.
- Can manage resource quotas and limits on the cluster.
- Can add and manage NetworkPolicy objects.
- Are able to view information about specific nodes and PVs in the cluster, including scheduler information.
- Can access the reserved dedicated-admin project on the cluster, which allows for the creation of service accounts with elevated privileges and also gives the ability to update default limits and quotas for projects on the cluster.
- Can install Operators from the software catalog and perform all verbs in all *.operators.coreos.com API groups.
Cluster administration role
The administrator of OpenShift Container Platform has default access to the cluster-admin role for your organization's cluster. While logged into an account with the cluster-admin role, users have increased permissions to run privileged security contexts.
Project self-service
By default, all users have the ability to create, update, and delete their projects. This can be restricted if a member of the dedicated-admin group removes the self-provisioner role from authenticated users:
$ oc adm policy remove-cluster-role-from-group self-provisioner system:authenticated:oauth
Restrictions can be reverted by applying:
$ oc adm policy add-cluster-role-to-group self-provisioner system:authenticated:oauth
Regulatory compliance
See the Compliance table in Understanding process and security for ROSA for the latest compliance information.
Network security
With OpenShift Container Platform, AWS provides standard DDoS protection on all load balancers, called AWS Shield. This provides 95% protection against the most commonly used level 3 and 4 attacks on all the public-facing load balancers used for ROSA. To provide additional protection, a 10-second timeout is added for HTTP requests coming to the haproxy router: if a response is not received within 10 seconds, the connection is closed.
etcd encryption
In OpenShift Container Platform, the control plane storage is encrypted at rest by default, including encryption of the etcd volumes. This storage-level encryption is provided through the storage layer of the cloud provider.
You can also enable etcd encryption, which encrypts the key values in etcd, but not the keys. If you enable etcd encryption, the following Kubernetes API server and OpenShift API server resources are encrypted:
- Secrets
- Config maps
- Routes
- OAuth access tokens
- OAuth authorize tokens
The etcd encryption feature is not enabled by default and it can be enabled only at cluster installation time. Even with etcd encryption enabled, the etcd key values are accessible to anyone with access to the control plane nodes or cluster-admin privileges.
Important
By enabling etcd encryption for the key values in etcd, you will incur a performance overhead of approximately 20%. The overhead is a result of introducing this second layer of encryption, in addition to the default control plane storage encryption that encrypts the etcd volumes. Red Hat recommends that you enable etcd encryption only if you specifically require it for your use case.
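Because the feature can only be enabled at installation time, the following sketch shows enabling it when creating a cluster with the ROSA CLI; the cluster name is a placeholder:
$ rosa create cluster --cluster-name=my-cluster --etcd-encryption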