Provisioning real-time and low latency workloads
Many organizations need high performance computing and low, predictable latency, especially in the financial and telecommunications industries.
OpenShift Container Platform provides the Node Tuning Operator to implement automatic tuning to achieve low latency performance and consistent response time for OpenShift Container Platform applications. You use the performance profile configuration to make these changes. You can update the kernel to kernel-rt, reserve CPUs for cluster and operating system housekeeping duties, including pod infra containers, isolate CPUs for application containers to run the workloads, and disable unused CPUs to reduce power consumption.
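For orientation, a minimal performance profile that exercises these options might look like the following sketch. The profile name, CPU ranges, and node selector are illustrative assumptions, not values taken from this document; the name matches the runtimeClassName performance-dynamic-low-latency-profile used in the scheduling example later in this section.

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: dynamic-low-latency-profile   # assumed profile name
spec:
  cpu:
    reserved: "0-1"                   # housekeeping CPUs for the cluster and operating system
    isolated: "2-5"                   # CPUs dedicated to application containers
    offlined: "6-7"                   # unused CPUs disabled to reduce power consumption
  realTimeKernel:
    enabled: true                     # update the node to the kernel-rt kernel
  nodeSelector:
    node-role.kubernetes.io/worker-cnf: ""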
Note
When writing your applications, follow the general recommendations described in RHEL for Real Time processes and threads.
Scheduling a low latency workload onto a worker with real-time capabilities
You can schedule low latency workloads onto a worker node where a performance profile that configures real-time capabilities is applied.
Note
To schedule the workload on specific nodes, use label selectors in the Pod custom resource (CR).
The label selectors must match the nodes that are attached to the machine config pool that was configured for low latency by the Node Tuning Operator.
Prerequisites

- You have installed the OpenShift CLI (oc).
- You have logged in as a user with cluster-admin privileges.
- You have applied a performance profile in the cluster that tunes worker nodes for low latency workloads.
Procedure

- Create a Pod CR for the low latency workload and apply it in the cluster, for example:

  Example Pod spec configured to use real-time processing

  apiVersion: v1
  kind: Pod
  metadata:
    name: dynamic-low-latency-pod
    annotations:
      cpu-quota.crio.io: "disable"
      cpu-load-balancing.crio.io: "disable"
      irq-load-balancing.crio.io: "disable"
  spec:
    securityContext:
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
    containers:
    - name: dynamic-low-latency-pod
      image: "registry.redhat.io/openshift4/cnf-tests-rhel8:v4.19"
      command: ["sleep", "10h"]
      resources:
        requests:
          cpu: 2
          memory: "200M"
        limits:
          cpu: 2
          memory: "200M"
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: [ALL]
    nodeSelector:
      node-role.kubernetes.io/worker-cnf: ""
    runtimeClassName: performance-dynamic-low-latency-profile
  # ...

  - The cpu-quota.crio.io annotation disables the CPU completely fair scheduler (CFS) quota at the pod run time.
  - The cpu-load-balancing.crio.io annotation disables CPU load balancing.
  - The irq-load-balancing.crio.io annotation opts the pod out of interrupt handling on the node.
  - The nodeSelector label must match the label that you specify in the Node CR.
  - runtimeClassName must match the name of the performance profile configured in the cluster.

- Enter the pod runtimeClassName in the form performance-<profile_name>, where <profile_name> is the name from the PerformanceProfile YAML. In the previous example, the runtimeClassName is performance-dynamic-low-latency-profile.
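  If you are unsure which runtime class name your profile generated, you can read it from the profile status. The following command is a sketch that assumes the profile is named dynamic-low-latency-profile, as in the example above:

  $ oc get performanceprofile dynamic-low-latency-profile -o jsonpath='{.status.runtimeClass}{"\n"}'

  Example output

  performance-dynamic-low-latency-profile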
- Ensure the pod is running correctly. The status should be running, and the correct cnf-worker node should be set:

  $ oc get pod -o wide

  Expected output

  NAME                      READY   STATUS    RESTARTS   AGE     IP            NODE
  dynamic-low-latency-pod   1/1     Running   0          5h33m   10.131.0.10   cnf-worker.example.com

- Get the CPUs that the pod configured for IRQ dynamic load balancing runs on:

  $ oc exec -it dynamic-low-latency-pod -- /bin/bash -c "grep Cpus_allowed_list /proc/self/status | awk '{print $2}'"

  Expected output

  Cpus_allowed_list:  2-3
Ensure the node configuration is applied correctly.

- Log in to the node to verify the configuration:

  $ oc debug node/<node-name>

- Verify that you can use the node file system:

  sh-4.4# chroot /host

  Expected output

  sh-4.4#

- Ensure the default system CPU affinity mask does not include the dynamic-low-latency-pod CPUs, for example, CPUs 2 and 3:

  sh-4.4# cat /proc/irq/default_smp_affinity

  Example output

  33

- Ensure the system IRQs are not configured to run on the dynamic-low-latency-pod CPUs:

  sh-4.4# find /proc/irq/ -name smp_affinity_list -exec sh -c 'i="$1"; mask=$(cat $i); file=$(echo $i); echo $file: $mask' _ {} \;

  Example output

  /proc/irq/0/smp_affinity_list: 0-5
  /proc/irq/1/smp_affinity_list: 5
  /proc/irq/2/smp_affinity_list: 0-5
  /proc/irq/3/smp_affinity_list: 0-5
  /proc/irq/4/smp_affinity_list: 0
  /proc/irq/5/smp_affinity_list: 0-5
  /proc/irq/6/smp_affinity_list: 0-5
  /proc/irq/7/smp_affinity_list: 0-5
  /proc/irq/8/smp_affinity_list: 4
  /proc/irq/9/smp_affinity_list: 4
  /proc/irq/10/smp_affinity_list: 0-5
  /proc/irq/11/smp_affinity_list: 0
  /proc/irq/12/smp_affinity_list: 1
  /proc/irq/13/smp_affinity_list: 0-5
  /proc/irq/14/smp_affinity_list: 1
  /proc/irq/15/smp_affinity_list: 0
  /proc/irq/24/smp_affinity_list: 1
  /proc/irq/25/smp_affinity_list: 1
  /proc/irq/26/smp_affinity_list: 1
  /proc/irq/27/smp_affinity_list: 5
  /proc/irq/28/smp_affinity_list: 1
  /proc/irq/29/smp_affinity_list: 0
  /proc/irq/30/smp_affinity_list: 0-5
Warning
When you tune nodes for low latency, the usage of execution probes in conjunction with applications that require guaranteed CPUs can cause latency spikes. Use other probes, such as a properly configured set of network probes, as an alternative.
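For example, as an alternative to an exec probe, you can configure a network-based readiness probe so that the kubelet does not spawn processes on the guaranteed CPUs. The port and timing values in this sketch are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: dynamic-low-latency-pod
  #...
spec:
  containers:
  - name: dynamic-low-latency-pod
    #...
    readinessProbe:
      tcpSocket:
        port: 8080          # assumed application port
      initialDelaySeconds: 5
      periodSeconds: 10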
Creating a pod with a guaranteed QoS class
You can create a pod with a quality of service (QoS) class of Guaranteed for high-performance workloads. Configuring a pod with a QoS class of Guaranteed ensures that the pod has priority access to the specified CPU and memory resources.
To create a pod with a QoS class of Guaranteed, you must apply the following specifications:
- Set identical values for the memory limit and memory request fields for each container in the pod.
- Set identical values for the CPU limit and CPU request fields for each container in the pod.
In general, a pod with a QoS class of Guaranteed will not be evicted from a node. One exception is during resource contention caused by system daemons exceeding reserved resources. In this scenario, the kubelet might evict pods to preserve node stability, starting with the lowest priority pods.
Prerequisites

- Access to the cluster as a user with the cluster-admin role.
- The OpenShift CLI (oc).
Procedure

- Create a namespace for the pod by running the following command:

  $ oc create namespace qos-example

  This example uses the qos-example namespace.

  Example output

  namespace/qos-example created

- Create the Pod resource:

  - Create a YAML file that defines the Pod resource:

    Example qos-example.yaml file

    apiVersion: v1
    kind: Pod
    metadata:
      name: qos-demo
      namespace: qos-example
    spec:
      securityContext:
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
      containers:
      - name: qos-demo-ctr
        image: quay.io/openshifttest/hello-openshift:openshift
        resources:
          limits:
            memory: "200Mi"
            cpu: "1"
          requests:
            memory: "200Mi"
            cpu: "1"
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop: [ALL]

    - This example uses a public hello-openshift image.
    - Sets the memory limit to 200 MB.
    - Sets the CPU limit to 1 CPU.
    - Sets the memory request to 200 MB.
    - Sets the CPU request to 1 CPU.

    Note

    If you specify a memory limit for a container, but do not specify a memory request, OpenShift Container Platform automatically assigns a memory request that matches the limit. Similarly, if you specify a CPU limit for a container, but do not specify a CPU request, OpenShift Container Platform automatically assigns a CPU request that matches the limit.

  - Create the Pod resource by running the following command:

    $ oc apply -f qos-example.yaml --namespace=qos-example

    Example output

    pod/qos-demo created
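As the note above explains, requests that are omitted default to the specified limits. The following sketch, a hypothetical limits-only variant of qos-example.yaml, would therefore still receive the Guaranteed QoS class; the securityContext fields from the previous example are omitted here for brevity:

apiVersion: v1
kind: Pod
metadata:
  name: qos-demo-limits-only   # assumed name for illustration
  namespace: qos-example
spec:
  containers:
  - name: qos-demo-ctr
    image: quay.io/openshifttest/hello-openshift:openshift
    resources:
      limits:
        memory: "200Mi"   # requests are omitted; OpenShift Container Platform assigns matching requests
        cpu: "1"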
Verification

- View the qosClass value for the pod by running the following command:

  $ oc get pod qos-demo --namespace=qos-example --output=yaml | grep qosClass

  Example output

  qosClass: Guaranteed
Disabling CPU load balancing in a Pod
Functionality to disable or enable CPU load balancing is implemented at the CRI-O level. CRI-O disables or enables CPU load balancing only when the following requirements are met.
- The pod must use the performance-<profile-name> runtime class. You can get the proper name by looking at the status of the performance profile, as shown here:

  apiVersion: performance.openshift.io/v2
  kind: PerformanceProfile
  ...
  status:
    ...
    runtimeClass: performance-manual
The Node Tuning Operator is responsible for creating the high-performance runtime handler config snippet on the relevant nodes and for creating the high-performance runtime class in the cluster. The runtime class has the same content as the default runtime handler, except that it enables the CPU load balancing configuration functionality.
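You can confirm that the runtime class exists in the cluster before referencing it from a pod. The output below is an illustrative sketch for a profile named manual:

$ oc get runtimeclass

Example output

NAME                 HANDLER              AGE
performance-manual   performance-manual   6h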
To disable the CPU load balancing for the pod, the Pod specification must include the following fields:
apiVersion: v1
kind: Pod
metadata:
  #...
  annotations:
    #...
    cpu-load-balancing.crio.io: "disable"
    #...
  #...
spec:
  #...
  runtimeClassName: performance-<profile_name>
  #...
Note
Only disable CPU load balancing when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs. Otherwise, disabling CPU load balancing can affect the performance of other containers in the cluster.
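If you are unsure whether the static CPU manager policy is active on the target node, you can inspect the rendered kubelet configuration from a debug shell. This is a sketch only; the /etc/kubernetes/kubelet.conf path is the usual location on RHCOS nodes, but verify it for your environment:

$ oc debug node/<node_name> -- chroot /host grep cpuManagerPolicy /etc/kubernetes/kubelet.conf

Example output

  "cpuManagerPolicy": "static",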
Disabling power saving mode for high priority pods
You can configure pods to ensure that high priority workloads are unaffected when you configure power saving for the node that the workloads run on.
When you configure a node with a power saving configuration, you must configure high priority workloads with performance configuration at the pod level, which means that the configuration applies to all the cores used by the pod.
By disabling P-states and C-states at the pod level, you can configure high priority workloads for best performance and lowest latency.
| Annotation | Possible Values | Description |
|---|---|---|
| cpu-c-states.crio.io | "enable", "disable", "max_latency:<microseconds>" | This annotation allows you to enable or disable C-states for each CPU. Alternatively, you can also specify a maximum latency in microseconds for the C-states. For example, enable C-states with a maximum latency of 10 microseconds with the setting cpu-c-states.crio.io: "max_latency:10". |
| cpu-freq-governor.crio.io | Any supported cpufreq governor. | Sets the cpufreq governor for each CPU. |
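For example, to keep C-states enabled but cap their exit latency rather than disabling them outright, a pod might carry the following annotation. This is an illustrative sketch; the 10 microsecond value is an assumption:

apiVersion: v1
kind: Pod
metadata:
  annotations:
    cpu-c-states.crio.io: "max_latency:10"   # allow C-states with at most 10 microseconds of exit latency
spec:
  runtimeClassName: performance-<profile_name>
#...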
Prerequisites

- You have configured power saving in the performance profile for the node where the high priority workload pods are scheduled.
Procedure

- Add the required annotations to your high priority workload pods. The annotations override the default settings.

  Example high priority workload annotation

  apiVersion: v1
  kind: Pod
  metadata:
    #...
    annotations:
      #...
      cpu-c-states.crio.io: "disable"
      cpu-freq-governor.crio.io: "performance"
      #...
    #...
  spec:
    #...
    runtimeClassName: performance-<profile_name>
    #...

- Restart the pods to apply the annotation.
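After the pods restart with the annotations, you can spot-check one of the pod's CPUs from a debug shell on the node. The CPU number (2) and the idle state index are illustrative assumptions; the sysfs paths are standard Linux locations. A scaling_governor value of performance and a cpuidle disable value of 1 suggest that the annotations took effect:

sh-4.4# cat /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor
performance
sh-4.4# cat /sys/devices/system/cpu/cpu2/cpuidle/state2/disable
1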
Disabling CPU CFS quota
To eliminate CPU throttling for pinned pods, create a pod with the cpu-quota.crio.io: "disable" annotation. This annotation disables the CPU completely fair scheduler (CFS) quota when the pod runs.
Example pod spec with cpu-quota.crio.io disabled

apiVersion: v1
kind: Pod
metadata:
  annotations:
    cpu-quota.crio.io: "disable"
spec:
  runtimeClassName: performance-<profile_name>
#...
Note
Only disable CPU CFS quota when the CPU manager static policy is enabled and for pods with guaranteed QoS that use whole CPUs, for example, pods that contain CPU-pinned containers. Otherwise, disabling CPU CFS quota can affect the performance of other containers in the cluster.
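To confirm the effect, you can check the pod's CPU quota from inside the container. On a node that uses cgroup v2, a first field of max means that no CFS quota is enforced. This check is a sketch that assumes cgroup v2; replace <pod_name> with the name of your pod:

$ oc exec <pod_name> -- cat /sys/fs/cgroup/cpu.max

Example output

max 100000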
Configuring interrupt processing for individual pods
To achieve low latency for workloads, some containers require that the CPUs they are pinned to do not process device interrupts. You can use the irq-load-balancing.crio.io pod annotation to control whether device interrupts are processed on CPUs where the pinned containers are running.
The annotation supports the following values:
disable
  Disables IRQ load balancing for all CPUs allocated to the container. Use this value for latency-sensitive workloads when you want to exclude container CPUs from interrupt handling.

housekeeping
  Preserves IRQ handling on the first CPU that is allocated to the container, including that CPU's thread siblings. The subsequent CPUs allocated to the container are excluded from interrupt processing. This configuration also injects the OPENSHIFT_HOUSEKEEPING_CPUS environment variable into the container. Use this variable to see which CPUs are designated for housekeeping tasks.

  You can use the housekeeping value to reduce the overall CPU footprint by allowing a small subset of container CPUs to handle both application housekeeping work and system interrupts.
Note
When using the housekeeping value, the CPUs designated for housekeeping handle interrupts for the entire system.
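For example, an application entrypoint script can read the injected OPENSHIFT_HOUSEKEEPING_CPUS variable to place its own housekeeping threads on those CPUs. The following is a minimal sketch only; the taskset call and the housekeeping-daemon and latency-critical-app binaries are illustrative assumptions, not part of any shipped image:

#!/bin/sh
# OPENSHIFT_HOUSEKEEPING_CPUS is injected by CRI-O when the pod sets
# irq-load-balancing.crio.io: "housekeeping".
echo "Housekeeping CPUs: ${OPENSHIFT_HOUSEKEEPING_CPUS}"

# Pin a background housekeeping process to the housekeeping CPUs (illustrative),
# leaving the remaining container CPUs free of both IRQs and housekeeping work.
taskset -c "${OPENSHIFT_HOUSEKEEPING_CPUS}" ./housekeeping-daemon &

exec ./latency-critical-app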
Prerequisites

- You configured a performance profile for the node.
- You set the globallyDisableIrqLoadBalancing field to false in the performance profile.
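For reference, the relevant part of a performance profile that meets these prerequisites might look like the following excerpt; the CPU ranges are assumptions for illustration:

apiVersion: performance.openshift.io/v2
kind: PerformanceProfile
metadata:
  name: <profile_name>
spec:
  globallyDisableIrqLoadBalancing: false   # keep per-pod IRQ control available through the annotation
  cpu:
    isolated: "2-7"   # illustrative
    reserved: "0-1"   # illustrative
  #...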
Procedure

- Create the Pod resource and configure the irq-load-balancing.crio.io annotation:

  Example pod specification

  apiVersion: v1
  kind: Pod
  metadata:
    name: dpdk-workload
    annotations:
      irq-load-balancing.crio.io: "disable"
  spec:
    runtimeClassName: performance-<profile_name>
    containers:
    - name: app
      image: example-image
      resources:
        requests:
          cpu: "8"
          memory: "4Gi"
        limits:
          cpu: "8"
          memory: "4Gi"

  - annotations.irq-load-balancing.crio.io defines whether device interrupts are processed on the container CPUs. Set to disable to prevent all container CPUs from handling IRQs, or set to housekeeping to allow the first allocated CPU and its thread siblings to handle IRQs while excluding the remaining CPUs from IRQ handling.
  - spec.runtimeClassName sets the runtime class to the performance profile. Replace <profile_name> with the name of your performance profile.

- Apply the Pod resource by running the following command:

  $ oc apply -f pod.yaml
Verification

- Verify the CPUs assigned to the pod:

  $ oc exec <pod_name> -- cat /sys/fs/cgroup/cpuset.cpus

- For pods using the housekeeping annotation, verify the housekeeping CPU environment variable:

  $ oc exec <pod_name> -- printenv OPENSHIFT_HOUSEKEEPING_CPUS

  Replace <pod_name> with the name of the pod.

- On the worker node, verify the CPUs excluded from IRQ handling:

  $ grep IRQBALANCE_BANNED_CPUS /etc/sysconfig/irqbalance

  The output is a hexadecimal bitmask representing the CPUs excluded from IRQ handling.
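  For example, if CPUs 2 and 3 are excluded from IRQ handling, you can expect a mask with bits 2 and 3 set, such as 0000000c (binary 1100). You can decode a mask on any machine where python3 is available; the 0x0c value below is illustrative:

  $ python3 -c 'mask = 0x0c; print([cpu for cpu in range(64) if mask >> cpu & 1])'

  Example output

  [2, 3]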