Managing hosted control planes on OpenStack
After you deploy hosted control planes on Red Hat OpenStack Platform (RHOSP), you can manage a hosted cluster by completing the following tasks.
Accessing the hosted cluster
You can access hosted clusters on Red Hat OpenStack Platform (RHOSP) by extracting the kubeconfig secret from the management cluster by using the oc CLI.
The hosted cluster (hosting) namespace contains hosted cluster resources and the access secrets. The hosted control plane namespace is where the hosted control plane runs.
The secret name formats are as follows:
- kubeconfig secret: <hosted_cluster_namespace>-<name>-admin-kubeconfig. For example, clusters-hypershift-demo-admin-kubeconfig.
- kubeadmin password secret: <hosted_cluster_namespace>-<name>-kubeadmin-password. For example, clusters-hypershift-demo-kubeadmin-password.
The kubeconfig secret contains a Base64-encoded kubeconfig field. The kubeadmin password secret is also Base64-encoded; you can extract it and then use the password to log in to the API server or console of the hosted cluster.
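For example, you can print the decoded kubeadmin password with a command similar to the following. This is a sketch that mirrors the extraction command used later in this procedure and assumes that the secret follows the naming format described above:
$ oc extract -n <hosted_cluster_namespace> \
  secret/<hosted_cluster_name>-kubeadmin-password \
  --to=-
You can then log in to the API server or console of the hosted cluster as the kubeadmin user with the returned password.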
Prerequisites
- The oc CLI is installed.
Procedure
- Extract the admin-kubeconfig secret by entering the following command:
$ oc extract -n <hosted_cluster_namespace> \
  secret/<hosted_cluster_name>-admin-kubeconfig \
  --to=./hostedcluster-secrets --confirm
Example output
hostedcluster-secrets/kubeconfig
- View a list of nodes of the hosted cluster to verify your access by entering the following command:
$ oc --kubeconfig ./hostedcluster-secrets/kubeconfig get nodes
Enabling node auto-scaling for the hosted cluster
When you need more capacity in your hosted cluster on Red Hat OpenStack Platform (RHOSP) and capacity is available in your RHOSP project, you can enable auto-scaling to install new worker nodes.
Procedure
- To enable auto-scaling, enter the following command:
$ oc -n <hosted_cluster_namespace> patch nodepool <hosted_cluster_name> \
  --type=json \
  -p '[{"op": "remove", "path": "/spec/replicas"},{"op":"add", "path": "/spec/autoScaling", "value": { "max": 5, "min": 2 }}]'
- Create a workload that requires a new node.
- Create a YAML file that contains the workload configuration, by using the following example:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: reversewords
  name: reversewords
  namespace: default
spec:
  replicas: 40
  selector:
    matchLabels:
      app: reversewords
  template:
    metadata:
      labels:
        app: reversewords
    spec:
      containers:
      - image: quay.io/mavazque/reversewords:latest
        name: reversewords
        resources:
          requests:
            memory: 2Gi
- Save the file with the name workload-config.yaml.
- Apply the YAML by entering the following command:
$ oc apply -f workload-config.yaml
Verification
- Extract the admin-kubeconfig secret by entering the following command:
$ oc extract -n <hosted_cluster_namespace> \
  secret/<hosted_cluster_name>-admin-kubeconfig \
  --to=./hostedcluster-secrets --confirm
Example output
hostedcluster-secrets/kubeconfig
- Check that the new nodes are in the Ready status by entering the following command:
$ oc --kubeconfig ./hostedcluster-secrets/kubeconfig get nodes
- To remove the node, delete the workload by entering the following command:
$ oc --kubeconfig ./hostedcluster-secrets/kubeconfig -n <namespace> \
  delete deployment <deployment_name>
- Wait several minutes without requiring the additional capacity. Confirm that the node was removed by entering the following command:
$ oc --kubeconfig ./hostedcluster-secrets/kubeconfig get nodes
Configuring node pools for availability zones
You can distribute node pools across multiple Red Hat OpenStack Platform (RHOSP) Nova availability zones to improve the high availability of your hosted cluster.
Note
Availability zones do not necessarily correspond to fault domains and do not inherently provide high availability benefits.
Prerequisites
- You created a hosted cluster.
- You have access to the management cluster.
- The hcp and oc CLIs are installed.
Procedure
- Set environment variables that are appropriate for your needs. For example, if you want to create two additional machines in the az1 availability zone, you could enter:
$ export NODEPOOL_NAME="${CLUSTER_NAME}-az1" \
  && export WORKER_COUNT="2" \
  && export FLAVOR="m1.xlarge" \
  && export AZ="az1"
- Create the node pool by using your environment variables by entering the following command:
$ hcp create nodepool openstack \
  --cluster-name <cluster_name> \
  --name $NODEPOOL_NAME \
  --replicas $WORKER_COUNT \
  --openstack-node-flavor $FLAVOR \
  --openstack-node-availability-zone $AZ
where:
<cluster_name>
Specifies the name of your hosted cluster.
Verification
- Check the status of the node pool by listing nodepool resources in the clusters namespace by running the following command:
$ oc get nodepools --namespace clusters
Example output
NAME          CLUSTER   DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
example       example   5               5               False         False        4.17.0
example-az1   example   2                               False         False                  True              True             Minimum availability requires 2 replicas, current 0 available
- Observe the nodes as they start on your hosted cluster by running the following command:
$ oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes
Example output
NAME                     STATUS   ROLES    AGE    VERSION
...
example-extra-az-zh9l5   Ready    worker   2m6s   v1.27.4+18eadca
example-extra-az-zr8mj   Ready    worker   102s   v1.27.4+18eadca
...
- Verify that the node pool is created by running the following command:
$ oc get nodepools --namespace clusters
Example output
NAME               CLUSTER          DESIRED   CURRENT   AVAILABLE   PROGRESSING   MESSAGE
<node_pool_name>   <cluster_name>   2         2         2           False         All replicas are available
Configuring additional ports for node pools
You can configure additional ports for node pools to support advanced networking scenarios, such as SR-IOV or multiple networks.
Use cases for additional ports for node pools
Common reasons to configure additional ports for node pools include the following:
- SR-IOV (Single Root I/O Virtualization)
Enables a physical network device to appear as multiple virtual functions (VFs). By attaching additional ports to node pools, workloads can use SR-IOV interfaces to achieve low-latency, high-performance networking.
- DPDK (Data Plane Development Kit)
Provides fast packet processing in user space, bypassing the kernel. Node pools with additional ports can expose interfaces for workloads that use DPDK to improve network performance.
- Manila RWX volumes on NFS
Supports ReadWriteMany (RWX) volumes over NFS, allowing multiple nodes to access shared storage. Attaching additional ports to node pools enables workloads to reach the NFS network that Manila uses.
- Multus CNI
Enables pods to connect to multiple network interfaces. Node pools with additional ports support use cases that require secondary network interfaces, including dual-stack connectivity and traffic separation. A sketch of how a pod might consume such an interface follows this list.
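The following sketch is for illustration only: a hypothetical NetworkAttachmentDefinition and pod annotation show one way a workload might consume a secondary interface that is backed by an additional port network. The attachment name, CNI type, interface name, and addresses are placeholder assumptions, not values from this procedure:
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: secondary-net                 # hypothetical attachment name
  namespace: default
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "type": "macvlan",
      "master": "ens4",
      "ipam": {
        "type": "static",
        "addresses": [ { "address": "192.0.2.10/24" } ]
      }
    }
---
apiVersion: v1
kind: Pod
metadata:
  name: secondary-net-demo
  annotations:
    k8s.v1.cni.cncf.io/networks: secondary-net   # request the secondary interface
spec:
  containers:
  - name: demo
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]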
Options for additional ports for node pools
You can use the --openstack-node-additional-port flag to attach additional ports to nodes in a hosted cluster on RHOSP. The flag takes a list of comma-separated parameters. You can use the flag multiple times to attach multiple additional ports to the nodes.
The parameters are:
| Parameter | Description | Required | Default |
|---|---|---|---|
| network-id | The ID of the network to attach to the node. | Yes | N/A |
| vnic-type | The VNIC type to use for the port. If not specified, Neutron uses its default VNIC type. | No | N/A |
| disable-port-security | Whether to disable port security for the port. If not specified, Neutron enables port security unless it is explicitly disabled at the network level. | No | N/A |
| address-pairs | A list of IP address pairs to assign to the port, in the form of an address range such as 192.168.0.1-192.168.0.2. | No | N/A |
Creating additional ports for node pools
You can configure additional ports for node pools for hosted clusters that run on Red Hat OpenStack Platform (RHOSP).
Prerequisites
- You created a hosted cluster.
- You have access to the management cluster.
- The hcp CLI is installed.
- Additional networks are created in RHOSP. For one way to create them, see the sketch after this list.
- The project that is used by the hosted cluster has access to the additional networks.
- You reviewed the options that are listed in "Options for additional ports for node pools".
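If you still need to create an additional network, the following is a minimal sketch; the network and subnet names and the CIDR are hypothetical placeholders, and your environment might require provider, VLAN, or security settings that are not shown:
$ openstack network create radio-net
$ openstack subnet create radio-subnet \
    --network radio-net \
    --subnet-range 192.0.2.0/24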
Procedure
- Create a node pool with additional ports attached to it by running the hcp create nodepool openstack command with the --openstack-node-additional-port option. For example:
$ hcp create nodepool openstack \
  --cluster-name <cluster_name> \
  --name <nodepool_name> \
  --replicas <replica_count> \
  --openstack-node-flavor <flavor> \
  --openstack-node-additional-port "network-id=<sriov_net_id>,vnic-type=direct,disable-port-security=true" \
  --openstack-node-additional-port "network-id=<lb_net_id>,address-pairs:192.168.0.1-192.168.0.2"
where:
<cluster_name>
Specifies the name of the hosted cluster.
<nodepool_name>
Specifies the name of the node pool.
<replica_count>
Specifies the desired number of replicas.
<flavor>
Specifies the RHOSP flavor to use.
<sriov_net_id>
Specifies an SR-IOV network ID.
<lb_net_id>
Specifies a load balancer network ID.
Configuring performance tuning for node pools
You can tune hosted cluster node performance on RHOSP for high-performance workloads, such as cloud-native network functions (CNFs). Performance tuning includes configuring RHOSP resources, creating a performance profile, deploying a tuned NodePool resource, and enabling SR-IOV device support.
CNFs are designed to run in cloud-native environments. They can provide network services such as routing, firewalling, and load balancing. You can configure the node pool to use high-performance computing and networking devices to run CNFs.
Tuning performance for hosted cluster nodes
Create a performance profile and deploy a tuned NodePool resource to run high-performance workloads on Red Hat OpenStack Platform (RHOSP) hosted control planes.
Prerequisites
- You have an RHOSP flavor that has the necessary resources to run your workload, including dedicated CPU, memory, and host aggregate information.
- You have an RHOSP network that is attached to SR-IOV or DPDK-capable NICs. The network must be available to the project that is used by hosted clusters.
Procedure
- Create a performance profile that meets your requirements in a file called perfprofile.yaml. For example:
Example performance profile in a config map
apiVersion: v1
kind: ConfigMap
metadata:
  name: cnf-performanceprofile
  namespace: "${HYPERSHIFT_NAMESPACE}"
data:
  tuning: |
    apiVersion: performance.openshift.io/v2
    kind: PerformanceProfile
    metadata:
      name: cnf-performanceprofile
    spec:
      additionalKernelArgs:
        - nmi_watchdog=0
        - audit=0
        - mce=off
        - processor.max_cstate=1
        - idle=poll
        - intel_idle.max_cstate=0
        - amd_iommu=on
      cpu:
        isolated: "${CPU_ISOLATED}"
        reserved: "${CPU_RESERVED}"
      hugepages:
        defaultHugepagesSize: "1G"
        pages:
          - count: ${HUGEPAGES}
            node: 0
            size: 1G
      nodeSelector:
        node-role.kubernetes.io/worker: ''
      realTimeKernel:
        enabled: false
      globallyDisableIrqLoadBalancing: true
Important
If you do not already have environment variables set for the HyperShift Operator namespace, isolated and reserved CPUs, and huge pages count, create them before applying the performance profile. A hedged example follows this step.
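For reference, the following is a minimal sketch of those environment variables; the namespace, CPU ranges, and huge pages count shown here are placeholder assumptions that you must adapt to your flavor and workload:
$ export HYPERSHIFT_NAMESPACE="clusters" \
  && export CPU_ISOLATED="4-15" \
  && export CPU_RESERVED="0-3" \
  && export HUGEPAGES="4"
If perfprofile.yaml references these variables, substitute them before applying the file, for example by rendering it with the envsubst utility.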
- Apply the performance profile configuration by running the following command:
$ oc apply -f perfprof.yaml
- If you do not already have a CLUSTER_NAME environment variable set for the name of your cluster, define it, as shown in the example after this step.
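For example, where <hosted_cluster_name> is a placeholder for the name of your hosted cluster:
$ export CLUSTER_NAME=<hosted_cluster_name>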
- Set a node pool name environment variable by running the following command:
$ export NODEPOOL_NAME=$CLUSTER_NAME-cnf
- Set a flavor environment variable by running the following command:
$ export FLAVOR="m1.xlarge.nfv"
- Create a node pool that uses the performance profile by running the following command:
$ hcp create nodepool openstack \
  --cluster-name $CLUSTER_NAME \
  --name $NODEPOOL_NAME \
  --node-count 0 \
  --openstack-node-flavor $FLAVOR
- Patch the node pool to reference the PerformanceProfile resource by running the following command:
$ oc patch nodepool -n ${HYPERSHIFT_NAMESPACE} ${NODEPOOL_NAME} \
  -p '{"spec":{"tuningConfig":[{"name":"cnf-performanceprofile"}]}}' --type=merge
- Scale the node pool by running the following command:
$ oc scale nodepool/${NODEPOOL_NAME} --namespace ${HYPERSHIFT_NAMESPACE} --replicas=1
- Wait for the nodes to be ready:
- Wait for the configuration update to start by running the following command:
$ oc wait --for=condition=UpdatingConfig=True nodepool \
  -n ${HYPERSHIFT_NAMESPACE} ${NODEPOOL_NAME} \
  --timeout=5m
- Wait for the configuration update to finish by running the following command:
$ oc wait --for=condition=UpdatingConfig=False nodepool \
  -n ${HYPERSHIFT_NAMESPACE} ${NODEPOOL_NAME} \
  --timeout=30m
- Wait until all nodes are healthy by running the following command:
$ oc wait --for=condition=AllNodesHealthy nodepool \
  -n ${HYPERSHIFT_NAMESPACE} ${NODEPOOL_NAME} \
  --timeout=5m
Note
You can make an SSH connection into the nodes or use the oc debug command to verify performance configurations. A sketch of one such check follows this note.
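For example, a minimal check of the kernel arguments that the performance profile applies might look like the following; <node_name> is a placeholder for one of the tuned worker nodes, and the exact arguments depend on your profile:
$ oc debug node/<node_name> -- chroot /host cat /proc/cmdline
The output should include settings from the profile, such as the huge pages and isolated CPU arguments.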
Enabling the SR-IOV Network Operator in a hosted cluster
You can enable the SR-IOV Network Operator to manage SR-IOV-capable devices on nodes deployed by the NodePool resource. The operator runs in the hosted cluster and requires labeled worker nodes.
Procedure
- Generate a kubeconfig file for the hosted cluster by running the following command:
$ hcp create kubeconfig --name $CLUSTER_NAME > $CLUSTER_NAME-kubeconfig
- Set the KUBECONFIG environment variable by running the following command:
$ export KUBECONFIG=$CLUSTER_NAME-kubeconfig
- Label each worker node to indicate SR-IOV capability by running the following command:
$ oc label node <worker_node_name> feature.node.kubernetes.io/network-sriov.capable=true
where:
<worker_node_name>
Specifies the name of a worker node in the hosted cluster.
- Install the SR-IOV Network Operator in the hosted cluster by following the instructions in "Installing the SR-IOV Network Operator".
- After installation, configure SR-IOV workloads in the hosted cluster by using the same process as for a standalone cluster. A hypothetical node policy sketch follows.
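For illustration only, the following hypothetical SriovNetworkNodePolicy shows the general shape of such a configuration. The policy name, interface name, VF count, and device type are placeholder assumptions; refer to the SR-IOV Network Operator documentation for the authoritative fields:
apiVersion: sriovnetwork.openshift.io/v1
kind: SriovNetworkNodePolicy
metadata:
  name: policy-net1                              # hypothetical policy name
  namespace: openshift-sriov-network-operator
spec:
  resourceName: net1                             # exposed to pods as a resource request
  nodeSelector:
    feature.node.kubernetes.io/network-sriov.capable: "true"
  numVfs: 4                                      # number of virtual functions to configure
  nicSelector:
    pfNames:
      - ens4                                     # hypothetical interface name on the worker node
  deviceType: vfio-pci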