Deploying hosted control planes on OpenShift Virtualization
With hosted control planes and OpenShift Virtualization, you can create OpenShift Container Platform clusters with worker nodes that are hosted by KubeVirt virtual machines. Hosted control planes on OpenShift Virtualization provides several benefits:

- Enhances resource usage by packing hosted control planes and hosted clusters in the same underlying bare-metal infrastructure
- Separates hosted control planes and hosted clusters to provide strong isolation
- Reduces cluster provision time by eliminating the bare-metal node bootstrapping process
- Manages many releases under the same base OpenShift Container Platform cluster
The hosted control planes feature is enabled by default.
You can use the hosted control plane command-line interface, hcp, to create an OpenShift Container Platform hosted cluster. The hosted cluster is automatically imported as a managed cluster. If you want to disable this automatic import feature, see "Disabling the automatic import of hosted clusters into multicluster engine Operator".
Requirements to deploy hosted control planes on OpenShift Virtualization
As you prepare to deploy hosted control planes on OpenShift Virtualization, consider the following information:
- Run the management cluster on bare metal.
- Each hosted cluster must have a cluster-wide unique name.
- Do not use "clusters" as a hosted cluster name.
- A hosted cluster cannot be created in the namespace of a multicluster engine Operator managed cluster.
- When you configure storage for hosted control planes, consider the recommended etcd practices. To ensure that you meet the latency requirements, dedicate a fast storage device to all hosted control plane etcd instances that run on each control-plane node. You can use LVM storage to configure a local storage class for hosted etcd pods. For more information, see "Recommended etcd practices" and "Persistent storage using Logical Volume Manager storage".
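For example, with the Logical Volume Manager Storage Operator installed, a minimal LVMCluster resource such as the following provides a local storage class that you can later pass to the --etcd-storage-class flag. This is a sketch; the device class name and thin pool settings are assumptions that you adapt to your hardware:

  apiVersion: lvm.topolvm.io/v1alpha1
  kind: LVMCluster
  metadata:
    name: my-lvmcluster
    namespace: openshift-storage
  spec:
    storage:
      deviceClasses:
      # Hypothetical device class; LVM Storage derives a storage class
      # named lvms-vg1 from it, which you can use for hosted etcd pods.
      - name: vg1
        thinPoolConfig:
          name: thin-pool-1
          sizePercent: 90
          overprovisionRatio: 10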
Prerequisites
You must meet the following prerequisites to create an OpenShift Container Platform cluster on OpenShift Virtualization:
- You have administrator access to an OpenShift Container Platform cluster, version 4.14 or later, specified in the KUBECONFIG environment variable.
- The OpenShift Container Platform management cluster must have wildcard DNS routes enabled, as shown in the following command:

  $ oc patch ingresscontroller -n openshift-ingress-operator default \
      --type=json \
      -p '[{"op": "add", "path": "/spec/routeAdmission", "value": {"wildcardPolicy": "WildcardsAllowed"}}]'

- The OpenShift Container Platform management cluster has OpenShift Virtualization, version 4.14 or later, installed on it. For more information, see "Installing OpenShift Virtualization using the web console".
- The OpenShift Container Platform management cluster is on-premises bare metal.
- The OpenShift Container Platform management cluster must be configured with OVNKubernetes as the default pod network Container Network Interface (CNI). Live migration is supported for nodes only if the CNI is OVN-Kubernetes.
- The OpenShift Container Platform management cluster has a default storage class. For more information, see "Postinstallation storage configuration". The following example shows how to set a default storage class:

  $ oc patch storageclass ocs-storagecluster-ceph-rbd \
      -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

- You have a valid pull secret file for the quay.io/openshift-release-dev repository. For more information, see "Install OpenShift on any x86_64 platform with user-provisioned infrastructure".
- You have installed the hosted control plane command-line interface.
- You have configured a load balancer. For more information, see "Configuring MetalLB".
- For optimal network performance, use a network maximum transmission unit (MTU) of 9000 or greater on the OpenShift Container Platform cluster that hosts the KubeVirt virtual machines. If you use a lower MTU setting, network latency and the throughput of the hosted pods are affected. Enable multiqueue on node pools only when the MTU is 9000 or greater.

  Important
  You cannot change the MTU value for your cluster as a postinstallation task.

- The multicluster engine Operator has at least one managed OpenShift Container Platform cluster. The local-cluster is automatically imported. For more information about the local-cluster, see "Advanced configuration" in the multicluster engine Operator documentation. You can check the status of your hub cluster by running the following command:

  $ oc get managedclusters local-cluster

- On the OpenShift Container Platform cluster that hosts the OpenShift Virtualization virtual machines, you are using a ReadWriteMany (RWX) storage class so that live migration can be enabled.
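As a quick spot-check of several of these prerequisites, you can run the following commands against the management cluster. This is a sketch; it assumes that OpenShift Virtualization is installed in its default openshift-cnv namespace:

  # Confirm that a default storage class is set (look for the "(default)" marker).
  $ oc get storageclass

  # Confirm that the OpenShift Virtualization Operator is installed and reports Succeeded.
  $ oc get csv -n openshift-cnv

  # Check the cluster network MTU; 9000 or greater is recommended.
  $ oc get network cluster -o jsonpath='{.status.clusterNetworkMTU}'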
Firewall and port requirements
Ensure that you meet the firewall and port requirements so that ports can communicate between the management cluster, the control plane, and hosted clusters:
- The kube-apiserver service runs on port 6443 by default and requires ingress access for communication between the control plane components.
  - If you use the NodePort publishing strategy, ensure that the node port that is assigned to the kube-apiserver service is exposed.
  - If you use MetalLB load balancing, allow ingress access to the IP range that is used for load balancer IP addresses.
- If you use the NodePort publishing strategy, use a firewall rule for the ignition-server and Oauth-server settings.
- The konnectivity agent, which establishes a reverse tunnel to allow bi-directional communication on the hosted cluster, requires egress access to the cluster API server address on port 6443. With that egress access, the agent can reach the kube-apiserver service.
  - If the cluster API server address is an internal IP address, allow access from the workload subnets to the IP address on port 6443.
  - If the address is an external IP address, allow egress on port 6443 to that external IP address from the nodes.
- If you change the default port of 6443, adjust the rules to reflect that change.
- Ensure that you open any ports that are required by the workloads that run in the clusters.
- Use firewall rules, security groups, or other access controls to restrict access to only required sources. Avoid exposing ports publicly unless necessary.
- For production deployments, use a load balancer to simplify access through a single IP address.
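To see how the kube-apiserver service for a particular hosted cluster is published, and therefore which address and port your rules must allow, you can inspect the service in the hosted control plane namespace. A sketch, assuming the default clusters-<hosted_cluster_name> namespace naming:

  $ oc get service kube-apiserver -n clusters-<hosted_cluster_name>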
Live migration for compute nodes
While the management cluster for hosted cluster virtual machines (VMs) is undergoing updates or maintenance, the hosted cluster VMs can be automatically live migrated to prevent disrupting hosted cluster workloads. As a result, the management cluster can be updated without affecting the availability and operation of the KubeVirt platform hosted clusters.
Important
The live migration of KubeVirt VMs is enabled by default provided that the VMs use ReadWriteMany (RWX) storage for both the root volume and the storage classes that are mapped to the kubevirt-csi CSI provider.
You can verify that the VMs in a node pool are capable of live migration by checking the KubeVirtNodesLiveMigratable condition in the status section of a NodePool object.
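For example, you can read the condition status directly with a jsonpath query. This is a sketch; adjust the node pool name and namespace for your environment:

  $ oc get nodepool <nodepool_name> -n clusters \
      -o jsonpath='{.status.conditions[?(@.type=="KubeVirtNodesLiveMigratable")].status}'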
In the following example, the VMs cannot be live migrated because RWX storage is not used.
  - lastTransitionTime: "2024-10-08T15:38:19Z"
    message: |
      3 of 3 machines are not live migratable
      Machine user-np-ngst4-gw2hz: DisksNotLiveMigratable: user-np-ngst4-gw2hz is not a live migratable machine: cannot migrate VMI: PVC user-np-ngst4-gw2hz-rhcos is not shared, live migration requires that all PVCs must be shared (using ReadWriteMany access mode)
      Machine user-np-ngst4-npq7x: DisksNotLiveMigratable: user-np-ngst4-npq7x is not a live migratable machine: cannot migrate VMI: PVC user-np-ngst4-npq7x-rhcos is not shared, live migration requires that all PVCs must be shared (using ReadWriteMany access mode)
      Machine user-np-ngst4-q5nkb: DisksNotLiveMigratable: user-np-ngst4-q5nkb is not a live migratable machine: cannot migrate VMI: PVC user-np-ngst4-q5nkb-rhcos is not shared, live migration requires that all PVCs must be shared (using ReadWriteMany access mode)
    observedGeneration: 1
    reason: DisksNotLiveMigratable
    status: "False"
    type: KubeVirtNodesLiveMigratable
In the next example, the VMs meet the requirements to be live migrated.
  - lastTransitionTime: "2024-10-08T15:38:19Z"
    message: "All is well"
    observedGeneration: 1
    reason: AsExpected
    status: "True"
    type: KubeVirtNodesLiveMigratable
While live migration can protect VMs from disruption in normal circumstances, events such as infrastructure node failure can result in a hard restart of any VMs that are hosted on the failed node. For live migration to be successful, the source node that a VM is hosted on must be working correctly.
When the VMs in a node pool cannot be live migrated, workload disruption might occur on the hosted cluster during maintenance on the management cluster. By default, the hosted control planes controllers try to drain the workloads that are hosted on KubeVirt VMs that cannot be live migrated before the VMs are stopped. Draining the hosted cluster nodes before stopping the VMs allows pod disruption budgets to protect workload availability within the hosted cluster.
Creating a hosted cluster with the KubeVirt platform
With OpenShift Container Platform 4.14 and later, you can create a cluster with KubeVirt, including the option to create it with external infrastructure.
Creating a hosted cluster with the KubeVirt platform by using the CLI
To create a hosted cluster, you can use the hosted control plane command-line interface (CLI), hcp.
- Create a hosted cluster with the KubeVirt platform by entering the following command:

  $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \
      --node-pool-replicas <node_pool_replica_count> \
      --pull-secret <path_to_pull_secret> \
      --memory <value_for_memory> \
      --cores <value_for_cpu> \
      --etcd-storage-class=<etcd_storage_class>

  - Specify the name of your hosted cluster, for example, my-hosted-cluster.
  - Specify the node pool replica count, for example, 3. You must specify the replica count as 0 or greater to create the same number of replicas. Otherwise, no node pools are created.
  - Specify the path to your pull secret, for example, /user/name/pullsecret.
  - Specify a value for memory, for example, 6Gi.
  - Specify a value for CPU, for example, 2.
  - Specify the etcd storage class name, for example, lvm-storageclass.

  Note
  You can use the --release-image flag to set up the hosted cluster with a specific OpenShift Container Platform release.

  A default node pool is created for the cluster with two virtual machine worker replicas according to the --node-pool-replicas flag.
- After a few moments, verify that the hosted control plane pods are running by entering the following command:

  $ oc -n clusters-<hosted_cluster_name> get pods

  Example output
  NAME                                           READY   STATUS    RESTARTS   AGE
  capi-provider-5cc7b74f47-n5gkr                 1/1     Running   0          3m
  catalog-operator-5f799567b7-fd6jw              2/2     Running   0          69s
  certified-operators-catalog-784b9899f9-mrp6p   1/1     Running   0          66s
  cluster-api-6bbc867966-l4dwl                   1/1     Running   0          66s
  .
  .
  .
  redhat-operators-catalog-9d5fd4d44-z8qqk       1/1     Running   0          66s

  A hosted cluster that has worker nodes that are backed by KubeVirt virtual machines typically takes 10-15 minutes to be fully provisioned.
- To check the status of the hosted cluster, see the corresponding HostedCluster resource by entering the following command:

  $ oc get --namespace clusters hostedclusters

  See the following example output, which illustrates a fully provisioned HostedCluster object:

  NAMESPACE   NAME                VERSION   KUBECONFIG                 PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
  clusters    my-hosted-cluster   <4.x.0>   example-admin-kubeconfig   Completed   True        False         The hosted control plane is available

  Replace <4.x.0> with the supported OpenShift Container Platform version that you want to use.
Creating a hosted cluster with the KubeVirt platform by using external infrastructure
By default, the HyperShift Operator hosts both the control plane pods of the hosted cluster and the KubeVirt worker VMs within the same cluster. With the external infrastructure feature, you can place the worker node VMs on a separate cluster from the control plane pods.
- The management cluster is the OpenShift Container Platform cluster that runs the HyperShift Operator and hosts the control plane pods for a hosted cluster.
- The infrastructure cluster is the OpenShift Container Platform cluster that runs the KubeVirt worker VMs for a hosted cluster.
- By default, the management cluster also acts as the infrastructure cluster that hosts VMs. However, for external infrastructure, the management and infrastructure clusters are different.
- You must have a namespace on the external infrastructure cluster for the KubeVirt nodes to be hosted in.
- You must have a kubeconfig file for the external infrastructure cluster.
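For example, you can create the required namespace on the infrastructure cluster before you run the creation command. This sketch assumes the <hosted-cluster-namespace>-<hosted-cluster-name> naming that the --infra-namespace argument in the following procedure uses:

  $ oc --kubeconfig <path-to-external-infra-kubeconfig> \
      create namespace <hosted-cluster-namespace>-<hosted-cluster-name>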
You can create a hosted cluster by using the hcp command-line interface.
- To place the KubeVirt worker VMs on the infrastructure cluster, use the --infra-kubeconfig-file and --infra-namespace arguments, as shown in the following example:

  $ hcp create cluster kubevirt \
      --name <hosted-cluster-name> \
      --node-pool-replicas <worker-count> \
      --pull-secret <path-to-pull-secret> \
      --memory <value-for-memory> \
      --cores <value-for-cpu> \
      --infra-namespace=<hosted-cluster-namespace>-<hosted-cluster-name> \
      --infra-kubeconfig-file=<path-to-external-infra-kubeconfig>

  - Specify the name of your hosted cluster, for example, my-hosted-cluster.
  - Specify the worker count, for example, 2.
  - Specify the path to your pull secret, for example, /user/name/pullsecret.
  - Specify a value for memory, for example, 6Gi.
  - Specify a value for CPU, for example, 2.
  - Specify the infrastructure namespace, for example, clusters-example.
  - Specify the path to your kubeconfig file for the infrastructure cluster, for example, /user/name/external-infra-kubeconfig.

  After you enter that command, the control plane pods are hosted on the management cluster that the HyperShift Operator runs on, and the KubeVirt VMs are hosted on a separate infrastructure cluster.
Creating a hosted cluster by using the console
To create a hosted cluster with the KubeVirt platform by using the console, complete the following steps.
- Open the OpenShift Container Platform web console and log in by entering your administrator credentials.
- In the console header, ensure that All Clusters is selected.
- Click Infrastructure > Clusters.
- Click Create cluster > Red Hat OpenShift Virtualization > Hosted.
- On the Create cluster page, follow the prompts to enter details about the cluster and node pools.

  Note
  - If you want to use predefined values to automatically populate fields in the console, you can create an OpenShift Virtualization credential. For more information, see "Creating a credential for an on-premises environment".
  - On the Cluster details page, the pull secret is your OpenShift Container Platform pull secret that you use to access OpenShift Container Platform resources. If you selected an OpenShift Virtualization credential, the pull secret is automatically populated.

- On the Node pools page, expand the Networking options section and configure the networking options for your node pool:
  - In the Additional networks field, enter a network name in the format of <namespace>/<name>; for example, my-namespace/network1. The namespace and the name must be valid DNS labels. Multiple networks are supported.
  - By default, the Attach default pod network checkbox is selected. You can clear this checkbox only if additional networks exist.
- Review your entries and click Create.

  The Hosted cluster view is displayed.

- Monitor the deployment of the hosted cluster in the Hosted cluster view. If you do not see information about the hosted cluster, ensure that All Clusters is selected, and click the cluster name.
- Wait until the control plane components are ready. This process can take a few minutes.
- To view the node pool status, scroll to the NodePool section. The process to install the nodes takes about 10 minutes. You can also click Nodes to confirm whether the nodes joined the hosted cluster.
Configuring the default ingress and DNS for hosted control planes on OpenShift Virtualization
Every OpenShift Container Platform cluster includes a default application Ingress Controller, which must have a wildcard DNS record associated with it. By default, hosted clusters that are created by using the HyperShift KubeVirt provider automatically become a subdomain of the OpenShift Container Platform cluster that the KubeVirt virtual machines run on.
For example, your OpenShift Container Platform cluster might have the following default ingress DNS entry:
*.apps.mgmt-cluster.example.com
As a result, a KubeVirt hosted cluster that is named guest and that runs on that underlying OpenShift Container Platform cluster has the following default ingress:
*.apps.guest.apps.mgmt-cluster.example.com
For the default ingress DNS to work properly, the cluster that hosts the KubeVirt virtual machines must allow wildcard DNS routes.
- You can configure this behavior by entering the following command:

  $ oc patch ingresscontroller -n openshift-ingress-operator default \
      --type=json \
      -p '[{"op": "add", "path": "/spec/routeAdmission", "value": {"wildcardPolicy": "WildcardsAllowed"}}]'
Note
When you use the default hosted cluster ingress, connectivity is limited to HTTPS traffic over port 443. Plain HTTP traffic over port 80 is rejected. This limitation applies to only the default ingress behavior.
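As a quick check of the default ingress, you can send an HTTPS request to a route on the hosted cluster wildcard domain. This sketch uses the example guest cluster name from this section; the -k flag skips certificate verification for a self-signed default certificate:

  $ curl -kI https://console-openshift-console.apps.guest.apps.mgmt-cluster.example.com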
Defining a custom DNS name
As a cluster administrator, you can create a hosted cluster with an external API DNS name that differs from the internal endpoint that gets used for node bootstraps and control plane communication. You might want to define a different DNS name for the following reasons:
- To replace the user-facing TLS certificate with one from a public CA without breaking the control plane functions that bind to the internal root CA.
- To support split-horizon DNS and NAT scenarios.
- To ensure a similar experience to standalone control planes, where you can use functions, such as the Show Login Command function, with the correct kubeconfig and DNS configuration.
You can define a DNS name either during your initial setup or during postinstallation operations, by entering a domain name in the kubeAPIServerDNSName parameter of a HostedCluster object.
- You have a valid TLS certificate that covers the DNS name that you set in the kubeAPIServerDNSName parameter.
- You have a resolvable DNS name URI that can reach and point to the correct address.
- In the specification for the HostedCluster object, add the kubeAPIServerDNSName parameter and the address for the domain and specify which certificate to use, as shown in the following example:

  # ...
  spec:
    configuration:
      apiServer:
        servingCerts:
          namedCertificates:
          - names:
            - xxx.example.com
            - yyy.example.com
            servingCertificate:
              name: <my_serving_certificate>
    kubeAPIServerDNSName: <custom_address>

  The value for the kubeAPIServerDNSName parameter must be a valid and addressable domain.
After you define the kubeAPIServerDNSName parameter and specify the certificate, the Control Plane Operator controllers create a kubeconfig file named custom-admin-kubeconfig, which is stored in the HostedControlPlane namespace. The certificates are generated from the root CA, and the HostedControlPlane namespace manages their expiration and renewal.
The Control Plane Operator reports a new kubeconfig file named CustomKubeconfig in the HostedControlPlane namespace. That file uses the defined new server in the kubeAPIServerDNSName parameter.
A reference for the custom kubeconfig file exists in the status parameter as CustomKubeconfig of the HostedCluster object. The CustomKubeConfig parameter is optional, and you can add the parameter only if the kubeAPIServerDNSName parameter is not empty. After you set the CustomKubeConfig parameter, the parameter triggers the generation of a secret named <hosted_cluster_name>-custom-admin-kubeconfig in the HostedCluster namespace. You can use the secret to access the HostedCluster API server. If you remove the CustomKubeConfig parameter during postinstallation operations, all related secrets and status references are deleted.
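For example, you can extract the kubeconfig from the generated secret and use it to reach the API server through the custom DNS name. This is a sketch; it assumes that the secret stores the kubeconfig under the kubeconfig key, in the same way as the standard admin kubeconfig secret:

  $ oc get secret <hosted_cluster_name>-custom-admin-kubeconfig \
      -n <hosted_cluster_namespace> -o jsonpath='{.data.kubeconfig}' \
      | base64 -d > custom-admin-kubeconfig

  $ oc --kubeconfig custom-admin-kubeconfig get nodes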
Note
Defining a custom DNS name does not directly impact the data plane, so no expected rollouts occur. The HostedControlPlane namespace receives the changes from the HyperShift Operator and deletes the corresponding parameters.
If you remove the kubeAPIServerDNSName parameter from the specification for the HostedCluster object, all newly generated secrets and the CustomKubeconfig reference are removed from the cluster and from the status parameter.
Customizing ingress and DNS behavior
If you do not want to use the default ingress and DNS behavior, you can configure a KubeVirt hosted cluster with a unique base domain at creation time. This option requires manual configuration steps during creation and involves three main steps: cluster creation, load balancer creation, and wildcard DNS configuration.
Deploying a hosted cluster that specifies the base domain
To create a hosted cluster that specifies a base domain, complete the following steps.
- Enter the following command:

  $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \
      --node-pool-replicas <worker_count> \
      --pull-secret <path_to_pull_secret> \
      --memory <value_for_memory> \
      --cores <value_for_cpu> \
      --base-domain <basedomain>

  - Specify the name of your hosted cluster.
  - Specify the worker count, for example, 2.
  - Specify the path to your pull secret, for example, /user/name/pullsecret.
  - Specify a value for memory, for example, 6Gi.
  - Specify a value for CPU, for example, 2.
  - Specify the base domain, for example, hypershift.lab.

  As a result, the hosted cluster has an ingress wildcard that is configured for the cluster name and the base domain, for example, .apps.example.hypershift.lab. The hosted cluster remains in Partial status because after you create a hosted cluster with a unique base domain, you must configure the required DNS records and load balancer.

- View the status of your hosted cluster by entering the following command:

  $ oc get --namespace clusters hostedclusters

  Example output
  NAME      VERSION   KUBECONFIG                 PROGRESS   AVAILABLE   PROGRESSING   MESSAGE
  example             example-admin-kubeconfig   Partial    True        False         The hosted control plane is available

- Access the cluster by entering the following commands:

  $ hcp create kubeconfig --name <hosted_cluster_name> \
      > <hosted_cluster_name>-kubeconfig

  $ oc --kubeconfig <hosted_cluster_name>-kubeconfig get co

  Example output
  NAME      VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
  console   <4.x.0>   False       False         False      30m     RouteHealthAvailable: failed to GET route (https://console-openshift-console.apps.example.hypershift.lab): Get "https://console-openshift-console.apps.example.hypershift.lab": dial tcp: lookup console-openshift-console.apps.example.hypershift.lab on 172.31.0.10:53: no such host
  ingress   <4.x.0>   True        False         True       28m     The "default" ingress controller reports Degraded=True: DegradedConditions: One or more other status conditions indicate a degraded state: CanaryChecksSucceeding=False (CanaryChecksRepetitiveFailures: Canary route checks for the default ingress controller are failing)

  Replace <4.x.0> with the supported OpenShift Container Platform version that you want to use.
To fix the errors in the output, complete the steps in "Setting up the load balancer" and "Setting up a wildcard DNS".
Note
If your hosted cluster is on bare metal, you might need MetalLB to set up load balancer services. For more information, see "Configuring MetalLB".
Setting up the load balancer
Set up the load balancer service that routes ingress traffic to the KubeVirt VMs and assigns a wildcard DNS entry to the load balancer IP address.
- A NodePort service that exposes the hosted cluster ingress already exists. You can export the node ports and create the load balancer service that targets those ports.
  - Get the HTTP node port by entering the following command:

    $ oc --kubeconfig <hosted_cluster_name>-kubeconfig get services \
        -n openshift-ingress router-nodeport-default \
        -o jsonpath='{.spec.ports[?(@.name=="http")].nodePort}'

    Note the HTTP node port value to use in the next step.

  - Get the HTTPS node port by entering the following command:

    $ oc --kubeconfig <hosted_cluster_name>-kubeconfig get services \
        -n openshift-ingress router-nodeport-default \
        -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}'

    Note the HTTPS node port value to use in the next step.

- Enter the following information in a YAML file:

  apiVersion: v1
  kind: Service
  metadata:
    labels:
      app: <hosted_cluster_name>
    name: <hosted_cluster_name>-apps
    namespace: clusters-<hosted_cluster_name>
  spec:
    ports:
    - name: https-443
      port: 443
      protocol: TCP
      targetPort: <https_node_port>
    - name: http-80
      port: 80
      protocol: TCP
      targetPort: <http_node_port>
    selector:
      kubevirt.io: virt-launcher
    type: LoadBalancer

  - Specify the HTTPS node port value that you noted in the previous step.
  - Specify the HTTP node port value that you noted in the previous step.

- Create the load balancer service by running the following command:

  $ oc create -f <file_name>.yaml
Setting up a wildcard DNS
Set up a wildcard DNS record or CNAME that references the external IP of the load balancer service.
- Get the external IP address by entering the following command:

  $ oc -n clusters-<hosted_cluster_name> get service <hosted_cluster_name>-apps \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'

  Example output
  192.168.20.30

- Configure a wildcard DNS entry that references the external IP address. View the following example DNS entry:

  *.apps.<hosted_cluster_name>.<base_domain>

  The DNS entry must be able to route inside and outside of the cluster.

  Example DNS resolution
  $ dig +short test.apps.example.hypershift.lab
  192.168.20.30
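  For example, in a BIND zone file, the wildcard record might look like the following. This is a sketch that uses the example values from this procedure; adapt the record to your own DNS server:

  *.apps.example.hypershift.lab.  IN  A  192.168.20.30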
- Check that the hosted cluster status has moved from Partial to Completed by entering the following command:

  $ oc get --namespace clusters hostedclusters

  Example output
  NAME      VERSION   KUBECONFIG                 PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
  example   <4.x.0>   example-admin-kubeconfig   Completed   True        False         The hosted control plane is available

  Replace <4.x.0> with the supported OpenShift Container Platform version that you want to use.
Configuring MetalLB
You must install the MetalLB Operator before you configure MetalLB.
Complete the following steps to configure MetalLB on your hosted cluster:
- Create a MetalLB resource by saving the following sample YAML content in the configure-metallb.yaml file:

  apiVersion: metallb.io/v1beta1
  kind: MetalLB
  metadata:
    name: metallb
    namespace: metallb-system

- Apply the YAML content by entering the following command:

  $ oc apply -f configure-metallb.yaml

  Example output
  metallb.metallb.io/metallb created

- Create an IPAddressPool resource by saving the following sample YAML content in the create-ip-address-pool.yaml file:

  apiVersion: metallb.io/v1beta1
  kind: IPAddressPool
  metadata:
    name: metallb
    namespace: metallb-system
  spec:
    addresses:
    - 192.168.216.32-192.168.216.122

  The addresses field creates an address pool with an available range of IP addresses within the node network. Replace the IP address range with an unused pool of available IP addresses in your network.

- Apply the YAML content by entering the following command:

  $ oc apply -f create-ip-address-pool.yaml

  Example output
  ipaddresspool.metallb.io/metallb created

- Create an L2Advertisement resource by saving the following sample YAML content in the l2advertisement.yaml file:

  apiVersion: metallb.io/v1beta1
  kind: L2Advertisement
  metadata:
    name: l2advertisement
    namespace: metallb-system
  spec:
    ipAddressPools:
    - metallb

- Apply the YAML content by entering the following command:

  $ oc apply -f l2advertisement.yaml

  Example output
  l2advertisement.metallb.io/metallb created
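To confirm that the resources were created and that the MetalLB pods are running, you can run the following commands. A quick sketch:

  $ oc get metallb,ipaddresspool,l2advertisement -n metallb-system
  $ oc get pods -n metallb-system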
Configuring additional networks, guaranteed CPUs, and VM scheduling for node pools
If you need to configure additional networks for node pools, request guaranteed CPU access for virtual machines (VMs), or manage scheduling of KubeVirt VMs, see the following procedures.
Adding multiple networks to a node pool
By default, nodes generated by a node pool are attached to the pod network. You can attach additional networks to the nodes by using Multus and NetworkAttachmentDefinitions.
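The additional networks must exist as NetworkAttachmentDefinition objects on the cluster that hosts the VMs. The following is a minimal sketch of a bridge-based definition; the names, bridge device, and IPAM settings are hypothetical and depend on your network setup:

  apiVersion: k8s.cni.cncf.io/v1
  kind: NetworkAttachmentDefinition
  metadata:
    name: network1
    namespace: my-namespace
  spec:
    # Hypothetical bridge CNI configuration; br1 must exist on the nodes
    # and DHCP IPAM requires a reachable DHCP service on that network.
    config: |-
      {
        "cniVersion": "0.3.1",
        "name": "network1",
        "type": "bridge",
        "bridge": "br1",
        "ipam": { "type": "dhcp" }
      }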
- To add multiple networks to nodes, use the --additional-network argument by running the following command:

  $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \
      --node-pool-replicas <worker_node_count> \
      --pull-secret <path_to_pull_secret> \
      --memory <memory> \
      --cores <cpu> \
      --additional-network name:<namespace/name> \
      --additional-network name:<namespace/name>

  - Specify the name of your hosted cluster, for example, my-hosted-cluster.
  - Specify your worker node count, for example, 2.
  - Specify the path to your pull secret, for example, /user/name/pullsecret.
  - Specify the memory value, for example, 8Gi.
  - Specify the CPU value, for example, 2.
  - Set the value of the --additional-network argument to name:<namespace/name>. Replace <namespace/name> with the namespace and name of your NetworkAttachmentDefinitions.
Using an additional network as default
You can add your additional network as a default network for the nodes by disabling the default pod network.
- To add an additional network as default to your nodes, run the following command:

  $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \
      --node-pool-replicas <worker_node_count> \
      --pull-secret <path_to_pull_secret> \
      --memory <memory> \
      --cores <cpu> \
      --attach-default-network false \
      --additional-network name:<namespace>/<network_name>

  - Specify the name of your hosted cluster, for example, my-hosted-cluster.
  - Specify your worker node count, for example, 2.
  - Specify the path to your pull secret, for example, /user/name/pullsecret.
  - Specify the memory value, for example, 8Gi.
  - Specify the CPU value, for example, 2.
  - The --attach-default-network false argument disables the default pod network.
  - Specify the additional network that you want to add to your nodes, for example, name:my-namespace/my-network.
Requesting guaranteed CPU resources
By default, KubeVirt VMs might share their CPUs with other workloads on a node. Sharing CPUs might impact the performance of a VM. To avoid the performance impact, you can request guaranteed CPU access for VMs.
- To request guaranteed CPU resources, set the --qos-class argument to Guaranteed by running the following command:

  $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \
      --node-pool-replicas <worker_node_count> \
      --pull-secret <path_to_pull_secret> \
      --memory <memory> \
      --cores <cpu> \
      --qos-class Guaranteed

  - Specify the name of your hosted cluster, for example, my-hosted-cluster.
  - Specify your worker node count, for example, 2.
  - Specify the path to your pull secret, for example, /user/name/pullsecret.
  - Specify the memory value, for example, 8Gi.
  - Specify the CPU value, for example, 2.
  - The --qos-class Guaranteed argument guarantees that the specified number of CPU resources are assigned to VMs.
Scheduling KubeVirt VMs on a set of nodes
By default, KubeVirt VMs created by a node pool are scheduled to any available nodes. You can schedule KubeVirt VMs on a specific set of nodes that have enough capacity to run the VMs.
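The selector matches labels on the nodes of the cluster that hosts the VMs, so you might first label the target nodes. The label key and value in this sketch are hypothetical; you then pass the same key-value pair to the --vm-node-selector argument in the following procedure:

  $ oc label node <node_name> kubevirt-workloads=true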
- To schedule KubeVirt VMs within a node pool on a specific set of nodes, use the --vm-node-selector argument by running the following command:

  $ hcp create cluster kubevirt \
      --name <hosted_cluster_name> \
      --node-pool-replicas <worker_node_count> \
      --pull-secret <path_to_pull_secret> \
      --memory <memory> \
      --cores <cpu> \
      --vm-node-selector <label_key>=<label_value>,<label_key>=<label_value>

  - Specify the name of your hosted cluster, for example, my-hosted-cluster.
  - Specify your worker node count, for example, 2.
  - Specify the path to your pull secret, for example, /user/name/pullsecret.
  - Specify the memory value, for example, 8Gi.
  - Specify the CPU value, for example, 2.
  - The --vm-node-selector flag defines a specific set of nodes that contains the key-value pairs. Replace <label_key> with the keys of your labels and replace <label_value> with the values of your labels.
Scaling a node pool
You can manually scale a node pool by using the oc scale command.
- Run the following commands:

  $ NODEPOOL_NAME=${CLUSTER_NAME}-work
  $ NODEPOOL_REPLICAS=5

  $ oc scale nodepool/$NODEPOOL_NAME --namespace clusters \
      --replicas=$NODEPOOL_REPLICAS

- After a few moments, enter the following command to see the status of the node pool:

  $ oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes

  Example output
  NAME            STATUS   ROLES    AGE     VERSION
  example-9jvnf   Ready    worker   97s     v1.27.4+18eadca
  example-n6prw   Ready    worker   116m    v1.27.4+18eadca
  example-nc6g4   Ready    worker   117m    v1.27.4+18eadca
  example-thp29   Ready    worker   4m17s   v1.27.4+18eadca
  example-twxns   Ready    worker   88s     v1.27.4+18eadca
Adding node pools
You can create node pools for a hosted cluster by specifying a name, number of replicas, and any additional information, such as memory and CPU requirements.
- If the management cluster has a cluster-wide proxy configured, you must configure proxy settings in the HostedCluster resource by completing the following steps:
  - Edit the HostedCluster resource by entering the following command:

    $ oc edit hc <hosted_cluster_name> -n <hosted_cluster_namespace>

  - In the HostedCluster resource, add the proxy configuration as shown in the following example:

    apiVersion: hypershift.openshift.io/v1beta1
    kind: HostedCluster
    metadata:
      annotations:
        # ...
        hypershift.openshift.io/HasBeenAvailable: "true"
        hypershift.openshift.io/management-platform: VSphere
      # ...
      name: <hosted_cluster_name>
      namespace: <hosted_cluster_namespace>
    # ...
    spec:
      # ...
      clusterID: fa45babd-40f3-4085-9b30-8bc3b7df1557
      configuration:
        proxy:
          httpProxy: http://web-proxy.example.com:3128
          httpsProxy: http://web-proxy.example.com:3128
          noProxy: .example.com,192.168.10.0/24

    In the spec.configuration.proxy fields, specify the details of the proxy configuration.

  - Check the status on the management cluster by entering the following command:

    $ oc get nodepool -n <hosted_cluster_namespace>

  - Check the status on the hosted cluster by entering the following command:

    $ oc --kubeconfig <hosted_cluster_name>-kubeconfig get nodes

- To create a node pool, enter the following information. In this example, the node pool has more CPUs assigned to the VMs:

  $ export NODEPOOL_NAME=${CLUSTER_NAME}-extra-cpu
  $ export WORKER_COUNT="2"
  $ export MEM="6Gi"
  $ export CPU="4"
  $ export DISK="16"

  $ hcp create nodepool kubevirt \
      --cluster-name $CLUSTER_NAME \
      --name $NODEPOOL_NAME \
      --node-count $WORKER_COUNT \
      --memory $MEM \
      --cores $CPU \
      --root-volume-size $DISK

- Check the status of the node pool by listing nodepool resources in the namespace:

  $ oc get nodepools --namespace <hosted_cluster_namespace>

  Example output
  NAME                CLUSTER   DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
  example             example   5               5               False         False        <4.x.0>
  example-extra-cpu   example   2                               False         False                  True              True             Minimum availability requires 2 replicas, current 0 available

  Replace <4.x.0> with the supported OpenShift Container Platform version that you want to use.

- After some time, you can check the status of the node pool by entering the following command:

  $ oc --kubeconfig $CLUSTER_NAME-kubeconfig get nodes

  Example output
  NAME                      STATUS   ROLES    AGE     VERSION
  example-9jvnf             Ready    worker   97s     v1.27.4+18eadca
  example-n6prw             Ready    worker   116m    v1.27.4+18eadca
  example-nc6g4             Ready    worker   117m    v1.27.4+18eadca
  example-thp29             Ready    worker   4m17s   v1.27.4+18eadca
  example-twxns             Ready    worker   88s     v1.27.4+18eadca
  example-extra-cpu-zh9l5   Ready    worker   2m6s    v1.27.4+18eadca
  example-extra-cpu-zr8mj   Ready    worker   102s    v1.27.4+18eadca

- Verify that the node pool is in the status that you expect by entering this command:

  $ oc get nodepools --namespace <hosted_cluster_namespace>

  Example output
  NAME                CLUSTER   DESIRED NODES   CURRENT NODES   AUTOSCALING   AUTOREPAIR   VERSION   UPDATINGVERSION   UPDATINGCONFIG   MESSAGE
  example             example   5               5               False         False        <4.x.0>
  example-extra-cpu   example   2               2               False         False        <4.x.0>

  Replace <4.x.0> with the supported OpenShift Container Platform version that you want to use.
Verifying hosted cluster creation on OpenShift Virtualization
To verify that your hosted cluster was successfully created, complete the following steps.
- Verify that the HostedCluster resource transitioned to the completed state by entering the following command:

  $ oc get --namespace clusters hostedclusters <hosted_cluster_name>

  Example output
  NAMESPACE   NAME      VERSION   KUBECONFIG                 PROGRESS    AVAILABLE   PROGRESSING   MESSAGE
  clusters    example   4.12.2    example-admin-kubeconfig   Completed   True        False         The hosted control plane is available

- Verify that all the cluster operators in the hosted cluster are online by entering the following commands:

  $ hcp create kubeconfig --name <hosted_cluster_name> \
      > <hosted_cluster_name>-kubeconfig

  $ oc get co --kubeconfig=<hosted_cluster_name>-kubeconfig

  Example output
  NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
  console                                    4.12.2    True        False         False      2m38s
  csi-snapshot-controller                    4.12.2    True        False         False      4m3s
  dns                                        4.12.2    True        False         False      2m52s
  image-registry                             4.12.2    True        False         False      2m8s
  ingress                                    4.12.2    True        False         False      22m
  kube-apiserver                             4.12.2    True        False         False      23m
  kube-controller-manager                    4.12.2    True        False         False      23m
  kube-scheduler                             4.12.2    True        False         False      23m
  kube-storage-version-migrator              4.12.2    True        False         False      4m52s
  monitoring                                 4.12.2    True        False         False      69s
  network                                    4.12.2    True        False         False      4m3s
  node-tuning                                4.12.2    True        False         False      2m22s
  openshift-apiserver                        4.12.2    True        False         False      23m
  openshift-controller-manager               4.12.2    True        False         False      23m
  openshift-samples                          4.12.2    True        False         False      2m15s
  operator-lifecycle-manager                 4.12.2    True        False         False      22m
  operator-lifecycle-manager-catalog         4.12.2    True        False         False      23m
  operator-lifecycle-manager-packageserver   4.12.2    True        False         False      23m
  service-ca                                 4.12.2    True        False         False      4m41s
  storage                                    4.12.2    True        False         False      4m43s
Configuring a custom API server certificate in a hosted cluster
To configure a custom certificate for the API server, specify the certificate details in the spec.configuration.apiServer section of your HostedCluster configuration.
You can configure a custom certificate during either day-1 or day-2 operations. However, because the service publishing strategy is immutable after you set it during hosted cluster creation, you must know what the hostname is for the Kubernetes API server that you plan to configure.
- You created a Kubernetes secret that contains your custom certificate in the management cluster. The secret contains the following keys:
  - tls.crt: The certificate
  - tls.key: The private key
- If your HostedCluster configuration includes a service publishing strategy that uses a load balancer, ensure that the Subject Alternative Names (SANs) of the certificate do not conflict with the internal API endpoint (api-int). The internal API endpoint is automatically created and managed by your platform. If you use the same hostname in both the custom certificate and the internal API endpoint, routing conflicts can occur. The only exception to this rule is when you use AWS as the provider with either Private or PublicAndPrivate configurations. In those cases, the SAN conflict is managed by the platform.
- The certificate must be valid for the external API endpoint.
- The validity period of the certificate aligns with your cluster's expected life cycle.
- Create a secret with your custom certificate by entering the following command:

  $ oc create secret tls sample-hosted-kas-custom-cert \
      --cert=path/to/cert.crt \
      --key=path/to/key.key \
      -n <hosted_cluster_namespace>

- Update your HostedCluster configuration with the custom certificate details, as shown in the following example:

  spec:
    configuration:
      apiServer:
        servingCerts:
          namedCertificates:
          - names:
            - api-custom-cert-sample-hosted.sample-hosted.example.com
            servingCertificate:
              name: sample-hosted-kas-custom-cert

  The names field lists the DNS names that the certificate is valid for. The servingCertificate.name field specifies the name of the secret that contains the custom certificate.

- Apply the changes to your HostedCluster configuration by entering the following command:

  $ oc apply -f <hosted_cluster_config>.yaml
- Check the API server pods to ensure that the new certificate is mounted.
- Test the connection to the API server by using the custom domain name.
- Verify the certificate details in your browser or by using tools such as openssl.
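For example, the following openssl command prints the subject and validity dates of the certificate that the API server presents. This is a sketch that uses the example hostname from this procedure and the default API server port of 6443:

  $ openssl s_client -connect api-custom-cert-sample-hosted.sample-hosted.example.com:6443 \
      -servername api-custom-cert-sample-hosted.sample-hosted.example.com \
      </dev/null 2>/dev/null | openssl x509 -noout -subject -dates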