Creating a compute machine set on Google Cloud
You can create a different compute machine set to serve a specific purpose in your OpenShift Container Platform cluster on Google Cloud. For example, you might create infrastructure machine sets and related machines so that you can move supporting workloads to the new machines.
Important
You can use the advanced machine management and scaling capabilities only in clusters where the Machine API is operational. Clusters with user-provisioned infrastructure require additional validation and configuration to use the Machine API.
Clusters with the infrastructure platform type none cannot use the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that supports the feature. This parameter cannot be changed after installation.
To view the platform type for your cluster, run the following command:
$ oc get infrastructure cluster -o jsonpath='{.status.platform}'
Sample YAML for a compute machine set custom resource on Google Cloud
This sample YAML defines a compute machine set that runs in Google Cloud and creates nodes that are labeled with node-role.kubernetes.io/<role>: "", where <role> is the node label to add.
Values obtained by using the OpenShift CLI
In the following example, you can obtain some of the values for your cluster by using the OpenShift CLI.
- Infrastructure ID
- The <infrastructure_id> string is the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster. If you have the OpenShift CLI installed, you can obtain the infrastructure ID by running the following command:
  $ oc get -o jsonpath='{.status.infrastructureName}{"\n"}' infrastructure cluster
- Image path
- The <path_to_image> string is the path to the image that was used to create the disk. If you have the OpenShift CLI installed, you can obtain the path to the image by running the following command:
  $ oc -n openshift-machine-api \
      -o jsonpath='{.spec.template.spec.providerSpec.value.disks[0].image}{"\n"}' \
      get machineset/<infrastructure_id>-worker-a
MachineSet values
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  labels:
    machine.openshift.io/cluster-api-cluster: <infrastructure_id>
  name: <infrastructure_id>-w-a
  namespace: openshift-machine-api
spec:
  replicas: 1
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
      machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
  template:
    metadata:
      creationTimestamp: null
      labels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machine-role: <role>
        machine.openshift.io/cluster-api-machine-type: <role>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-w-a
    spec:
      metadata:
        labels:
          node-role.kubernetes.io/<role>: ""
      providerSpec:
        value:
          apiVersion: machine.openshift.io/v1beta1
          canIPForward: false
          credentialsSecret:
            name: gcp-cloud-credentials
          deletionProtection: false
          disks:
          - autoDelete: true
            boot: true
            image: <path_to_image>
            labels: null
            sizeGb: 128
            type: pd-ssd
          gcpMetadata:
          - key: <custom_metadata_key>
            value: <custom_metadata_value>
          kind: GCPMachineProviderSpec
          machineType: n1-standard-4
          metadata:
            creationTimestamp: null
          networkInterfaces:
          - network: <infrastructure_id>-network
            subnetwork: <infrastructure_id>-worker-subnet
          projectID: <project_name>
          region: us-central1
          serviceAccounts:
          - email: <infrastructure_id>-w@<project_name>.iam.gserviceaccount.com
            scopes:
            - https://www.googleapis.com/auth/cloud-platform
          tags:
          - <infrastructure_id>-worker
          userDataSecret:
            name: worker-user-data
          zone: us-central1-a
- For <infrastructure_id>, specify the infrastructure ID that is based on the cluster ID that you set when you provisioned the cluster.
- For <node>, specify the node label to add.
- Specify the path to the image that is used in current compute machine sets. To use a Google Cloud Marketplace image, specify the offer to use:
  - OpenShift Container Platform: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-ocp-413-x86-64-202305021736
  - OpenShift Platform Plus: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-opp-413-x86-64-202305021736
  - OpenShift Kubernetes Engine: https://www.googleapis.com/compute/v1/projects/redhat-marketplace-public/global/images/redhat-coreos-oke-413-x86-64-202305021736
- Optional: Specify custom metadata in the form of a key:value pair. For example use cases, see the Google Cloud documentation for setting custom metadata.
- For <project_name>, specify the name of the Google Cloud project that you use for your cluster.
- Specifies a single service account. Multiple service accounts are not supported.
Creating a compute machine set
In addition to the compute machine sets created by the installation program, you can create your own to dynamically manage the machine compute resources for specific workloads of your choice.
- Deploy an OpenShift Container Platform cluster.
- Install the OpenShift CLI (oc).
- Log in to oc as a user with cluster-admin permission.
- Create a new YAML file that contains the compute machine set custom resource (CR) sample and is named <file_name>.yaml. Ensure that you set the <clusterID> and <role> parameter values.
- Optional: If you are not sure which value to set for a specific field, you can check an existing compute machine set from your cluster.
- To list the compute machine sets in your cluster, run the following command:
  $ oc get machinesets -n openshift-machine-api
  Example output
  NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
  agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
  agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
  agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
  agl030519-vplxk-worker-us-east-1d   0         0                             55m
  agl030519-vplxk-worker-us-east-1e   0         0                             55m
  agl030519-vplxk-worker-us-east-1f   0         0                             55m
- To view values of a specific compute machine set custom resource (CR), run the following command:
  $ oc get machineset <machineset_name> \
      -n openshift-machine-api -o yaml
  Example output
  apiVersion: machine.openshift.io/v1beta1
  kind: MachineSet
  metadata:
    labels:
      machine.openshift.io/cluster-api-cluster: <infrastructure_id>
    name: <infrastructure_id>-<role>
    namespace: openshift-machine-api
  spec:
    replicas: 1
    selector:
      matchLabels:
        machine.openshift.io/cluster-api-cluster: <infrastructure_id>
        machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
    template:
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: <infrastructure_id>
          machine.openshift.io/cluster-api-machine-role: <role>
          machine.openshift.io/cluster-api-machine-type: <role>
          machine.openshift.io/cluster-api-machineset: <infrastructure_id>-<role>
      spec:
        providerSpec:
          ...
  - The cluster infrastructure ID.
  - A default node label.
    Note
    For clusters that have user-provisioned infrastructure, a compute machine set can only create worker and infra type machines.
  - The values in the <providerSpec> section of the compute machine set CR are platform-specific. For more information about <providerSpec> parameters in the CR, see the sample compute machine set CR configuration for your provider.
- Create a MachineSet CR by running the following command:
  $ oc create -f <file_name>.yaml
- View the list of compute machine sets by running the following command:
  $ oc get machineset -n openshift-machine-api
  Example output
  NAME                                DESIRED   CURRENT   READY   AVAILABLE   AGE
  agl030519-vplxk-infra-us-east-1a    1         1         1       1           11m
  agl030519-vplxk-worker-us-east-1a   1         1         1       1           55m
  agl030519-vplxk-worker-us-east-1b   1         1         1       1           55m
  agl030519-vplxk-worker-us-east-1c   1         1         1       1           55m
  agl030519-vplxk-worker-us-east-1d   0         0                             55m
  agl030519-vplxk-worker-us-east-1e   0         0                             55m
  agl030519-vplxk-worker-us-east-1f   0         0                             55m
  When the new compute machine set is available, the DESIRED and CURRENT values match. If the compute machine set is not available, wait a few minutes and run the command again.
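After a compute machine set exists, you control how many machines it manages through its replica count. A minimal sketch follows; the machine set name is a placeholder for one of your own:

```yaml
# Sketch: scale an existing compute machine set by editing its replica count.
# The name below is a placeholder; use the name of your machine set.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: <infrastructure_id>-w-a
  namespace: openshift-machine-api
spec:
  replicas: 2   # increase from 1 to add one more machine
```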
Labeling GPU machine sets for the cluster autoscaler
You can use a machine set label to indicate which machines the cluster autoscaler can use to deploy GPU-enabled nodes.
- Your cluster uses a cluster autoscaler.
- On the machine set that you want to create machines for the cluster autoscaler to use to deploy GPU-enabled nodes, add a cluster-api/accelerator label:
  apiVersion: machine.openshift.io/v1beta1
  kind: MachineSet
  metadata:
    name: machine-set-name
  spec:
    template:
      spec:
        metadata:
          labels:
            cluster-api/accelerator: <accelerator_name>
  where:
  - <accelerator_name>
  - Specifies a label of your choice that consists of alphanumeric characters, -, _, or ., and starts and ends with an alphanumeric character. For example, you might use nvidia-t4 to represent Nvidia T4 GPUs, or nvidia-a10g for A10G GPUs.
    Note
    You must specify the value of this label for the spec.resourceLimits.gpus.type parameter in your ClusterAutoscaler CR. For more information, see "Cluster autoscaler resource definition".
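The matching ClusterAutoscaler resource limit might look like the following sketch; the nvidia-t4 type and the min/max counts are illustrative assumptions, not defaults:

```yaml
# Sketch: ClusterAutoscaler GPU resource limit keyed to the machine set label.
# The type value must match the cluster-api/accelerator label on the machine set.
apiVersion: autoscaling.openshift.io/v1
kind: ClusterAutoscaler
metadata:
  name: default
spec:
  resourceLimits:
    gpus:
    - type: nvidia-t4   # must match the cluster-api/accelerator label value
      min: 0            # illustrative lower bound
      max: 4            # illustrative upper bound
```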
Configuring persistent disk types by using machine sets
You can configure the type of persistent disk that a machine set deploys machines on by editing the machine set YAML file.
For more information about persistent disk types, compatibility, regional availability, and limitations, see the Google Cloud Compute Engine documentation about persistent disks.
- In a text editor, open the YAML file for an existing machine set or create a new one.
- Edit the following line under the providerSpec field:
  apiVersion: machine.openshift.io/v1beta1
  kind: MachineSet
  ...
  spec:
    template:
      spec:
        providerSpec:
          value:
            disks:
            - type: <pd-disk-type>
  - Specify the persistent disk type. Valid values are pd-ssd, pd-standard, and pd-balanced. The default value is pd-standard.
- Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Type field matches the configured disk type.
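In context, a complete disks entry that selects a balanced persistent disk might look like the following sketch; the size and image path are placeholders:

```yaml
# Sketch: a full disks entry using a balanced persistent disk.
# The size and image path are placeholders, not recommended values.
providerSpec:
  value:
    disks:
    - autoDelete: true
      boot: true
      image: <path_to_image>
      sizeGb: 128
      type: pd-balanced
```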
Configuring Confidential VM by using machine sets
By editing the machine set YAML file, you can configure the Confidential VM options that a machine set uses for machines that it deploys.
For more information about Confidential VM features, functions, and compatibility, see the Google Cloud Compute Engine documentation about Confidential VM.
Note
Confidential VMs are currently not supported on 64-bit ARM architectures. If you use Confidential VM, you must ensure that you select a supported region. For details on supported regions and configurations, see the Google Cloud Compute Engine documentation about supported zones.
- In a text editor, open the YAML file for an existing machine set or create a new one.
- Edit the following section under the providerSpec field:
  apiVersion: machine.openshift.io/v1beta1
  kind: MachineSet
  # ...
  spec:
    template:
      spec:
        providerSpec:
          value:
            confidentialCompute: Enabled
            onHostMaintenance: Terminate
            machineType: n2d-standard-8
  # ...
  - Specify whether Confidential VM is enabled. The following values are valid:
    - Enabled: Enables Confidential VM with a default selection of Confidential VM technology. The default selection is AMD Secure Encrypted Virtualization (AMD SEV).
      Important
      The Enabled value selects Confidential Computing with AMD Secure Encrypted Virtualization (AMD SEV), which is deprecated.
    - Disabled: Disables Confidential VM.
    - AMDEncryptedVirtualizationNestedPaging: Enables Confidential VM using AMD Secure Encrypted Virtualization Secure Nested Paging (AMD SEV-SNP). AMD SEV-SNP supports n2d machines.
    - AMDEncryptedVirtualization: Enables Confidential VM using AMD SEV. AMD SEV supports c2d, n2d, and c3d machines.
      Important
      The use of Confidential Computing with AMD Secure Encrypted Virtualization (AMD SEV) has been deprecated and will be removed in a future release.
    - IntelTrustedDomainExtensions: Enables Confidential VM using Intel Trusted Domain Extensions (Intel TDX). Intel TDX supports n2d machines.
  - Specify the behavior of the VM during a host maintenance event, such as a hardware or software update. For a machine that uses Confidential VM, this value must be set to Terminate, which stops the VM. Confidential VM does not support live VM migration.
  - Specify a machine type that supports the Confidential VM option that you specified in the confidentialCompute field.
- On the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Confidential VM options match the values that you configured.
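Because the default Enabled selection uses the deprecated AMD SEV technology, you might prefer to request SEV-SNP explicitly. A sketch follows; the machine type is illustrative and must come from a series that supports SEV-SNP:

```yaml
# Sketch: enabling AMD SEV-SNP instead of the deprecated default selection.
# The machine type is illustrative; SEV-SNP requires the n2d machine series.
providerSpec:
  value:
    confidentialCompute: AMDEncryptedVirtualizationNestedPaging
    onHostMaintenance: Terminate   # required; Confidential VM cannot live-migrate
    machineType: n2d-standard-4
```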
Machine sets that deploy machines as Spot VMs
You can save on costs by creating a compute machine set running on Google Cloud that deploys machines as non-guaranteed Spot VMs. Spot VMs use excess Compute Engine capacity and are less expensive than normal instances. You can use Spot VMs for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.
Note
Google Cloud recommends using Spot VMs over preemptible VMs because Spot VMs include new features that preemptible VMs do not support.
Google Cloud Compute Engine can terminate a Spot VM at any time.
Compute Engine sends a best-effort preemption notice to the user indicating that an interruption will occur after 30 seconds.
OpenShift Container Platform begins to remove the workloads from the affected instances when Compute Engine issues the preemption notice.
An ACPI G3 Mechanical Off signal is sent to the operating system after 30 seconds if the instance is not stopped.
The Spot VM is then transitioned to a TERMINATED state by Compute Engine.
Interruptions can occur when using Spot VMs for the following reasons:
-
There is a system or maintenance event
-
The supply of Spot VMs decreases
When Google Cloud terminates an instance, a termination handler running on the Spot VM node deletes the machine resource.
To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a Spot VM.
Creating Spot VMs by using compute machine sets
You can save on costs by creating a compute machine set that deploys machines as non-guaranteed instances.
To launch a Spot VM on Google Cloud, you add provisioningModel: "Spot" to your compute machine set YAML file.
- Add the following line under the providerSpec field:
  providerSpec:
    value:
      provisioningModel: "Spot"
  If you specify provisioningModel: "Spot", the machine is labeled as an interruptible-instance after the instance is launched.
  Note
  This parameter is not compatible with setting the providerSpec.value.preemptible value to true.
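Because Spot VMs can be reclaimed at any time, you might also taint the machines so that only workloads that tolerate interruption are scheduled on them. The following sketch combines both settings; the taint key is a hypothetical choice, not a product default:

```yaml
# Sketch: Spot provisioning combined with a taint so that only tolerant
# workloads land on these machines. The taint key is a hypothetical example.
spec:
  template:
    spec:
      providerSpec:
        value:
          provisioningModel: "Spot"
      taints:
      - key: spot-instance
        value: "true"
        effect: NoSchedule
```

Workloads that should run on these machines would then declare a matching toleration in their pod spec.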
Machine sets that deploy machines as preemptible VM instances
You can save on costs by creating a compute machine set running on Google Cloud that deploys machines as non-guaranteed preemptible VM instances. Preemptible VM instances use excess Compute Engine capacity and are less expensive than normal instances. You can use preemptible VM instances for workloads that can tolerate interruptions, such as batch or stateless, horizontally scalable workloads.
Note
Google Cloud recommends using Spot VMs over preemptible VMs because Spot VMs include new features that preemptible VMs do not support.
Google Cloud Compute Engine can terminate a preemptible VM instance at any time. Compute Engine sends a preemption notice to the user indicating that an interruption will occur after 30 seconds. OpenShift Container Platform begins to remove the workloads from the affected instances when Compute Engine issues the preemption notice. An ACPI G3 Mechanical Off signal is sent to the operating system after 30 seconds if the instance is not stopped. The preemptible VM instance is then transitioned to a TERMINATED state by Compute Engine.
Interruptions can occur when using preemptible VM instances for the following reasons:
-
There is a system or maintenance event
-
The supply of preemptible VM instances decreases
-
The instance reaches the end of the allotted 24-hour period for preemptible VM instances
When Google Cloud terminates an instance, a termination handler running on the preemptible VM instance node deletes the machine resource. To satisfy the compute machine set replicas quantity, the compute machine set creates a machine that requests a preemptible VM instance.
Creating preemptible VM instances by using compute machine sets
You can save on costs by creating a compute machine set that deploys machines as non-guaranteed instances.
To launch a preemptible VM instance on Google Cloud, you add preemptible to your compute machine set YAML file.
Note
Google Cloud recommends using Spot VMs over preemptible VMs because Spot VMs include new features that preemptible VMs do not support.
- Add the following line under the providerSpec field:
  providerSpec:
    value:
      preemptible: true
  If preemptible is set to true, the machine is labeled as an interruptible-instance after the instance is launched.
  Note
  This parameter is not compatible with setting the providerSpec.value.provisioningModel value to "Spot".
Configuring Shielded VM options by using machine sets
By editing the machine set YAML file, you can configure the Shielded VM options that a machine set uses for machines that it deploys.
For more information about Shielded VM features and functionality, see the Google Cloud Compute Engine documentation about Shielded VM.
- In a text editor, open the YAML file for an existing machine set or create a new one.
- Edit the following section under the providerSpec field:
  apiVersion: machine.openshift.io/v1beta1
  kind: MachineSet
  # ...
  spec:
    template:
      spec:
        providerSpec:
          value:
            shieldedInstanceConfig:
              integrityMonitoring: Enabled
              secureBoot: Disabled
              virtualizedTrustedPlatformModule: Enabled
  # ...
  - In this section, specify any Shielded VM options that you want.
  - Specify whether integrity monitoring is enabled. Valid values are Disabled or Enabled.
    Note
    When integrity monitoring is enabled, you must not disable virtual trusted platform module (vTPM).
  - Specify whether UEFI Secure Boot is enabled. Valid values are Disabled or Enabled.
  - Specify whether vTPM is enabled. Valid values are Disabled or Enabled.
- Using the Google Cloud console, review the details for a machine deployed by the machine set and verify that the Shielded VM options match the values that you configured.
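For instance, to turn on all three Shielded VM features, including UEFI Secure Boot, a fragment might look like the following sketch:

```yaml
# Sketch: all three Shielded VM features enabled, including UEFI Secure Boot.
# Verify that your boot image supports Secure Boot before enabling it.
providerSpec:
  value:
    shieldedInstanceConfig:
      integrityMonitoring: Enabled
      secureBoot: Enabled
      virtualizedTrustedPlatformModule: Enabled
```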
Enabling customer-managed encryption keys for a machine set
Google Cloud Compute Engine allows users to supply an encryption key to encrypt data on disks at rest. The key is used to encrypt the data encryption key, not to encrypt the customer’s data. By default, Compute Engine encrypts this data by using Compute Engine keys.
You can enable encryption with a customer-managed key in clusters that use the Machine API. You must first create a KMS key and assign the correct permissions to a service account. The KMS key name, key ring name, and location are required to allow a service account to use your key.
Note
If you do not want to use a dedicated service account for the KMS encryption, the Compute Engine default service account is used instead. You must grant the default service account permission to access the keys if you do not use a dedicated service account. The Compute Engine default service account name follows the service-<project_number>@compute-system.iam.gserviceaccount.com pattern.
- To allow a specific service account to use your KMS key and to grant the service account the correct IAM role, run the following command with your KMS key name, key ring name, and location:
  $ gcloud kms keys add-iam-policy-binding <key_name> \
      --keyring <key_ring_name> \
      --location <key_ring_location> \
      --member "serviceAccount:service-<project_number>@compute-system.iam.gserviceaccount.com" \
      --role roles/cloudkms.cryptoKeyEncrypterDecrypter
- Configure the encryption key under the providerSpec field in your machine set YAML file. For example:
  apiVersion: machine.openshift.io/v1beta1
  kind: MachineSet
  ...
  spec:
    template:
      spec:
        providerSpec:
          value:
            disks:
            - type:
              encryptionKey:
                kmsKey:
                  name: machine-encryption-key
                  keyRing: openshift-encryption-ring
                  location: global
                  projectID: openshift-gcp-project
                kmsKeyServiceAccount: openshift-service-account@openshift-gcp-project.iam.gserviceaccount.com
  - The name of the customer-managed encryption key that is used for the disk encryption.
  - The name of the KMS key ring that the KMS key belongs to.
  - The Google Cloud location in which the KMS key ring exists.
  - Optional: The ID of the project in which the KMS key ring exists. If a project ID is not set, the projectID in which the machine set was created is used.
  - Optional: The service account that is used for the encryption request for the given KMS key. If a service account is not set, the Compute Engine default service account is used.
  When a new machine is created by using the updated providerSpec object configuration, the disk encryption key is encrypted with the KMS key.
Enabling GPU support for a compute machine set
Google Cloud Compute Engine enables users to add GPUs to VM instances. Workloads that benefit from access to GPU resources can perform better on compute machines with this feature enabled. OpenShift Container Platform on Google Cloud supports NVIDIA GPU models in the A2 and N1 machine series.
| Model name | GPU type | Machine types [1] |
|---|---|---|
| NVIDIA A100 | nvidia-tesla-a100 | a2 |
| NVIDIA K80 | nvidia-tesla-k80 | n1 |
| NVIDIA P100 | nvidia-tesla-p100 | n1 |
| NVIDIA P4 | nvidia-tesla-p4 | n1 |
| NVIDIA T4 | nvidia-tesla-t4 | n1 |
| NVIDIA V100 | nvidia-tesla-v100 | n1 |
[1] For more information about machine types, including specifications, compatibility, regional availability, and limitations, see the Google Cloud Compute Engine documentation about N1 machine series, A2 machine series, and GPU regions and zones availability.
You can define which supported GPU to use for an instance by using the Machine API.
You can configure machines in the N1 machine series to deploy with one of the supported GPU types. Machines in the A2 machine series come with associated GPUs, and cannot use guest accelerators.
Note
GPUs for graphics workloads are not supported.
- In a text editor, open the YAML file for an existing compute machine set or create a new one.
- Specify a GPU configuration under the providerSpec field in your compute machine set YAML file. See the following examples of valid configurations:
  Example configuration for the A2 machine series
  providerSpec:
    value:
      machineType: a2-highgpu-1g
      onHostMaintenance: Terminate
      restartPolicy: Always
  - Specify the machine type. Ensure that the machine type is included in the A2 machine series.
  - When using GPU support, you must set onHostMaintenance to Terminate.
  - Specify the restart policy for machines deployed by the compute machine set. Allowed values are Always or Never.
  Example configuration for the N1 machine series
  providerSpec:
    value:
      gpus:
      - count: 1
        type: nvidia-tesla-p100
      machineType: n1-standard-1
      onHostMaintenance: Terminate
      restartPolicy: Always
  - Specify the number of GPUs to attach to the machine.
  - Specify the type of GPUs to attach to the machine. Ensure that the machine type and GPU type are compatible.
  - Specify the machine type. Ensure that the machine type and GPU type are compatible.
  - When using GPU support, you must set onHostMaintenance to Terminate.
  - Specify the restart policy for machines deployed by the compute machine set. Allowed values are Always or Never.
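As another N1-series variation, a T4 accelerator might be attached as in the following sketch; the GPU type, count, and machine type are illustrative, so verify that the combination is available in your zone:

```yaml
# Sketch: one NVIDIA T4 attached to an N1 machine. The GPU type, count,
# and machine type are illustrative; check zone availability and
# machine-type/GPU compatibility before using them.
providerSpec:
  value:
    gpus:
    - count: 1
      type: nvidia-tesla-t4
    machineType: n1-standard-4
    onHostMaintenance: Terminate
    restartPolicy: Always
```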
Adding a GPU node to an existing OpenShift Container Platform cluster
You can copy and modify a default compute machine set configuration to create a GPU-enabled machine set and machines for the Google Cloud cloud provider.
The following table lists the validated instance types:
| Instance type | NVIDIA GPU accelerator | Maximum number of GPUs | Architecture |
|---|---|---|---|
| a2-highgpu-1g | A100 | 1 | x86 |
| n1-standard-4 | T4 | 1 | x86 |
- Make a copy of an existing MachineSet.
- In the new copy, change the machine set name in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset.
- Change the instance type to add the following two lines to the newly copied MachineSet:
  machineType: a2-highgpu-1g
  onHostMaintenance: Terminate
Example a2-highgpu-1g.json file
{
  "apiVersion": "machine.openshift.io/v1beta1",
  "kind": "MachineSet",
  "metadata": {
    "annotations": {
      "machine.openshift.io/GPU": "0",
      "machine.openshift.io/memoryMb": "16384",
      "machine.openshift.io/vCPU": "4"
    },
    "creationTimestamp": "2023-01-13T17:11:02Z",
    "generation": 1,
    "labels": {
      "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p"
    },
    "name": "myclustername-2pt9p-worker-gpu-a",
    "namespace": "openshift-machine-api",
    "resourceVersion": "20185",
    "uid": "2daf4712-733e-4399-b4b4-d43cb1ed32bd"
  },
  "spec": {
    "replicas": 1,
    "selector": {
      "matchLabels": {
        "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p",
        "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a"
      }
    },
    "template": {
      "metadata": {
        "labels": {
          "machine.openshift.io/cluster-api-cluster": "myclustername-2pt9p",
          "machine.openshift.io/cluster-api-machine-role": "worker",
          "machine.openshift.io/cluster-api-machine-type": "worker",
          "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a"
        }
      },
      "spec": {
        "lifecycleHooks": {},
        "metadata": {},
        "providerSpec": {
          "value": {
            "apiVersion": "machine.openshift.io/v1beta1",
            "canIPForward": false,
            "credentialsSecret": {
              "name": "gcp-cloud-credentials"
            },
            "deletionProtection": false,
            "disks": [
              {
                "autoDelete": true,
                "boot": true,
                "image": "projects/rhcos-cloud/global/images/rhcos-412-86-202212081411-0-gcp-x86-64",
                "labels": null,
                "sizeGb": 128,
                "type": "pd-ssd"
              }
            ],
            "kind": "GCPMachineProviderSpec",
            "machineType": "a2-highgpu-1g",
            "onHostMaintenance": "Terminate",
            "metadata": {
              "creationTimestamp": null
            },
            "networkInterfaces": [
              {
                "network": "myclustername-2pt9p-network",
                "subnetwork": "myclustername-2pt9p-worker-subnet"
              }
            ],
            "preemptible": true,
            "projectID": "myteam",
            "region": "us-central1",
            "serviceAccounts": [
              {
                "email": "myclustername-2pt9p-w@myteam.iam.gserviceaccount.com",
                "scopes": [
                  "https://www.googleapis.com/auth/cloud-platform"
                ]
              }
            ],
            "tags": [
              "myclustername-2pt9p-worker"
            ],
            "userDataSecret": {
              "name": "worker-user-data"
            },
            "zone": "us-central1-a"
          }
        }
      }
    }
  },
  "status": {
    "availableReplicas": 1,
    "fullyLabeledReplicas": 1,
    "observedGeneration": 1,
    "readyReplicas": 1,
    "replicas": 1
  }
}
- View the existing nodes, machines, and machine sets by running the following command. Note that each node is an instance of a machine definition with a specific Google Cloud region and OpenShift Container Platform role.
  $ oc get nodes
  Example output
  NAME                                                             STATUS   ROLES                  AGE     VERSION
  myclustername-2pt9p-master-0.c.openshift-qe.internal             Ready    control-plane,master   8h      v1.34.2
  myclustername-2pt9p-master-1.c.openshift-qe.internal             Ready    control-plane,master   8h      v1.34.2
  myclustername-2pt9p-master-2.c.openshift-qe.internal             Ready    control-plane,master   8h      v1.34.2
  myclustername-2pt9p-worker-a-mxtnz.c.openshift-qe.internal       Ready    worker                 8h      v1.34.2
  myclustername-2pt9p-worker-b-9pzzn.c.openshift-qe.internal       Ready    worker                 8h      v1.34.2
  myclustername-2pt9p-worker-c-6pbg6.c.openshift-qe.internal       Ready    worker                 8h      v1.34.2
  myclustername-2pt9p-worker-gpu-a-wxcr6.c.openshift-qe.internal   Ready    worker                 4h35m   v1.34.2
- View the machines and machine sets that exist in the openshift-machine-api namespace by running the following command. Each compute machine set is associated with a different availability zone within the Google Cloud region. The installer automatically load balances compute machines across availability zones.
  $ oc get machinesets -n openshift-machine-api
  Example output
  NAME                           DESIRED   CURRENT   READY   AVAILABLE   AGE
  myclustername-2pt9p-worker-a   1         1         1       1           8h
  myclustername-2pt9p-worker-b   1         1         1       1           8h
  myclustername-2pt9p-worker-c   1         1                             8h
  myclustername-2pt9p-worker-f   0         0                             8h
- View the machines that exist in the openshift-machine-api namespace by running the following command. You can only configure one compute machine per set, although you can scale a compute machine set to add a node in a particular region and zone.
  $ oc get machines -n openshift-machine-api | grep worker
  Example output
  myclustername-2pt9p-worker-a-mxtnz   Running   n2-standard-4   us-central1   us-central1-a   8h
  myclustername-2pt9p-worker-b-9pzzn   Running   n2-standard-4   us-central1   us-central1-b   8h
  myclustername-2pt9p-worker-c-6pbg6   Running   n2-standard-4   us-central1   us-central1-c   8h
- Make a copy of one of the existing compute MachineSet definitions and output the result to a JSON file by running the following command. This will be the basis for the GPU-enabled compute machine set definition.
  $ oc get machineset myclustername-2pt9p-worker-a -n openshift-machine-api -o json > <output_file.json>
- Edit the JSON file to make the following changes to the new MachineSet definition:
  - Rename the machine set name by inserting the substring gpu in metadata.name and in both instances of machine.openshift.io/cluster-api-machineset.
  - Change the machineType of the new MachineSet definition to a2-highgpu-1g, which includes an NVIDIA A100 GPU.
    $ jq .spec.template.spec.providerSpec.value.machineType ocp_4.19_machineset-a2-highgpu-1g.json
    "a2-highgpu-1g"
    The <output_file.json> file is saved as ocp_4.19_machineset-a2-highgpu-1g.json.
- Update the following fields in ocp_4.19_machineset-a2-highgpu-1g.json:
  - Change .metadata.name to a name containing gpu.
  - Change .spec.selector.matchLabels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
  - Change .spec.template.metadata.labels["machine.openshift.io/cluster-api-machineset"] to match the new .metadata.name.
  - Change .spec.template.spec.providerSpec.value.MachineType to a2-highgpu-1g.
  - Add the following line under machineType: "onHostMaintenance": "Terminate". For example:
    "machineType": "a2-highgpu-1g",
    "onHostMaintenance": "Terminate",
- To verify your changes, perform a diff of the original compute definition and the new GPU-enabled node definition by running the following command:
  $ oc get machineset/myclustername-2pt9p-worker-a -n openshift-machine-api -o json | diff ocp_4.19_machineset-a2-highgpu-1g.json -
  Example output
  15c15
  <    "name": "myclustername-2pt9p-worker-gpu-a",
  ---
  >    "name": "myclustername-2pt9p-worker-a",
  25c25
  <    "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a"
  ---
  >    "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a"
  34c34
  <    "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-gpu-a"
  ---
  >    "machine.openshift.io/cluster-api-machineset": "myclustername-2pt9p-worker-a"
  59,60c59
  <    "machineType": "a2-highgpu-1g",
  <    "onHostMaintenance": "Terminate",
  ---
  >    "machineType": "n2-standard-4",
- Create the GPU-enabled compute machine set from the definition file by running the following command:
  $ oc create -f ocp_4.19_machineset-a2-highgpu-1g.json
  Example output
  machineset.machine.openshift.io/myclustername-2pt9p-worker-gpu-a created
- View the machine set you created by running the following command:
  $ oc -n openshift-machine-api get machinesets | grep gpu
  The MachineSet replica count is set to 1, so a new Machine object is created automatically.
  Example output
  myclustername-2pt9p-worker-gpu-a   1   1   1   1   5h24m
- View the Machine object that the machine set created by running the following command:
  $ oc -n openshift-machine-api get machines | grep gpu
  Example output
  myclustername-2pt9p-worker-gpu-a-wxcr6   Running   a2-highgpu-1g   us-central1   us-central1-a   5h25m
Note
There is no need to specify a namespace for the node. The node definition is cluster scoped.
Deploying the Node Feature Discovery Operator
After the GPU-enabled node is created, you need to discover the GPU-enabled node so it can be scheduled. To do this, install the Node Feature Discovery (NFD) Operator. The NFD Operator identifies hardware device features in nodes. It solves the general problem of identifying and cataloging hardware resources in the infrastructure nodes so they can be made available to OpenShift Container Platform.
- Install the Node Feature Discovery Operator from the software catalog in the OpenShift Container Platform console.
- After installing the NFD Operator, select Node Feature Discovery from the installed Operators list and select Create instance. This installs the nfd-master and nfd-worker pods, one nfd-worker pod for each compute node, in the openshift-nfd namespace.
- Verify that the Operator is installed and running by running the following command:
  $ oc get pods -n openshift-nfd
  Example output
  NAME                                      READY   STATUS    RESTARTS     AGE
  nfd-controller-manager-8646fcbb65-x5qgk   2/2     Running   7 (8h ago)   1d
- Browse to the installed Operator in the console and select Create Node Feature Discovery.
- Select Create to build a NFD custom resource. This creates NFD pods in the openshift-nfd namespace that poll the OpenShift Container Platform nodes for hardware resources and catalogue them.
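Creating the instance from the console is equivalent to applying a NodeFeatureDiscovery custom resource. A minimal sketch follows; the instance name is the conventional default, and an empty spec accepts the Operator defaults:

```yaml
# Sketch: a minimal NodeFeatureDiscovery CR, roughly equivalent to accepting
# the defaults in the console. The instance name is the conventional default.
apiVersion: nfd.openshift.io/v1
kind: NodeFeatureDiscovery
metadata:
  name: nfd-instance
  namespace: openshift-nfd
spec: {}
```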
- After a successful build, verify that a NFD pod is running on each node by running the following command:
  $ oc get pods -n openshift-nfd
  Example output
  NAME                                      READY   STATUS    RESTARTS        AGE
  nfd-controller-manager-8646fcbb65-x5qgk   2/2     Running   7 (8h ago)      12d
  nfd-master-769656c4cb-w9vrv               1/1     Running   0               12d
  nfd-worker-qjxb2                          1/1     Running   3 (3d14h ago)   12d
  nfd-worker-xtz9b                          1/1     Running   5 (3d14h ago)   12d
  The NFD Operator uses vendor PCI IDs to identify hardware in a node. NVIDIA uses the PCI ID 10de.
- View the NVIDIA GPU discovered by the NFD Operator by running the following command:
  $ oc describe node ip-10-0-132-138.us-east-2.compute.internal | egrep 'Roles|pci'
  Example output
  Roles: worker
  feature.node.kubernetes.io/pci-1013.present=true
  feature.node.kubernetes.io/pci-10de.present=true
  feature.node.kubernetes.io/pci-1d0f.present=true
  10de appears in the node feature list for the GPU-enabled node. This means the NFD Operator correctly identified the node from the GPU-enabled MachineSet.