VMware vSphere CSI Driver Operator
Overview
OpenShift Container Platform can provision persistent volumes (PVs) using the Container Storage Interface (CSI) VMware vSphere driver for Virtual Machine Disk (VMDK) volumes.
Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.
To create CSI-provisioned persistent volumes (PVs) that mount to vSphere storage assets, OpenShift Container Platform installs the vSphere CSI Driver Operator and the vSphere CSI driver by default in the openshift-cluster-csi-drivers namespace.
-
vSphere CSI Driver Operator: The Operator provides a storage class, called thin-csi, that you can use to create persistent volume claims (PVCs). The vSphere CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand, eliminating the need for cluster administrators to pre-provision storage. You can disable this default storage class if desired (see Managing the default storage class).
-
vSphere CSI driver: The driver enables you to create and mount vSphere PVs. In OpenShift Container Platform 4.20, the driver version is 3.6.0. The vSphere CSI driver supports all of the file systems supported by the underlying Red Hat Enterprise Linux CoreOS release, including XFS and Ext4. For more information about supported file systems, see Overview of available file systems.
Note
For new installations, OpenShift Container Platform 4.13 and later provides automatic migration for the vSphere in-tree volume plugin to its equivalent CSI driver. Updating to OpenShift Container Platform 4.15 and later also provides automatic migration. For more information about updating and migration, see CSI automatic migration.
CSI automatic migration should be seamless. Migration does not change how you use existing API objects, such as persistent volumes, persistent volume claims, and storage classes.
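As an illustration of what migration looks like in practice, a migrated in-tree PV typically carries an annotation pointing at the CSI driver. This is a hedged sketch: the PV name and volume path below are hypothetical, and the annotation behavior follows the upstream Kubernetes CSI migration mechanism.

```yaml
# Hypothetical example: an in-tree vSphere PV after CSI automatic migration.
# The pv.kubernetes.io/migrated-to annotation indicates that the CSI driver
# now serves this volume; the object itself is otherwise unchanged.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-0a1b2c3d                      # hypothetical PV name
  annotations:
    pv.kubernetes.io/migrated-to: csi.vsphere.vmware.com
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  vsphereVolume:                          # original in-tree source remains in the spec
    volumePath: "[datastore1] kubevols/example.vmdk"   # hypothetical path
    fsType: ext4
```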
About CSI
Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.
CSI Operators give OpenShift Container Platform users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.
vSphere CSI limitations
The following limitations apply to the vSphere Container Storage Interface (CSI) Driver Operator:
-
The vSphere CSI driver supports dynamic and static provisioning. However, when using static provisioning in the PV specifications, do not use the key storage.kubernetes.io/csiProvisionerIdentity in csi.volumeAttributes, because this key indicates dynamically provisioned PVs.
-
Migrating persistent container volumes between datastores using the vSphere client interface is not supported with OpenShift Container Platform.
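A minimal sketch of a statically provisioned vSphere CSI PV is shown below. The volumeHandle value is hypothetical (obtain the real First Class Disk ID from your vSphere environment), and your environment may require additional volume attributes; note that csi.volumeAttributes deliberately omits the storage.kubernetes.io/csiProvisionerIdentity key:

```yaml
# Sketch of a statically provisioned PV for the vSphere CSI driver.
# The volumeHandle is a hypothetical disk ID; look up the real value
# in your vSphere environment.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-vsphere-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: csi.vsphere.vmware.com
    volumeHandle: 0c1a2b3c-4d5e-6f70-8192-a3b4c5d6e7f8   # hypothetical
    fsType: ext4
    # Do not set storage.kubernetes.io/csiProvisionerIdentity in
    # csi.volumeAttributes: that key marks dynamically provisioned PVs.
```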
vSphere storage policy
The vSphere CSI Driver Operator storage class uses vSphere’s storage policy. OpenShift Container Platform automatically creates a storage policy that targets the datastore configured in the cloud configuration:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: thin-csi
provisioner: csi.vsphere.vmware.com
parameters:
StoragePolicyName: "$openshift-storage-policy-xxxx"
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: false
reclaimPolicy: Delete
ReadWriteMany vSphere volume support
If the underlying vSphere environment supports the vSAN file service, then the vSphere Container Storage Interface (CSI) Driver Operator installed by OpenShift Container Platform supports provisioning of ReadWriteMany (RWX) volumes. If the vSAN file service is not configured, ReadWriteOnce (RWO) is the only available access mode; in that case, requests for RWX volumes fail and an error is logged.
For more information about configuring the vSAN file service in your environment, see vSAN File Service.
You can request RWX volumes by making the following persistent volume claim (PVC):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: myclaim
spec:
resources:
requests:
storage: 1Gi
accessModes:
- ReadWriteMany
storageClassName: thin-csi
Requesting a PVC of the RWX volume type should result in provisioning of persistent volumes (PVs) backed by the vSAN file service.
VMware vSphere CSI Driver Operator requirements
To install the vSphere Container Storage Interface (CSI) Driver Operator, the following requirements must be met:
-
VMware vSphere version 8.0 Update 1 or later; or VMware vSphere Foundation (VVF) 9; or VMware Cloud Foundation (VCF) 5 or later
-
vCenter version 8.0 Update 1 or later; or VVF 9; or VCF 5 or later
-
Virtual machines of hardware version 15 or later
-
No third-party vSphere CSI driver already installed in the cluster
If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later.
Note
The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest.
You can create a custom role for the Container Storage Interface (CSI) driver, the vSphere CSI Driver Operator, and the vSphere Problem Detector Operator. The custom role can include privilege sets that assign a minimum set of permissions to each vSphere object. This means that the CSI driver, the vSphere CSI Driver Operator, and the vSphere Problem Detector Operator can establish a basic interaction with these objects.
Important
Installing an OpenShift Container Platform cluster in a vCenter is tested against a full list of privileges as described in the "Required vCenter account privileges" section. By adhering to the full list of privileges, you can reduce the possibility of unexpected and unsupported behaviors that might occur when creating a custom role with a set of restricted privileges.
To remove a third-party CSI driver, see Removing a third-party vSphere CSI Driver.
Removing a third-party vSphere CSI Driver Operator
OpenShift Container Platform 4.10 and later includes a built-in version of the vSphere Container Storage Interface (CSI) Driver Operator that is supported by Red Hat. If you have installed a vSphere CSI driver provided by the community or another vendor, updates to the next major version of OpenShift Container Platform, such as 4.13 or later, might be disabled for your cluster.
OpenShift Container Platform 4.12 and later clusters are still fully supported, and updates to z-stream releases of 4.12, such as 4.12.z, are not blocked. However, you must correct this state by removing the third-party vSphere CSI driver before updates to the next major version of OpenShift Container Platform can occur. Removing the third-party vSphere CSI driver does not require deletion of associated persistent volume (PV) objects, and no data loss should occur.
Note
These instructions may not be complete, so consult the vendor or community provider uninstall guide to ensure removal of the driver and components.
To uninstall the third-party vSphere CSI Driver:
-
Delete the third-party vSphere CSI driver (VMware vSphere Container Storage Plugin) Deployment and DaemonSet objects.
-
Delete the configmap and secret objects that were installed previously with the third-party vSphere CSI Driver.
-
Delete the third-party vSphere CSI driver CSIDriver object by running the following command:
$ oc delete CSIDriver csi.vsphere.vmware.com
Example output
csidriver.storage.k8s.io "csi.vsphere.vmware.com" deleted
After you have removed the third-party vSphere CSI Driver from the OpenShift Container Platform cluster, installation of Red Hat’s vSphere CSI Driver Operator automatically resumes, and any conditions that could block upgrades to OpenShift Container Platform 4.11, or later, are automatically removed. If you had existing vSphere CSI PV objects, their lifecycle is now managed by Red Hat’s vSphere CSI Driver Operator.
vSphere persistent disks encryption
You can encrypt virtual machines (VMs) and dynamically provisioned persistent volumes (PVs) on OpenShift Container Platform running on top of vSphere.
Note
OpenShift Container Platform does not support RWX-encrypted PVs. You cannot request RWX PVs from a storage class that uses an encrypted storage policy.
You must encrypt VMs before you can encrypt PVs, which you can do during or after installation.
For information about encrypting VMs, see:
After encrypting VMs, you can configure a storage class that supports dynamic encryption volume provisioning using the vSphere Container Storage Interface (CSI) driver. You can accomplish this in one of two ways:
-
Datastore URL: This approach is not very flexible, and forces you to use a single datastore. It also does not support topology-aware provisioning.
-
Tag-based placement: Encrypts the provisioned volumes and uses tag-based placement to target specific datastores.
Using datastore URL
To encrypt using the datastore URL:
-
Find out the name of the default storage policy in your datastore that supports encryption.
This is the same policy that was used for encrypting your VMs.
-
Create a storage class that uses this storage policy:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: encryption
provisioner: csi.vsphere.vmware.com
parameters:
  storagePolicyName: <storage-policy-name>
  datastoreurl: "ds:///vmfs/volumes/vsan:522e875627d-b090c96b526bb79c/"
- <storage-policy-name>: Name of the default storage policy in your datastore that supports encryption.
Using tag-based placement
To encrypt using tag-based placement:
-
In vCenter, create a category for tagging datastores that will be made available to this storage class. Also, ensure that StoragePod (Datastore clusters), Datastore, and Folder are selected as Associable Entities for the created category.
-
In vCenter, create a tag that uses the category created earlier.
-
Assign the previously created tag to each datastore that will be made available to the storage class. Make sure that datastores are shared with hosts participating in the OpenShift Container Platform cluster.
-
In vCenter, from the main menu, click Policies and Profiles.
-
On the Policies and Profiles page, in the navigation pane, click VM Storage Policies.
-
Click CREATE.
-
Type a name for the storage policy.
-
Select Enable host based rules and Enable tag based placement rules.
-
In the Next tab:
-
Select Encryption and Default Encryption Properties.
-
Select the tag category created earlier, and select the tag. Verify that the policy is selecting matching datastores.
-
-
Create the storage policy.
-
Create a storage class that uses the storage policy:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: csi-encrypted
provisioner: csi.vsphere.vmware.com
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
parameters:
  storagePolicyName: <storage-policy-name>
- <storage-policy-name>: Name of the storage policy that you created for encryption.
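Workloads can then request encrypted volumes through this storage class. The following is a minimal sketch of such a claim; the claim name is hypothetical, and only RWO access is requested because RWX-encrypted PVs are not supported:

```yaml
# Hypothetical PVC that provisions an encrypted volume through a
# storage class named csi-encrypted. Only ReadWriteOnce is valid here,
# because RWX PVs are not supported with encrypted storage policies.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: encrypted-claim        # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: csi-encrypted
```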
Multiple vCenter support for vSphere CSI
Deploying OpenShift Container Platform across multiple vSphere vCenter clusters without shared storage can help you achieve high availability. OpenShift Container Platform 4.17 and later supports this capability.
Note
Multiple vCenters can only be configured during installation. Multiple vCenters cannot be configured after installation.
The maximum number of supported vCenter clusters is three.
Configuring multiple vCenters during installation
To configure multiple vCenters during installation:
-
Specify multiple vSphere clusters during installation. For information, see "Installation configuration parameters for vSphere".
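Assuming the field names documented in the vSphere installation configuration parameters, the platform.vsphere section of install-config.yaml for two vCenters might look like the following sketch. All server names, credentials, and data center names are placeholders:

```yaml
# Sketch of the platform.vsphere section of install-config.yaml with two
# vCenters. Server names, credentials, and data center names are
# placeholders; see "Installation configuration parameters for vSphere"
# for the authoritative schema.
platform:
  vsphere:
    vcenters:
    - server: vcenter-1.example.com
      user: administrator@vsphere.local
      password: <password-1>
      datacenters:
      - datacenter-east
    - server: vcenter-2.example.com
      user: administrator@vsphere.local
      password: <password-2>
      datacenters:
      - datacenter-west
```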
vSphere CSI topology overview
OpenShift Container Platform for vSphere can be deployed across different zones and regions, which allows you to use multiple compute clusters and data centers, thus helping to avoid a single point of failure.
This is accomplished by defining zone and region categories in vCenter, and then assigning these categories to different failure domains, such as a compute cluster, by creating tags for these zone and region categories. After you have created the appropriate categories and assigned tags to vCenter objects, you can create additional machine sets that create virtual machines (VMs) that are responsible for scheduling pods in those failure domains.
The following example defines two failure domains with one region and two zones:
| Compute cluster | Failure domain | Description |
|---|---|---|
| Compute cluster: ocp1, Data center: Atlanta | openshift-region: us-east-1 (tag), openshift-zone: us-east-1a (tag) | This defines a failure domain in region us-east-1 with zone us-east-1a. |
| Compute cluster: ocp2, Data center: Atlanta | openshift-region: us-east-1 (tag), openshift-zone: us-east-1b (tag) | This defines a different failure domain within the same region, called us-east-1b. |
vSphere CSI topology requirements
The following guidelines are recommended for vSphere CSI topology:
-
It is strongly recommended that you add topology tags to data centers and compute clusters, and not to hosts.
vsphere-problem-detector provides alerts if the openshift-region or openshift-zone tags are not defined at the data center or compute cluster level. Each topology tag (openshift-region or openshift-zone) should occur only once in the hierarchy.
Note
Ignoring this recommendation results only in a log warning from the CSI driver, and duplicate tags lower in the hierarchy, such as on hosts, are ignored. However, VMware considers this an invalid configuration, so to prevent problems you should not use it.
-
Volume provisioning requests in topology-aware environments attempt to create volumes in datastores accessible to all hosts under a given topology segment. This includes hosts that do not have Kubernetes node VMs running on them. For example, if the vSphere Container Storage Plug-in driver receives a request to provision a volume in zone-a, applied on the data center dc-1, all hosts under dc-1 must have access to the datastore selected for volume provisioning. The hosts include those that are directly under dc-1 and those that are part of clusters inside dc-1.
-
For additional recommendations, see the VMware Guidelines and Best Practices for Deployment with Topology section.
Creating vSphere storage topology during installation
Procedure
-
Specify the topology during installation. See the Configuring regions and zones for a VMware vCenter section.
No additional action is necessary; the default storage class that is created by OpenShift Container Platform is topology aware and should allow provisioning of volumes in different failure domains.
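Assuming the failure-domain fields documented in the vSphere installation configuration, the install-config.yaml fragment corresponding to the region and zone example above might look like the following sketch. Server, cluster, datastore, and network names are placeholders:

```yaml
# Sketch of vSphere failure domains in install-config.yaml, matching the
# us-east-1 region example above. All object paths and names are
# placeholders; see "Configuring regions and zones for a VMware vCenter"
# for the authoritative schema.
platform:
  vsphere:
    failureDomains:
    - name: us-east-1a
      region: us-east-1
      zone: us-east-1a
      server: vcenter.example.com
      topology:
        computeCluster: /Atlanta/host/ocp1
        datacenter: Atlanta
        datastore: /Atlanta/datastore/datastore-1
        networks:
        - VM-Network
    - name: us-east-1b
      region: us-east-1
      zone: us-east-1b
      server: vcenter.example.com
      topology:
        computeCluster: /Atlanta/host/ocp2
        datacenter: Atlanta
        datastore: /Atlanta/datastore/datastore-2
        networks:
        - VM-Network
```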
Creating vSphere storage topology postinstallation
Procedure
-
In the VMware vCenter vSphere client GUI, define appropriate zone and region categories and tags.
While vSphere allows you to create categories with any arbitrary name, OpenShift Container Platform strongly recommends using the openshift-region and openshift-zone names for defining topology categories.
For more information about vSphere categories and tags, see the VMware vSphere documentation.
-
In OpenShift Container Platform, create failure domains. See the Specifying multiple regions and zones for your cluster on vSphere section.
-
Create a tag to assign to datastores across failure domains:
When an OpenShift Container Platform cluster spans more than one failure domain, the datastore might not be shared across those failure domains, which is where topology-aware provisioning of persistent volumes (PVs) is useful.
-
In vCenter, create a category for tagging the datastores. For example, openshift-zonal-datastore-cat. You can use any other category name, provided that the category is used only for tagging datastores participating in the OpenShift Container Platform cluster. Also, ensure that StoragePod, Datastore, and Folder are selected as Associable Entities for the created category.
-
In vCenter, create a tag that uses the previously created category. This example uses the tag name openshift-zonal-datastore.
-
Assign the previously created tag (in this example, openshift-zonal-datastore) to each datastore in a failure domain that would be considered for dynamic provisioning.
Note
You can use any names you like for datastore categories and tags. The names used in this example are provided as recommendations. Ensure that the tags and categories that you define uniquely identify only datastores that are shared with all hosts in the OpenShift Container Platform cluster.
-
-
As needed, create a storage policy that targets the tag-based datastores in each failure domain:
-
In vCenter, from the main menu, click Policies and Profiles.
-
On the Policies and Profiles page, in the navigation pane, click VM Storage Policies.
-
Click CREATE.
-
Type a name for the storage policy.
-
For the rules, choose Tag Placement rules and select the tag and category that targets the desired datastores (in this example, the openshift-zonal-datastore tag).
The datastores are listed in the storage compatibility table.
-
-
Create a new storage class that uses the new zoned storage policy:
-
Click Storage > StorageClasses.
-
On the StorageClasses page, click Create StorageClass.
-
Type a name for the new storage class in Name.
-
Under Provisioner, select csi.vsphere.vmware.com.
-
Under Additional parameters, for the StoragePolicyName parameter, set Value to the name of the new zoned storage policy that you created earlier.
-
Click Create.
Example output
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: zoned-sc
provisioner: csi.vsphere.vmware.com
parameters:
  StoragePolicyName: zoned-storage-policy
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
- name: New topology-aware storage class name.
- StoragePolicyName: Specify the zoned storage policy.
Note
You can also create the storage class by editing the preceding YAML file and running the command oc create -f $FILE.
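A claim against the zoned storage class might look like the following sketch; the claim name is hypothetical. Because the class uses volumeBindingMode: WaitForFirstConsumer, the volume is not provisioned until a pod that uses the claim is scheduled, so the selected datastore matches the pod's failure domain:

```yaml
# Hypothetical PVC against the zoned-sc storage class defined above.
# With WaitForFirstConsumer binding, provisioning is delayed until a pod
# using this claim is scheduled, letting the driver pick a datastore in
# that pod's zone.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: zoned-claim            # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: zoned-sc
```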
-
Creating vSphere storage topology without an infra topology
Note
OpenShift Container Platform recommends using the infrastructure object for specifying failure domains in a topology-aware setup. Specifying failure domains in the infrastructure object and specifying topology-categories in the ClusterCSIDriver object at the same time is an unsupported operation.
Procedure
-
In the VMware vCenter vSphere client GUI, define appropriate zone and region categories and tags.
While vSphere allows you to create categories with any arbitrary name, OpenShift Container Platform strongly recommends using the openshift-region and openshift-zone names for defining topology.
For more information about vSphere categories and tags, see the VMware vSphere documentation.
-
To allow the Container Storage Interface (CSI) driver to detect this topology, edit the clusterCSIDriver object YAML file driverConfig section:
-
Specify the openshift-zone and openshift-region categories that you created earlier.
-
Set driverType to vSphere.
$ oc edit clustercsidriver csi.vsphere.vmware.com -o yaml
Example output
apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
  name: csi.vsphere.vmware.com
spec:
  logLevel: Normal
  managementState: Managed
  observedConfig: null
  operatorLogLevel: Normal
  unsupportedConfigOverrides: null
  driverConfig:
    driverType: vSphere
    vSphere:
      topologyCategories:
      - openshift-zone
      - openshift-region
- Ensure that driverType is set to vSphere.
- openshift-zone and openshift-region are the categories created earlier in vCenter.
-
Verify that the CSINode object has topology keys by running the following commands:
$ oc get csinode
Example output
NAME                     DRIVERS   AGE
co8-4s88d-infra-2m5vd    1         27m
co8-4s88d-master-0       1         70m
co8-4s88d-master-1       1         70m
co8-4s88d-master-2       1         70m
co8-4s88d-worker-j2hmg   1         47m
co8-4s88d-worker-mbb46   1         47m
co8-4s88d-worker-zlk7d   1         47m
$ oc get csinode co8-4s88d-worker-j2hmg -o yaml
Example output
...
spec:
  drivers:
  - allocatable:
      count: 59
    name: csi-vsphere.vmware.com
    nodeID: co8-4s88d-worker-j2hmg
    topologyKeys:
    - topology.csi.vmware.com/openshift-zone
    - topology.csi.vmware.com/openshift-region
- Topology keys from the vSphere openshift-zone and openshift-region categories.
Note
CSINode objects might take some time to receive updated topology information. After the driver is updated, CSINode objects should have topology keys in them.
-
Create a tag to assign to datastores across failure domains:
When an OpenShift Container Platform cluster spans more than one failure domain, the datastore might not be shared across those failure domains, which is where topology-aware provisioning of persistent volumes (PVs) is useful.
-
In vCenter, create a category for tagging the datastores. For example, openshift-zonal-datastore-cat. You can use any other category name, provided that the category is used only for tagging datastores participating in the OpenShift Container Platform cluster. Also, ensure that StoragePod, Datastore, and Folder are selected as Associable Entities for the created category.
-
In vCenter, create a tag that uses the previously created category. This example uses the tag name openshift-zonal-datastore.
-
Assign the previously created tag (in this example, openshift-zonal-datastore) to each datastore in a failure domain that would be considered for dynamic provisioning.
Note
You can use any names you like for categories and tags. The names used in this example are provided as recommendations. Ensure that the tags and categories that you define uniquely identify only datastores that are shared with all hosts in the OpenShift Container Platform cluster.
-
-
Create a storage policy that targets the tag-based datastores in each failure domain:
-
In vCenter, from the main menu, click Policies and Profiles.
-
On the Policies and Profiles page, in the navigation pane, click VM Storage Policies.
-
Click CREATE.
-
Type a name for the storage policy.
-
For the rules, choose Tag Placement rules and select the tag and category that targets the desired datastores (in this example, the openshift-zonal-datastore tag).
The datastores are listed in the storage compatibility table.
-
-
Create a new storage class that uses the new zoned storage policy:
-
Click Storage > StorageClasses.
-
On the StorageClasses page, click Create StorageClass.
-
Type a name for the new storage class in Name.
-
Under Provisioner, select csi.vsphere.vmware.com.
-
Under Additional parameters, for the StoragePolicyName parameter, set Value to the name of the new zoned storage policy that you created earlier.
-
Click Create.
Example output
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: zoned-sc
provisioner: csi.vsphere.vmware.com
parameters:
  StoragePolicyName: zoned-storage-policy
reclaimPolicy: Delete
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
- name: New topology-aware storage class name.
- StoragePolicyName: Specify the zoned storage policy.
Note
You can also create the storage class by editing the preceding YAML file and running the command oc create -f $FILE.
-
Results
Persistent volume claims (PVCs) and PVs created from the topology-aware storage class are truly zonal, and should use the datastore in their respective zone depending on how pods are scheduled:
$ oc get pv <pv_name> -o yaml
...
nodeAffinity:
required:
nodeSelectorTerms:
- matchExpressions:
- key: topology.csi.vmware.com/openshift-zone
operator: In
values:
- <openshift_zone>
- key: topology.csi.vmware.com/openshift-region
operator: In
values:
- <openshift_region>
...
persistentVolumeReclaimPolicy: Delete
storageClassName: <zoned_storage_class_name>
volumeMode: Filesystem
...
- PV has zoned keys.
- PV is using the zoned storage class.
Changing the maximum number of snapshots for vSphere
The default maximum number of snapshots per volume in vSphere Container Storage Interface (CSI) is 3. You can change the maximum number up to 32 per volume.
However, be aware that increasing the snapshot maximum involves a performance trade-off, so for better performance use only two to three snapshots per volume.
For more VMware snapshot performance recommendations, see Additional resources.
-
Access to the cluster with administrator rights.
-
Check the current secret by running the following command:
$ oc -n openshift-cluster-csi-drivers get secret/vsphere-csi-config-secret -o jsonpath='{.data.cloud\.conf}' | base64 -d
Example output
# Labels with topology values are added dynamically via operator
[Global]
cluster-id = vsphere-01-cwv8p

# Populate VCenters (multi) after here
[VirtualCenter "vcenter.openshift.com"]
insecure-flag = true
datacenters = DEVQEdatacenter
password = "xxxxxxxx"
user = "xxxxxxxx@devcluster.openshift.com"
migration-datastore-url = ds:///vmfs/volumes/vsan:52c842f232751e0d-3253aadeac21ca82/
In this example, the global maximum number of snapshots is not configured, so the default value of 3 is applied.
-
Change the snapshot limit by running the following command:
-
Set global snapshot limit:
$ oc patch clustercsidriver/csi.vsphere.vmware.com --type=merge -p '{"spec":{"driverConfig":{"vSphere":{"globalMaxSnapshotsPerBlockVolume": 10}}}}'
Example output
clustercsidriver.operator.openshift.io/csi.vsphere.vmware.com patched
In this example, the global limit is being changed to 10 (globalMaxSnapshotsPerBlockVolume set to 10).
-
Set Virtual Volume snapshot limit:
This parameter sets the limit on the Virtual Volumes datastore only. The Virtual Volume maximum snapshot limit overrides the global constraint if set, but defaults to the global limit if it is not set.
$ oc patch clustercsidriver/csi.vsphere.vmware.com --type=merge -p '{"spec":{"driverConfig":{"vSphere":{"granularMaxSnapshotsPerBlockVolumeInVVOL": 5}}}}'
Example output
clustercsidriver.operator.openshift.io/csi.vsphere.vmware.com patched
In this example, the Virtual Volume limit is being changed to 5 (granularMaxSnapshotsPerBlockVolumeInVVOL set to 5).
-
Set vSAN snapshot limit:
This parameter sets the limit on the vSAN datastore only. The vSAN maximum snapshot limit overrides the global constraint if set, but defaults to the global limit if it is not set. You can set a maximum value of 32 under vSAN ESA setup.
$ oc patch clustercsidriver/csi.vsphere.vmware.com --type=merge -p '{"spec":{"driverConfig":{"vSphere":{"granularMaxSnapshotsPerBlockVolumeInVSAN": 7}}}}'
Example output
clustercsidriver.operator.openshift.io/csi.vsphere.vmware.com patched
In this example, the vSAN limit is being changed to 7 (granularMaxSnapshotsPerBlockVolumeInVSAN set to 7).
-
-
Verify that any changes you made are reflected in the vSphere CSI configuration by running the following command:
$ oc -n openshift-cluster-csi-drivers get secret/vsphere-csi-config-secret -o jsonpath='{.data.cloud\.conf}' | base64 -d
Example output
# Labels with topology values are added dynamically via operator
[Global]
cluster-id = vsphere-01-cwv8p

# Populate VCenters (multi) after here
[VirtualCenter "vcenter.openshift.com"]
insecure-flag = true
datacenters = DEVQEdatacenter
password = "xxxxxxxx"
user = "xxxxxxxx@devcluster.openshift.com"
migration-datastore-url = ds:///vmfs/volumes/vsan:52c842f232751e0d-3253aadeac21ca82/

[Snapshot]
global-max-snapshots-per-block-volume = 10
global-max-snapshots-per-block-volume is now set to 10.
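The three patch commands above can also be expressed as a single declarative ClusterCSIDriver spec. This sketch combines the limits used in the examples; apply only the fields you need:

```yaml
# Equivalent declarative form of the snapshot-limit patches shown above.
# Edit the object in place with:
#   oc edit clustercsidriver csi.vsphere.vmware.com
apiVersion: operator.openshift.io/v1
kind: ClusterCSIDriver
metadata:
  name: csi.vsphere.vmware.com
spec:
  managementState: Managed
  driverConfig:
    driverType: vSphere
    vSphere:
      globalMaxSnapshotsPerBlockVolume: 10          # global limit (default 3, max 32)
      granularMaxSnapshotsPerBlockVolumeInVVOL: 5   # Virtual Volumes datastores only
      granularMaxSnapshotsPerBlockVolumeInVSAN: 7   # vSAN datastores only
```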
Migrating CNS volumes between datastores for vSphere
If you are running out of space in your current datastore, or want to move to a more performant datastore, you can migrate VMware vSphere Cloud Native Storage (CNS) volumes between datastores. This applies to both attached and detached volumes.
-
Requires VMware vSphere 8.0.2 or later, or VMware vSphere Foundation (VVF) 9, or VMware Cloud Foundation (VCF) 9
-
Only one volume can be migrated at a time.
-
RWX volumes are not supported.
-
A CNS volume should only be migrated to a datastore that is shared with all hosts that make up the OpenShift Container Platform cluster.
-
Migrating volumes between datastores in different data centers is not supported.
-
VMware HCX is not supported.
For more general information, see:
Disabling and enabling storage on vSphere
Cluster administrators might want to disable the VMware vSphere Container Storage Interface (CSI) Driver as a Day 2 operation, so the vSphere CSI Driver does not interface with your vSphere setup.
Consequences of disabling and enabling storage on vSphere
The consequences of disabling and enabling storage on vSphere are described in the following table.
| Disabling | Enabling |
|---|---|
|  | * vSphere CSI Driver Operator re-installs the CSI driver. * If necessary, the vSphere CSI Driver Operator creates the vSphere storage policy. |
Disabling and enabling storage on vSphere
Important
Before running this procedure, carefully review the preceding "Consequences of disabling and enabling storage on vSphere" table and potential impacts to your environment.
To disable or enable storage on vSphere:
-
Click Administration > CustomResourceDefinitions.
-
On the CustomResourceDefinitions page next to the Name dropdown box, type "clustercsidriver".
-
Click CRD ClusterCSIDriver.
-
Click the Instances tab.
-
Click csi.vsphere.vmware.com.
-
Click the YAML tab.
-
For
spec.managementState, change the value toRemovedorManaged:-
Removed: storage is disabled -
Managed: storage is enabled
-
-
Click Save.
-
If you are disabling storage, confirm that the driver has been removed:
-
Click Workloads > Pods.
-
On the Pods page, in the Name filter box type "vmware-vsphere-csi-driver".
The only item that should appear is the Operator pod. For example: "vmware-vsphere-csi-driver-operator-559b97ffc5-w99fm".
-
Adding bare-metal nodes
Adding bare-metal nodes to an OpenShift Container Platform cluster on vSphere is supported as a Technology Preview feature.
However, if you add bare-metal nodes, you must remove the vSphere CSI driver; otherwise, the cluster is marked as degraded. For information about how to remove the driver and the consequences of doing this, see "Disabling and enabling storage on vSphere".
For information about how to add bare-metal nodes, see "Adding bare-metal compute machines to a vSphere cluster" under Additional resources.
Important
Adding bare-metal nodes is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Increasing maximum volumes per node for vSphere
For vSphere version 8 or later, or VMware vSphere Foundation (VVF) 9, or VMware Cloud Foundation (VCF) 9, you can increase the allowable number of volumes per node to a maximum of 255. Otherwise, the default value remains at 59.
Important
You must have a homogeneous vSphere 8 environment that contains only ESXi 8 hypervisors, or a homogeneous VVF or VCF 9 environment that contains only ESXi 9 hypervisors. Heterogeneous environments that contain a mix of ESXi versions are not allowed. In such a heterogeneous environment, if you set a value greater than 59, the cluster degrades.
-
You must be running VMware vSphere version 8 or later, or VVF 9, or VCF 9.
-
You can potentially exceed the limit of 2048 virtual disks per host if you increase the maximum number of volumes per node on enough nodes. This can occur because there is no Distributed Resource Scheduler (DRS) validation for vSphere to ensure that you do not exceed this limit.
Important
Increasing volumes per node is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
Increasing the maximum allowable volumes per node for vSphere
-
Access to the OpenShift Container Platform web console.
-
Access to the cluster as a user with the cluster-admin role.
-
Access to VMware vSphere vCenter.
-
In vCenter, ensure that the parameter pvscsiCtrlr256DiskSupportEnabled is set to True.
Important
Changing the pvscsiCtrlr256DiskSupportEnabled parameter is not fully supported by VMware. Also, the parameter is a cluster-wide option.
Use the following procedure to increase the maximum number of volumes per node for vSphere:
-
Click Administration > CustomResourceDefinitions.
-
On the CustomResourceDefinitions page next to the Name dropdown box, type "clustercsidriver".
-
Click CRD ClusterCSIDriver.
-
Click the Instances tab.
-
Click csi.vsphere.vmware.com.
-
Click the YAML tab.
-
Set the parameter
spec.driverConfig.driverType to vSphere.
-
Add the parameter spec.driverConfig.vSphere.maxAllowedBlockVolumesPerNode to the YAML file, and provide a value for the desired maximum number of volumes per node, as in the following sample YAML file:
Sample YAML file for adding the parameter maxAllowedBlockVolumesPerNode
...
spec:
  driverConfig:
    driverType: vSphere
    vSphere:
      maxAllowedBlockVolumesPerNode: <value>
- <value>: Enter the desired value for the maximum number of volumes per node. The default is 59. The minimum value is 1 and the maximum value is 255.
-
Click Save.