Persistent storage using Logical Volume Manager Storage
Logical Volume Manager (LVM) Storage uses LVM2 through the TopoLVM CSI driver to dynamically provision local storage on a cluster with limited resources.
You can create volume groups, persistent volume claims (PVCs), volume snapshots, and volume clones by using LVM Storage.
Logical Volume Manager Storage installation
You can install Logical Volume Manager (LVM) Storage on an OpenShift Container Platform cluster and configure it to dynamically provision storage for your workloads.
You can install LVM Storage by using the OpenShift Container Platform CLI (oc), OpenShift Container Platform web console, or Red Hat Advanced Cluster Management (RHACM).
Warning
When using LVM Storage on multi-node clusters, LVM Storage only supports provisioning local storage. LVM Storage does not support storage data replication mechanisms across nodes. You must ensure storage data replication through active or passive replication mechanisms to avoid a single point of failure.
Prerequisites to install LVM Storage
The prerequisites to install LVM Storage are as follows:
-
Ensure that you have a minimum of 10 milliCPU and 100 MiB of RAM.
-
Ensure that every managed cluster has dedicated disks that are used to provision storage. LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them.
-
Before installing LVM Storage in a private CI environment where you can reuse the storage devices that you configured in the previous LVM Storage installation, ensure that you have wiped the disks that are not in use. If you do not wipe the disks before installing LVM Storage, you cannot reuse the disks without manual intervention.
Note
You cannot wipe the disks that are in use.
-
If you want to install LVM Storage by using Red Hat Advanced Cluster Management (RHACM), ensure that you have installed RHACM on an OpenShift Container Platform cluster. See the "Installing LVM Storage using RHACM" section.
Installing LVM Storage by using the CLI
As a cluster administrator, you can install LVM Storage by using the OpenShift CLI.
Note
The default namespace for the LVM Storage Operator is openshift-lvm-storage.
-
You have installed the OpenShift CLI (oc).
-
You have logged in to OpenShift Container Platform as a user with cluster-admin and Operator installation permissions.
-
Create a YAML file with the configuration for creating a namespace:
Example YAML configuration for creating a namespace
apiVersion: v1
kind: Namespace
metadata:
  labels:
    openshift.io/cluster-monitoring: "true"
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
  name: openshift-lvm-storage
-
Create the namespace by running the following command:
$ oc create -f <file_name>
-
Create an OperatorGroup CR YAML file:
Example OperatorGroup CR
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-storage-operatorgroup
  namespace: openshift-lvm-storage
spec:
  targetNamespaces:
  - openshift-lvm-storage
-
Create the OperatorGroup CR by running the following command:
$ oc create -f <file_name>
-
Create a Subscription CR YAML file:
Example Subscription CR
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: lvms
  namespace: openshift-lvm-storage
spec:
  installPlanApproval: Automatic
  name: lvms-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
-
Create the Subscription CR by running the following command:
$ oc create -f <file_name>
-
To verify that LVM Storage is installed, run the following command:
$ oc get csv -n openshift-lvm-storage -o custom-columns=Name:.metadata.name,Phase:.status.phase
Example output
Name                  Phase
4.13.0-202301261535   Succeeded
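Optional: You can also check that the LVM Storage Operator pods are running in the Operator namespace. This is a generic sanity check; the exact pod names vary by version:
$ oc get pods -n openshift-lvm-storage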
Installing LVM Storage by using the web console
You can install LVM Storage by using the OpenShift Container Platform web console.
Note
The default namespace for the LVM Storage Operator is openshift-lvm-storage.
-
You have access to the cluster.
-
You have access to OpenShift Container Platform with cluster-admin and Operator installation permissions.
-
Log in to the OpenShift Container Platform web console.
-
Click Ecosystem → Software Catalog.
-
Click LVM Storage on the software catalog page.
-
Set the following options on the Operator Installation page:
-
Update Channel as stable-4.19.
-
Installation Mode as A specific namespace on the cluster.
-
Installed Namespace as Operator recommended namespace openshift-lvm-storage. If the openshift-lvm-storage namespace does not exist, it is created during the operator installation.
-
Update approval as Automatic or Manual.
Note
If you select Automatic updates, the Operator Lifecycle Manager (OLM) automatically updates the running instance of LVM Storage without any intervention.
If you select Manual updates, the OLM creates an update request. As a cluster administrator, you must manually approve the update request to update LVM Storage to a newer version.
-
-
Optional: Select the Enable Operator recommended cluster monitoring on this Namespace checkbox.
-
Click Install.
-
Verify that LVM Storage shows a green tick, indicating successful installation.
Installing LVM Storage in a disconnected environment
You can install LVM Storage on OpenShift Container Platform in a disconnected environment. All sections referenced in this procedure are linked in the "Additional resources" section.
-
You read the "About disconnected installation mirroring" section.
-
You have access to the OpenShift Container Platform image repository.
-
You created a mirror registry.
-
Follow the steps in the "Creating the image set configuration" procedure. To create an ImageSetConfiguration custom resource (CR) for LVM Storage, you can use the following example ImageSetConfiguration CR configuration:
Example ImageSetConfiguration CR for LVM Storage
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
archiveSize: 4
storageConfig:
  registry:
    imageURL: example.com/mirror/oc-mirror-metadata
    skipTLS: false
mirror:
  platform:
    channels:
    - name: stable-4.19
      type: ocp
    graph: true
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.19
    packages:
    - name: lvms-operator
      channels:
      - name: stable
  additionalImages:
  - name: registry.redhat.io/ubi9/ubi:latest
  helm: {}
- Set the maximum size (in GiB) of each file within the image set.
- Specify the location in which you want to save the image set. This location can be a registry or a local directory. You must configure the storageConfig field unless you are using the Technology Preview OCI feature.
- Specify the storage URL for the image stream when using a registry. For more information, see Why use imagestreams.
- Specify the channel from which you want to retrieve the OpenShift Container Platform images.
- Set this field to true to generate the OpenShift Update Service (OSUS) graph image. For more information, see About the OpenShift Update Service.
- Specify the Operator catalog from which you want to retrieve the OpenShift Container Platform images.
- Specify the Operator packages to include in the image set. If this field is empty, all packages in the catalog are retrieved.
- Specify the channels of the Operator packages to include in the image set. You must include the default channel for the Operator package even if you do not use the bundles in that channel. You can find the default channel by running the following command: $ oc mirror list operators --catalog=<catalog_name> --package=<package_name>.
- Specify any additional images to include in the image set.
-
Follow the procedure in the "Mirroring an image set to a mirror registry" section.
-
Follow the procedure in the "Configuring image registry repository mirroring" section.
Installing LVM Storage by using RHACM
To install LVM Storage on the clusters by using Red Hat Advanced Cluster Management (RHACM), you must create a Policy custom resource (CR). You can also configure the criteria to select the clusters on which you want to install LVM Storage.
Note
The Policy CR that is created to install LVM Storage is also applied to the clusters that are imported or created after creating the Policy CR.
-
You have access to the RHACM cluster using an account with cluster-admin and Operator installation permissions.
-
You have dedicated disks that LVM Storage can use on each cluster.
-
The cluster must be managed by RHACM.
-
Log in to the RHACM CLI using your OpenShift Container Platform credentials.
-
Create a namespace.
$ oc create ns <namespace>
-
Create a Policy CR YAML file:
Example Policy CR to install and configure LVM Storage
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-install-lvms
spec:
  clusterConditions:
  - status: "True"
    type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions:
    - key: mykey
      operator: In
      values:
      - myvalue
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-install-lvms
placementRef:
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
  name: placement-install-lvms
subjects:
- apiGroup: policy.open-cluster-management.io
  kind: Policy
  name: install-lvms
---
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  annotations:
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
    policy.open-cluster-management.io/standards: NIST SP 800-53
  name: install-lvms
spec:
  disabled: false
  remediationAction: enforce
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: install-lvms
      spec:
        object-templates:
        - complianceType: musthave
          objectDefinition:
            apiVersion: v1
            kind: Namespace
            metadata:
              labels:
                openshift.io/cluster-monitoring: "true"
                pod-security.kubernetes.io/enforce: privileged
                pod-security.kubernetes.io/audit: privileged
                pod-security.kubernetes.io/warn: privileged
              name: openshift-lvm-storage
        - complianceType: musthave
          objectDefinition:
            apiVersion: operators.coreos.com/v1
            kind: OperatorGroup
            metadata:
              name: openshift-storage-operatorgroup
              namespace: openshift-lvm-storage
            spec:
              targetNamespaces:
              - openshift-lvm-storage
        - complianceType: musthave
          objectDefinition:
            apiVersion: operators.coreos.com/v1alpha1
            kind: Subscription
            metadata:
              name: lvms
              namespace: openshift-lvm-storage
            spec:
              installPlanApproval: Automatic
              name: lvms-operator
              source: redhat-operators
              sourceNamespace: openshift-marketplace
        remediationAction: enforce
        severity: low
- Set the key field and values field in PlacementRule.spec.clusterSelector to match the labels that are configured in the clusters on which you want to install LVM Storage.
- Namespace configuration.
- The OperatorGroup CR configuration.
- The Subscription CR configuration.
-
Create the Policy CR by running the following command:
$ oc create -f <file_name> -n <namespace>
Upon creating the Policy CR, the following custom resources are created on the clusters that match the selection criteria configured in the PlacementRule CR:
-
Namespace
-
OperatorGroup
-
Subscription
-
Note
The default namespace for the LVM Storage Operator is openshift-lvm-storage.
About the LVMCluster custom resource
You can configure the LVMCluster CR to perform the following actions:
-
Create LVM volume groups that you can use to provision persistent volume claims (PVCs).
-
Configure a list of devices that you want to add to the LVM volume groups.
-
Configure the requirements to select the nodes on which you want to create an LVM volume group, and the thin pool configuration for the volume group.
-
Force wipe the selected devices.
After you have installed LVM Storage, you must create an LVMCluster custom resource (CR).
LVMCluster CR YAML file
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
name: my-lvmcluster
  namespace: openshift-lvm-storage
spec:
tolerations:
- effect: NoSchedule
key: xyz
operator: Equal
value: "true"
storage:
deviceClasses:
- name: vg1
fstype: ext4
default: true
nodeSelector:
nodeSelectorTerms:
- matchExpressions:
- key: mykey
operator: In
values:
- ssd
deviceSelector:
paths:
- /dev/disk/by-path/pci-0000:87:00.0-nvme-1
- /dev/disk/by-path/pci-0000:88:00.0-nvme-1
optionalPaths:
- /dev/disk/by-path/pci-0000:89:00.0-nvme-1
- /dev/disk/by-path/pci-0000:90:00.0-nvme-1
forceWipeDevicesAndDestroyAllData: true
thinPoolConfig:
name: thin-pool-1
sizePercent: 90
overprovisionRatio: 10
chunkSize: 128Ki
chunkSizeCalculationPolicy: Static
metadataSize: 1Gi
metadataSizeCalculationPolicy: Host
- Optional field
Explanation of fields in the LVMCluster CR
The LVMCluster CR fields are described in the following table:
| Field | Type | Description |
|---|---|---|
| deviceClasses | array | Contains the configuration to assign the local storage devices to the LVM volume groups. LVM Storage creates a storage class and volume snapshot class for each device class that you create. |
| deviceClasses.name | string | Specify a name for the LVM volume group (VG). You can also configure this field to reuse a volume group that you created in the previous installation. For more information, see "Reusing a volume group from the previous LVM Storage installation". |
| deviceClasses.fstype | string | Set this field to ext4 or xfs. By default, this field is set to xfs. |
| deviceClasses.default | boolean | Set this field to true to indicate that the device class is the default. Otherwise, set it to false. |
| deviceClasses.nodeSelector | object | Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered. LVM Storage detects and uses additional worker nodes when the new nodes become active in the cluster. |
| deviceClasses.nodeSelector.nodeSelectorTerms | array | Configure the requirements that are used to select the node. |
| deviceClasses.deviceSelector | object | Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group, and to force wipe the devices that are added to the LVM volume group. For more information, see "About adding devices to a volume group". |
| deviceClasses.deviceSelector.paths | array | Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state. |
| deviceClasses.deviceSelector.optionalPaths | array | Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error. |
| deviceClasses.deviceSelector.forceWipeDevicesAndDestroyAllData | boolean | LVM Storage uses only those disks that are empty and do not contain file system signatures. To ensure that the disks are empty and do not contain file system signatures, wipe the disks before using them. To force wipe the selected devices, set this field to true. Warning: If this field is set to true, LVM Storage wipes all previous data on the devices. Wiping the device can lead to inconsistencies in data integrity if the device is in use, for example as swap space, as part of a RAID array, or as a mounted file system. If any of these conditions are true, do not force wipe the disk. Instead, you must manually wipe the disk. |
| deviceClasses.thinPoolConfig | object | Contains the configuration to create a thin pool in the LVM volume group. If you exclude this field, logical volumes are thick provisioned. Using thick-provisioned storage includes the following limitations: features that depend on the LVM thin pool capabilities, such as over-provisioning, volume snapshots, and volume clones, are not available. |
| deviceClasses.thinPoolConfig.name | string | Specify a name for the thin pool. |
| deviceClasses.thinPoolConfig.sizePercent | integer | Specify the percentage of space in the LVM volume group for creating the thin pool. By default, this field is set to 90. The minimum value that you can set is 10, and the maximum value is 90. |
| deviceClasses.thinPoolConfig.overprovisionRatio | integer | Specify a factor by which you can provision additional storage based on the available storage in the thin pool. For example, if this field is set to 10, you can provision up to 10 times the amount of available storage in the thin pool. You can modify this field after the LVM cluster has been created. To update the parameter, edit the LVMCluster CR by running the command $ oc edit lvmcluster <lvmcluster_name>, or patch it by running the command $ oc patch lvmcluster <lvmcluster_name> -p <patch_file.yaml>. To disable over-provisioning, set this field to 1. |
| deviceClasses.thinPoolConfig.chunkSize | string | Specifies the statically calculated chunk size for the thin pool. This field is used only when the chunkSizeCalculationPolicy field is set to Static. If you do not configure this field and the chunkSizeCalculationPolicy field is set to Static, the default chunk size of 128 KiB is used. For more information, see "Overview of chunk size". |
| deviceClasses.thinPoolConfig.chunkSizeCalculationPolicy | string | Specifies the policy to calculate the chunk size for the underlying volume group. You can set this field to either Static or Host. If this field is set to Static, the chunk size is set to the value of the chunkSize field, or to the default value of 128 KiB if the chunkSize field is not configured. If this field is set to Host, the chunk size is calculated based on the lvm2 configuration on the host. For more information, see "Limitations to configure the size of the devices used in LVM Storage". |
| deviceClasses.thinPoolConfig.metadataSize | string | Specifies the metadata size for the thin pool. You can configure this field only when the metadataSizeCalculationPolicy field is set to Static. If this field is not configured, and the metadataSizeCalculationPolicy field is set to Static, the default metadata size of 1 GiB is used. The value for this field must be configured in the range of 2 MiB to 16 GiB due to the underlying limitations of lvm2. |
| deviceClasses.thinPoolConfig.metadataSizeCalculationPolicy | string | Specifies the policy to calculate the metadata size for the underlying volume group. You can set this field to either Static or Host. If this field is set to Static, the metadata size is set to the value of the metadataSize field. If this field is set to Host, the metadata size is calculated based on the lvm2 configuration on the host. |
Limitations to configure the size of the devices used in LVM Storage
The limitations to configure the size of the devices that you can use to provision storage using LVM Storage are as follows:
-
The total storage size that you can provision is limited by the size of the underlying Logical Volume Manager (LVM) thin pool and the over-provisioning factor.
-
The size of the logical volume depends on the size of the Physical Extent (PE) and the Logical Extent (LE).
-
You can define the size of PE and LE during the physical and logical device creation.
-
The default PE and LE size is 4 MB.
-
If the size of the PE is increased, the maximum size of a logical volume is determined by the kernel limits and your disk space.
-
The following tables describe the chunk size and volume size limits for static and host configurations:
| Parameter | Value |
|---|---|
| Chunk size | 128 KiB |
| Maximum volume size | 32 TiB |
| Parameter | Minimum value | Maximum value |
|---|---|---|
| Chunk size | 64 KiB | 1 GiB |
| Volume size | Minimum size of the underlying Red Hat Enterprise Linux CoreOS (RHCOS) system | Maximum size of the underlying RHCOS system |
| Parameter | Value |
|---|---|
| Chunk size | This value is based on the lvm2 configuration on the host. |
| Maximum volume size | Equal to the maximum volume size of the underlying RHCOS system. |
| Minimum volume size | Equal to the minimum volume size of the underlying RHCOS system. |
About adding devices to a volume group
The deviceSelector field in the LVMCluster CR contains the configuration to specify the paths to the devices that you want to add to the Logical Volume Manager (LVM) volume group.
You can specify the device paths in the deviceSelector.paths field, the deviceSelector.optionalPaths field, or both. If you do not specify the device paths in both the deviceSelector.paths field and the deviceSelector.optionalPaths field, LVM Storage adds the supported unused devices to the volume group (VG).
Important
It is recommended to avoid referencing disks using symbolic naming, such as /dev/sdX, as these names may change across reboots within RHCOS. Instead, you must use stable naming schemes, such as /dev/disk/by-path/ or /dev/disk/by-id/, to ensure consistent disk identification.
With this change, you might need to adjust existing automation workflows in the cases where monitoring collects information about the install device for each node.
For more information, see the RHEL documentation.
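For example, to see the stable identifiers that are available for the disks on a node, you can list the symbolic links. This is a generic inspection command, not specific to LVM Storage:
$ ls -l /dev/disk/by-path/ /dev/disk/by-id/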
You can add the path to the Redundant Array of Independent Disks (RAID) arrays in the deviceSelector field to integrate the RAID arrays with LVM Storage. You can create the RAID array by using the mdadm utility. LVM Storage does not support creating a software RAID.
Note
You can create a RAID array only during an OpenShift Container Platform installation. For information on creating a RAID array, see the following sections:
-
"Configuring a RAID-enabled data volume" in "Additional resources".
You can also add encrypted devices to the volume group. You can enable disk encryption on the cluster nodes during an OpenShift Container Platform installation. After encrypting a device, you can specify the path to the LUKS encrypted device in the deviceSelector field. For information on disk encryption, see "About disk encryption" and "Configuring disk encryption and mirroring".
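For example, a deviceSelector entry that references an opened LUKS device might look like the following sketch. The device path is illustrative; use the stable path of your own encrypted device:
deviceSelector:
  paths:
  - /dev/disk/by-id/dm-name-encrypted-disk # illustrative path to an opened LUKS device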
The devices that you want to add to the VG must be supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage".
LVM Storage adds the devices to the VG only if the following conditions are met:
-
The device path exists.
-
The device is supported by LVM Storage.
Important
After a device is added to the VG, you cannot remove the device.
LVM Storage supports dynamic device discovery. If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices to the VG when the devices are available.
Warning
It is not recommended to add the devices to the VG through dynamic device discovery due to the following reasons:
-
When you add a new device that you do not intend to add to the VG, LVM Storage automatically adds this device to the VG through dynamic device discovery.
-
If LVM Storage adds a device to the VG through dynamic device discovery, LVM Storage does not restrict you from removing the device from the node. Removing or updating the devices that are already added to the VG can disrupt the VG. This can also lead to data loss and necessitate manual node remediation.
About removing devices and device classes from a volume group
The deviceSelector field in the LVMCluster CR contains the configuration to specify the paths to the devices that you can remove from the Logical Volume Manager (LVM) volume group.
Removing the device paths in the deviceSelector.paths field
You can remove the device paths in the deviceSelector.paths field.
Important
Ensure that the following criteria are met before removing device paths:
-
The device that you want to remove is empty. You can use the pvdisplay command to see attributes of physical volumes (PVs) used in LVM.
-
At least one additional device is specified in the deviceSelector.paths field.
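For example, to confirm that a device is empty before removing its path, you can inspect it by running the following command. The device path is illustrative:
$ pvdisplay /dev/disk/by-path/pci-0000:87:00.0-nvme-1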
Removing the deviceClass from the LVMCluster
You can also remove the deviceClass object from the LVMCluster resource. To delete a device class, you do not need to delete the deviceSelector.paths object.
Important
Ensure that the following criteria are met before removing a device class:
-
The deviceClasses.default field is set to false.
-
The disks specified in the deviceSelector.paths field are empty.
-
At least one additional device class is specified in the storage field.
Devices not supported by LVM Storage
When you are adding the device paths in the deviceSelector field of the LVMCluster custom resource (CR), ensure that the devices are supported by LVM Storage. If you add paths to the unsupported devices, LVM Storage excludes the devices to avoid complexity in managing logical volumes.
If you do not specify any device path in the deviceSelector field, LVM Storage adds only the unused devices that it supports.
Note
To get information about the devices, run the following command:
$ lsblk --paths --json -o \
NAME,ROTA,TYPE,SIZE,MODEL,VENDOR,RO,STATE,KNAME,SERIAL,PARTLABEL,FSTYPE
LVM Storage does not support the following devices:
- Read-only devices
-
Devices with the ro parameter set to true.
- Suspended devices
-
Devices with the state parameter set to suspended.
- ROM devices
-
Devices with the type parameter set to rom.
- LVM partition devices
-
Devices with the type parameter set to lvm.
- Devices with invalid partition labels
-
Devices with the partlabel parameter set to bios, boot, or reserved.
- Devices with an invalid filesystem
-
Devices with the fstype parameter set to any value other than null or LVM2_member.
Important
LVM Storage supports devices with the fstype parameter set to LVM2_member only if the devices do not contain children devices.
- Devices that are part of another volume group
-
To get the information about the volume groups of the device, run the following command:
$ pvs <device-name>
Replace <device-name> with the device name.
- Devices with bind mounts
-
To get the mount points of a device, run the following command:
$ cat /proc/1/mountinfo | grep <device-name>
Replace <device-name> with the device name.
- Devices that contain children devices
Note
It is recommended to wipe the device before using it in LVM Storage to prevent unexpected behavior.
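For example, assuming the illustrative device path below, you can wipe a disk with the wipefs utility. Wiping destroys all signatures on the device, so verify the path before running the command:
$ wipefs --all --force /dev/disk/by-path/pci-0000:87:00.0-nvme-1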
Ways to create an LVMCluster custom resource
You can create an LVMCluster custom resource (CR) by using the OpenShift CLI (oc) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also create an LVMCluster CR by using RHACM.
Important
You must create the LVMCluster CR in the same namespace where you installed the LVM Storage Operator, which is openshift-lvm-storage by default.
Upon creating the LVMCluster CR, LVM Storage creates the following system-managed CRs:
-
A storageClass and volumeSnapshotClass for each device class.
Note
LVM Storage configures the name of the storage class and volume snapshot class in the format lvms-<device_class_name>, where <device_class_name> is the value of the deviceClasses.name field in the LVMCluster CR. For example, if the deviceClasses.name field is set to vg1, the name of the storage class and volume snapshot class is lvms-vg1.
-
LVMVolumeGroup: This CR is a specific type of persistent volume (PV) that is backed by an LVM volume group. It tracks the individual volume groups across multiple nodes.
-
LVMVolumeGroupNodeStatus: This CR tracks the status of the volume groups on a node.
Reusing a volume group from the previous LVM Storage installation
You can reuse an existing volume group (VG) from the previous LVM Storage installation instead of creating a new VG.
You can reuse only the VG, not the logical volumes associated with the VG.
Important
You can perform this procedure only while creating an LVMCluster custom resource (CR).
-
The VG that you want to reuse must not be corrupted.
-
The VG that you want to reuse must have the lvms tag. For more information on adding tags to LVM objects, see Grouping LVM objects with tags.
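For reference, assuming vg1 is the name of the volume group from the previous installation, you can add the lvms tag by running the following command:
$ vgchange --addtag lvms vg1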
-
Open the LVMCluster CR YAML file.
-
Configure the LVMCluster CR parameters as described in the following example:
Example LVMCluster CR YAML file
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-lvm-storage
spec:
# ...
  storage:
    deviceClasses:
    - name: vg1
      fstype: ext4
      default: true
      deviceSelector:
# ...
        forceWipeDevicesAndDestroyAllData: false
      thinPoolConfig:
# ...
      nodeSelector:
# ...
- Set this field to the name of a VG from the previous LVM Storage installation.
- Set this field to ext4 or xfs. By default, this field is set to xfs.
- You can add new devices to the VG that you want to reuse by specifying the new device paths in the deviceSelector field. If you do not want to add new devices to the VG, ensure that the deviceSelector configuration in the current LVM Storage installation is the same as that of the previous LVM Storage installation.
- If this field is set to true, LVM Storage wipes all the data on the devices that are added to the VG.
- To retain the thinPoolConfig configuration of the VG that you want to reuse, ensure that the thinPoolConfig configuration in the current LVM Storage installation is the same as that of the previous LVM Storage installation. Otherwise, you can configure the thinPoolConfig field as required.
- Configure the requirements to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered.
-
Save the LVMCluster CR YAML file.
Note
To view the devices that are part of a volume group, run the following command:
$ pvs -S vgname=<vg_name>
Replace <vg_name> with the name of the volume group.
Creating an LVMCluster CR by using the CLI
You can create an LVMCluster custom resource (CR) on a worker node using the OpenShift CLI (oc).
Important
You can only create a single instance of the LVMCluster custom resource (CR) on an OpenShift Container Platform cluster.
-
You have installed the OpenShift CLI (oc).
-
You have logged in to OpenShift Container Platform as a user with cluster-admin privileges.
-
You have installed LVM Storage.
-
You have installed a worker node in the cluster.
-
You read the "About the LVMCluster custom resource" section.
-
Create an LVMCluster custom resource (CR) YAML file:
Example LVMCluster CR YAML file
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
  namespace: openshift-lvm-storage
spec:
# ...
  storage:
    deviceClasses:
# ...
      nodeSelector:
# ...
      deviceSelector:
# ...
      thinPoolConfig:
# ...
- Contains the configuration to assign the local storage devices to the LVM volume groups.
- Contains the configuration to choose the nodes on which you want to create the LVM volume group. If this field is empty, all nodes without no-schedule taints are considered.
- Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group, and force wipe the devices that are added to the LVM volume group.
- Contains the configuration to create a thin pool in the LVM volume group. If you exclude this field, logical volumes are thick provisioned.
-
Create the LVMCluster CR by running the following command:
$ oc create -f <file_name>
Example output
lvmcluster/lvmcluster created
-
Check that the LVMCluster CR is in the Ready state:
$ oc get lvmclusters.lvm.topolvm.io -o jsonpath='{.items[*].status}' -n <namespace>
Example output
{
  "deviceClassStatuses": [
    {
      "name": "vg1",
      "nodeStatus": [
        {
          "devices": [
            "/dev/nvme0n1",
            "/dev/nvme1n1",
            "/dev/nvme2n1"
          ],
          "node": "kube-node",
          "status": "Ready"
        }
      ]
    }
  ],
  "state": "Ready"
}
- The status of the device class.
- The status of the LVM volume group on each node.
- The list of devices used to create the LVM volume group.
- The node on which the device class is created.
- The status of the LVM volume group on the node.
- The status of the LVMCluster CR.
Note
If the LVMCluster CR is in the Failed state, you can view the reason for failure in the status field.
Example of the status field with the reason for failure:
status:
  deviceClassStatuses:
  - name: vg1
    nodeStatus:
    - node: my-node-1.example.com
      reason: no available devices found for volume group
      status: Failed
  state: Failed
-
Optional: To view the storage classes created by LVM Storage for each device class, run the following command:
$ oc get storageclass
Example output
NAME       PROVISIONER   RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
lvms-vg1   topolvm.io    Delete          WaitForFirstConsumer   true                   31m
-
Optional: To view the volume snapshot classes created by LVM Storage for each device class, run the following command:
$ oc get volumesnapshotclass
Example output
NAME       DRIVER       DELETIONPOLICY   AGE
lvms-vg1   topolvm.io   Delete           24h
Creating an LVMCluster CR by using the web console
You can create an LVMCluster CR on a worker node using the OpenShift Container Platform web console.
Important
You can only create a single instance of the LVMCluster custom resource (CR) on an OpenShift Container Platform cluster.
-
You have access to the OpenShift Container Platform cluster with cluster-admin privileges.
-
You have installed LVM Storage.
-
You have installed a worker node in the cluster.
-
You read the "About the LVMCluster custom resource" section.
-
Log in to the OpenShift Container Platform web console.
-
Click Ecosystem → Installed Operators.
-
In the openshift-lvm-storage namespace, click LVM Storage.
-
Click Create LVMCluster and select either Form view or YAML view.
-
Configure the required LVMCluster CR parameters.
-
Click Create.
-
Optional: If you want to edit the LVMCluster CR, perform the following actions:
-
Click the LVMCluster tab.
-
From the Actions menu, select Edit LVMCluster.
-
Click YAML and edit the required LVMCluster CR parameters.
-
Click Save.
-
-
On the LVMCluster page, check that the LVMCluster CR is in the Ready state.
-
Optional: To view the available storage classes created by LVM Storage for each device class, click Storage → StorageClasses.
-
Optional: To view the available volume snapshot classes created by LVM Storage for each device class, click Storage → VolumeSnapshotClasses.
Creating an LVMCluster CR by using RHACM
After you have installed LVM Storage by using RHACM, you must create an LVMCluster custom resource (CR).
-
You have installed LVM Storage by using RHACM.
-
You have access to the RHACM cluster using an account with cluster-admin permissions.
-
You read the "About the LVMCluster custom resource" section.
-
Log in to the RHACM CLI using your OpenShift Container Platform credentials.
-
Create a ConfigurationPolicy CR YAML file with the configuration to create an LVMCluster CR:
Example ConfigurationPolicy CR YAML file to create an LVMCluster CR
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: lvms
  namespace: openshift-lvm-storage
spec:
  object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: lvm.topolvm.io/v1alpha1
      kind: LVMCluster
      metadata:
        name: my-lvmcluster
        namespace: openshift-lvm-storage
      spec:
        storage:
          deviceClasses:
# ...
            deviceSelector:
# ...
            thinPoolConfig:
# ...
            nodeSelector:
# ...
  remediationAction: enforce
  severity: low
- Contains the configuration to assign the local storage devices to the LVM volume groups.
- Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group, and force wipe the devices that are added to the LVM volume group.
- Contains the configuration to create a thin pool in the LVM volume group. If you exclude this field, logical volumes are thick provisioned.
- Contains the configuration to choose the nodes on which you want to create the LVM volume groups. If this field is empty, then all nodes without no-schedule taints are considered.
-
Create the ConfigurationPolicy CR by running the following command:
$ oc create -f <file_name> -n <cluster_namespace>
Replace <cluster_namespace> with the namespace of the OpenShift Container Platform cluster on which LVM Storage is installed.
Ways to delete an LVMCluster custom resource
You can delete an LVMCluster custom resource (CR) by using the OpenShift CLI (oc) or the OpenShift Container Platform web console. If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can also delete an LVMCluster CR by using RHACM.
Upon deleting the LVMCluster CR, LVM Storage deletes the following CRs:
-
storageClass -
volumeSnapshotClass -
LVMVolumeGroup -
LVMVolumeGroupNodeStatus
Deleting an LVMCluster CR by using the CLI
You can delete the LVMCluster custom resource (CR) using the OpenShift CLI (oc).
-
You have access to OpenShift Container Platform as a user with cluster-admin permissions.
-
You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
-
Log in to the OpenShift CLI (oc).
-
Delete the LVMCluster CR by running the following command:
$ oc delete lvmcluster <lvm_cluster_name> -n <namespace>
-
To verify that the LVMCluster CR has been deleted, run the following command:
$ oc get lvmcluster -n <namespace>
Example output
No resources found in openshift-lvm-storage namespace.
Deleting an LVMCluster CR by using the web console
You can delete the LVMCluster custom resource (CR) using the OpenShift Container Platform web console.
-
You have access to OpenShift Container Platform as a user with cluster-admin permissions.
-
You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
-
Log in to the OpenShift Container Platform web console.
-
Click Ecosystem → Installed Operators to view all the installed Operators.
-
Click LVM Storage in the openshift-lvm-storage namespace.
-
Click the LVMCluster tab.
-
From the Actions menu, select Delete LVMCluster.
-
Click Delete.
-
On the LVMCluster page, check that the LVMCluster CR has been deleted.
Deleting an LVMCluster CR by using RHACM
If you have installed LVM Storage by using Red Hat Advanced Cluster Management (RHACM), you can delete an LVMCluster CR by using RHACM.
-
You have access to the RHACM cluster as a user with cluster-admin permissions.
-
You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
-
Log in to the RHACM CLI using your OpenShift Container Platform credentials.
-
Delete the ConfigurationPolicy CR YAML file that was created for the LVMCluster CR:
$ oc delete -f <file_name> -n <cluster_namespace>
Replace <cluster_namespace> with the namespace of the OpenShift Container Platform cluster on which LVM Storage is installed.
-
Create a Policy CR YAML file to delete the LVMCluster CR:
Example Policy CR to delete the LVMCluster CR
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-lvmcluster-delete
  annotations:
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
spec:
  remediationAction: enforce
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: policy-lvmcluster-removal
      spec:
        remediationAction: enforce
        severity: low
        object-templates:
        - complianceType: mustnothave
          objectDefinition:
            kind: LVMCluster
            apiVersion: lvm.topolvm.io/v1alpha1
            metadata:
              name: my-lvmcluster
              namespace: openshift-lvm-storage
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-lvmcluster-delete
placementRef:
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
  name: placement-policy-lvmcluster-delete
subjects:
- apiGroup: policy.open-cluster-management.io
  kind: Policy
  name: policy-lvmcluster-delete
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-policy-lvmcluster-delete
spec:
  clusterConditions:
  - status: "True"
    type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions:
    - key: mykey
      operator: In
      values:
      - myvalue
- The spec.remediationAction in the policy-template is overridden by the preceding parameter value for spec.remediationAction.
- The namespace field must have the openshift-lvm-storage value.
- Configure the requirements to select the clusters. LVM Storage is uninstalled on the clusters that match the selection criteria.
-
Create the Policy CR by running the following command:
$ oc create -f <file_name> -n <namespace>
-
Create a Policy CR YAML file to check if the LVMCluster CR has been deleted:
Example Policy CR to check if the LVMCluster CR has been deleted
apiVersion: policy.open-cluster-management.io/v1
kind: Policy
metadata:
  name: policy-lvmcluster-inform
  annotations:
    policy.open-cluster-management.io/standards: NIST SP 800-53
    policy.open-cluster-management.io/categories: CM Configuration Management
    policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
spec:
  remediationAction: inform
  disabled: false
  policy-templates:
  - objectDefinition:
      apiVersion: policy.open-cluster-management.io/v1
      kind: ConfigurationPolicy
      metadata:
        name: policy-lvmcluster-removal-inform
      spec:
        remediationAction: inform
        severity: low
        object-templates:
        - complianceType: mustnothave
          objectDefinition:
            kind: LVMCluster
            apiVersion: lvm.topolvm.io/v1alpha1
            metadata:
              name: my-lvmcluster
              namespace: openshift-lvm-storage
---
apiVersion: policy.open-cluster-management.io/v1
kind: PlacementBinding
metadata:
  name: binding-policy-lvmcluster-check
placementRef:
  apiGroup: apps.open-cluster-management.io
  kind: PlacementRule
  name: placement-policy-lvmcluster-check
subjects:
- apiGroup: policy.open-cluster-management.io
  kind: Policy
  name: policy-lvmcluster-inform
---
apiVersion: apps.open-cluster-management.io/v1
kind: PlacementRule
metadata:
  name: placement-policy-lvmcluster-check
spec:
  clusterConditions:
  - status: "True"
    type: ManagedClusterConditionAvailable
  clusterSelector:
    matchExpressions:
    - key: mykey
      operator: In
      values:
      - myvalue
- The policy-template spec.remediationAction is overridden by the preceding parameter value for spec.remediationAction.
- The namespace field must have the openshift-lvm-storage value.
-
Create the Policy CR by running the following command:
$ oc create -f <file_name> -n <namespace>
-
Check the status of the Policy CRs by running the following command:
$ oc get policy -n <namespace>
Example output
NAME                       REMEDIATION ACTION   COMPLIANCE STATE   AGE
policy-lvmcluster-delete   enforce              Compliant          15m
policy-lvmcluster-inform   inform               Compliant          15m
Important
The Policy CRs must be in the Compliant state.
Provisioning storage
After you have created the LVM volume groups using the LVMCluster custom resource (CR), you can provision the storage by creating persistent volume claims (PVCs).
The following are the minimum storage sizes that you can request for each file system type:
-
block: 8 MiB
-
xfs: 300 MiB
-
ext4: 32 MiB
To create a PVC, you must create a PersistentVolumeClaim object.
-
You have created an LVMCluster CR.
-
Log in to the OpenShift CLI (oc).
-
Create a PersistentVolumeClaim object:
Example PersistentVolumeClaim object
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: lvm-block-1
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 10Gi
    limits:
      storage: 20Gi
  storageClassName: lvms-vg1
- Specify a name for the PVC.
- To create a file PVC, set this field to Filesystem. To create a block PVC, set this field to Block.
- Specify the storage size. If the value is less than the minimum storage size, the requested storage size is rounded to the minimum storage size. The total storage size you can provision is limited by the size of the Logical Volume Manager (LVM) thin pool and the over-provisioning factor.
- Optional: Specify the storage limit. Set this field to a value that is greater than or equal to the minimum storage size. Otherwise, PVC creation fails with an error.
- The value of the storageClassName field must be in the format lvms-<device_class_name>, where <device_class_name> is the value of the deviceClasses.name field in the LVMCluster CR. For example, if the deviceClasses.name field is set to vg1, you must set the storageClassName field to lvms-vg1.
Note
The volumeBindingMode field of the storage class is set to WaitForFirstConsumer.
-
Create the PVC by running the following command:
$ oc create -f <file_name> -n <application_namespace>
Note
The created PVCs remain in the Pending state until you deploy the pods that use them.
-
To verify that the PVC is created, run the following command:
$ oc get pvc -n <namespace>
Example output
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvm-block-1   Bound    pvc-e90169a8-fd71-4eea-93b8-817155f60e47   1Gi        RWO            lvms-vg1       5s
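Because the volumeBindingMode of the storage class is WaitForFirstConsumer, the PVC binds only after a pod consumes it. The following is a minimal sketch of such a pod; the pod name and container image are illustrative:
apiVersion: v1
kind: Pod
metadata:
  name: lvm-storage-test
  namespace: default
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi:latest
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: lvm-block-1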
Ways to scale up the storage of clusters
OpenShift Container Platform supports additional worker nodes for clusters on bare metal user-provisioned infrastructure. You can scale up the storage of clusters either by adding new worker nodes with available storage or by adding new devices to the existing worker nodes.
Logical Volume Manager (LVM) Storage detects and uses additional worker nodes when the nodes become active.
To add a new device to the existing worker nodes on a cluster, you must add the path to the new device in the deviceSelector field of the LVMCluster custom resource (CR).
Important
You can add the deviceSelector field in the LVMCluster CR only while creating the LVMCluster CR. If you have not added the deviceSelector field while creating the LVMCluster CR, you must delete the LVMCluster CR and create a new LVMCluster CR containing the deviceSelector field.
If you do not add the deviceSelector field in the LVMCluster CR, LVM Storage automatically adds the new devices when the devices are available.
Note
LVM Storage adds only the supported devices. For information about unsupported devices, see "Devices not supported by LVM Storage".
Scaling up the storage of clusters by using the CLI
You can scale up the storage capacity of the worker nodes on a cluster by using the OpenShift CLI (oc).
-
You have additional unused devices on each cluster to be used by Logical Volume Manager (LVM) Storage.
-
You have installed the OpenShift CLI (oc).
-
You have created an LVMCluster custom resource (CR).
-
Edit the LVMCluster CR by running the following command:
$ oc edit lvmcluster <lvmcluster_name> -n <namespace>
-
Add the path to the new device in the deviceSelector field.
Example LVMCluster CR
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  storage:
    deviceClasses:
# ...
      deviceSelector:
        paths:
        - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
        - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
        optionalPaths:
        - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
        - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
# ...
- Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths, Logical Volume Manager (LVM) Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the following conditions are met:
-
The device path exists.
-
The device is supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage".
- Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state.
- Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error.
Important
After a device is added to the LVM volume group, it cannot be removed.
-
Save the LVMCluster CR.
Scaling up the storage of clusters by using the web console
You can scale up the storage capacity of the worker nodes on a cluster by using the OpenShift Container Platform web console.
-
You have additional unused devices on each cluster to be used by Logical Volume Manager (LVM) Storage.
-
You have created an LVMCluster custom resource (CR).
-
Log in to the OpenShift Container Platform web console.
-
Click Ecosystem → Installed Operators.
-
Click LVM Storage in the openshift-lvm-storage namespace.
-
Click the LVMCluster tab to view the LVMCluster CR created on the cluster.
-
From the Actions menu, select Edit LVMCluster.
-
Click the YAML tab.
-
Edit the LVMCluster CR to add the new device path in the deviceSelector field:
Example LVMCluster CR
apiVersion: lvm.topolvm.io/v1alpha1
kind: LVMCluster
metadata:
  name: my-lvmcluster
spec:
  storage:
    deviceClasses:
# ...
      deviceSelector:
        paths:
        - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
        - /dev/disk/by-path/pci-0000:88:00.0-nvme-1
        optionalPaths:
        - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
        - /dev/disk/by-path/pci-0000:90:00.0-nvme-1
# ...
- Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths, Logical Volume Manager (LVM) Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the following conditions are met:
-
The device path exists.
-
The device is supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage".
- Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state.
- Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error.
Important
After a device is added to the LVM volume group, it cannot be removed.
-
Click Save.
Scaling up the storage of clusters by using RHACM
You can scale up the storage capacity of worker nodes on the clusters by using RHACM.
-
You have access to the RHACM cluster using an account with cluster-admin privileges.
-
You have created an LVMCluster custom resource (CR) by using RHACM.
-
You have additional unused devices on each cluster to be used by Logical Volume Manager (LVM) Storage.
-
Log in to the RHACM CLI using your OpenShift Container Platform credentials.
-
Edit the LVMCluster CR that you created using RHACM by running the following command:
$ oc edit -f <file_name> -n <namespace>
Replace <file_name> with the name of the LVMCluster CR file.
-
In the LVMCluster CR, add the path to the new device in the deviceSelector field.
Example LVMCluster CR
apiVersion: policy.open-cluster-management.io/v1
kind: ConfigurationPolicy
metadata:
  name: lvms
spec:
  object-templates:
  - complianceType: musthave
    objectDefinition:
      apiVersion: lvm.topolvm.io/v1alpha1
      kind: LVMCluster
      metadata:
        name: my-lvmcluster
        namespace: openshift-lvm-storage
      spec:
        storage:
          deviceClasses:
# ...
            deviceSelector:
              paths:
              - /dev/disk/by-path/pci-0000:87:00.0-nvme-1
              optionalPaths:
              - /dev/disk/by-path/pci-0000:89:00.0-nvme-1
# ...
- Contains the configuration to specify the paths to the devices that you want to add to the LVM volume group. You can specify the device paths in the paths field, the optionalPaths field, or both. If you do not specify the device paths in both paths and optionalPaths, Logical Volume Manager (LVM) Storage adds the supported unused devices to the LVM volume group. LVM Storage adds the devices to the LVM volume group only if the following conditions are met:
-
The device path exists.
-
The device is supported by LVM Storage. For information about unsupported devices, see "Devices not supported by LVM Storage".
- Specify the device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, the LVMCluster CR moves to the Failed state.
- Specify the optional device paths. If the device path specified in this field does not exist, or the device is not supported by LVM Storage, LVM Storage ignores the device without causing an error.
Important
After a device is added to the LVM volume group, it cannot be removed.
-
Save the LVMCluster CR.
Expanding a persistent volume claim
After scaling up the storage of a cluster, you can expand the existing persistent volume claims (PVCs).
To expand a PVC, you must update the storage field in the PVC.
-
Dynamic provisioning is used.
-
The StorageClass object associated with the PVC has the allowVolumeExpansion field set to true.
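You can verify this prerequisite on the storage class. The following sketch assumes the lvms-vg1 storage class used in the earlier examples:
$ oc get storageclass lvms-vg1 -o jsonpath='{.allowVolumeExpansion}'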
-
Log in to the OpenShift CLI (oc).
-
Update the value of the spec.resources.requests.storage field to a value that is greater than the current value by running the following command:
$ oc patch pvc <pvc_name> -n <application_namespace> \
  --type=merge -p \
  '{ "spec": { "resources": { "requests": { "storage": "<desired_size>" }}}}'
Replace <pvc_name> with the name of the PVC that you want to expand, and <desired_size> with the new size of the PVC.
-
To verify that resizing is completed, run the following command:
$ oc get pvc <pvc_name> -n <application_namespace> -o=jsonpath={.status.capacity.storage}
LVM Storage adds the Resizing condition to the PVC during expansion. It removes the Resizing condition after the PVC expansion is complete.
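To observe the Resizing condition while the expansion is in progress, you can describe the PVC:
$ oc describe pvc <pvc_name> -n <application_namespace>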
Deleting a persistent volume claim
You can delete a persistent volume claim (PVC) by using the OpenShift CLI (oc).
-
You have access to OpenShift Container Platform as a user with cluster-admin permissions.
-
Log in to the OpenShift CLI (oc).
-
Delete the PVC by running the following command:
$ oc delete pvc <pvc_name> -n <namespace>
-
To verify that the PVC is deleted, run the following command:
$ oc get pvc -n <namespace>
The deleted PVC must not be present in the output of this command.
About volume snapshots
You can create snapshots of persistent volume claims (PVCs) that are provisioned by LVM Storage.
You can perform the following actions using the volume snapshots:
-
Back up your application data.
Important
Volume snapshots are located on the same devices as the original data. To use the volume snapshots as backups, you must move the snapshots to a secure location. You can use OpenShift API for Data Protection (OADP) backup and restore solutions. For information about OADP, see "OADP features".
-
Revert to a state at which the volume snapshot was taken.
Note
You can also create volume snapshots of the volume clones.
Limitations for creating volume snapshots in multi-node topology
LVM Storage has the following limitations for creating volume snapshots in multi-node topology:
-
Creating volume snapshots is based on the LVM thin pool capabilities.
-
After creating a volume snapshot, the node must have additional storage space for further updating the original data source.
-
You can create volume snapshots only on the node where you have deployed the original data source.
-
Pods relying on the PVC that uses the snapshot data can be scheduled only on the node where you have deployed the original data source.
Creating volume snapshots
You can create volume snapshots based on the available capacity of the thin pool and the over-provisioning limits.
To create a volume snapshot, you must create a VolumeSnapshot object.
-
You have access to OpenShift Container Platform as a user with cluster-admin permissions.
-
You ensured that the persistent volume claim (PVC) is in the Bound state. This is required for a consistent snapshot.
-
You stopped all the I/O to the PVC.
-
Log in to the OpenShift CLI (oc).
-
Create a VolumeSnapshot object:
Example VolumeSnapshot object
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: lvm-block-1-snap
spec:
  source:
    persistentVolumeClaimName: lvm-block-1
  volumeSnapshotClassName: lvms-vg1
- Specify a name for the volume snapshot.
- Specify the name of the source PVC. LVM Storage creates a snapshot of this PVC.
- Set this field to the name of a volume snapshot class.
Note
To get the list of available volume snapshot classes, run the following command:
$ oc get volumesnapshotclass
-
Create the volume snapshot in the namespace where you created the source PVC by running the following command:
$ oc create -f <file_name> -n <namespace>
LVM Storage creates a read-only copy of the PVC as a volume snapshot.
-
To verify that the volume snapshot is created, run the following command:
$ oc get volumesnapshot -n <namespace>
Example output
NAME               READYTOUSE   SOURCEPVC     SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS   SNAPSHOTCONTENT                                    CREATIONTIME   AGE
lvm-block-1-snap   true         lvms-test-1                           1Gi           lvms-vg1        snapcontent-af409f97-55fc-40cf-975f-71e44fa2ca91   19s            19s
The value of the READYTOUSE field for the volume snapshot that you created must be true.
Restoring volume snapshots
To restore a volume snapshot, you must create a persistent volume claim (PVC) with the dataSource.name field set to the name of the volume snapshot.
The restored PVC is independent of the volume snapshot and the source PVC.
-
You have access to OpenShift Container Platform as a user with cluster-admin permissions.
-
You have created a volume snapshot.
-
Log in to the OpenShift CLI (oc).
-
Create a PersistentVolumeClaim object with the configuration to restore the volume snapshot:
Example PersistentVolumeClaim object to restore a volume snapshot
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: lvm-block-1-restore
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 2Gi
  storageClassName: lvms-vg1
  dataSource:
    name: lvm-block-1-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
- Specify the storage size of the restored PVC. The storage size of the requested PVC must be greater than or equal to the storage size of the volume snapshot that you want to restore. If a larger PVC is required, you can also resize the PVC after restoring the volume snapshot.
- Set this field to the value of the storageClassName field in the source PVC of the volume snapshot that you want to restore.
- Set this field to the name of the volume snapshot that you want to restore.
-
Create the PVC in the namespace where you created the volume snapshot by running the following command:
$ oc create -f <file_name> -n <namespace>
-
To verify that the volume snapshot is restored, run the following command:
$ oc get pvc -n <namespace>
Example output
NAME                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
lvm-block-1-restore   Bound    pvc-e90169a8-fd71-4eea-93b8-817155f60e47   1Gi        RWO            lvms-vg1       5s
Deleting volume snapshots
You can delete the volume snapshots of the persistent volume claims (PVCs).
Important
When you delete a persistent volume claim (PVC), LVM Storage deletes only the PVC, but not the snapshots of the PVC.
- You have access to OpenShift Container Platform as a user with cluster-admin permissions.
- You have ensured that the volume snapshot that you want to delete is not in use.
- Log in to the OpenShift CLI (oc).
- Delete the volume snapshot by running the following command:

  $ oc delete volumesnapshot <volume_snapshot_name> -n <namespace>
- To verify that the volume snapshot is deleted, run the following command:

  $ oc get volumesnapshot -n <namespace>

  The deleted volume snapshot must not be present in the output of this command.
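If a deletion hangs, the snapshot might still be referenced by a restored PVC. As a quick client-side check, a JSONPath filter such as the following lists the PVCs in a namespace whose dataSource points at a given snapshot; the snapshot name is the one from the earlier examples:

  $ oc get pvc -n <namespace> \
      -o jsonpath='{range .items[?(@.spec.dataSource.name=="lvm-block-1-snap")]}{.metadata.name}{"\n"}{end}'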
About volume clones
A volume clone is a duplicate of an existing persistent volume claim (PVC). You can create a volume clone to make a point-in-time copy of the data.
Limitations for creating volume clones in multi-node topology
LVM Storage has the following limitations for creating volume clones in multi-node topology:
- Creating volume clones is based on the LVM thin pool capabilities.
- The node must have additional storage available after a volume clone is created so that the original data source can still be updated.
- You can create volume clones only on the node where you have deployed the original data source.
- Pods relying on the PVC that uses the clone data can be scheduled only on the node where you have deployed the original data source.
Creating volume clones
To create a clone of a persistent volume claim (PVC), you must create a PersistentVolumeClaim object in the namespace where you created the source PVC.
Important
The cloned PVC has write access.
- You ensured that the source PVC is in the Bound state. This is required for a consistent clone.
- Log in to the OpenShift CLI (oc).
- Create a PersistentVolumeClaim object:

  Example PersistentVolumeClaim object to create a volume clone

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: lvm-pvc-clone
  spec:
    accessModes:
      - ReadWriteOnce
    storageClassName: lvms-vg1
    volumeMode: Filesystem
    dataSource:
      kind: PersistentVolumeClaim
      name: lvm-pvc
    resources:
      requests:
        storage: 1Gi

  - Set this field to the value of the storageClassName field in the source PVC.
  - Set this field to the value of the volumeMode field in the source PVC.
  - Specify the name of the source PVC.
  - Specify the storage size for the cloned PVC. The storage size of the cloned PVC must be greater than or equal to the storage size of the source PVC.
- Create the PVC in the namespace where you created the source PVC by running the following command:

  $ oc create -f <file_name> -n <namespace>
- To verify that the volume clone is created, run the following command:

  $ oc get pvc -n <namespace>

  Example output

  NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
  lvm-pvc-clone   Bound    pvc-e90169a8-fd71-4eea-93b8-817155f60e47   1Gi        RWO            lvms-vg1       5s
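Because the cloned PVC in this example uses volumeMode: Filesystem, a pod mounts it as a directory through volumeMounts, in contrast to the raw block device example shown for restored snapshots. A minimal sketch, with an illustrative pod name, image, and mount path:

  apiVersion: v1
  kind: Pod
  metadata:
    name: clone-consumer               # illustrative name
  spec:
    containers:
    - name: app
      image: registry.access.redhat.com/ubi9/ubi-minimal   # illustrative image
      command: ["sleep", "infinity"]
      volumeMounts:
      - name: data
        mountPath: /var/lib/data       # the cloned filesystem is mounted here
    volumes:
    - name: data
      persistentVolumeClaim:
        claimName: lvm-pvc-clone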
Deleting volume clones
You can delete volume clones.
Important
When you delete a persistent volume claim (PVC), LVM Storage deletes only the source PVC, but not the clones of the PVC.
- You have access to OpenShift Container Platform as a user with cluster-admin permissions.
- Log in to the OpenShift CLI (oc).
- Delete the cloned PVC by running the following command:

  $ oc delete pvc <clone_pvc_name> -n <namespace>
- To verify that the volume clone is deleted, run the following command:

  $ oc get pvc -n <namespace>

  The deleted volume clone must not be present in the output of this command.
Updating LVM Storage
You can update LVM Storage to ensure compatibility with the OpenShift Container Platform version.
Note
The default namespace for the LVM Storage Operator is openshift-lvm-storage.
- You have updated your OpenShift Container Platform cluster.
- You have installed a previous version of LVM Storage.
- You have installed the OpenShift CLI (oc).
- You have access to the cluster using an account with cluster-admin permissions.
- Log in to the OpenShift CLI (oc).
- Update the Subscription custom resource (CR) that you created while installing LVM Storage by running the following command:

  $ oc patch subscription lvms-operator -n openshift-lvm-storage --type merge --patch '{"spec":{"channel":"<update_channel>"}}'

  Replace <update_channel> with the update channel for the version of LVM Storage that you want to install. For example, stable-4.19.
- View the update events to check that the installation is complete by running the following command:

  $ oc get events -n openshift-lvm-storage

  Example output

  ...
  8m13s   Normal   RequirementsUnknown   clusterserviceversion/lvms-operator.v4.19   requirements not yet checked
  8m11s   Normal   RequirementsNotMet    clusterserviceversion/lvms-operator.v4.19   one or more requirements couldn't be found
  7m50s   Normal   AllRequirementsMet    clusterserviceversion/lvms-operator.v4.19   all requirements found, attempting install
  7m50s   Normal   InstallSucceeded      clusterserviceversion/lvms-operator.v4.19   waiting for install components to report healthy
  7m49s   Normal   InstallWaiting        clusterserviceversion/lvms-operator.v4.19   installing: waiting for deployment lvms-operator to become ready: deployment "lvms-operator" waiting for 1 outdated replica(s) to be terminated
  7m39s   Normal   InstallSucceeded      clusterserviceversion/lvms-operator.v4.19   install strategy completed with no errors
  ...
- Verify the LVM Storage version by running the following command:

  $ oc get subscription lvms-operator -n openshift-lvm-storage -o jsonpath='{.status.installedCSV}'

  Example output

  lvms-operator.v4.19
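If you are not sure which update channels exist for the Operator, you can query its package manifest before patching the subscription. This sketch assumes the Operator is delivered through the default catalog in the openshift-marketplace namespace:

  $ oc get packagemanifest lvms-operator -n openshift-marketplace \
      -o jsonpath='{range .status.channels[*]}{.name}{"\n"}{end}'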
Monitoring LVM Storage
To enable cluster monitoring, you must add the following label in the namespace where you have installed LVM Storage:
openshift.io/cluster-monitoring=true
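For example, assuming the default openshift-lvm-storage namespace, you can add the label by running the following command:

  $ oc label namespace openshift-lvm-storage openshift.io/cluster-monitoring=true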
Important
For information about enabling cluster monitoring in RHACM, see Observability and Adding custom metrics.
Metrics
You can monitor LVM Storage by viewing the metrics.
The following table describes the topolvm metrics:
| Metric | Description |
|---|---|
| topolvm_thinpool_data_percent | Indicates the percentage of data space used in the LVM thin pool. |
| topolvm_thinpool_metadata_percent | Indicates the percentage of metadata space used in the LVM thin pool. |
| topolvm_thinpool_size_bytes | Indicates the size of the LVM thin pool in bytes. |
| topolvm_volumegroup_available_bytes | Indicates the available space in the LVM volume group in bytes. |
| topolvm_volumegroup_size_bytes | Indicates the size of the LVM volume group in bytes. |
| topolvm_thinpool_overprovisioned_available | Indicates the available over-provisioned size of the LVM thin pool in bytes. |
Note
Metrics are updated every 10 minutes or when there is a change, such as a new logical volume creation, in the thin pool.
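As a quick manual check, you can run a query such as the following in the OpenShift console under Observe → Metrics to list thin pools whose data usage is approaching the near-full alert threshold described in the next section. The metric name is taken from the table above:

  topolvm_thinpool_data_percent > 75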
Alerts
When the thin pool and volume group reach maximum storage capacity, further operations fail. This can lead to data loss.
LVM Storage sends the following alerts when the usage of the thin pool and volume group exceeds a certain value:
| Alert | Description |
|---|---|
| VolumeGroupUsageAtThresholdNearFull | This alert is triggered when both the volume group and thin pool usage exceed 75% on nodes. Data deletion or volume group expansion is required. |
| VolumeGroupUsageAtThresholdCritical | This alert is triggered when both the volume group and thin pool usage exceed 85% on nodes. In this case, the volume group is critically full. Data deletion or volume group expansion is required. |
| ThinPoolDataUsageAtThresholdNearFull | This alert is triggered when the thin pool data usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required. |
| ThinPoolDataUsageAtThresholdCritical | This alert is triggered when the thin pool data usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required. |
| ThinPoolMetaDataUsageAtThresholdNearFull | This alert is triggered when the thin pool metadata usage in the volume group exceeds 75% on nodes. Data deletion or thin pool expansion is required. |
| ThinPoolMetaDataUsageAtThresholdCritical | This alert is triggered when the thin pool metadata usage in the volume group exceeds 85% on nodes. Data deletion or thin pool expansion is required. |
Uninstalling LVM Storage by using the CLI
You can uninstall LVM Storage by using the OpenShift CLI (oc).
- You have logged in to oc as a user with cluster-admin permissions.
- You deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
- You deleted the LVMCluster custom resource (CR).
- Get the currentCSV value for the LVM Storage Operator by running the following command:

  $ oc get subscription.operators.coreos.com lvms-operator -n <namespace> -o yaml | grep currentCSV

  Example output

  currentCSV: lvms-operator.v4.15.3

- Delete the subscription by running the following command:

  $ oc delete subscription.operators.coreos.com lvms-operator -n <namespace>

  Example output

  subscription.operators.coreos.com "lvms-operator" deleted

- Delete the CSV for the LVM Storage Operator in the target namespace by running the following command:

  $ oc delete clusterserviceversion <currentCSV> -n <namespace>

  Replace <currentCSV> with the currentCSV value for the LVM Storage Operator.

  Example output

  clusterserviceversion.operators.coreos.com "lvms-operator.v4.15.3" deleted

- To verify that the LVM Storage Operator is uninstalled, run the following command:

  $ oc get csv -n <namespace>

  If the LVM Storage Operator was successfully uninstalled, it does not appear in the output of this command.
Uninstalling LVM Storage by using the web console
You can uninstall LVM Storage by using the OpenShift Container Platform web console.
- You have access to OpenShift Container Platform as a user with cluster-admin permissions.
- You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
- You have deleted the LVMCluster custom resource (CR).
- Log in to the OpenShift Container Platform web console.
- Click Ecosystem → Installed Operators.
- Click LVM Storage in the openshift-lvm-storage namespace.
- Click the Details tab.
- From the Actions menu, select Uninstall Operator.
- Optional: When prompted, select the Delete all operand instances for this operator checkbox to delete the operand instances for LVM Storage.
- Click Uninstall.
Uninstalling LVM Storage installed using RHACM
To uninstall LVM Storage that you installed using RHACM, you must delete the RHACM Policy custom resource (CR) that you created for installing and configuring LVM Storage.
- You have access to the RHACM cluster as a user with cluster-admin permissions.
- You have deleted the persistent volume claims (PVCs), volume snapshots, and volume clones provisioned by LVM Storage. You have also deleted the applications that are using these resources.
- You have deleted the LVMCluster CR that you created using RHACM.
- Log in to the OpenShift CLI (oc).
- Delete the RHACM Policy CR that you created for installing and configuring LVM Storage by using the following command:

  $ oc delete -f <policy> -n <namespace>

  Replace <policy> with the name of the Policy CR YAML file.
- Create a Policy CR YAML file with the configuration to uninstall LVM Storage:

  Example Policy CR to uninstall LVM Storage

  apiVersion: apps.open-cluster-management.io/v1
  kind: PlacementRule
  metadata:
    name: placement-uninstall-lvms
  spec:
    clusterConditions:
    - status: "True"
      type: ManagedClusterConditionAvailable
    clusterSelector:
      matchExpressions:
      - key: mykey
        operator: In
        values:
        - myvalue
  ---
  apiVersion: policy.open-cluster-management.io/v1
  kind: PlacementBinding
  metadata:
    name: binding-uninstall-lvms
  placementRef:
    apiGroup: apps.open-cluster-management.io
    kind: PlacementRule
    name: placement-uninstall-lvms
  subjects:
  - apiGroup: policy.open-cluster-management.io
    kind: Policy
    name: uninstall-lvms
  ---
  apiVersion: policy.open-cluster-management.io/v1
  kind: Policy
  metadata:
    annotations:
      policy.open-cluster-management.io/categories: CM Configuration Management
      policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
      policy.open-cluster-management.io/standards: NIST SP 800-53
    name: uninstall-lvms
  spec:
    disabled: false
    policy-templates:
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: uninstall-lvms
        spec:
          object-templates:
          - complianceType: mustnothave
            objectDefinition:
              apiVersion: v1
              kind: Namespace
              metadata:
                name: openshift-lvm-storage
          - complianceType: mustnothave
            objectDefinition:
              apiVersion: operators.coreos.com/v1
              kind: OperatorGroup
              metadata:
                name: openshift-storage-operatorgroup
                namespace: openshift-lvm-storage
              spec:
                targetNamespaces:
                - openshift-lvm-storage
          - complianceType: mustnothave
            objectDefinition:
              apiVersion: operators.coreos.com/v1alpha1
              kind: Subscription
              metadata:
                name: lvms-operator
                namespace: openshift-lvm-storage
          remediationAction: enforce
          severity: low
    - objectDefinition:
        apiVersion: policy.open-cluster-management.io/v1
        kind: ConfigurationPolicy
        metadata:
          name: policy-remove-lvms-crds
        spec:
          object-templates:
          - complianceType: mustnothave
            objectDefinition:
              apiVersion: apiextensions.k8s.io/v1
              kind: CustomResourceDefinition
              metadata:
                name: logicalvolumes.topolvm.io
          - complianceType: mustnothave
            objectDefinition:
              apiVersion: apiextensions.k8s.io/v1
              kind: CustomResourceDefinition
              metadata:
                name: lvmclusters.lvm.topolvm.io
          - complianceType: mustnothave
            objectDefinition:
              apiVersion: apiextensions.k8s.io/v1
              kind: CustomResourceDefinition
              metadata:
                name: lvmvolumegroupnodestatuses.lvm.topolvm.io
          - complianceType: mustnothave
            objectDefinition:
              apiVersion: apiextensions.k8s.io/v1
              kind: CustomResourceDefinition
              metadata:
                name: lvmvolumegroups.lvm.topolvm.io
          remediationAction: enforce
          severity: high

- Create the Policy CR by running the following command:

  $ oc create -f <policy> -n <namespace>
Downloading log files and diagnostic information using must-gather
When LVM Storage is unable to automatically resolve a problem, use the must-gather tool to collect the log files and diagnostic information so that you or Red Hat Support can review the problem and determine a solution.
- Run the must-gather command from the client connected to the LVM Storage cluster:

  $ oc adm must-gather --image=registry.redhat.io/lvms4/lvms-must-gather-rhel9:v4.19 --dest-dir=<directory_name>
Troubleshooting persistent storage
While configuring persistent storage using Logical Volume Manager (LVM) Storage, you can encounter several issues that require troubleshooting.
Investigating a PVC stuck in the Pending state
A persistent volume claim (PVC) can get stuck in the Pending state for the following reasons:
- Insufficient computing resources.
- Network problems.
- Mismatched storage class or node selector.
- No available persistent volumes (PVs).
- The node with the PV is in the Not Ready state.
- You have installed the OpenShift CLI (oc).
- You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.
- Retrieve the list of PVCs by running the following command:

  $ oc get pvc

  Example output

  NAME        STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
  lvms-test   Pending                                      lvms-vg1       11s
- Inspect the events associated with a PVC stuck in the Pending state by running the following command:

  $ oc describe pvc <pvc_name>

  Replace <pvc_name> with the name of the PVC. For example, lvms-test.

  Example output

  Type     Reason              Age               From                         Message
  ----     ------              ----              ----                         -------
  Warning  ProvisioningFailed  4s (x2 over 17s)  persistentvolume-controller  storageclass.storage.k8s.io "lvms-vg1" not found
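In namespaces with many claims, a client-side JSONPath filter can narrow the list to only the PVCs that are stuck. This is a convenience sketch using standard oc output filtering:

  $ oc get pvc -n <namespace> \
      -o jsonpath='{range .items[?(@.status.phase=="Pending")]}{.metadata.name}{"\n"}{end}'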
Recovering from a missing storage class
If you encounter the storage class not found error, check the LVMCluster custom resource (CR) and ensure that all the Logical Volume Manager (LVM) Storage pods are in the Running state.
- You have installed the OpenShift CLI (oc).
- You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.
- Verify that the LVMCluster CR is present by running the following command:

  $ oc get lvmcluster -n <namespace>

  Example output

  NAME            AGE
  my-lvmcluster   65m

- If the LVMCluster CR is not present, create an LVMCluster CR. For more information, see "Ways to create an LVMCluster custom resource".
- In the namespace where the operator is installed, check that all the LVM Storage pods are in the Running state by running the following command:

  $ oc get pods -n <namespace>

  Example output

  NAME                                  READY   STATUS    RESTARTS   AGE
  lvms-operator-7b9fb858cb-6nsml        3/3     Running   0          70m
  topolvm-controller-5dd9cf78b5-7wwr2   5/5     Running   0          66m
  topolvm-node-dr26h                    4/4     Running   0          66m
  vg-manager-r6zdv                      1/1     Running   0          66m

  The output of this command must contain a running instance of the following pods:

  - lvms-operator
  - vg-manager

  If the vg-manager pod is stuck while loading a configuration file, it is due to a failure to locate an available disk for LVM Storage to use. To retrieve the necessary information to troubleshoot this issue, review the logs of the vg-manager pod by running the following command:

  $ oc logs -l app.kubernetes.io/component=vg-manager -n <namespace>
Recovering from node failure
A persistent volume claim (PVC) can be stuck in the Pending state due to a node failure in the cluster.
To identify the failed node, you can examine the restart count of the topolvm-node pod. An increased restart count indicates potential problems with the underlying node, which might require further investigation and troubleshooting.
- You have installed the OpenShift CLI (oc).
- You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.
- Examine the restart count of the topolvm-node pod instances by running the following command:

  $ oc get pods -n <namespace>

  Example output

  NAME                                  READY   STATUS    RESTARTS      AGE
  lvms-operator-7b9fb858cb-6nsml        3/3     Running   0             70m
  topolvm-controller-5dd9cf78b5-7wwr2   5/5     Running   0             66m
  topolvm-node-dr26h                    4/4     Running   0             66m
  topolvm-node-54as8                    4/4     Running   0             66m
  topolvm-node-78fft                    4/4     Running   17 (8s ago)   66m
  vg-manager-r6zdv                      1/1     Running   0             66m
  vg-manager-990ut                      1/1     Running   0             66m
  vg-manager-an118                      1/1     Running   0             66m
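To map each topolvm-node pod to the node it runs on, and so identify the node behind an elevated restart count, you can widen the output of the same command:

  $ oc get pods -n <namespace> -o wide | grep topolvm-node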
- If the PVC is stuck in the Pending state even after you have resolved any issues with the node, you must perform a forced clean-up. For more information, see "Performing a forced clean-up".
Recovering from disk failure
If you see a failure message while inspecting the events associated with the persistent volume claim (PVC), there can be a problem with the underlying volume or disk.
Disk and volume provisioning issues result in a generic error message, such as Failed to provision volume with storage class <storage_class_name>. The generic error message is followed by a specific volume failure error message.
The following table describes the volume failure error messages:
| Error message | Description |
|---|---|
| Failed to check volume existence | Indicates a problem in verifying whether the volume already exists. Volume verification failure can be caused by network connectivity problems or other failures. |
| Failed to bind volume | Failure to bind a volume can happen if the persistent volume (PV) that is available does not match the requirements of the PVC. |
| FailedMount or FailedAttachVolume | This error indicates problems when trying to mount the volume to a node. If the disk has failed, this error can appear when a pod tries to use the PVC. |
| FailedUnMount or FailedDetachVolume | This error indicates problems when trying to unmount a volume from a node. If the disk has failed, this error can appear when a pod tries to use the PVC. |
| Volume is already exclusively attached to one node and can't be attached to another | This error can appear with storage solutions that do not support ReadWriteMany access modes. |
- You have installed the OpenShift CLI (oc).
- You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.
- Inspect the events associated with a PVC by running the following command:

  $ oc describe pvc <pvc_name>

  Replace <pvc_name> with the name of the PVC.
- Establish a direct connection to the host where the problem is occurring, as shown in the sketch after this procedure.
- Resolve the disk issue.
- If the volume failure messages persist or recur even after you have resolved the issue with the disk, you must perform a forced clean-up. For more information, see "Performing a forced clean-up".
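One common way to establish the direct host connection mentioned above is oc debug. The node name and the inspection commands in this sketch are illustrative; use whichever diagnostics fit your hardware:

  $ oc debug node/<node_name>
  sh-5.1# chroot /host
  sh-5.1# lsblk                         # list block devices to locate the failed disk
  sh-5.1# dmesg | grep -i 'i/o error'   # look for kernel I/O errors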
Performing a forced clean-up
If the disk or node-related problems persist even after you have completed the troubleshooting procedures, you must perform a forced clean-up. A forced clean-up is used to address persistent issues and ensure the proper functioning of Logical Volume Manager (LVM) Storage.
- You have installed the OpenShift CLI (oc).
- You have logged in to the OpenShift CLI (oc) as a user with cluster-admin permissions.
- You have deleted all the persistent volume claims (PVCs) that were created by using LVM Storage.
- You have stopped the pods that are using the PVCs that were created by using LVM Storage.
- Switch to the namespace where you have installed the LVM Storage Operator by running the following command:

  $ oc project <namespace>

- Check if the LogicalVolume custom resources (CRs) are present by running the following command:

  $ oc get logicalvolume

  - If the LogicalVolume CRs are present, delete them by running the following command:

    $ oc delete logicalvolume <name>

    Replace <name> with the name of the LogicalVolume CR.

  - After deleting the LogicalVolume CRs, remove their finalizers by running the following command:

    $ oc patch logicalvolume <name> -p '{"metadata":{"finalizers":[]}}' --type=merge

    Replace <name> with the name of the LogicalVolume CR.

- Check if the LVMVolumeGroup CRs are present by running the following command:

  $ oc get lvmvolumegroup

  - If the LVMVolumeGroup CRs are present, delete them by running the following command:

    $ oc delete lvmvolumegroup <name>

    Replace <name> with the name of the LVMVolumeGroup CR.

  - After deleting the LVMVolumeGroup CRs, remove their finalizers by running the following command:

    $ oc patch lvmvolumegroup <name> -p '{"metadata":{"finalizers":[]}}' --type=merge

    Replace <name> with the name of the LVMVolumeGroup CR.

- Delete any LVMVolumeGroupNodeStatus CRs by running the following command:

  $ oc delete lvmvolumegroupnodestatus --all

- Delete the LVMCluster CR by running the following command:

  $ oc delete lvmcluster --all

  After deleting the LVMCluster CR, remove its finalizer by running the following command:

  $ oc patch lvmcluster <name> -p '{"metadata":{"finalizers":[]}}' --type=merge

  Replace <name> with the name of the LVMCluster CR.
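If many stale LogicalVolume CRs remain, patching them one at a time is tedious. The following shell loop is a sketch that clears the finalizers in bulk by combining only the commands shown in this procedure:

  # Remove the finalizers from every remaining LogicalVolume CR.
  for lv in $(oc get logicalvolume -o name); do
    oc patch "$lv" -p '{"metadata":{"finalizers":[]}}' --type=merge
  done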