Using quotas and limit ranges
As a cluster administrator, you can use quotas and limit ranges to set constraints. These constraints limit the number of objects or the amount of compute resources that are used in your project.
By using quotas and limit ranges, you can better manage and allocate resources across all projects. You can also ensure that no project uses more resources than is appropriate for the cluster size.
A resource quota, defined by a ResourceQuota object, provides constraints that limit aggregate resource consumption per project. The quota can limit the quantity of objects that can be created in a project by type. Additionally, the quota can limit the total amount of compute resources and storage that might be consumed by resources in that project.
Important
Quotas are set by cluster administrators and are scoped to a given project. OpenShift Container Platform project owners can change quotas for their project, but not limit ranges. OpenShift Container Platform users cannot modify quotas or limit ranges.
Resources managed by quota
To limit aggregate resource consumption per project, define a ResourceQuota object. By using this object, you can restrict the number of created objects by type. You can also restrict the total amount of compute resources and storage consumed within the project.
The following tables describe the set of compute resources and object types that a quota might manage.
Note
A pod is in a terminal state if status.phase is Failed or Succeeded.
| Resource Name | Description |
|---|---|
| cpu | The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. |
| memory | The sum of memory requests across all pods in a non-terminal state cannot exceed this value. |
| ephemeral-storage | The sum of local ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. |
| requests.cpu | The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. |
| requests.memory | The sum of memory requests across all pods in a non-terminal state cannot exceed this value. |
| requests.ephemeral-storage | The sum of ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. |
| limits.cpu | The sum of CPU limits across all pods in a non-terminal state cannot exceed this value. |
| limits.memory | The sum of memory limits across all pods in a non-terminal state cannot exceed this value. |
| limits.ephemeral-storage | The sum of ephemeral storage limits across all pods in a non-terminal state cannot exceed this value. This resource is available only if you enabled the ephemeral storage technology preview. This feature is disabled by default. |
| Resource Name | Description |
|---|---|
| requests.storage | The sum of storage requests across all persistent volume claims in any state cannot exceed this value. |
| persistentvolumeclaims | The total number of persistent volume claims that can exist in the project. |
| <storage-class-name>.storageclass.storage.k8s.io/requests.storage | The sum of storage requests across all persistent volume claims in any state that have a matching storage class cannot exceed this value. |
| <storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims | The total number of persistent volume claims with a matching storage class that can exist in the project. |
| Resource Name | Description |
|---|---|
| pods | The total number of pods in a non-terminal state that can exist in the project. |
| replicationcontrollers | The total number of replication controllers that can exist in the project. |
| resourcequotas | The total number of resource quotas that can exist in the project. |
| services | The total number of services that can exist in the project. |
| secrets | The total number of secrets that can exist in the project. |
| configmaps | The total number of ConfigMap objects that can exist in the project. |
| persistentvolumeclaims | The total number of persistent volume claims that can exist in the project. |
| openshift.io/imagestreams | The total number of image streams that can exist in the project. |
You can configure an object count quota for these standard namespaced resource types by using the count/<resource>.<group> syntax.
$ oc create quota <name> --hard=count/<resource>.<group>=<quota>
where:
<resource>-
Specifies the name of the resource.
<group>-
Specifies the API group, if applicable. You can use the kubectl api-resources command for a list of resources and their associated API groups.
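The same count/<resource>.<group> keys can also appear under spec.hard in a ResourceQuota definition. The following sketch is illustrative only; the object name and the count values are hypothetical:

```yaml
# Hypothetical sketch: an object count quota written as a ResourceQuota
# definition. The count/<resource>.<group> keys mirror the
# oc create quota --hard syntax; the name and counts are illustrative.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: object-counts-example   # illustrative name
spec:
  hard:
    count/deployments.apps: "4"   # at most 4 Deployment objects
    count/secrets: "10"           # core-group resource: no <group> suffix
```

For resources in the core API group, such as secrets, omit the <group> suffix.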
Setting resource quota for extended resources
To manage the consumption of extended resources, such as nvidia.com/gpu, define a resource quota by using the requests prefix. Since overcommitment is prohibited for these resources, you must explicitly specify both requests and limits to ensure valid configuration.
-
To determine how many GPUs are available on a node in your cluster, use the following command:
$ oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'
Example output
openshift.com/gpu-accelerator=true
Capacity:
 nvidia.com/gpu:  2
Allocatable:
 nvidia.com/gpu:  2
  nvidia.com/gpu  0  0
In this example, 2 GPUs are available.
-
Use this command to set a quota in the nvidia namespace. In this example, the quota is 1:
$ cat gpu-quota.yaml
Example output
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gpu-quota
  namespace: nvidia
spec:
  hard:
    requests.nvidia.com/gpu: 1
-
Create the quota with the following command:
$ oc create -f gpu-quota.yaml
Example output
resourcequota/gpu-quota created
-
Verify that the namespace has the correct quota set by using the following command:
$ oc describe quota gpu-quota -n nvidia
Example output
Name:                    gpu-quota
Namespace:               nvidia
Resource                 Used  Hard
--------                 ----  ----
requests.nvidia.com/gpu  0     1
-
Run a pod that asks for a single GPU with the following command:
$ oc create -f gpu-pod.yaml
Example output
apiVersion: v1
kind: Pod
metadata:
  generateName: gpu-pod-s46h7
  namespace: nvidia
spec:
  restartPolicy: OnFailure
  containers:
  - name: rhel7-gpu-pod
    image: rhel7
    env:
    - name: NVIDIA_VISIBLE_DEVICES
      value: all
    - name: NVIDIA_DRIVER_CAPABILITIES
      value: "compute,utility"
    - name: NVIDIA_REQUIRE_CUDA
      value: "cuda>=5.0"
    command: ["sleep"]
    args: ["infinity"]
    resources:
      limits:
        nvidia.com/gpu: 1
-
Verify that the pod is running with the following command:
$ oc get pods
Example output
NAME           READY  STATUS   RESTARTS  AGE
gpu-pod-s46h7  1/1    Running  0         1m
-
Verify that the quota Used counter is correct by running the following command:
$ oc describe quota gpu-quota -n nvidia
Example output
Name:                    gpu-quota
Namespace:               nvidia
Resource                 Used  Hard
--------                 ----  ----
requests.nvidia.com/gpu  1     1
-
Use the following command to attempt to create a second GPU pod in the nvidia namespace. This is technically possible on the node because it has 2 GPUs:
$ oc create -f gpu-pod.yaml
Example output
Error from server (Forbidden): error when creating "gpu-pod.yaml": pods "gpu-pod-f7z2w" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1
You receive this Forbidden error message because you have a quota of 1 GPU and the pod tried to allocate a second GPU, which exceeds the allowed quota.
Quota scopes
To restrict the set of resources that a quota applies to, add associated scopes. This configuration limits usage measurement to the intersection of the enumerated scopes, ensuring that specifying a resource outside the allowed set results in a validation error.
| Scope | Description |
|---|---|
| Terminating | Match pods where spec.activeDeadlineSeconds >= 0. |
| NotTerminating | Match pods where spec.activeDeadlineSeconds is nil. |
| BestEffort | Match pods that have best effort quality of service for either cpu or memory. |
| NotBestEffort | Match pods that do not have best effort quality of service for cpu and memory. |
A BestEffort scope restricts a quota to limiting the following resources:
-
pods
A Terminating, NotTerminating, or NotBestEffort scope restricts a quota to tracking the following resources:
-
pods -
memory -
requests.memory -
limits.memory -
cpu -
requests.cpu -
limits.cpu -
ephemeral-storage -
requests.ephemeral-storage -
limits.ephemeral-storage
Note
Ephemeral storage requests and limits apply only if you enabled the ephemeral storage technology preview. This feature is disabled by default.
Admin quota usage
To ensure projects remain within defined constraints, monitor admin quota usage. By tracking the aggregate consumption of compute resources and storage, you can identify when ResourceQuota limits are reached or approached.
- Quota enforcement
-
After a resource quota for a project is first created, the project restricts the ability to create any new resources that can violate a quota constraint until the quota has calculated updated usage statistics.
After a quota is created and usage statistics are updated, the project accepts the creation of new content. When you create or modify resources, your quota usage is incremented immediately upon the request to create or modify the resource.
When you delete a resource, your quota use is decremented during the next full recalculation of quota statistics for the project.
A configurable amount of time determines how long the quota takes to reduce quota usage statistics to their current observed system value.
If project modifications exceed a quota usage limit, the server denies the action and returns an appropriate error message to the user. The error message explains which quota constraint was violated and what the currently observed usage statistics are in the system.
- Requests compared to limits
-
When allocating compute resources by quota, each container can specify a request and a limit value for each of CPU, memory, and ephemeral storage. Quotas can restrict any of these values.
If the quota has a value specified for
requests.cpu or requests.memory, then the quota requires that every incoming container makes an explicit request for those resources. If the quota has a value specified for limits.cpu or limits.memory, the quota requires that every incoming container specify an explicit limit for those resources.
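As a minimal sketch of this interaction, the following quota and pod pair is illustrative; the object names, image, and values are hypothetical, not part of any real configuration:

```yaml
# Hypothetical example: because this quota specifies requests.cpu and
# requests.memory, every new container in the project must declare
# explicit CPU and memory requests or its pod is rejected.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: demo-requests-quota   # illustrative name
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 2Gi
---
# A pod that satisfies the quota: each container states its requests.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app               # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest   # illustrative image
    resources:
      requests:
        cpu: "500m"
        memory: 256Mi
```

A pod whose containers omit the requests section would be rejected by the quota admission check rather than scheduled with unbounded consumption.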
Sample resource quota definitions
To properly structure your quota configurations, reference these sample ResourceQuota definitions. These YAML examples demonstrate how to specify hard limits for compute resources, storage, and object counts to ensure your project complies with cluster policies.
apiVersion: v1
kind: ResourceQuota
metadata:
name: core-object-counts
spec:
hard:
configmaps: "10"
persistentvolumeclaims: "4"
replicationcontrollers: "20"
secrets: "10"
services: "10"
# ...
where:
configmaps-
Specifies the total number of
ConfigMap objects that can exist in the project. persistentvolumeclaims-
Specifies the total number of persistent volume claims (PVCs) that can exist in the project.
replicationcontrollers-
Specifies the total number of replication controllers that can exist in the project.
secrets-
Specifies the total number of secrets that can exist in the project.
services-
Specifies the total number of services that can exist in the project.
apiVersion: v1
kind: ResourceQuota
metadata:
name: openshift-object-counts
spec:
hard:
openshift.io/imagestreams: "10"
# ...
where:
openshift.io/imagestreams-
Specifies the total number of image streams that can exist in the project.
apiVersion: v1
kind: ResourceQuota
metadata:
name: compute-resources
spec:
hard:
pods: "4"
requests.cpu: "1"
requests.memory: 1Gi
requests.ephemeral-storage: 2Gi
limits.cpu: "2"
limits.memory: 2Gi
limits.ephemeral-storage: 4Gi
# ...
where:
pods-
Specifies the total number of pods in a non-terminal state that can exist in the project.
requests.cpu-
Specifies that across all pods in a non-terminal state, the sum of CPU requests cannot exceed 1 core.
requests.memory-
Specifies that across all pods in a non-terminal state, the sum of memory requests cannot exceed 1 Gi.
requests.ephemeral-storage-
Specifies that across all pods in a non-terminal state, the sum of ephemeral storage requests cannot exceed 2 Gi.
limits.cpu-
Specifies that across all pods in a non-terminal state, the sum of CPU limits cannot exceed 2 cores.
limits.memory-
Specifies that across all pods in a non-terminal state, the sum of memory limits cannot exceed 2 Gi.
limits.ephemeral-storage-
Specifies that across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed 4 Gi.
apiVersion: v1
kind: ResourceQuota
metadata:
name: besteffort
spec:
hard:
pods: "1"
scopes:
- BestEffort
# ...
where:
pods-
Specifies the total number of pods in a non-terminal state with
BestEffort quality of service that can exist in the project. scopes-
Specifies a restriction on the quota to only match pods that have
BestEffort quality of service for either memory or CPU.
apiVersion: v1
kind: ResourceQuota
metadata:
name: compute-resources-long-running
spec:
hard:
pods: "4"
limits.cpu: "4"
limits.memory: "2Gi"
limits.ephemeral-storage: "4Gi"
scopes:
- NotTerminating
# ...
where:
pods-
Specifies the total number of pods in a non-terminal state.
limits.cpu-
Specifies that across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value.
limits.memory-
Specifies that across all pods in a non-terminal state, the sum of memory limits cannot exceed this value.
limits.ephemeral-storage-
Specifies that across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed this value.
scopes-
Specifies a restriction on the quota that only matches pods where
spec.activeDeadlineSeconds is set to nil. Build pods fall under NotTerminating unless the RestartNever policy is applied.
apiVersion: v1
kind: ResourceQuota
metadata:
name: compute-resources-time-bound
spec:
hard:
pods: "2"
limits.cpu: "1"
limits.memory: "1Gi"
limits.ephemeral-storage: "1Gi"
scopes:
- Terminating
# ...
where:
pods-
Specifies the total number of pods in a non-terminal state.
limits.cpu-
Specifies that across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value.
limits.memory-
Specifies that across all pods in a non-terminal state, the sum of memory limits cannot exceed this value.
limits.ephemeral-storage-
Specifies that across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed this value.
scopes-
Specifies a restriction on the quota that only matches pods where
spec.activeDeadlineSeconds >= 0. For example, this quota would charge for build pods, but not long-running pods such as a web server or database.
apiVersion: v1
kind: ResourceQuota
metadata:
name: storage-consumption
spec:
hard:
persistentvolumeclaims: "10"
requests.storage: "50Gi"
gold.storageclass.storage.k8s.io/requests.storage: "10Gi"
silver.storageclass.storage.k8s.io/requests.storage: "20Gi"
silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5"
bronze.storageclass.storage.k8s.io/requests.storage: "0"
bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0"
# ...
where:
persistentvolumeclaims-
Specifies the total number of PVCs in a project.
requests.storage-
Specifies that across all PVCs in a project, the sum of storage requested cannot exceed this value.
gold.storageclass.storage.k8s.io/requests.storage-
Specifies that across all PVCs in a project, the sum of storage requested in the gold storage class cannot exceed this value.
silver.storageclass.storage.k8s.io/requests.storage-
Specifies that across all PVCs in a project, the sum of storage requested in the silver storage class cannot exceed this value.
silver.storageclass.storage.k8s.io/persistentvolumeclaims-
Specifies that across PVCs in a project, the total number of claims in the silver storage class cannot exceed this value.
bronze.storageclass.storage.k8s.io/requests.storage-
Specifies that across all PVCs in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to
0, the bronze storage class cannot request storage. bronze.storageclass.storage.k8s.io/persistentvolumeclaims-
Specifies that across all PVCs in a project, the total number of claims in the bronze storage class cannot exceed this value. When this is set to
0, the bronze storage class cannot create claims.
Creating a quota
To create a quota, define a ResourceQuota object in a file and apply the file to a project. By doing this task, you can restrict aggregate resource consumption and object counts within the project to ensure the project complies with cluster policies.
-
To apply resource constraints to a specific project, create a
ResourceQuota object by using the OpenShift CLI (oc). Run the following oc create command with your definition file to enforce the limits on aggregate resource consumption and object counts specified for that namespace:
$ oc create -f <resource_quota_definition> [-n <project_name>]
Example command to create a ResourceQuota object
$ oc create -f core-object-counts.yaml -n demoproject
Creating object count quotas
To manage the consumption of standard namespaced resource types, create an object count quota. By creating an object count quota within an OpenShift Container Platform project, you can set defined limits on the number of objects, such as BuildConfig and DeploymentConfig objects.
When you use a resource quota, OpenShift Container Platform charges an object against the quota if the object exists in server storage. These quotas protect against exhaustion of storage resources.
-
To configure an object count quota for a resource, run the following command:
$ oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota>
Example showing object count quota
$ oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4
resourcequota "test" created
-
To inspect the detailed status of the object count quota, use the following
oc describe command:
$ oc describe quota test
Example output
Name:                         test
Namespace:                    quota
Resource                      Used  Hard
--------                      ----  ----
count/deployments.extensions  0     2
count/pods                    0     3
count/replicasets.extensions  0     4
count/secrets                 0     4
This example limits the listed resources to the hard limit in each project in the cluster.
Viewing a quota
To monitor usage statistics against defined hard limits, navigate to the Quota page in the web console. Alternatively, you can use the CLI to view detailed quota information for the project.
-
Get the list of quotas defined in the project by entering the following command:
Example command with a project called demoproject
$ oc get quota -n demoproject
Example output
NAME                 AGE
besteffort           11m
compute-resources    2m
core-object-counts   29m
-
Describe the target quota by entering the following command:
Example command for the core-object-counts quota
$ oc describe quota core-object-counts -n demoproject
Example output
Name:                   core-object-counts
Namespace:              demoproject
Resource                Used  Hard
--------                ----  ----
configmaps              3     10
persistentvolumeclaims  0     4
replicationcontrollers  3     20
secrets                 9     10
services                2     10
Limit ranges in a LimitRange object
To define compute resource constraints at the object level, create a LimitRange object. By creating this object, you can specify the exact amount of resources that an individual pod, container, image, or persistent volume claim can consume.
All requests to create and modify resources are evaluated against each LimitRange object in the project. If the resource violates any of the enumerated constraints, the resource is rejected. If the resource does not set an explicit value, and if the constraint supports a default value, the default value is applied to the resource.
For CPU and memory limits, if you specify a maximum value but do not specify a minimum limit, the resource can consume more CPU and memory resources than the maximum value.
apiVersion: "v1"
kind: "LimitRange"
metadata:
name: "core-resource-limits"
spec:
limits:
- type: "Pod"
max:
cpu: "2"
memory: "1Gi"
min:
cpu: "200m"
memory: "6Mi"
- type: "Container"
max:
cpu: "2"
memory: "1Gi"
min:
cpu: "100m"
memory: "4Mi"
default:
cpu: "300m"
memory: "200Mi"
defaultRequest:
cpu: "200m"
memory: "100Mi"
maxLimitRequestRatio:
cpu: "10"
# ...
where:
metadata.name-
Specifies the name of the limit range object.
max.cpu-
Specifies the maximum amount of CPU that a pod can request on a node across all containers.
max.memory-
Specifies the maximum amount of memory that a pod can request on a node across all containers.
min.cpu-
Specifies the minimum amount of CPU that a pod can request on a node across all containers. If you do not set a
min value or you set min to 0, the result is no limit and the pod can consume more than the max CPU value. min.memory-
Specifies the minimum amount of memory that a pod can request on a node across all containers. If you do not set a
min value or you set min to 0, the result is no limit and the pod can consume more than the max memory value. max.cpu-
Specifies the maximum amount of CPU that a single container in a pod can request.
max.memory-
Specifies the maximum amount of memory that a single container in a pod can request.
min.cpu-
Specifies the minimum amount of CPU that a single container in a pod can request. If you do not set a
min value or you set min to 0, the result is no limit and the pod can consume more than the max CPU value. min.memory-
Specifies the minimum amount of memory that a single container in a pod can request. If you do not set a
min value or you set min to 0, the result is no limit and the pod can consume more than the max memory value. default.cpu-
Specifies the default CPU limit for a container if you do not specify a limit in the pod specification.
default.memory-
Specifies the default memory limit for a container if you do not specify a limit in the pod specification.
defaultRequest.cpu-
Specifies the default CPU request for a container if you do not specify a request in the pod specification.
defaultRequest.memory-
Specifies the default memory request for a container if you do not specify a request in the pod specification.
maxLimitRequestRatio.cpu-
Specifies the maximum limit-to-request ratio for a container.
apiVersion: "v1"
kind: "LimitRange"
metadata:
name: "openshift-resource-limits"
spec:
limits:
- type: openshift.io/Image
max:
storage: 1Gi
- type: openshift.io/ImageStream
max:
openshift.io/image-tags: 20
openshift.io/images: 30
- type: "Pod"
max:
cpu: "2"
memory: "1Gi"
ephemeral-storage: "1Gi"
min:
cpu: "1"
memory: "1Gi"
# ...
where:
limits.max.storage-
Specifies the maximum size of an image that can be pushed to an internal registry.
limits.max.openshift.io/image-tags-
Specifies the maximum number of unique image tags as defined in the specification for the image stream.
limits.max.openshift.io/images-
Specifies the maximum number of unique image references as defined in the specification for the image stream status.
type.max.cpu-
Specifies the maximum amount of CPU that a pod can request on a node across all containers.
type.max.memory-
Specifies the maximum amount of memory that a pod can request on a node across all containers.
type.max.ephemeral-storage-
Specifies the maximum amount of ephemeral storage that a pod can request on a node across all containers.
min.cpu-
Specifies the minimum amount of CPU that a pod can request on a node across all containers. See the Supported Constraints table for important information.
min.memory-
Specifies the minimum amount of memory that a pod can request on a node across all containers. If you do not set a
min value or you set min to 0, the result is no limit and the pod can consume more than the max memory value.
You can specify both core and OpenShift Container Platform resources in one limit range object.
Container limits
After you create the LimitRange object, you can specify the exact amount of resources that a container can consume.
The following list shows resources that a container can consume:
-
CPU
-
Memory
The following table shows the supported constraints for a container. If specified, the constraints must hold true for each container.
| Constraint | Behavior |
|---|---|
| Min[<resource>] | Min[<resource>] <= container.resources.requests[<resource>] (required) <= container.resources.limits[<resource>] (optional). If the configuration defines a min CPU value, the request value must be greater than that CPU value. |
| Max[<resource>] | container.resources.limits[<resource>] (required) <= Max[<resource>]. If the configuration defines a max CPU value, you do not need to define a CPU request value. However, you must set a limit value that satisfies the maximum CPU constraint. |
| MaxLimitRequestRatio[<resource>] | MaxLimitRequestRatio[<resource>] <= container.resources.limits[<resource>] / container.resources.requests[<resource>]. If the limit range defines a maxLimitRequestRatio constraint, any new containers must have both a request and a limit value. OpenShift Container Platform calculates the limit-to-request ratio by dividing the limit by the request. For example, if a container has cpu: 500 in the limit value, and cpu: 100 in the request value, the limit-to-request ratio for cpu is 5. This ratio must be less than or equal to the maxLimitRequestRatio. |
The following list shows default resources that a container can consume:
-
Default[<resource>]: Defaults container.resources.limit[<resource>] to the specified value if none is set.
-
Default Requests[<resource>]: Defaults container.resources.requests[<resource>] to the specified value if none is set.
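As a minimal sketch of these defaulting rules, the following LimitRange is illustrative; the object name and the resource values are hypothetical:

```yaml
# Hypothetical sketch: containers created without explicit values receive
# resources.limits from "default" and resources.requests from
# "defaultRequest". The name and values are illustrative.
apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults   # illustrative name
spec:
  limits:
  - type: Container
    default:            # applied to container.resources.limits if unset
      cpu: "500m"
      memory: 256Mi
    defaultRequest:     # applied to container.resources.requests if unset
      cpu: "250m"
      memory: 128Mi
```

A container created in this project with no resources section would be admitted with a 250m CPU request and a 500m CPU limit under these defaults.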
Pod limits
After you create the LimitRange object, you can specify the exact amount of resources that a pod can consume.
A pod can consume the following resources:
-
CPU
-
Memory
The following table shows the supported constraints for a pod. Across all pods, the following behavior must hold true:
| Constraint | Enforced behavior |
|---|---|
| Min[<resource>] | Min[<resource>] <= container.resources.requests[<resource>] (required) <= container.resources.limits[<resource>] (optional) |
| Max[<resource>] | container.resources.limits[<resource>] (required) <= Max[<resource>] |
| MaxLimitRequestRatio[<resource>] | MaxLimitRequestRatio[<resource>] <= container.resources.limits[<resource>] / container.resources.requests[<resource>] |
Image limits
After you create the LimitRange object, you can specify the exact amount of resources that an image can consume.
An image can consume the following resources:
-
Storage
-
openshift.io/Image
The following table shows the supported constraints for an image. If specified, the constraints must hold true for each image.
| Constraint | Behavior |
|---|---|
| Max[<resource>] | The size of an image pushed to the internal registry cannot exceed Max[storage]. |
Note
To prevent blobs that exceed the limit from being uploaded to the registry, you must configure the registry to enforce quota. The REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA environment variable must be set to true. By default, the environment variable is set to true for new deployments.
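For illustration only, the environment variable from the note might appear in the registry deployment as the following fragment. The surrounding deployment and the exact resource to edit depend on how your registry is deployed, so treat this as a hypothetical sketch:

```yaml
# Hypothetical fragment of a registry container spec: setting the
# variable to "true" makes the registry reject blob uploads that would
# exceed the project quota.
env:
- name: REGISTRY_MIDDLEWARE_REPOSITORY_OPENSHIFT_ENFORCEQUOTA
  value: "true"
```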
Image stream limits
After you create the LimitRange object, you can specify the exact amount of resources that an image stream can consume.
An image stream can consume the following resources:
-
openshift.io/image-tags -
openshift.io/images -
openshift.io/ImageStream
The openshift.io/image-tags resource represents unique stream limits. Possible references are an ImageStreamTag, an ImageStreamImage, or a DockerImage. You can use the oc tag and oc import-image commands, or use image streams, to create tags. No distinction exists between internal and external references. However, each unique reference that is tagged in an image stream specification is counted only once. The reference does not restrict pushes to an internal container image registry in any way, but the reference is useful for tag restriction.
The openshift.io/images resource represents unique image names that are recorded in image stream status. The resource helps restrict the number of images that can be pushed to the internal registry. Internal and external references are not distinguished.
The following table shows the supported constraints for an image stream. If specified, the constraints must hold true for each image stream.
| Constraint | Behavior |
|---|---|
| Max[openshift.io/image-tags] | The number of unique image tags in the image stream specification cannot exceed this value. |
| Max[openshift.io/images] | The number of unique image references in the image stream status cannot exceed this value. |
PersistentVolumeClaim limits
After you create the LimitRange object, you can specify the exact amount of resources that a PersistentVolumeClaim resource can consume.
A PersistentVolumeClaim resource can consume storage resources.
The following table shows the supported constraints for a persistent volume claim. If specified, the constraints must hold true for each persistent volume claim.
| Constraint | Enforced behavior |
|---|---|
| Min[<resource>] | Min[<resource>] <= claim.spec.resources.requests[<resource>] (required) |
| Max[<resource>] | claim.spec.resources.requests[<resource>] (required) <= Max[<resource>] |
{
"apiVersion": "v1",
"kind": "LimitRange",
"metadata": {
"name": "pvcs"
},
"spec": {
"limits": [{
"type": "PersistentVolumeClaim",
"min": {
"storage": "2Gi"
},
"max": {
"storage": "50Gi"
}
}
]
}
}
where:
metadata.name-
Specifies the name of the limit range object.
limits.min.storage-
Specifies the minimum amount of storage that can be requested in a persistent volume claim.
limits.max.storage-
Specifies the maximum amount of storage that can be requested in a persistent volume claim.
Limit range operations
You can create, view, and delete limit ranges in a project.
You can view any limit ranges that are defined in a project by navigating in the web console to the Quota page for the project. You can also use the CLI to view limit range details.
-
To create the object, enter the following command:
$ oc create -f <limit_range_file> -n <project> -
To view the list of limit range objects that exist in a project, enter the following command:
Example command with a project called demoproject
$ oc get limits -n demoproject
Example output
NAME             AGE
resource-limits  6d
-
To describe a limit range, enter the following command:
Example command with a limit range called resource-limits
$ oc describe limits resource-limits -n demoproject
Example output
Name:                      resource-limits
Namespace:                 demoproject
Type                       Resource                 Min   Max  Default Request  Default Limit  Max Limit/Request Ratio
----                       --------                 ---   ---  ---------------  -------------  -----------------------
Pod                        cpu                      200m  2    -                -              -
Pod                        memory                   6Mi   1Gi  -                -              -
Container                  cpu                      100m  2    200m             300m           10
Container                  memory                   4Mi   1Gi  100Mi            200Mi          -
openshift.io/Image         storage                  -     1Gi  -                -              -
openshift.io/ImageStream   openshift.io/image       -     12   -                -              -
openshift.io/ImageStream   openshift.io/image-tags  -     10   -                -              -
-
To delete a limit range, enter the following command:
$ oc delete limits <limit_name>