Troubleshooting Operator issues
A cluster administrator can do the following to resolve Operator issues: verify Operator subscription status, check Operator pod health, and gather Operator logs.
Operators are a method of packaging, deploying, and managing an OpenShift Container Platform application. They act like an extension of the software vendor’s engineering team, watching over an OpenShift Container Platform environment and using its current state to make decisions in real time. Operators are designed to handle upgrades seamlessly, react to failures automatically, and not take shortcuts, such as skipping a software backup process to save time.
OpenShift Container Platform 4.19 includes a default set of Operators that are required for proper functioning of the cluster. These default Operators are managed by the Cluster Version Operator (CVO).
As a cluster administrator, you can install application Operators from the software catalog using the OpenShift Container Platform web console or the CLI. You can then subscribe the Operator to one or more namespaces to make it available for developers on your cluster. Application Operators are managed by Operator Lifecycle Manager (OLM).
If you experience Operator issues, verify Operator subscription status. Check Operator pod health across the cluster and gather Operator logs for diagnosis.
Operator subscription condition types
Subscriptions can report the following condition types:
| Condition | Description |
|---|---|
| `CatalogSourcesUnhealthy` | Some or all of the catalog sources to be used in resolution are unhealthy. |
| `InstallPlanMissing` | An install plan for a subscription is missing. |
| `InstallPlanPending` | An install plan for a subscription is pending installation. |
| `InstallPlanFailed` | An install plan for a subscription has failed. |
| `ResolutionFailed` | The dependency resolution for a subscription has failed. |
Note
Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a Subscription object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a Subscription object.
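When scanning subscription conditions, the key signal is any condition whose `Status` is `True`, because each condition type reports a problem. As a minimal sketch, the following shell snippet checks a saved copy of `oc describe sub` output for such conditions. The saved text is hypothetical sample data, not live cluster output:

```shell
# Hypothetical sample of the Conditions section from 'oc describe sub' output
desc='Conditions:
  Last Transition Time:  2019-07-29T13:42:57Z
  Message:               all available catalogsources are healthy
  Reason:                AllCatalogSourcesHealthy
  Status:                False
  Type:                  CatalogSourcesUnhealthy'

# Any condition with Status True (for example, InstallPlanFailed) signals a
# problem. Print "healthy" when no condition is True, otherwise "problem".
echo "$desc" | awk '/Status:/ { if ($2 == "True") found = 1 } END { print (found ? "problem" : "healthy") }'
```

In practice you would capture the text from your own cluster, for example with `oc describe sub <subscription_name> -n <operator_namespace>`.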
Viewing Operator subscription status by using the CLI
You can view Operator subscription status by using the CLI.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` role.
- You have installed the OpenShift CLI (`oc`).
Procedure
- List Operator subscriptions:

  $ oc get subs -n <operator_namespace>

- Use the `oc describe` command to inspect a `Subscription` resource:

  $ oc describe sub <subscription_name> -n <operator_namespace>

- In the command output, find the `Conditions` section for the status of Operator subscription condition types. In the following example, the `CatalogSourcesUnhealthy` condition type has a status of `false` because all available catalog sources are healthy:

  Example output

  Name:         cluster-logging
  Namespace:    openshift-logging
  Labels:       operators.coreos.com/cluster-logging.openshift-logging=
  Annotations:  <none>
  API Version:  operators.coreos.com/v1alpha1
  Kind:         Subscription
  # ...
  Conditions:
    Last Transition Time:  2019-07-29T13:42:57Z
    Message:               all available catalogsources are healthy
    Reason:                AllCatalogSourcesHealthy
    Status:                False
    Type:                  CatalogSourcesUnhealthy
  # ...

Note
Default OpenShift Container Platform cluster Operators are managed by the Cluster Version Operator (CVO) and they do not have a `Subscription` object. Application Operators are managed by Operator Lifecycle Manager (OLM) and they have a `Subscription` object.
Viewing Operator catalog source status by using the CLI
You can view the status of an Operator catalog source by using the CLI.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` role.
- You have installed the OpenShift CLI (`oc`).
Procedure
- List the catalog sources in a namespace. For example, you can check the `openshift-marketplace` namespace, which is used for cluster-wide catalog sources:

  $ oc get catalogsources -n openshift-marketplace

  Example output

  NAME                  DISPLAY               TYPE   PUBLISHER     AGE
  certified-operators   Certified Operators   grpc   Red Hat       55m
  community-operators   Community Operators   grpc   Red Hat       55m
  example-catalog       Example Catalog       grpc   Example Org   2m25s
  redhat-operators      Red Hat Operators     grpc   Red Hat       55m

- Use the `oc describe` command to get more details and status about a catalog source:

  $ oc describe catalogsource example-catalog -n openshift-marketplace

  Example output

  Name:         example-catalog
  Namespace:    openshift-marketplace
  Labels:       <none>
  Annotations:  operatorframework.io/managed-by: marketplace-operator
                target.workload.openshift.io/management: {"effect": "PreferredDuringScheduling"}
  API Version:  operators.coreos.com/v1alpha1
  Kind:         CatalogSource
  # ...
  Status:
    Connection State:
      Address:              example-catalog.openshift-marketplace.svc:50051
      Last Connect:         2021-09-09T17:07:35Z
      Last Observed State:  TRANSIENT_FAILURE
    Registry Service:
      Created At:         2021-09-09T17:05:45Z
      Port:               50051
      Protocol:           grpc
      Service Name:       example-catalog
      Service Namespace:  openshift-marketplace
  # ...

  In the preceding example output, the last observed state is `TRANSIENT_FAILURE`. This state indicates that there is a problem establishing a connection for the catalog source.

- List the pods in the namespace where your catalog source was created:

  $ oc get pods -n openshift-marketplace

  Example output

  NAME                                    READY   STATUS             RESTARTS   AGE
  certified-operators-cv9nn               1/1     Running            0          36m
  community-operators-6v8lp               1/1     Running            0          36m
  marketplace-operator-86bfc75f9b-jkgbc   1/1     Running            0          42m
  example-catalog-bwt8z                   0/1     ImagePullBackOff   0          3m55s
  redhat-operators-smxx8                  1/1     Running            0          36m

  When a catalog source is created in a namespace, a pod for the catalog source is created in that namespace. In the preceding example output, the status for the `example-catalog-bwt8z` pod is `ImagePullBackOff`. This status indicates that there is an issue pulling the catalog source's index image.

- Use the `oc describe` command to inspect a pod for more detailed information:

  $ oc describe pod example-catalog-bwt8z -n openshift-marketplace

  Example output

  Name:         example-catalog-bwt8z
  Namespace:    openshift-marketplace
  Priority:     0
  Node:         ci-ln-jyryyg2-f76d1-ggdbq-worker-b-vsxjd/10.0.128.2
  ...
  Events:
    Type     Reason          Age                From               Message
    ----     ------          ----               ----               -------
    Normal   Scheduled       48s                default-scheduler  Successfully assigned openshift-marketplace/example-catalog-bwt8z to ci-ln-jyryyf2-f76d1-fgdbq-worker-b-vsxjd
    Normal   AddedInterface  47s                multus             Add eth0 [10.131.0.40/23] from openshift-sdn
    Normal   BackOff         20s (x2 over 46s)  kubelet            Back-off pulling image "quay.io/example-org/example-catalog:v1"
    Warning  Failed          20s (x2 over 46s)  kubelet            Error: ImagePullBackOff
    Normal   Pulling         8s (x3 over 47s)   kubelet            Pulling image "quay.io/example-org/example-catalog:v1"
    Warning  Failed          8s (x3 over 47s)   kubelet            Failed to pull image "quay.io/example-org/example-catalog:v1": rpc error: code = Unknown desc = reading manifest v1 in quay.io/example-org/example-catalog: unauthorized: access to the requested resource is not authorized
    Warning  Failed          8s (x3 over 47s)   kubelet            Error: ErrImagePull

  In the preceding example output, the error messages indicate that the catalog source's index image is failing to pull successfully because of an authorization issue. For example, the index image might be stored in a registry that requires login credentials.
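The fastest way to spot a broken catalog source is usually to filter the pod list for anything that is not `Running`. The following sketch applies that filter to saved `oc get pods` output; the pod names and the captured text are hypothetical sample data:

```shell
# Hypothetical saved output of 'oc get pods -n openshift-marketplace'
pods='NAME                                    READY   STATUS             RESTARTS   AGE
certified-operators-cv9nn               1/1     Running            0          36m
example-catalog-bwt8z                   0/1     ImagePullBackOff   0          3m55s'

# Skip the header row and print any pod whose STATUS column is not Running
echo "$pods" | awk 'NR > 1 && $3 != "Running" { print $1, $3 }'
```

On a live cluster, the same filter can be fed directly from `oc get pods -n openshift-marketplace`.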
Querying Operator pod status
You can list Operator pods within a cluster and their status. You can also collect a detailed Operator pod summary.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` role.
- Your API service is still functional.
- You have installed the OpenShift CLI (`oc`).
Procedure
- List Operators running in the cluster. The output includes Operator version, availability, and up-time information:

  $ oc get clusteroperators

- List Operator pods running in the Operator's namespace, plus pod status, restarts, and age:

  $ oc get pod -n <operator_namespace>

- Output a detailed Operator pod summary:

  $ oc describe pod <operator_pod_name> -n <operator_namespace>

- If an Operator issue is node-specific, query Operator container status on that node.

  - Start a debug pod for the node:

    $ oc debug node/my-node

  - Set `/host` as the root directory within the debug shell. The debug pod mounts the host's root file system in `/host` within the pod. By changing the root directory to `/host`, you can run binaries contained in the host's executable paths:

    # chroot /host

    Note
    OpenShift Container Platform 4.19 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, `oc` operations will be impacted. In such situations, it is possible to access nodes using `ssh core@<node>.<cluster_name>.<base_domain>` instead.

  - List details about the node's containers, including state and associated pod IDs:

    # crictl ps

  - List information about a specific Operator container on the node. The following example lists information about the `network-operator` container:

    # crictl ps --name network-operator

  - Exit from the debug shell.
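When working through the cluster-wide view, the first triage step is to pick out Operators that are unavailable or degraded. The following sketch filters saved `oc get clusteroperators` output for exactly those rows; the Operator names and values shown are hypothetical sample data:

```shell
# Hypothetical saved output of 'oc get clusteroperators'
co='NAME             VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication   4.19.0    True        False         False      6h
image-registry   4.19.0    False       True          True       10m'

# Print Operators that are unavailable or degraded; these are the ones
# whose pods and logs are worth inspecting first
echo "$co" | awk 'NR > 1 && ($3 != "True" || $5 == "True") { print $1 }'
```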
Gathering Operator logs
If you experience Operator issues, you can gather detailed diagnostic information from Operator pod logs.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` role.
- Your API service is still functional.
- You have installed the OpenShift CLI (`oc`).
- You have the fully qualified domain names of the control plane nodes.
Procedure
- List the Operator pods that are running in the Operator's namespace, plus the pod status, restarts, and age:

  $ oc get pods -n <operator_namespace>

- Review logs for an Operator pod:

  $ oc logs pod/<pod_name> -n <operator_namespace>

  If an Operator pod has multiple containers, the preceding command will produce an error that includes the name of each container. Query logs from an individual container:

  $ oc logs pod/<operator_pod_name> -c <container_name> -n <operator_namespace>

- If the API is not functional, review Operator pod and container logs on each control plane node by using SSH instead. Replace `<master-node>.<cluster_name>.<base_domain>` with appropriate values.

  - List pods on each control plane node:

    $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl pods

  - For any Operator pods not showing a `Ready` status, inspect the pod's status in detail. Replace `<operator_pod_id>` with the Operator pod's ID listed in the output of the preceding command:

    $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspectp <operator_pod_id>

  - List containers related to an Operator pod:

    $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl ps --pod=<operator_pod_id>

  - For any Operator container not showing a `Ready` status, inspect the container's status in detail. Replace `<container_id>` with a container ID listed in the output of the preceding command:

    $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl inspect <container_id>

  - Review the logs for any Operator containers not showing a `Ready` status. Replace `<container_id>` with a container ID listed in the output of the preceding command:

    $ ssh core@<master-node>.<cluster_name>.<base_domain> sudo crictl logs -f <container_id>

    Note
    OpenShift Container Platform 4.19 cluster nodes running Red Hat Enterprise Linux CoreOS (RHCOS) are immutable and rely on Operators to apply cluster changes. Accessing cluster nodes by using SSH is not recommended. Before attempting to collect diagnostic data over SSH, review whether the data collected by running `oc adm must-gather` and other `oc` commands is sufficient instead. However, if the OpenShift Container Platform API is not available, or the kubelet is not properly functioning on the target node, `oc` operations will be impacted. In such situations, it is possible to access nodes using `ssh core@<node>.<cluster_name>.<base_domain>`.
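A nonzero restart count is a good hint for which Operator pods are worth pulling logs from, because it means a container has crashed before. The following sketch filters saved `oc get pods` output on that column; the pod names and counts are hypothetical sample data:

```shell
# Hypothetical saved output of 'oc get pods -n <operator_namespace>'
pods='NAME                                        READY   STATUS    RESTARTS   AGE
cluster-logging-operator-74dd5994f-6ttgt   1/1     Running   4          2d
collector-tkbtr                            1/1     Running   0          2d'

# Pods with a nonzero restart count have crashed before; their current
# and previous logs are the first place to look
echo "$pods" | awk 'NR > 1 && $4 > 0 { print $1 }'
```

For pods surfaced this way, `oc logs --previous` can show the log output from the container's prior, crashed run.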
Disabling the Machine Config Operator from automatically rebooting
When configuration changes are made by the Machine Config Operator (MCO), Red Hat Enterprise Linux CoreOS (RHCOS) must reboot for the changes to take effect. Whether the configuration change is automatic or manual, an RHCOS node reboots automatically unless it is paused.
Note
- When the MCO detects any of the following changes, it applies the update without draining or rebooting the node:
  - Changes to the SSH key in the `spec.config.passwd.users.sshAuthorizedKeys` parameter of a machine config.
  - Changes to the global pull secret or pull secret in the `openshift-config` namespace.
  - Automatic rotation of the `/etc/kubernetes/kubelet-ca.crt` certificate authority (CA) by the Kubernetes API Server Operator.
- When the MCO detects changes to the `/etc/containers/registries.conf` file, such as editing an `ImageDigestMirrorSet`, `ImageTagMirrorSet`, or `ImageContentSourcePolicy` object, it drains the corresponding nodes, applies the changes, and uncordons the nodes. The node drain does not happen for the following changes:
  - The addition of a registry with the `pull-from-mirror = "digest-only"` parameter set for each mirror.
  - The addition of a mirror with the `pull-from-mirror = "digest-only"` parameter set in a registry.
  - The addition of items to the `unqualified-search-registries` list.

To avoid unwanted disruptions, you can modify the machine config pool (MCP) to prevent automatic rebooting after the Operator makes changes to the machine config.
Disabling the Machine Config Operator from automatically rebooting by using the console
To avoid unwanted disruptions from changes made by the Machine Config Operator (MCO), you can use the OpenShift Container Platform web console to modify the machine config pool (MCP) to prevent the MCO from making any changes to nodes in that pool. This prevents any reboots that would normally be part of the MCO update process.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` role.
Procedure
- Log in to the OpenShift Container Platform web console as a user with the `cluster-admin` role.
- Click Compute → MachineConfigPools.
- On the MachineConfigPools page, click either master or worker, depending upon which nodes you want to pause rebooting for.
- On the master or worker page, click YAML.
- In the YAML, update the `spec.paused` field to `true`.

  Sample MachineConfigPool object

  apiVersion: machineconfiguration.openshift.io/v1
  kind: MachineConfigPool
  # ...
  spec:
  # ...
    paused: true
  # ...

  Update the `spec.paused` field to `true` to pause rebooting.
- To verify that the MCP is paused, return to the MachineConfigPools page.

  On the MachineConfigPools page, the Paused column reports True for the MCP you modified.

  If the MCP has pending changes while paused, the Updated column is False and the Updating column is False. When Updated is True and Updating is False, there are no pending changes.

  Important
  If there are pending changes (where both the Updated and Updating columns are False), it is recommended to schedule a maintenance window for a reboot as early as possible. Use the following steps for unpausing the autoreboot process to apply the changes that were queued since the last reboot.
- Unpause the autoreboot process:

  - Log in to the OpenShift Container Platform web console as a user with the `cluster-admin` role.
  - Click Compute → MachineConfigPools.
  - On the MachineConfigPools page, click either master or worker, depending upon which nodes you want to resume rebooting for.
  - On the master or worker page, click YAML.
  - In the YAML, update the `spec.paused` field to `false`.

    Sample MachineConfigPool object

    apiVersion: machineconfiguration.openshift.io/v1
    kind: MachineConfigPool
    # ...
    spec:
    # ...
      paused: false
    # ...

    Update the `spec.paused` field to `false` to allow rebooting.

    Note
    By unpausing an MCP, the MCO applies all paused changes and reboots Red Hat Enterprise Linux CoreOS (RHCOS) as needed.
- To verify that the MCP is unpaused, return to the MachineConfigPools page.

  On the MachineConfigPools page, the Paused column reports False for the MCP you modified.

  If the MCP is applying any pending changes, the Updated column is False and the Updating column is True. When Updated is True and Updating is False, there are no further changes being made.
Disabling the Machine Config Operator from automatically rebooting by using the CLI
To avoid unwanted disruptions from changes made by the Machine Config Operator (MCO), you can modify the machine config pool (MCP) using the OpenShift CLI (oc) to prevent the MCO from making any changes to nodes in that pool. This prevents any reboots that would normally be part of the MCO update process.
Note
See second NOTE in Disabling the Machine Config Operator from automatically rebooting.
Prerequisites
- You have access to the cluster as a user with the `cluster-admin` role.
- You have installed the OpenShift CLI (`oc`).
Procedure
- To pause automatic MCO update rebooting:

  - Update the `MachineConfigPool` custom resource to set the `spec.paused` field to `true`.

    Control plane (master) nodes

    $ oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/master

    Worker nodes

    $ oc patch --type=merge --patch='{"spec":{"paused":true}}' machineconfigpool/worker

  - Verify that the MCP is paused:

    Control plane (master) nodes

    $ oc get machineconfigpool/master --template='{{.spec.paused}}'

    Worker nodes

    $ oc get machineconfigpool/worker --template='{{.spec.paused}}'

    Example output

    true

    The `spec.paused` field is `true` and the MCP is paused.

  - Determine if the MCP has pending changes:

    $ oc get machineconfigpool

    Example output

    NAME     CONFIG                                             UPDATED   UPDATING
    master   rendered-master-33cf0a1254318755d7b48002c597bf91   True      False
    worker   rendered-worker-e405a5bdb0db1295acea08bcca33fa60   False     False

    If the UPDATED column is False and UPDATING is False, there are pending changes. When UPDATED is True and UPDATING is False, there are no pending changes. In the previous example, the worker node has pending changes. The control plane node does not have any pending changes.

    Important
    If there are pending changes (where both the Updated and Updating columns are False), it is recommended to schedule a maintenance window for a reboot as early as possible. Use the following steps for unpausing the autoreboot process to apply the changes that were queued since the last reboot.
- Unpause the autoreboot process:

  - Update the `MachineConfigPool` custom resource to set the `spec.paused` field to `false`.

    Control plane (master) nodes

    $ oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/master

    Worker nodes

    $ oc patch --type=merge --patch='{"spec":{"paused":false}}' machineconfigpool/worker

    Note
    By unpausing an MCP, the MCO applies all paused changes and reboots Red Hat Enterprise Linux CoreOS (RHCOS) as needed.

  - Verify that the MCP is unpaused:

    Control plane (master) nodes

    $ oc get machineconfigpool/master --template='{{.spec.paused}}'

    Worker nodes

    $ oc get machineconfigpool/worker --template='{{.spec.paused}}'

    Example output

    false

    The `spec.paused` field is `false` and the MCP is unpaused.

  - Determine if the MCP has pending changes:

    $ oc get machineconfigpool

    Example output

    NAME     CONFIG                                   UPDATED   UPDATING
    master   rendered-master-546383f80705bd5aeaba93   True      False
    worker   rendered-worker-b4c51bb33ccaae6fc4a6a5   False     True

    If the MCP is applying any pending changes, the UPDATED column is False and the UPDATING column is True. When UPDATED is True and UPDATING is False, there are no further changes being made. In the previous example, the MCO is updating the worker node.
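The pending-changes check above can be condensed into a single filter: a paused pool with both UPDATED and UPDATING reporting False has queued changes waiting for the next reboot window. The following sketch applies that filter to saved `oc get machineconfigpool` output; the rendered config names are hypothetical sample data:

```shell
# Hypothetical saved output of 'oc get machineconfigpool'
mcp='NAME     CONFIG                                             UPDATED   UPDATING
master   rendered-master-33cf0a1254318755d7b48002c597bf91   True      False
worker   rendered-worker-e405a5bdb0db1295acea08bcca33fa60   False     False'

# A pool with UPDATED=False and UPDATING=False has queued changes waiting
# for the next reboot window
echo "$mcp" | awk 'NR > 1 && $3 == "False" && $4 == "False" { print $1 " has pending changes" }'
```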
Refreshing failing subscriptions
In Operator Lifecycle Manager (OLM), if you subscribe to an Operator that references images that are not accessible on your network, you can find jobs in the `openshift-marketplace` namespace that are failing with the following errors:

Example output

ImagePullBackOff for
Back-off pulling image "example.com/openshift4/ose-elasticsearch-operator-bundle@sha256:6d2587129c846ec28d384540322b40b05833e7e00b25cca584e004af9a1d292e"

Example output

rpc error: code = Unknown desc = error pinging docker registry example.com: Get "https://example.com/v2/": dial tcp: lookup example.com on 10.0.0.1:53: no such host
As a result, the subscription is stuck in this failing state and the Operator is unable to install or upgrade.
You can refresh a failing subscription by deleting the subscription, cluster service version (CSV), and other related objects. After recreating the subscription, OLM then reinstalls the correct version of the Operator.
Prerequisites
- You have a failing subscription that is unable to pull an inaccessible bundle image.
- You have confirmed that the correct bundle image is accessible.
Procedure
- Get the names of the `Subscription` and `ClusterServiceVersion` objects from the namespace where the Operator is installed:

  $ oc get sub,csv -n <namespace>

  Example output

  NAME                                                       PACKAGE                  SOURCE             CHANNEL
  subscription.operators.coreos.com/elasticsearch-operator   elasticsearch-operator   redhat-operators   5.0

  NAME                                                                         DISPLAY                            VERSION    REPLACES   PHASE
  clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65   OpenShift Elasticsearch Operator   5.0.0-65              Succeeded

- Delete the subscription:

  $ oc delete subscription <subscription_name> -n <namespace>

- Delete the cluster service version:

  $ oc delete csv <csv_name> -n <namespace>

- Get the names of any failing jobs and related config maps in the `openshift-marketplace` namespace:

  $ oc get job,configmap -n openshift-marketplace

  Example output

  NAME                                                                        COMPLETIONS   DURATION   AGE
  job.batch/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   1/1           26s        9m30s

  NAME                                                                       DATA   AGE
  configmap/1de9443b6324e629ddf31fed0a853a121275806170e34c926d69e53a7fcbccb   3      9m30s

- Delete the job:

  $ oc delete job <job_name> -n openshift-marketplace

  This ensures pods that try to pull the inaccessible image are not recreated.

- Delete the config map:

  $ oc delete configmap <configmap_name> -n openshift-marketplace

- Reinstall the Operator using the software catalog in the web console.

- Check that the Operator has been reinstalled successfully:

  $ oc get sub,csv,installplan -n <namespace>
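Because `oc delete` expects the bare object name, not the `<type>/<name>` form printed by `oc get sub,csv`, it can help to strip the resource-type prefix first. The following sketch does this over saved sample output; the captured text is hypothetical:

```shell
# Hypothetical saved output of 'oc get sub,csv -n <namespace>'
out='NAME                                                       PACKAGE
subscription.operators.coreos.com/elasticsearch-operator   elasticsearch-operator
NAME                                                                         PHASE
clusterserviceversion.operators.coreos.com/elasticsearch-operator.5.0.0-65   Succeeded'

# Split on "/" or space and print the bare name after the slash, which is
# what 'oc delete subscription <name>' and 'oc delete csv <name>' expect
echo "$out" | awk -F'[/ ]' '/^(subscription|clusterserviceversion)/ { print $2 }'
```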
Reinstalling Operators after failed uninstallation
You must successfully and completely uninstall an Operator prior to attempting to reinstall the same Operator. Failure to fully uninstall the Operator properly can leave resources, such as a project or namespace, stuck in a "Terminating" state and cause "error resolving resource" messages. For example:
Example Project resource description
...
message: 'Failed to delete all resource types, 1 remaining: Internal error occurred:
error resolving resource'
...
These types of issues can prevent an Operator from being reinstalled successfully.
Warning
Forced deletion of a namespace is not likely to resolve "Terminating" state issues and can lead to unstable or unpredictable cluster behavior, so it is better to try to find related resources that might be preventing the namespace from being deleted. For more information, see the Red Hat Knowledgebase Solution #4165791, paying careful attention to the cautions and warnings.
The following procedure shows how to troubleshoot when an Operator cannot be reinstalled because an existing custom resource definition (CRD) from a previous installation of the Operator is preventing a related namespace from deleting successfully.
Procedure
- Check if there are any namespaces related to the Operator that are stuck in "Terminating" state:

  $ oc get namespaces

  Example output

  operator-ns-1                                       Terminating

- Check if there are any CRDs related to the Operator that are still present after the failed uninstallation:

  $ oc get crds

  Note
  CRDs are global cluster definitions; the actual custom resource (CR) instances related to the CRDs could be in other namespaces or be global cluster instances.

- If there are any CRDs that you know were provided or managed by the Operator and that should have been deleted after uninstallation, delete the CRD:

  $ oc delete crd <crd_name>

- Check if there are any remaining CR instances related to the Operator that are still present after uninstallation, and if so, delete the CRs:

  - The type of CRs to search for can be difficult to determine after uninstallation and can require knowing what CRDs the Operator manages. For example, if you are troubleshooting an uninstallation of the etcd Operator, which provides the `EtcdCluster` CRD, you can search for remaining `EtcdCluster` CRs in a namespace:

    $ oc get EtcdCluster -n <namespace_name>

    Alternatively, you can search across all namespaces:

    $ oc get EtcdCluster --all-namespaces

  - If there are any remaining CRs that should be removed, delete the instances:

    $ oc delete <cr_name> <cr_instance_name> -n <namespace_name>

- Check that the namespace deletion has successfully resolved:

  $ oc get namespace <namespace_name>

  Important
  If the namespace or other Operator resources are still not uninstalled cleanly, contact Red Hat Support.

- Reinstall the Operator using the software catalog in the web console.

- Check that the Operator has been reinstalled successfully:

  $ oc get sub,csv,installplan -n <namespace>
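The first step above, spotting namespaces stuck in "Terminating", can be reduced to a simple filter over the `oc get namespaces` output. The following sketch runs that filter over saved sample output; the namespace names are hypothetical:

```shell
# Hypothetical saved output of 'oc get namespaces'
ns='NAME            STATUS        AGE
default         Active        6h
operator-ns-1   Terminating   30m'

# Print namespaces stuck in Terminating, the usual symptom of leftover
# CRs blocking deletion
echo "$ns" | awk 'NR > 1 && $2 == "Terminating" { print $1 }'
```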