Managing hosted control planes on IBM Power
After you deploy hosted control planes on IBM Power, you can manage a hosted cluster by completing the following tasks.
Creating an InfraEnv resource for hosted control planes on IBM Power
An InfraEnv resource is an environment where hosts that boot the live ISO can join as agents. In this case, the agents are created in the same namespace as your hosted control plane.
You can create an InfraEnv resource for hosted control planes on 64-bit x86 bare metal for IBM Power compute nodes.
- Create a YAML file to configure an `InfraEnv` resource. See the following example:

  ```yaml
  apiVersion: agent-install.openshift.io/v1beta1
  kind: InfraEnv
  metadata:
    name: <hosted_cluster_name>
    namespace: <hosted_control_plane_namespace>
  spec:
    cpuArchitecture: ppc64le
    pullSecretRef:
      name: pull-secret
    sshAuthorizedKey: <path_to_ssh_public_key>
  ```

  - Replace `<hosted_cluster_name>` with the name of your hosted cluster.
  - Replace `<hosted_control_plane_namespace>` with the name of the hosted control plane namespace, for example, `clusters-hosted`.
  - Replace `<path_to_ssh_public_key>` with the path to your SSH public key. The default file path is `~/.ssh/id_rsa.pub`.

- Save the file as `infraenv-config.yaml`.

- Apply the configuration by entering the following command:

  ```shell
  $ oc apply -f infraenv-config.yaml
  ```

- To fetch the URL to download the live ISO, which allows IBM Power machines to join as agents, enter the following command:

  ```shell
  $ oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> -o json
  ```
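If you only need the URL itself, `oc` can also return the `status.isoDownloadURL` field directly with `-o jsonpath='{.status.isoDownloadURL}'`. The following sketch shows the equivalent extraction against a saved copy of the JSON, using a hypothetical URL; against a live cluster, pipe the output of the `oc get InfraEnv ... -o json` command instead of the sample variable:

```shell
# Hypothetical InfraEnv status, standing in for the JSON returned by
# `oc -n <hosted_control_plane_namespace> get InfraEnv <hosted_cluster_name> -o json`.
infraenv_json='{"status":{"isoDownloadURL":"https://assisted.example.com/images/full.iso"}}'

# Extract the value of status.isoDownloadURL from the saved JSON.
printf '%s' "$infraenv_json" | grep -o '"isoDownloadURL":"[^"]*"' | cut -d'"' -f4
```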
Adding IBM Power agents to the InfraEnv resource
You can add agents by manually configuring the machine to start with the live ISO.
- Download the live ISO and use it to start a bare metal or a virtual machine (VM) host. You can find the URL for the live ISO in the `status.isoDownloadURL` field in the `InfraEnv` resource. At startup, the host communicates with the Assisted Service and registers as an agent in the same namespace as the `InfraEnv` resource.
- To list the agents and some of their properties, enter the following command:

  ```shell
  $ oc -n <hosted_control_plane_namespace> get agents
  ```

  Example output:

  ```
  NAME                                   CLUSTER   APPROVED   ROLE          STAGE
  86f7ac75-4fc4-4b36-8130-40fa12602218                        auto-assign
  e57a637f-745b-496e-971d-1abbf03341ba                        auto-assign
  ```
- After each agent is created, you can optionally set the `installation_disk_id` and `hostname` for an agent:

  - To set the `installation_disk_id` field for an agent, enter the following command:

    ```shell
    $ oc -n <hosted_control_plane_namespace> patch agent <agent_name> -p '{"spec":{"installation_disk_id":"<installation_disk_id>","approved":true}}' --type merge
    ```

  - To set the `hostname` field for an agent, enter the following command:

    ```shell
    $ oc -n <hosted_control_plane_namespace> patch agent <agent_name> -p '{"spec":{"hostname":"<hostname>","approved":true}}' --type merge
    ```
- To verify that the agents are approved for use, enter the following command:

  ```shell
  $ oc -n <hosted_control_plane_namespace> get agents
  ```

  Example output:

  ```
  NAME                                   CLUSTER   APPROVED   ROLE          STAGE
  86f7ac75-4fc4-4b36-8130-40fa12602218             true       auto-assign
  e57a637f-745b-496e-971d-1abbf03341ba             true       auto-assign
  ```
Scaling the NodePool object for a hosted cluster on IBM Power
The NodePool object is created when you create a hosted cluster. By scaling the NodePool object, you can add more compute nodes to hosted control planes.
- Run the following command to scale the `NodePool` object to two nodes:

  ```shell
  $ oc -n <hosted_cluster_namespace> scale nodepool <nodepool_name> --replicas 2
  ```

  The Cluster API agent provider randomly picks two agents that are then assigned to the hosted cluster. Those agents go through different states and finally join the hosted cluster as OpenShift Container Platform nodes. The agents pass through the transition phases in the following order:

  - binding
  - discovering
  - insufficient
  - installing
  - installing-in-progress
  - added-to-existing-cluster
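A minimal readiness check over those phases can be sketched as follows. The per-agent state lines here are simulated; on a live cluster you would generate them with `oc -n <hosted_control_plane_namespace> get agent -o jsonpath='{range .items[*]}{.status.debugInfo.state}{"\n"}{end}'` and, for example, re-run the check in a loop until it reports success:

```shell
# Simulated per-agent states; replace with the output of the oc jsonpath
# query described above when running against a real cluster.
states='installing-in-progress
added-to-existing-cluster'

# The scale-up is done only when no agent reports a state other than the final phase.
if printf '%s\n' "$states" | grep -qv '^added-to-existing-cluster$'; then
  echo "agents still transitioning"
else
  echo "all agents joined the cluster"
fi
```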
- Run the following command to see the status of a specific scaled agent:

  ```shell
  $ oc -n <hosted_control_plane_namespace> get agent \
    -o jsonpath='{range .items[*]}BMH: {@.metadata.labels.agent-install\.openshift\.io/bmh} Agent: {@.metadata.name} State: {@.status.debugInfo.state}{"\n"}{end}'
  ```

  Example output:

  ```
  BMH: Agent: 50c23cda-cedc-9bbd-bcf1-9b3a5c75804d State: known-unbound
  BMH: Agent: 5e498cd3-542c-e54f-0c58-ed43e28b568a State: insufficient
  ```
- Run the following command to see the transition phases:

  ```shell
  $ oc -n <hosted_control_plane_namespace> get agent
  ```

  Example output:

  ```
  NAME                                   CLUSTER            APPROVED   ROLE          STAGE
  50c23cda-cedc-9bbd-bcf1-9b3a5c75804d   hosted-forwarder   true       auto-assign
  5e498cd3-542c-e54f-0c58-ed43e28b568a                      true       auto-assign
  da503cf1-a347-44f2-875c-4960ddb04091   hosted-forwarder   true       auto-assign
  ```
- Run the following command to generate the `kubeconfig` file to access the hosted cluster:

  ```shell
  $ hcp create kubeconfig --namespace <hosted_cluster_namespace> \
    --name <hosted_cluster_name> > <hosted_cluster_name>.kubeconfig
  ```
- After the agents reach the `added-to-existing-cluster` state, verify that you can see the OpenShift Container Platform nodes by entering the following command:

  ```shell
  $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get nodes
  ```

  Example output:

  ```
  NAME                               STATUS   ROLES    AGE     VERSION
  worker-zvm-0.hostedn.example.com   Ready    worker   5m41s   v1.24.0+3882f8f
  worker-zvm-1.hostedn.example.com   Ready    worker   6m3s    v1.24.0+3882f8f
  ```
- Enter the following command to verify that two machines were created when you scaled up the `NodePool` object:

  ```shell
  $ oc -n <hosted_control_plane_namespace> get machine.cluster.x-k8s.io
  ```

  Example output:

  ```
  NAME                                CLUSTER                  NODENAME                           PROVIDERID                                     PHASE     AGE   VERSION
  hosted-forwarder-79558597ff-5tbqp   hosted-forwarder-crqq5   worker-zvm-0.hostedn.example.com   agent://50c23cda-cedc-9bbd-bcf1-9b3a5c75804d   Running   41h   4.15.0
  hosted-forwarder-79558597ff-lfjfk   hosted-forwarder-crqq5   worker-zvm-1.hostedn.example.com   agent://5e498cd3-542c-e54f-0c58-ed43e28b568a   Running   41h   4.15.0
  ```
- Run the following command to check the cluster version:

  ```shell
  $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusterversion
  ```

  Example output:

  ```
  NAME                                         VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
  clusterversion.config.openshift.io/version   4.15.0    True        False         40h     Cluster version is 4.15.0
  ```
- Run the following command to check the Cluster Operator status:

  ```shell
  $ oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusteroperators
  ```

  For each component of your cluster, the output shows the following Cluster Operator statuses:

  - NAME
  - VERSION
  - AVAILABLE
  - PROGRESSING
  - DEGRADED
  - SINCE
  - MESSAGE
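If you want a quick scripted pass/fail over that output, the AVAILABLE and DEGRADED columns are the ones to check. The rows below are simulated, with hypothetical operator names; on a live cluster, pipe `oc --kubeconfig <hosted_cluster_name>.kubeconfig get clusteroperators --no-headers` into the same `awk` filter instead:

```shell
# Simulated `get clusteroperators --no-headers` rows, columns:
# NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE
rows='console 4.15.0 True False False 40h
ingress 4.15.0 True False False 40h'

# Print any operator that is not Available or is Degraded; exit non-zero if one is found.
printf '%s\n' "$rows" \
  | awk '$3 != "True" || $5 == "True" {print "unhealthy:", $1; bad = 1} END {exit bad}' \
  && echo "all cluster Operators healthy"
```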