Installing the MetalLB Operator
As a cluster administrator, you can add the MetalLB Operator so that the Operator can manage the lifecycle for an instance of MetalLB on your cluster.
MetalLB and IP failover are incompatible. If you configured IP failover for your cluster, perform the steps to remove IP failover before you install the Operator.
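If you are not sure whether IP failover is configured, you can scan the cluster for any remaining IP failover deployment before you proceed. This is a quick sketch that assumes the deployment name contains ipfailover, which is the name that the IP failover documentation uses; adjust it to match your configuration:

$ oc get deployments --all-namespaces | grep ipfailover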
Installing the MetalLB Operator from the software catalog by using the web console
As a cluster administrator, you can install the MetalLB Operator by using the OpenShift Container Platform web console.
Prerequisites

- Log in as a user with cluster-admin privileges.

Procedure
- In the OpenShift Container Platform web console, navigate to Ecosystem → Software Catalog.
- Type a keyword into the Filter by keyword box or scroll to find the Operator you want. For example, type metallb to find the MetalLB Operator. You can also filter options by Infrastructure Features. For example, select Disconnected if you want to see Operators that work in disconnected environments, also known as restricted network environments.
- On the Install Operator page, accept the defaults and click Install.
- To confirm that the installation is successful:
  - Navigate to the Ecosystem → Installed Operators page.
  - Check that the Operator is installed in the openshift-operators namespace and that its status is Succeeded.
- If the Operator is not installed successfully, check the status of the Operator and review the logs:
  - Navigate to the Ecosystem → Installed Operators page and inspect the Status column for any errors or failures.
  - Navigate to the Workloads → Pods page and check the logs in any pods in the openshift-operators project that are reporting issues.
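If you prefer to inspect the same pods from a terminal, you can list them and retrieve logs with standard commands like the following, where <pod_name> is a placeholder for a pod that is reporting issues:

$ oc get pods -n openshift-operators
$ oc logs -n openshift-operators <pod_name>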
Installing from the software catalog using the CLI
To install the MetalLB Operator from the software catalog in OpenShift Container Platform without using the web console, you can use the OpenShift CLI (oc).
When you use the CLI, it is recommended that you install the Operator in the metallb-system namespace.
Prerequisites

- A cluster installed on bare-metal hardware.
- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.

Procedure
- Create a namespace for the MetalLB Operator by entering the following command:

$ cat << EOF | oc apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
EOF
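If you want to confirm that the namespace exists before you continue, you can query it directly:

$ oc get namespace metallb-system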
- Create an Operator group custom resource (CR) in the namespace:

$ cat << EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: metallb-operator
  namespace: metallb-system
EOF
- Confirm the Operator group is installed in the namespace:

$ oc get operatorgroup -n metallb-system

Example output

NAME               AGE
metallb-operator   14m
- Create a Subscription CR:
  - Define the Subscription CR and save the YAML file, for example, metallb-sub.yaml:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: metallb-operator-sub
  namespace: metallb-system
spec:
  channel: stable
  name: metallb-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace

For the spec.source parameter, you must specify the redhat-operators value.

  - To create the Subscription CR, run the following command:

$ oc create -f metallb-sub.yaml
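If the subscription does not resolve, you can confirm that the package exists and check its default channel. This query assumes the catalog publishes the package in the openshift-marketplace namespace:

$ oc get packagemanifest metallb-operator -n openshift-marketplace -o jsonpath='{.status.defaultChannel}{"\n"}'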
- Optional: To ensure BGP and BFD metrics appear in Prometheus, you can label the namespace by running the following command:

$ oc label ns metallb-system "openshift.io/cluster-monitoring=true"
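To confirm that the label was applied, you can display the namespace labels:

$ oc get ns metallb-system --show-labels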
Verification

The verification steps assume the MetalLB Operator is installed in the metallb-system namespace.
- Confirm the install plan is in the namespace:

$ oc get installplan -n metallb-system

Example output

NAME            CSV                                    APPROVAL    APPROVED
install-wzg94   metallb-operator.4.19.0-nnnnnnnnnnnn   Automatic   true

Note: Installation of the Operator might take a few seconds.
- To verify that the Operator is installed, enter the following command and then check that the output shows Succeeded for the Operator:

$ oc get clusterserviceversion -n metallb-system \
  -o custom-columns=Name:.metadata.name,Phase:.status.phase
Starting MetalLB on your cluster
To start MetalLB on your cluster after installing the MetalLB Operator in OpenShift Container Platform, you create a single MetalLB custom resource.
Prerequisites

- Install the OpenShift CLI (oc).
- Log in as a user with cluster-admin privileges.
- Install the MetalLB Operator.

Procedure
- Create a single instance of a MetalLB custom resource:

$ cat << EOF | oc apply -f -
apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
EOF

For the metadata.namespace parameter, substitute metallb-system with openshift-operators if you installed the MetalLB Operator by using the web console.
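To confirm that the custom resource was created, you can list MetalLB resources in the namespace:

$ oc get metallb -n metallb-system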
Verification

- Confirm that the deployment for the MetalLB controller and the daemon set for the MetalLB speaker are running:
  - Verify that the deployment for the controller is running:

$ oc get deployment -n metallb-system controller

Example output

NAME         READY   UP-TO-DATE   AVAILABLE   AGE
controller   1/1     1            1           11m
  - Verify that the daemon set for the speaker is running:

$ oc get daemonset -n metallb-system speaker

Example output

NAME      DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
speaker   6         6         6       6            6           kubernetes.io/os=linux   18m

The example output indicates 6 speaker pods. The number of speaker pods in your cluster might differ from the example output. Make sure the output indicates one pod for each node in your cluster.
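To see which node each speaker pod runs on, you can list the pods with their node assignments. The component=speaker label used here is the label that upstream MetalLB applies to speaker pods, so verify it against the labels in your deployment:

$ oc get pods -n metallb-system -l component=speaker -o wide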
Deployment specifications for MetalLB
Deployment specifications in the MetalLB custom resource control how the MetalLB controller and speaker pods deploy and run in OpenShift Container Platform.
Use deployment specifications to manage the following tasks:
- Select nodes for MetalLB pod deployment.
- Manage scheduling by using pod priority and pod affinity.
- Assign CPU limits for MetalLB pods.
- Assign a container RuntimeClass for MetalLB pods.
- Assign metadata for MetalLB pods.
Limit speaker pods to specific nodes
You can limit MetalLB speaker pods to specific nodes in OpenShift Container Platform by configuring a node selector in the MetalLB custom resource. Only nodes that run a speaker pod advertise load balancer IP addresses, so you control which nodes serve MetalLB traffic.
The most common reason to limit the speaker pods to specific nodes is to ensure that only nodes with network interfaces on specific networks advertise load balancer IP addresses.
If you limit the speaker pods to specific nodes and specify local for the external traffic policy of a service, then you must ensure that the application pods for the service are deployed to the same nodes.
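For reference, the local external traffic policy is set on the service itself, as in the following sketch. The service name, selector, and ports are placeholders for your own application:

apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: LoadBalancer
  externalTrafficPolicy: Local
  selector:
    app: example
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080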
Example: Limit speaker pods to worker nodes

apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  nodeSelector:
    node-role.kubernetes.io/worker: ""
  speakerTolerations:
  - key: "Example"
    operator: "Exists"
    effect: "NoExecute"
- In this example configuration, the spec.nodeSelector field assigns the speaker pods to worker nodes. You can specify labels that you assigned to nodes or any valid node selector.
- In this example configuration, the spec.speakerTolerations field specifies that the pod that this toleration is attached to tolerates any taint that matches the key and effect values by using the operator value.
After you apply a manifest with the spec.nodeSelector field, you can check the number of pods that the Operator deployed with the oc get daemonset -n metallb-system speaker command.
Similarly, you can display the nodes that match your labels with a command like oc get nodes -l node-role.kubernetes.io/worker=.
You can optionally allow nodes to control which speaker pods should, or should not, be scheduled on them by using affinity rules. You can also limit these pods by applying a list of tolerations. For more information about affinity rules, taints, and tolerations, see the additional resources.
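As an illustration, the speakerTolerations entry in the previous example would match a taint that you apply to a node as follows. The node name is a placeholder, and the taint uses the same Example key and NoExecute effect as the sample custom resource:

$ oc adm taint nodes <node_name> Example=value:NoExecute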
Configuring pod priority and pod affinity in a MetalLB deployment
To control scheduling of MetalLB controller and speaker pods in OpenShift Container Platform, you can assign pod priority and pod affinity in the MetalLB custom resource. You create a PriorityClass and set priorityClassName and affinity in the MetalLB spec, then apply the configuration.
Pod priority indicates the relative importance of a pod, and the scheduler uses this priority when it places pods on a node. Set a high priority on your controller or speaker pods to ensure scheduling priority over other pods on the node.
Pod affinity manages relationships among pods. Assign pod affinity to the controller or speaker pods to control on what node the scheduler places the pod in the context of pod relationships. For example, you can use pod affinity rules to ensure that certain pods are located on the same node or nodes, which can help improve network communication and reduce latency between those components.
Prerequisites

- You are logged in as a user with cluster-admin privileges.
- You have installed the MetalLB Operator.
- You have started the MetalLB Operator on your cluster.

Procedure
- Create a PriorityClass custom resource, such as myPriorityClass.yaml, to configure the priority level. This example defines a PriorityClass named high-priority with a value of 1000000. Pods that are assigned this priority class are considered higher priority during scheduling compared to pods with lower priority classes:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
- Apply the PriorityClass custom resource configuration:

$ oc apply -f myPriorityClass.yaml
- Create a MetalLB custom resource, such as MetalLBPodConfig.yaml, to specify the priorityClassName and podAffinity values:

apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  logLevel: debug
  controllerConfig:
    priorityClassName: high-priority
    affinity:
      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: metallb
          topologyKey: kubernetes.io/hostname
  speakerConfig:
    priorityClassName: high-priority
    affinity:
      podAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: metallb
          topologyKey: kubernetes.io/hostname

where:

spec.controllerConfig.priorityClassName
- Specifies the priority class for the MetalLB controller pods. In this case, it is set to high-priority.

spec.controllerConfig.affinity.podAffinity
- Specifies that you are configuring pod affinity rules. These rules dictate how pods are scheduled in relation to other pods or nodes. This configuration instructs the scheduler to schedule pods that have the label app: metallb onto nodes that share the same hostname. This helps to co-locate MetalLB-related pods on the same nodes, potentially optimizing network communication, latency, and resource usage between these pods.
- Apply the MetalLB custom resource configuration by running the following command:

$ oc apply -f MetalLBPodConfig.yaml
Verification

- To view the priority class that you assigned to pods in the metallb-system namespace, run the following command:

$ oc get pods -n metallb-system -o custom-columns=NAME:.metadata.name,PRIORITY:.spec.priorityClassName

Example output

NAME                                                 PRIORITY
controller-584f5c8cd8-5zbvg                          high-priority
metallb-operator-controller-manager-9c8d9985-szkqg   <none>
metallb-operator-webhook-server-c895594d4-shjgx      <none>
speaker-dddf7                                        high-priority
- Verify that the scheduler placed pods according to pod affinity rules by viewing the metadata for the node of the pod. For example:

$ oc get pod -o=custom-columns=NODE:.spec.nodeName,NAME:.metadata.name -n metallb-system
Configuring pod CPU limits in a MetalLB deployment
To manage compute resources on nodes running MetalLB in OpenShift Container Platform, you can assign CPU limits to the controller and speaker pods in the MetalLB custom resource. This ensures that all pods on the node have the necessary compute resources to manage workloads and cluster housekeeping.
Prerequisites

- You are logged in as a user with cluster-admin privileges.
- You have installed the MetalLB Operator.

Procedure
- Create a MetalLB custom resource file, such as CPULimits.yaml, to specify the cpu value for the controller and speaker pods:

apiVersion: metallb.io/v1beta1
kind: MetalLB
metadata:
  name: metallb
  namespace: metallb-system
spec:
  logLevel: debug
  controllerConfig:
    resources:
      limits:
        cpu: "200m"
  speakerConfig:
    resources:
      limits:
        cpu: "300m"
- Apply the MetalLB custom resource configuration:

$ oc apply -f CPULimits.yaml
Verification

- To view compute resources for a pod, run the following command, replacing <pod_name> with your target pod:

$ oc describe pod <pod_name>
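If you want only the CPU limit values rather than the full pod description, a jsonpath query keeps the output short. The pod name is a placeholder:

$ oc get pod <pod_name> -n metallb-system -o jsonpath='{.spec.containers[*].resources.limits.cpu}{"\n"}'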