Installing OpenShift Virtualization on IBM Cloud bare-metal nodes

Install OpenShift Virtualization on IBM Cloud bare-metal nodes by using the Assisted Installer. The cluster consists of 6 bare-metal nodes: 3 control plane nodes and 3 compute nodes. An additional virtual server instance is required for bootstrapping and acts as the Samba server, DHCP server, network gateway, and load balancer.

Prerequisites

  • An account in IBM Cloud with permissions to order and operate bare-metal nodes.

  • An IBM Cloud SSL VPN user, to access the SuperMicro IPMI interface of a node.

  • Install the OpenShift CLI (oc).

Configuring IBM Cloud for the new cluster

Configure and provision the IBM Cloud environment to establish the operational framework and nodes for your OpenShift Virtualization cluster.

Procedure
  1. Create a new virtual server instance in IBM Cloud at Virtual Server for Classic to serve as the Bastion server. This instance runs the installation and provides environment services.

  2. Change the default properties of the new virtual server instance to the following values. Use the provided defaults for all other values.

    • Type of virtual server: Public

    • Operating system: CentOS

    • Your public SSH RSA key

  3. Note the private VLAN and subnet the virtual server instance is assigned to at VLANs.

  4. Provision 6 bare-metal nodes in IBM Cloud at Bare metal server provision. Use the following values when provisioning the nodes:

    • Domain: A subdomain you can add records to.

    • Quantity: 6

    • Location: The same location as the virtual server instance.

    • Storage disks: RAID 1

    • Network Interface: Private

    • Private VLAN: The same as noted for the virtual server instance.

  5. Confirm all nodes are provisioned and ready at Device list.

  6. Rename the control plane nodes to control0-<domain-name>, control1-<domain-name>, and control2-<domain-name>. Replace <domain-name> with the domain used when provisioning the nodes.

  7. Rename the compute nodes to compute0-<domain-name>, compute1-<domain-name>, and compute2-<domain-name>. Replace <domain-name> with the domain used when provisioning the nodes.

  8. Configure the Bastion virtual server instance as a default network gateway.

  9. Configure DHCP by editing /etc/dhcp/dhcpd.conf on the Bastion virtual server instance. For example:

    # Set DNS name and DNS server's IP address or hostname
    option domain-name  <dns_domain_name>;
    option domain-name-servers  <dns_ip_addresses>;
    
    # Declare DHCP Server
    authoritative;
    
    # The default DHCP lease time
    default-lease-time <default_lease_value>;
    
    # Set the maximum lease time
    max-lease-time <max_lease_value>;
    
    # Set Network address, subnet mask and gateway
    
    subnet <subnet_ip_address> netmask <subnet_mask> {
      # Range of IP addresses to allocate
      range dynamic-bootp <dynamic_boot_lower_address> <dynamic_boot_upper_address>;
      # Provide broadcast address
      option broadcast-address <broadcast_ip_address>;
      # Set default gateway
      option routers <default_gateway_ip_address>;
    }

    where:

    <dns_domain_name>

    The default domain name for DNS clients.

    <dns_ip_addresses>

    A comma-separated list of DNS server IP addresses.

    <default_lease_value>

    The default number of seconds a client keeps an assigned address.

    <max_lease_value>

    The maximum number of seconds a client keeps an assigned address.

    <subnet_ip_address>

    The network address of the subnet.

    <subnet_mask>

    The subnet mask of the subnet IP address range.

    <broadcast_ip_address>

    The broadcast IP address to use when sending a message to every device on the subnet.

    <default_gateway_ip_address>

    The default gateway of the subnet.
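
    For reference, the following is a filled-in sketch of the subnet declaration. The addresses are illustrative only; substitute the values for your IBM Cloud private subnet:

    subnet 10.93.76.0 netmask 255.255.255.192 {
      # Range of IP addresses to allocate
      range dynamic-bootp 10.93.76.10 10.93.76.60;
      # Provide broadcast address
      option broadcast-address 10.93.76.63;
      # Set default gateway to the private IP address of the Bastion instance
      option routers 10.93.76.2;
    }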

  10. Restart DHCP on the Bastion virtual server instance:

    $ systemctl restart dhcpd
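
    If the dhcpd service is not already enabled to start at boot, enable it as well:

    $ systemctl enable dhcpd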
  11. Enable IP forwarding on the Bastion virtual server instance:

    $ sysctl -w net.ipv4.ip_forward=1
  12. Verify IP forwarding is enabled on the Bastion virtual server instance:

    $ sysctl -p /etc/sysctl.conf
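
    If net.ipv4.ip_forward = 1 is not already present in /etc/sysctl.conf, add it so that forwarding persists across reboots. You can confirm the running value at any time:

    $ sysctl net.ipv4.ip_forward
    net.ipv4.ip_forward = 1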
  13. Restart the network service on the Bastion virtual server instance:

    $ service network restart
  14. Check whether the firewalld service is running on the Bastion virtual server instance:

    $ firewall-cmd --state
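
    If the service is running, the command prints:

    running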
  15. If the firewalld service is not running on the Bastion virtual server instance, enable the service:

    $ systemctl enable firewalld
  16. Start the firewalld service:

    $ systemctl start firewalld
  17. Add network address translation (NAT) rules to the firewalld service:

    $ firewall-cmd --add-masquerade --permanent
  18. Restart the firewalld service:

    $ firewall-cmd --reload
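
    You can confirm that masquerading is active in the default zone:

    $ firewall-cmd --query-masquerade
    yes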

Initializing the new cluster configuration

Initialize the new cluster configuration by using the Assisted Installer service and by setting up a Samba server on the Bastion virtual server instance.

Procedure
  1. Log in to the Assisted Installer service.

  2. Create a new cluster. The new cluster has the following properties:

    • Cluster name: The name used to identify the cluster under the base domain.

    • Base domain: The domain used to provision the bare-metal nodes.

  3. Click Next.

  4. Click Generate Discovery ISO.

  5. Provide your public SSH RSA key when prompted.

  6. Copy and save the generated wget command for the discovery ISO file. You use this command later to download the ISO file that boots the cluster nodes.

  7. Install Samba server on the Bastion virtual server instance:

    $ dnf install samba
  8. Enable Samba server on the Bastion virtual server instance:

    $ systemctl enable smb --now
  9. Add firewalld rules to allow Samba traffic:

    $ firewall-cmd --permanent --zone=public --add-service=samba
    $ firewall-cmd --reload
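
    You can confirm that the rule was added; the output should include samba:

    $ firewall-cmd --zone=public --list-services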
  10. Set a Samba password for the root user. The IPMI virtual media client uses these credentials to access the share:

    $ sudo smbpasswd -a root
  11. Create a share directory:

    $ mkdir <share_directory>

    Replace <share_directory> with the share directory name.

  12. Navigate to the share directory and download the Assisted Installer ISO file using the generated wget command.
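
    For example, a sketch assuming the share directory is /root/share and <discovery_iso_url> is the URL from the generated wget command:

    $ cd /root/share
    $ wget -O discovery.iso '<discovery_iso_url>'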

Configuring cluster networking and access

Configure networking and access to allow for remote management of the cluster.

Procedure
  1. Edit /etc/samba/smb.conf to use the following configuration:

    [global]
        log level = 3
        workgroup = SAMBA
        security = user

        passdb backend = tdbsam

        printing = cups
        printcap name = cups
        load printers = yes
        cups options = raw

        server min protocol = NT1
        ntlm auth = yes

    [share]
        comment = ISO Files
        path = /root/share
        browseable = yes
        public = no
        read only = no
        directory mode = 0555
        valid users = root

    Note

    The server min protocol = NT1 and ntlm auth = yes settings allow older SMB1 clients, such as the IPMI virtual media client on the bare-metal nodes, to connect to the share. For a more detailed example of the smb.conf file, see the smb.conf.example file in the same directory.

  2. Save the file.

  3. Verify the new Samba configuration:

    $ testparm
  4. Restart the Samba service:

    $ systemctl restart smb
  5. Verify that the Samba service is running and active:

    $ systemctl status smb
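
    In the output, the Active line should read:

    Active: active (running)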
  6. Configure SSL VPN access to IBM Cloud:

    1. Perform the procedure at Getting started with IBM Cloud Virtual Private Networking in the IBM Cloud documentation.

    2. Download and install the MotionPro SSL VPN client.

    3. Connect to the appropriate IBM Cloud endpoint:

      $ sudo MotionPro --host <vpn_endpoint> --user <vpn_username> --passwd <vpn_password>

      where:

      <vpn_endpoint>

      The appropriate SSL VPN endpoint.

      <vpn_username>

      The SSL VPN user name you configured.

      <vpn_password>

      The SSL VPN password you configured.

      Note

      Connecting to the IBM Cloud SSL VPN disconnects you from any other open VPN connections.

Completing the cluster configuration

Complete the cluster configuration by installing software on the control plane and compute nodes and configuring DNS for external access.

Procedure
  1. For each bare-metal server, perform the following tasks:

    1. Access the server using the IPMI console.

      Note

      The IP address and credentials for IPMI console access are available in the Remote management section for each server.

    2. Mount the Assisted Installer ISO file with the following attributes:

      • Virtual Media: CD-ROM Image

      • Share host: The private IP address of the Bastion server.

      • Path to image: The location of the Assisted Installer ISO file.

      • User: root

      • Password: The root user password you configured.

    3. Click Save and Mount.

    4. Verify the ISO mounted successfully.

    5. Restart the server by selecting Remote Control → Power Control → Reset Server → Perform Action.

  2. Return to the Assisted Installer service.

  3. Select the Install OpenShift Virtualization and Install OpenShift Data Foundation checkboxes in the Assisted Installer options.

  4. Select a role for each host.

    Note

    The cluster consists of 3 control plane and 3 compute nodes.

  5. Wait for the Assisted Installer interface to indicate each node is ready.

  6. Click Next.

  7. Select Cluster Managed Network.

  8. Select the API VIP and Ingress VIP checkboxes to obtain them from DHCP, or leave them unchecked to enter static values.

  9. Click Install.

  10. For each bare-metal server, perform the following tasks:

    1. Access the server using the IPMI console.

      Note

      The IP address and credentials for IPMI console access are available in the Remote management section for each server.

    2. Select Virtual Media → CD-ROM Image.

    3. Click Unmount.

    4. Select Remote Control → Power Control → Reset Server → Perform Action to restart the server.

  11. Locate the Cluster Credentials section of the installation summary.

  12. Perform the following tasks in the Cluster Credentials section:

    1. Download the kubeconfig file.

    2. Save the kubeadmin password.

  13. Install haproxy on the Bastion virtual server instance.
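
    For example, on the CentOS Bastion host:

    $ dnf install haproxy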

  14. Configure haproxy for your environment. The following is an example configuration:

    #---------------------------------------------------------------------
    # Example configuration for a possible web application.  See the
    # full configuration options online.
    #
    #   https://www.haproxy.org/download/1.8/doc/configuration.txt
    #
    #---------------------------------------------------------------------
    
    #---------------------------------------------------------------------
    # Global settings
    #---------------------------------------------------------------------
    global
      # to have these messages end up in /var/log/haproxy.log you will
      # need to:
      #
      # 1) configure syslog to accept network log events.  This is done
      # by adding the '-r' option to the SYSLOGD_OPTIONS in
      # /etc/sysconfig/syslog
      #
      # 2) configure local2 events to go to the /var/log/haproxy.log
      #   file. A line like the following can be added to
      #   /etc/sysconfig/syslog
      #
      # local2.*                    /var/log/haproxy.log
      #
      log       127.0.0.1 local2
    
      chroot    /var/lib/haproxy
      pidfile   /var/run/haproxy.pid
      maxconn   4000
      user      haproxy
      group     haproxy
      daemon
    
      # turn on stats unix socket
      stats socket /var/lib/haproxy/stats
    
      # utilize system-wide crypto-policies
      #ssl-default-bind-ciphers PROFILE=SYSTEM
      #ssl-default-server-ciphers PROFILE=SYSTEM
    
    #---------------------------------------------------------------------
    # common defaults that all the 'listen' and 'backend' sections will
    # use if not designated in their block
    #---------------------------------------------------------------------
    defaults
      mode                  tcp
      log                   global
      option                httplog
      option                dontlognull
      option http-server-close
      option forwardfor     except 127.0.0.0/8
      option                redispatch
      retries               3
      timeout http-request  10s
      timeout queue         1m
      timeout connect       10s
      timeout client        1m
      timeout server        1m
      timeout http-keep-alive 10s
      timeout check         10s
      maxconn               3000
    #---------------------------------------------------------------------
    # main frontends which proxy to the backends
    #---------------------------------------------------------------------
    
    frontend api
      bind <api_ip_address>:<api_port>
      default_backend controlplaneapi
    
    frontend apiinternal
      bind <apiinternal_ip_address>:<apiinternal_port>
      default_backend controlplaneapiinternal
    
    frontend secure
      bind <frontend_secure_ip_address>:<frontend_secure_port>
      default_backend secure
    
    frontend insecure
      bind <frontend_insecure_ip_address>:<frontend_insecure_port>
      default_backend insecure
    
    #---------------------------------------------------------------------
    # static backend
    #---------------------------------------------------------------------
    
    backend controlplaneapi
      balance source
      server api <controlplaneapi_ip_address>:<controlplaneapi_port> check
    
    backend controlplaneapiinternal
      balance source
      server api <controlplaneapiinternal_ip_address>:<controlplaneapiinternal_port> check
    
    backend secure
      balance source
      server ingress <backend_secure_ip_address>:<backend_secure_port> check
    
    backend insecure
      balance source
      server ingress <backend_insecure_ip_address>:<backend_insecure_port> check

    where:

    <api_ip_address>:<api_port>

    The front end IP address and port used by the Kubernetes API server.

    <apiinternal_ip_address>:<apiinternal_port>

    The front end IP address and port used for internal cluster management.

    <frontend_secure_ip_address>:<frontend_secure_port>

    The front end IP address and port used for HTTPS traffic for hosted applications.

    <frontend_insecure_ip_address>:<frontend_insecure_port>

    The front end IP address and port used for HTTP traffic for hosted applications.

    <controlplaneapi_ip_address>:<controlplaneapi_port>

    The back end IP address and port used by the Kubernetes API server.

    <controlplaneapiinternal_ip_address>:<controlplaneapiinternal_port>

    The back end IP address and port used for internal cluster management.

    <backend_secure_ip_address>:<backend_secure_port>

    The back end IP address and port used for HTTPS traffic for hosted applications.

    <backend_insecure_ip_address>:<backend_insecure_port>

    The back end IP address and port used for HTTP traffic for hosted applications.

    Note

    Replace the example values with values applicable to your network configuration.

  15. Save the haproxy configuration.
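
    After saving the configuration, enable and start haproxy, and open the frontend ports in firewalld. The following is a sketch that assumes the standard OpenShift ports 6443, 22623, 443, and 80; use the ports you configured in the frontend sections:

    $ systemctl enable haproxy --now
    $ firewall-cmd --permanent --add-port=6443/tcp --add-port=22623/tcp --add-port=443/tcp --add-port=80/tcp
    $ firewall-cmd --reload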

  16. Configure two DNS address records (A records) for the subdomain that are resolvable externally over the internet:

    <bastion_public_ip_address> api.<cluster_name>.<cluster_domain>
    <bastion_public_ip_address> *.apps.<cluster_name>.<cluster_domain>

    where:

    <bastion_public_ip_address>

    The externally available IP address of the Bastion virtual server instance.

    <cluster_name>

    The name assigned to the cluster.

    <cluster_domain>

    The domain assigned to the cluster.
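
    For example, for a cluster named ocp in the domain example.com with a Bastion public IP address of 192.0.2.10 (illustrative values only):

    192.0.2.10 api.ocp.example.com
    192.0.2.10 *.apps.ocp.example.com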

Verification
  1. Perform the following tasks to verify cluster access by using the command line:

    1. Set your environment with the kubeconfig file:

      $ export KUBECONFIG=<kubeconfig_file_path>

      where:

      <kubeconfig_file_path>

      The path to the downloaded kubeconfig file.

    2. Check cluster node status:

      $ oc get nodes

      Note

      The command output should list all six nodes as Ready in the STATUS column, and the ROLES column should show both control plane and worker roles.

    3. Check the cluster version:

      $ oc get clusterversion

      Note

      The AVAILABLE column in the command output should show True.

  2. Perform the following tasks to verify cluster access using the web console:

    1. Paste the access URL provided by Assisted Installer into your web browser.

      Note

      By default, clusters use self-signed certificates. This may cause your browser to display a message that says Connection not private or a similar warning. You can close this warning and continue.

    2. Navigate to the URL.

    3. Log in to the cluster with the username kubeadmin and the kubeadmin password provided in the Cluster Credentials section.