Installing a cluster on AWS
In OpenShift Container Platform version 4.19, you can install a cluster on Amazon Web Services (AWS) that uses the default configuration options.
Prerequisites
- You reviewed details about the OpenShift Container Platform installation and update processes.
- You read the documentation on selecting a cluster installation method and preparing it for users.
- You configured an AWS account to host the cluster.

Important

If you have an AWS profile stored on your computer, it must not use a temporary session token that you generated while using a multi-factor authentication device. The cluster continues to use your current AWS credentials to create AWS resources for the entire life of the cluster, so you must use key-based, long-term credentials. To generate appropriate keys, see Managing Access Keys for IAM Users in the AWS documentation. You can supply the keys when you run the installation program. For illustration, a sample long-term credentials profile is shown after this list.

- If you use a firewall, you configured it to allow the sites that your cluster requires access to.
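A minimal sketch of what a long-term (non-session) profile in ~/.aws/credentials might look like; the key values are placeholders, not real credentials. The absence of an aws_session_token entry is one way to confirm that a profile does not use temporary session credentials:

$ cat ~/.aws/credentials
[default]
aws_access_key_id = <access_key_id>
aws_secret_access_key = <secret_access_key>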
Deploying the cluster
You can install OpenShift Container Platform on a compatible cloud platform.
Important
You can run the create cluster command of the installation program only once, during initial installation.
Prerequisites

- You have configured an account with the cloud platform that hosts your cluster.
- You have the OpenShift Container Platform installation program and the pull secret for your cluster.
- You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions.
- In the directory that contains the installation program, initialize the cluster deployment by running the following command:

  $ ./openshift-install create cluster --dir <installation_directory> \
      --log-level=info

  - For <installation_directory>, specify the directory name to store the files that the installation program creates.
  - To view different installation details, specify warn, debug, or error instead of info.
  When specifying the directory:

  - Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. A short shell sketch illustrating this check follows this step.
  - Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
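  As a minimal sketch, assuming a new installation directory named ocp-install (a placeholder name), you might prepare and verify the directory like this:

  $ mkdir ocp-install            # start from a new, empty directory
  $ chmod u+x ocp-install        # ensure the execute permission on the directory
  $ ls -ld ocp-install           # confirm the permission bits before running the installer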
- Provide values at the prompts:

  - Optional: Select an SSH key to use to access your cluster machines.

    Note

    For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. A brief example of preparing such a key follows this list.

  - Select aws as the platform to target.
  - If you do not have an Amazon Web Services (AWS) profile stored on your computer, enter the AWS access key ID and secret access key for the user that you configured to run the installation program.

    Note

    The AWS access key ID and secret access key are stored in ~/.aws/credentials in the home directory of the current user on the installation host. The installation program prompts you for the credentials if the credentials for the exported profile are not present in the file. Any credentials that you provide to the installation program are stored in the file.

  - Select the AWS region to deploy the cluster to.
  - Select the base domain for the Route 53 service that you configured for your cluster.
  - Enter a descriptive name for your cluster.
  - Paste the pull secret from Red Hat OpenShift Cluster Manager.
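  For illustration only, one way to generate an SSH key and add it to a running ssh-agent process; the key path and key type are example choices, not requirements:

  $ ssh-keygen -t ed25519 -N '' -f ~/.ssh/ocp_cluster_key   # create a new key pair
  $ eval "$(ssh-agent -s)"                                  # start ssh-agent if it is not running
  $ ssh-add ~/.ssh/ocp_cluster_key                          # load the key into the agent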
- Optional: Remove or disable the AdministratorAccess policy from the IAM account that you used to install the cluster.

  Note

  The elevated permissions provided by the AdministratorAccess policy are required only during installation. One way to detach the policy with the AWS CLI is sketched after this step.
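  For example, assuming the installing IAM user is named ocp-installer (a placeholder), you could detach the AWS managed policy like this:

  $ aws iam detach-user-policy \
      --user-name ocp-installer \
      --policy-arn arn:aws:iam::aws:policy/AdministratorAccess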
When the cluster deployment completes successfully:

- The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user.
- Credential information also outputs to <installation_directory>/.openshift_install.log.
Important
Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster.
Example output

...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com
INFO Login to the console with user: "kubeadmin", and password: "password"
INFO Time elapsed: 36m22s
Important
- The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. A short sketch of approving pending CSRs follows this list.
- It is recommended that you use the Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation.
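As an illustrative sketch, you can list pending CSRs and approve them by name with oc; <csr_name> is a placeholder:

$ oc get csr                              # check for CSRs in Pending status
$ oc adm certificate approve <csr_name>   # approve each pending node-bootstrapper CSR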
Logging in to the cluster by using the CLI
To log in to your cluster as the default system user, export the kubeconfig file. This configuration enables the CLI to authenticate and connect to the specific API server created during OpenShift Container Platform installation.
The kubeconfig file is specific to a cluster and is created during OpenShift Container Platform installation.
Prerequisites

- You deployed an OpenShift Container Platform cluster.
- You installed the OpenShift CLI (oc).
- Export the kubeadmin credentials by running the following command:

  $ export KUBECONFIG=<installation_directory>/auth/kubeconfig

  where:

  <installation_directory>
  Specifies the path to the directory that stores the installation files.

- Verify that you can run oc commands successfully with the exported configuration by running the following command:

  $ oc whoami

  Example output

  system:admin
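Optionally, as a further check that the exported kubeconfig grants cluster access, you might list the cluster nodes:

$ oc get nodes    # lists control plane and compute nodes if authentication succeeds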
Logging in to the cluster by using the web console
The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console.
Prerequisites

- You have access to the installation host.
- You completed a cluster installation and all cluster Operators are available.
- Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host:

  $ cat <installation_directory>/auth/kubeadmin-password

  Note

  Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host.

- List the OpenShift Container Platform web console route:

  $ oc get routes -n openshift-console | grep 'console-openshift'

  Note

  Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host.

  Example output

  console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None

- Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user.
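As an alternative sketch, assuming you are already logged in with oc, the following command prints the web console URL directly:

$ oc whoami --show-console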
Next steps
- If necessary, you can opt out of remote health reporting.
- If necessary, you can remove cloud provider credentials.