Installing the Distributed Tracing Platform

Installing the Distributed Tracing Platform involves the following steps:

- Installing the Tempo Operator.
- Setting up a supported object store and creating a secret for the object store credentials.
- Configuring the permissions and tenants.
- Depending on your use case, installing your choice of deployment:
  - Microservices mode: a TempoStack instance
  - Monolithic mode: a TempoMonolithic instance
Installing the Tempo Operator

You can install the Tempo Operator by using the web console or the command line.

Installing the Tempo Operator by using the web console

You can install the Tempo Operator from the OpenShift Container Platform web console.

Prerequisites

- You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
- You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage. For more information, see "Object storage setup".

  Warning
  Object storage is required and is not included with the Distributed Tracing Platform. You must choose and set up object storage by a supported provider before installing the Distributed Tracing Platform.
Procedure

1. In the web console, search for Tempo Operator.

   Tip
   In OpenShift Container Platform 4.19 or earlier, go to Operators → OperatorHub.
   In OpenShift Container Platform 4.20 or later, go to Ecosystem → Software Catalog.

2. Select the Tempo Operator that is provided by Red Hat.

   Important
   The following selections are the default presets for this Operator:

   - Update channel → stable
   - Installation mode → All namespaces on the cluster
   - Installed Namespace → openshift-tempo-operator
   - Update approval → Automatic

3. Select the Enable Operator recommended cluster monitoring on this Namespace checkbox.

4. Select Install → Install → View Operator.

Verification

- In the Details tab of the page of the installed Operator, under ClusterServiceVersion details, verify that the installation Status is Succeeded.
Installing the Tempo Operator by using the CLI

You can install the Tempo Operator from the command line.

Prerequisites

- An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.

  Tip
  - Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
  - Run oc login:

    $ oc login --username=<your_username>

- You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage. For more information, see "Object storage setup".

  Warning
  Object storage is required and is not included with the Distributed Tracing Platform. You must choose and set up object storage by a supported provider before installing the Distributed Tracing Platform.
Procedure

1. Create a project for the Tempo Operator by running the following command:

   $ oc apply -f - << EOF
   apiVersion: project.openshift.io/v1
   kind: Project
   metadata:
     labels:
       kubernetes.io/metadata.name: openshift-tempo-operator
       openshift.io/cluster-monitoring: "true"
     name: openshift-tempo-operator
   EOF

2. Create an Operator group by running the following command:

   $ oc apply -f - << EOF
   apiVersion: operators.coreos.com/v1
   kind: OperatorGroup
   metadata:
     name: openshift-tempo-operator
     namespace: openshift-tempo-operator
   spec:
     upgradeStrategy: Default
   EOF

3. Create a subscription by running the following command:

   $ oc apply -f - << EOF
   apiVersion: operators.coreos.com/v1alpha1
   kind: Subscription
   metadata:
     name: tempo-product
     namespace: openshift-tempo-operator
   spec:
     channel: stable
     installPlanApproval: Automatic
     name: tempo-product
     source: redhat-operators
     sourceNamespace: openshift-marketplace
   EOF

Verification

- Check the Operator status by running the following command:

  $ oc get csv -n openshift-tempo-operator
Object storage setup

You can use the following configuration parameters when setting up a supported object storage.

Important
Using object storage requires setting up a supported object store and creating a secret for the object store credentials before deploying a TempoStack or TempoMonolithic instance.

| Storage provider | Secret parameters |
|---|---|
| Red Hat OpenShift Data Foundation | Use the S3-compatible secret keys: bucket, endpoint, access_key_id, access_key_secret. |
| MinIO | See MinIO Operator. The secret keys are bucket, endpoint, access_key_id, access_key_secret. |
| IBM Cloud Object Storage (COS) | bucket, endpoint, access_key_id, access_key_secret. See "Setting up IBM Cloud Object Storage". |
| Amazon S3 | bucket, endpoint, access_key_id, access_key_secret. |
| Amazon S3 with Security Token Service (STS) | bucket, region, role_arn. See "Setting up the Amazon S3 storage with the Security Token Service". |
| Microsoft Azure Blob Storage | See "Setting up the Azure storage with the Security Token Service". |
| Google Cloud Storage on Google Cloud | bucketname, audience, key.json. See "Setting up the Google Cloud storage with the Security Token Service". |
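For an S3-compatible store that uses static credentials, such as MinIO or Red Hat OpenShift Data Foundation, the secret carries the endpoint, bucket, and access keys. The following is a minimal sketch; the secret name, endpoint, and credential values are placeholders that you must replace with your own:

```yaml
# Sketch of an object storage secret for an S3-compatible provider.
# All values below are placeholders, not working credentials.
apiVersion: v1
kind: Secret
metadata:
  name: tempo-object-storage        # referenced later from the TempoStack CR
stringData:
  endpoint: http://minio.minio.svc:9000   # your object store endpoint
  bucket: tempo                            # an existing bucket
  access_key_id: <access_key_id>
  access_key_secret: <access_key_secret>
type: Opaque
```

Create the secret in the same project where you plan to deploy the TempoStack or TempoMonolithic instance.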
Setting up the Amazon S3 storage with the Security Token Service

You can set up the Amazon S3 storage with the Security Token Service (STS) and the AWS Command Line Interface (AWS CLI). Optionally, you can also use the Cloud Credential Operator (CCO).

Important
Using the Distributed Tracing Platform with the Amazon S3 storage and STS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

- You have installed the latest version of the AWS CLI.
- If you intend to use the CCO, you have installed and configured the CCO in your cluster.
Procedure

1. Create an AWS S3 bucket.

2. Create the following trust.json file for the AWS Identity and Access Management (AWS IAM) policy for the purpose of setting up a trust relationship between the AWS IAM role, which you will create in the next step, and the service account of either the TempoStack or TempoMonolithic instance:

   trust.json
   {
     "Version": "2012-10-17",
     "Statement": [
       {
         "Effect": "Allow",
         "Principal": {
           "Federated": "arn:aws:iam::<aws_account_id>:oidc-provider/<oidc_provider>"
         },
         "Action": "sts:AssumeRoleWithWebIdentity",
         "Condition": {
           "StringEquals": {
             "<oidc_provider>:sub": [
               "system:serviceaccount:<openshift_project_for_tempo>:tempo-<tempo_custom_resource_name>",
               "system:serviceaccount:<openshift_project_for_tempo>:tempo-<tempo_custom_resource_name>-query-frontend"
             ]
           }
         }
       }
     ]
   }

   - <oidc_provider>: the OpenID Connect (OIDC) provider that you have configured on the OpenShift Container Platform.
   - <openshift_project_for_tempo>: the namespace in which you intend to create either a TempoStack or TempoMonolithic instance. Replace <tempo_custom_resource_name> with the metadata name that you define in your TempoStack or TempoMonolithic custom resource.

   Tip
   You can also get the value for the OIDC provider by running the following command:

   $ oc get authentication cluster -o json | jq -r '.spec.serviceAccountIssuer' | sed 's~http[s]*://~~g'
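The trust policy above can be generated from shell variables and validated locally before you pass it to the AWS CLI. This is a hypothetical helper, not part of the product: every variable value below is a placeholder assumption, and JSON validation is done with Python's json.tool.

```shell
# Hypothetical helper: fill in the trust policy from shell variables and
# confirm that the result is valid JSON before role creation.
# All values below are placeholders (assumptions), not real IDs.
AWS_ACCOUNT_ID="123456789012"
OIDC_PROVIDER="oidc.example.com/abcd1234"
TEMPO_NS="tempo-project"
TEMPO_NAME="sample"

cat > /tmp/trust.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": [
            "system:serviceaccount:${TEMPO_NS}:tempo-${TEMPO_NAME}",
            "system:serviceaccount:${TEMPO_NS}:tempo-${TEMPO_NAME}-query-frontend"
          ]
        }
      }
    }
  ]
}
EOF

# Fail early on malformed JSON instead of at role-creation time.
python3 -m json.tool /tmp/trust.json > /dev/null && echo "trust.json OK"
```

Failing here is much cheaper than debugging a rejected aws iam create-role call later.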
3. Create an AWS IAM role by attaching the created trust.json policy file. You can do this by running the following command:

   $ aws iam create-role \
       --role-name "tempo-s3-access" \
       --assume-role-policy-document "file:///tmp/trust.json" \
       --query Role.Arn \
       --output text

4. Attach an AWS IAM policy to the created AWS IAM role. You can do this by running the following command:

   $ aws iam attach-role-policy \
       --role-name "tempo-s3-access" \
       --policy-arn "arn:aws:iam::aws:policy/AmazonS3FullAccess"

5. If you are not using the CCO, skip this step. If you are using the CCO, configure the cloud provider environment for the Tempo Operator by running the following command:

   $ oc patch subscription <tempo_operator_sub> \
       -n <tempo_operator_namespace> \
       --type='merge' -p '{"spec": {"config": {"env": [{"name": "ROLEARN", "value": "<role_arn>"}]}}}'

   - <tempo_operator_sub>: the name of the Tempo Operator subscription.
   - <tempo_operator_namespace>: the namespace of the Tempo Operator.
   - <role_arn>: AWS STS requires adding the ROLEARN environment variable to the Tempo Operator subscription. As the <role_arn> value, add the Amazon Resource Name (ARN) of the AWS IAM role that you created in step 3.
6. In the OpenShift Container Platform, create an object storage secret with keys as follows:

   apiVersion: v1
   kind: Secret
   metadata:
     name: <secret_name>
   stringData:
     bucket: <s3_bucket_name>
     region: <s3_region>
     role_arn: <s3_role_arn>
   type: Opaque

7. When the object storage secret is created, update the relevant custom resource of the Distributed Tracing Platform instance as follows:

   Example TempoStack custom resource

   apiVersion: tempo.grafana.com/v1alpha1
   kind: TempoStack
   metadata:
     name: <name>
     namespace: <namespace>
   spec:
   # ...
     storage:
       secret:
         name: <secret_name>
         type: s3
         credentialMode: token-cco
   # ...

   - <secret_name>: the secret that you created in the previous step.
   - credentialMode: if you are not using the CCO, omit this line. If you are using the CCO, add this parameter with the token-cco value.

   Example TempoMonolithic custom resource

   apiVersion: tempo.grafana.com/v1alpha1
   kind: TempoMonolithic
   metadata:
     name: <name>
     namespace: <namespace>
   spec:
   # ...
     storage:
       traces:
         backend: s3
         s3:
           secret: <secret_name>
           credentialMode: token-cco
   # ...

   - <secret_name>: the secret that you created in the previous step.
   - credentialMode: if you are not using the CCO, omit this line. If you are using the CCO, add this parameter with the token-cco value.
Additional resources

- AWS Identity and Access Management Documentation (AWS documentation)
- AWS Command Line Interface Documentation (AWS documentation)
- Identify AWS resources with Amazon Resource Names (ARNs) (AWS documentation)
Setting up the Azure storage with the Security Token Service

You can set up the Azure storage with the Security Token Service (STS) by using the Azure Command Line Interface (Azure CLI).

Important
Using the Distributed Tracing Platform with the Azure storage and STS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

- You have installed the latest version of the Azure CLI.
- You have created an Azure storage account.
- You have created an Azure blob storage container.
Procedure

1. Create an Azure managed identity by running the following command:

   $ az identity create \
       --name <identity_name> \
       --resource-group <resource_group> \
       --location <region> \
       --subscription <subscription_id>

   - <identity_name>: the name you have chosen for the managed identity.
   - <resource_group>: the Azure resource group where you want the identity to be created.
   - <region>: the Azure region, which must be the same region as for the resource group.
   - <subscription_id>: the Azure subscription ID.

2. Create a federated identity credential for the OpenShift Container Platform service account for use by all components of the Distributed Tracing Platform except the Query Frontend. Federated identity credentials allow OpenShift Container Platform service accounts to authenticate as an Azure managed identity without storing secrets or using an Azure service principal identity. You can create the credential by running the following command:

   $ az identity federated-credential create \
       --name <credential_name> \
       --identity-name <identity_name> \
       --resource-group <resource_group> \
       --issuer <oidc_provider> \
       --subject <tempo_service_account_subject> \
       --audiences <audience>

   - <credential_name>: the name you have chosen for the federated credential.
   - <oidc_provider>: the URL of the OpenID Connect (OIDC) provider for your cluster.
   - <tempo_service_account_subject>: the service account subject for your cluster in the following format: system:serviceaccount:<namespace>:tempo-<tempostack_instance_name>.
   - <audience>: the expected audience, which is used for validating the issued tokens for the federated identity credential. This is commonly set to api://AzureADTokenExchange.

   Tip
   You can get the URL of the OpenID Connect (OIDC) issuer for your cluster by running the following command:

   $ oc get authentication cluster -o json | jq -r .spec.serviceAccountIssuer
3. Create a federated identity credential for the OpenShift Container Platform service account for use by the Query Frontend component of the Distributed Tracing Platform. You can do this by running the following command:

   $ az identity federated-credential create \
       --name <credential_name>-frontend \
       --identity-name <identity_name> \
       --resource-group <resource_group> \
       --issuer <cluster_issuer> \
       --subject <tempo_service_account_query_frontend_subject> \
       --audiences <audience> | jq

   - <credential_name>-frontend: the name you have chosen for the frontend federated identity credential.
   - <tempo_service_account_query_frontend_subject>: the Query Frontend service account subject for your cluster in the following format: system:serviceaccount:<namespace>:tempo-<tempostack_instance_name>-query-frontend.

4. Assign the Storage Blob Data Contributor role to the Azure service principal identity of the created Azure managed identity. You can do this by running the following command:

   $ az role assignment create \
       --assignee <assignee_name> \
       --role "Storage Blob Data Contributor" \
       --scope "/subscriptions/<subscription_id>"

   - <assignee_name>: the Azure service principal identity of the Azure managed identity that you created in step 1.

   Tip
   You can get the <assignee_name> value by running the following command:

   $ az ad sp list --all --filter "servicePrincipalType eq 'ManagedIdentity'" | jq -r --arg idName <identity_name> '.[] | select(.displayName == $idName) | .appId'
5. Fetch the client ID of the Azure managed identity that you created in step 1:

   $ CLIENT_ID=$(az identity show \
       --name <identity_name> \
       --resource-group <resource_group> \
       --query clientId \
       -o tsv)

   - <identity_name>: copy and paste the value from step 1.
   - <resource_group>: copy and paste the value from step 1.

6. Create an OpenShift Container Platform secret for the Azure workload identity federation (WIF). You can do this by running the following command:

   $ oc create -n <tempo_namespace> secret generic azure-secret \
       --from-literal=container=<azure_storage_azure_container> \
       --from-literal=account_name=<azure_storage_azure_accountname> \
       --from-literal=client_id=<client_id> \
       --from-literal=audience=<audience> \
       --from-literal=tenant_id=<tenant_id>

   - <azure_storage_azure_container>: the name of the Azure Blob Storage container.
   - <azure_storage_azure_accountname>: the name of the Azure Storage account.
   - <client_id>: the client ID of the managed identity that you fetched in the previous step.
   - <audience>: optional; defaults to api://AzureADTokenExchange.
   - <tenant_id>: the Azure tenant ID.
7. When the object storage secret is created, update the relevant custom resource of the Distributed Tracing Platform instance as follows:

   Example TempoStack custom resource

   apiVersion: tempo.grafana.com/v1alpha1
   kind: TempoStack
   metadata:
     name: <name>
     namespace: <namespace>
   spec:
   # ...
     storage:
       secret:
         name: <secret_name>
         type: azure
   # ...

   Example TempoMonolithic custom resource

   apiVersion: tempo.grafana.com/v1alpha1
   kind: TempoMonolithic
   metadata:
     name: <name>
     namespace: <namespace>
   spec:
   # ...
     storage:
       traces:
         backend: azure
         azure:
           secret: <secret_name>
   # ...

   - <secret_name>: in both examples, the secret that you created in the previous step.
Additional resources

- Install the Azure CLI on Linux (Azure documentation)
Setting up the Google Cloud storage with the Security Token Service

You can set up Google Cloud Storage (GCS) with the Security Token Service (STS) by using the Google Cloud CLI.

Important
Using the Distributed Tracing Platform with the GCS and STS is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

- You have installed the latest version of the Google Cloud CLI.

Procedure

1. Create a GCS bucket on the Google Cloud.
2. Create or reuse a service account with Google's Identity and Access Management (IAM):

   $ SERVICE_ACCOUNT_EMAIL=$(gcloud iam service-accounts create <iam_service_account_name> \
       --display-name="Tempo Account" \
       --project <project_id> \
       --format='value(email)' \
       --quiet)

   - <iam_service_account_name>: the name of the service account on the Google Cloud.
   - <project_id>: the project ID of the service account on the Google Cloud.

3. Bind the required Google Cloud roles to the created service account at the project level. You can do this by running the following command:

   $ gcloud projects add-iam-policy-binding <project_id> \
       --member "serviceAccount:$SERVICE_ACCOUNT_EMAIL" \
       --role "roles/storage.objectAdmin"

4. Retrieve the POOL_ID value of the Google Cloud Workload Identity Pool that is associated with the cluster. How you can retrieve this value depends on your environment, so the following command is only an example:

   $ OIDC_ISSUER=$(oc get authentication.config cluster -o jsonpath='{.spec.serviceAccountIssuer}') \
     && POOL_ID=$(echo "$OIDC_ISSUER" | awk -F'/' '{print $NF}' | sed 's/-oidc$//')
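Because the extraction depends on your issuer URL format, you can sanity-check the awk/sed pipeline offline against a sample value first. The issuer URL below is a fabricated example, not a real pool:

```shell
# Offline check of the pool-ID extraction used in the step above.
# The issuer URL is a made-up example for illustration only.
OIDC_ISSUER="https://storage.googleapis.com/my-cluster-pool-oidc"
POOL_ID=$(echo "$OIDC_ISSUER" | awk -F'/' '{print $NF}' | sed 's/-oidc$//')
echo "$POOL_ID"   # prints "my-cluster-pool"
```

If your cluster's issuer does not end in "-oidc", adjust the sed expression accordingly before using the real value.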
5. Add the IAM policy bindings. You can do this by running the following commands:

   $ gcloud iam service-accounts add-iam-policy-binding "$SERVICE_ACCOUNT_EMAIL" \
       --role="roles/iam.workloadIdentityUser" \
       --member="principal://iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/subject/system:serviceaccount:<tempo_namespace>:tempo-<tempo_name>" \
       --project=<project_id> \
       --quiet \
     && gcloud iam service-accounts add-iam-policy-binding "$SERVICE_ACCOUNT_EMAIL" \
       --role="roles/iam.workloadIdentityUser" \
       --member="principal://iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/subject/system:serviceaccount:<tempo_namespace>:tempo-<tempo_name>-query-frontend" \
       --project=<project_id> \
       --quiet \
     && gcloud storage buckets add-iam-policy-binding "gs://$BUCKET_NAME" \
       --role="roles/storage.admin" \
       --member="serviceAccount:$SERVICE_ACCOUNT_EMAIL" \
       --condition=None

   - $SERVICE_ACCOUNT_EMAIL is the output of the command in step 2.
   - $BUCKET_NAME is the name of the bucket that you created in step 1.
6. Create a credential file for the key.json key of the storage secret for use by the TempoStack custom resource. You can do this by running the following command:

   $ gcloud iam workload-identity-pools create-cred-config \
       "projects/<project_number>/locations/global/workloadIdentityPools/<pool_id>/providers/<provider_id>" \
       --service-account="$SERVICE_ACCOUNT_EMAIL" \
       --credential-source-file=/var/run/secrets/storage/serviceaccount/token \
       --credential-source-type=text \
       --output-file=<output_file_path>

   - The credential-source-file parameter must always point to the /var/run/secrets/storage/serviceaccount/token path because the Operator mounts the token from this path.
   - <output_file_path>: the path for saving the output file.

7. Get the correct audience by running the following command:

   $ gcloud iam workload-identity-pools providers describe "$PROVIDER_NAME" --format='value(oidc.allowedAudiences[0])'

8. Create a storage secret for the Distributed Tracing Platform by running the following command:

   $ oc -n <tempo_namespace> create secret generic gcs-secret \
       --from-literal=bucketname="<bucket_name>" \
       --from-literal=audience="<audience>" \
       --from-file=key.json=<output_file_path>

   - <bucket_name>: the bucket name of the Google Cloud Storage.
   - <audience>: the audience that you got in the previous step.
   - <output_file_path>: the credential file that you created in step 6.
9. When the object storage secret is created, update the relevant custom resource of the Distributed Tracing Platform instance as follows:

   Example TempoStack custom resource

   apiVersion: tempo.grafana.com/v1alpha1
   kind: TempoStack
   metadata:
     name: <name>
     namespace: <namespace>
   spec:
   # ...
     storage:
       secret:
         name: <secret_name>
         type: gcs
   # ...

   Example TempoMonolithic custom resource

   apiVersion: tempo.grafana.com/v1alpha1
   kind: TempoMonolithic
   metadata:
     name: <name>
     namespace: <namespace>
   spec:
   # ...
     storage:
       traces:
         backend: gcs
         gcs:
           secret: <secret_name>
   # ...

   - <secret_name>: in both examples, the secret that you created in the previous step.
Additional resources

- Install the gcloud CLI (Google Cloud Documentation)
- Service accounts overview (Google Cloud Documentation)
Setting up IBM Cloud Object Storage

You can set up IBM Cloud Object Storage by using the OpenShift CLI (oc).

Prerequisites

- You have installed the latest version of the OpenShift CLI (oc). For more information, see "Getting started with the OpenShift CLI" in Configure: CLI tools.
- You have installed the latest version of the IBM Cloud Command Line Interface (ibmcloud). For more information, see "Getting started with the IBM Cloud CLI" in IBM Cloud Docs.
- You have configured IBM Cloud Object Storage. For more information, see "Choosing a plan and creating an instance" in IBM Cloud Docs.
- You have an IBM Cloud Platform account.
- You have ordered an IBM Cloud Object Storage plan.
- You have created an instance of IBM Cloud Object Storage.

Procedure
1. On IBM Cloud, create an object store bucket.

2. On IBM Cloud, create a service key for connecting to the object store bucket by running the following command:

   $ ibmcloud resource service-key-create <ibm_bucket_name> Writer \
       --instance-name <ibm_bucket_name> --parameters '{"HMAC":true}'

3. On OpenShift Container Platform, create a secret with the bucket credentials by running the following command:

   $ oc -n <namespace> create secret generic <ibm_cos_secret> \
       --from-literal=bucket="<ibm_bucket_name>" \
       --from-literal=endpoint="<ibm_bucket_endpoint>" \
       --from-literal=access_key_id="<ibm_bucket_access_key>" \
       --from-literal=access_key_secret="<ibm_bucket_secret_key>"

   Alternatively, on OpenShift Container Platform, you can create the object storage secret with keys as follows:

   apiVersion: v1
   kind: Secret
   metadata:
     name: <ibm_cos_secret>
   stringData:
     bucket: <ibm_bucket_name>
     endpoint: <ibm_bucket_endpoint>
     access_key_id: <ibm_bucket_access_key>
     access_key_secret: <ibm_bucket_secret_key>
   type: Opaque

4. On OpenShift Container Platform, set the storage section in the TempoStack custom resource as follows:

   apiVersion: tempo.grafana.com/v1alpha1
   kind: TempoStack
   # ...
   spec:
   # ...
     storage:
       secret:
         name: <ibm_cos_secret>
         type: s3
   # ...

   - <ibm_cos_secret>: the name of the secret that contains the IBM Cloud Object Storage access and secret keys.
Additional resources

- Getting started with the IBM Cloud CLI (IBM Cloud Docs)
- Choosing a plan and creating an instance (IBM Cloud Docs)
- Getting started with IBM Cloud Object Storage: Before you begin (IBM Cloud Docs)
Configuring the permissions and tenants

Before installing a TempoStack or TempoMonolithic instance, you must define one or more tenants and configure their read and write access. You can configure such an authorization setup by using a cluster role and cluster role binding for the Kubernetes Role-Based Access Control (RBAC). By default, no users are granted read or write permissions. For more information, see "Configuring the read permissions for tenants" and "Configuring the write permissions for tenants".

Note
The OpenTelemetry Collector of the Red Hat build of OpenTelemetry can send trace data to a TempoStack or TempoMonolithic instance by using the service account with RBAC for writing the data.

| Component | Tempo Gateway service | OpenShift OAuth | TokenReview API | SubjectAccessReview API |
|---|---|---|---|---|
| Authentication | X | X | X | |
| Authorization | X | | | X |
Configuring the read permissions for tenants

You can configure the read permissions for tenants from the Administrator view of the web console or from the command line.

Prerequisites

- You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
Procedure

1. Define the tenants by adding the tenantName and tenantId parameters with your values of choice to the TempoStack custom resource (CR):

   Tenant example in a TempoStack CR

   apiVersion: tempo.grafana.com/v1alpha1
   kind: TempoStack
   metadata:
     name: redmetrics
   spec:
   # ...
     tenants:
       mode: openshift
       authentication:
         - tenantName: dev
           tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa"
   # ...

   - tenantName: a value of the user's choice.
   - tenantId: a value of the user's choice.
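The tenantId only needs to be a value that stays unique for the lifetime of the deployment. One convenient way to produce one (an assumption of this sketch, not a requirement of the Operator) is to generate a random UUID:

```shell
# Generate a random UUID to use as a tenantId value.
python3 -c "import uuid; print(uuid.uuid4())"
```

Paste the printed value into the tenantId field of the custom resource.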
2. Add the tenants to a cluster role with the read (get) permissions to read traces.

   Example RBAC configuration in a ClusterRole resource

   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     name: tempostack-traces-reader
   rules:
     - apiGroups:
         - 'tempo.grafana.com'
       resources:
         - dev
         - prod
       resourceNames:
         - traces
       verbs:
         - 'get'

   - resources: lists the tenants, dev and prod in this example, which are defined by using the tenantName parameter in the previous step.
   - verbs: enables the read operation for the listed tenants.

3. Grant authenticated users the read permissions for trace data by defining a cluster role binding for the cluster role from the previous step.

   Example RBAC configuration in a ClusterRoleBinding resource

   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: tempostack-traces-reader
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: tempostack-traces-reader
   subjects:
     - kind: Group
       apiGroup: rbac.authorization.k8s.io
       name: system:authenticated

   - The system:authenticated group grants all authenticated users the read permissions for trace data.
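Binding to system:authenticated grants every logged-in user read access to the listed tenants. If that is broader than you want, the same cluster role can instead be bound to a narrower subject; this is a sketch, and the group name trace-readers is a hypothetical example of a pre-existing user group:

```yaml
# Alternative, more restrictive binding (sketch).
# "trace-readers" is a hypothetical, pre-existing user group.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tempostack-traces-reader-team
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: tempostack-traces-reader
subjects:
  - kind: Group
    apiGroup: rbac.authorization.k8s.io
    name: trace-readers
```

The roleRef reuses the tempostack-traces-reader cluster role defined in the previous step; only the subjects list changes.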
Configuring the write permissions for tenants

You can configure the write permissions for tenants from the Administrator view of the web console or from the command line.

Prerequisites

- You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
- You have installed the OpenTelemetry Collector and configured it to use an authorized service account with permissions. For more information, see "Creating the required RBAC resources automatically" in the Red Hat build of OpenTelemetry documentation.

Procedure

1. Create a service account for use with the OpenTelemetry Collector.

   apiVersion: v1
   kind: ServiceAccount
   metadata:
     name: otel-collector
     namespace: <project_of_opentelemetry_collector_instance>

2. Add the tenants to a cluster role with the write (create) permissions to write traces.

   Example RBAC configuration in a ClusterRole resource

   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     name: tempostack-traces-write
   rules:
     - apiGroups:
         - 'tempo.grafana.com'
       resources:
         - dev
       resourceNames:
         - traces
       verbs:
         - 'create'

   - resources: lists the tenants.
   - verbs: enables the write operation.

3. Grant the OpenTelemetry Collector the write permissions by defining a cluster role binding to attach the OpenTelemetry Collector service account.

   Example RBAC configuration in a ClusterRoleBinding resource

   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: tempostack-traces
   roleRef:
     apiGroup: rbac.authorization.k8s.io
     kind: ClusterRole
     name: tempostack-traces-write
   subjects:
     - kind: ServiceAccount
       name: otel-collector
       namespace: otel

   - The subject is the service account that you created in a previous step. The client uses it when exporting trace data.
4. Configure the OpenTelemetryCollector custom resource as follows:

   - Add the bearertokenauth extension and a valid token to the tracing pipeline service.
   - Add the tenant name in the otlp/otlphttp exporters as the X-Scope-OrgID header.
   - Enable TLS with a valid certificate authority file.

   Sample OpenTelemetry CR configuration

   apiVersion: opentelemetry.io/v1beta1
   kind: OpenTelemetryCollector
   metadata:
     name: cluster-collector
     namespace: <project_of_tempostack_instance>
   spec:
     mode: deployment
     serviceAccount: otel-collector
     config:
       extensions:
         bearertokenauth:
           filename: "/var/run/secrets/kubernetes.io/serviceaccount/token"
       exporters:
         otlp/dev:
           endpoint: sample-gateway.tempo.svc.cluster.local:8090
           tls:
             insecure: false
             ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
           auth:
             authenticator: bearertokenauth
           headers:
             X-Scope-OrgID: "dev"
         otlphttp/dev:
           endpoint: https://sample-gateway.<project_of_tempostack_instance>.svc.cluster.local:8080/api/traces/v1/dev
           tls:
             insecure: false
             ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt"
           auth:
             authenticator: bearertokenauth
           headers:
             X-Scope-OrgID: "dev"
       service:
         extensions: [bearertokenauth]
         pipelines:
           traces:
             exporters: [otlp/dev]
   # ...

   - serviceAccount: the service account configured with the write permissions.
   - bearertokenauth: the Bearer Token extension that uses the service account token. The client sends the token to the tracing pipeline service as the bearer token header.
   - Specify either the OTLP gRPC Exporter (otlp/dev) or the OTLP HTTP Exporter (otlphttp/dev).
   - tls: enables TLS with a valid service CA file.
   - X-Scope-OrgID: the header with the tenant name.
   - service.pipelines.traces.exporters: the exporter that you specified in the exporters section of the CR.
Installing a TempoStack instance

You can install a TempoStack instance by using the web console or the command line.

Installing a TempoStack instance by using the web console

You can install a TempoStack instance from the Administrator view of the web console.

Prerequisites

- You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role.
- For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
- You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage. For more information, see "Object storage setup".

  Warning
  Object storage is required and is not included with the Distributed Tracing Platform. You must choose and set up object storage by a supported provider before installing the Distributed Tracing Platform.

- You have defined one or more tenants and configured the read and write permissions. For more information, see "Configuring the read permissions for tenants" and "Configuring the write permissions for tenants".

Procedure

1. Go to Home → Projects → Create Project to create a permitted project of your choice for the TempoStack instance that you will create in a subsequent step. Project names beginning with the openshift- prefix are not permitted.

2. Go to Workloads → Secrets → Create → From YAML to create a secret for your object storage bucket in the project that you created for the TempoStack instance. For more information, see "Object storage setup".

   Example secret for Amazon S3 and MinIO storage

   apiVersion: v1
   kind: Secret
   metadata:
     name: minio-test
   stringData:
     endpoint: http://minio.minio.svc:9000
     bucket: tempo
     access_key_id: tempo
     access_key_secret: <secret>
   type: Opaque
Create a
TempoStackinstance.Note
You can create multiple
TempoStackinstances in separate projects on the same cluster.-
Go to Ecosystem → Installed Operators.
-
Select TempoStack → Create TempoStack → YAML view.
-
In the YAML view, customize the TempoStack custom resource (CR):

Example TempoStack CR for AWS S3 and MinIO storage and two tenants

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
  namespace: <permitted_project_of_tempostack_instance>
spec:
  storage:
    secret:
      name: <secret_name>
      type: <secret_provider>
  storageSize: <value>Gi
  resources:
    total:
      limits:
        memory: 2Gi
        cpu: 2000m
  tenants:
    mode: openshift
    authentication:
      - tenantName: dev
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa"
      - tenantName: prod
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb"
  template:
    gateway:
      enabled: true
    queryFrontend:
      jaegerQuery:
        enabled: true

- This CR creates a TempoStack deployment, which is configured to receive Jaeger Thrift over HTTP and the OpenTelemetry Protocol (OTLP).
- The project that you have chosen for the TempoStack deployment. Project names beginning with the openshift- prefix are not permitted.
- Red Hat supports only the custom resource options that are available in the Red Hat OpenShift Distributed Tracing Platform documentation.
- Specifies the storage for storing traces.
- The secret that you created in step 2 for the object storage that was set up as one of the prerequisites.
- The value of the name field in the metadata section of the secret. For example: minio.
- The accepted values are azure for Azure Blob Storage, gcs for Google Cloud Storage, and s3 for Amazon S3, MinIO, or Red Hat OpenShift Data Foundation. For example: s3.
- The size of the persistent volume claim for the Tempo write-ahead log (WAL). The default is 10Gi. For example: 1Gi.
- Optional.
- The value must be openshift.
- The list of tenants.
- The tenant name, which is used as the value for the X-Scope-OrgID HTTP header.
- The unique identifier of the tenant, which must remain unique throughout the lifecycle of the TempoStack deployment. The Distributed Tracing Platform uses this ID to prefix objects in the object storage. You can reuse the value of the UUID or the tempoName field.
- Enables a gateway that performs authentication and authorization.
- Exposes the Jaeger UI, which visualizes the data, via a route at http://<gateway_ingress>/api/traces/v1/<tenant_name>/search.
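Each tenantId in the CR must be a UUID that remains stable for the lifetime of the deployment. One way to generate fresh values is sketched below; it assumes a Linux host where /proc is available, and any other UUID generator works equally well:

```shell
# Generate one unique tenant ID per tenant in the TempoStack CR.
for tenant in dev prod; do
  echo "${tenant}: $(cat /proc/sys/kernel/random/uuid)"
done
```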
-
Select Create.
-
-
Use the Project: dropdown list to select the project of the TempoStack instance.
-
Go to Ecosystem → Installed Operators to verify that the Status of the TempoStack instance is Condition: Ready.
-
Go to Workloads → Pods to verify that all the component pods of the TempoStack instance are running.
-
Access the Tempo console:
-
Go to Networking → Routes and press Ctrl+F to search for tempo.
-
In the Location column, open the URL to access the Tempo console.
Note
The Tempo console initially shows no trace data following installation.
-
Installing a TempoStack instance by using the CLI
You can install a TempoStack instance from the command line.
-
An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
Tip
-
Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
-
Run the oc login command:

$ oc login --username=<your_username>
-
-
You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, Google Cloud Storage. For more information, see "Object storage setup".
Warning
Object storage is required and not included with the Distributed Tracing Platform. You must choose and set up object storage by a supported provider before installing the Distributed Tracing Platform.
-
You have defined one or more tenants and configured the read and write permissions. For more information, see "Configuring the read permissions for tenants" and "Configuring the write permissions for tenants".
-
Run the following command to create a permitted project of your choice for the TempoStack instance that you will create in a subsequent step:

$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: <permitted_project_of_tempostack_instance>
EOF

- Project names beginning with the openshift- prefix are not permitted.
- Project names beginning with the
-
In the project that you created for the TempoStack instance, create a secret for your object storage bucket by running the following command:

$ oc apply -f - << EOF
<object_storage_secret>
EOF

For more information, see "Object storage setup".

Example secret for Amazon S3 and MinIO storage

apiVersion: v1
kind: Secret
metadata:
  name: minio-test
stringData:
  endpoint: http://minio.minio.svc:9000
  bucket: tempo
  access_key_id: tempo
  access_key_secret: <secret>
type: Opaque
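The secret manifest can also be generated from shell variables and piped straight to oc apply. The following sketch reuses the example MinIO values from above; the variable names are illustrative, and the final oc apply step is left to you:

```shell
# Build the object storage secret from variables; pipe the output to
# "oc apply -f -" to create it in the current project.
SECRET_NAME=minio-test
ENDPOINT=http://minio.minio.svc:9000
BUCKET=tempo
cat << EOF
apiVersion: v1
kind: Secret
metadata:
  name: ${SECRET_NAME}
stringData:
  endpoint: ${ENDPOINT}
  bucket: ${BUCKET}
  access_key_id: tempo
  access_key_secret: <secret>
type: Opaque
EOF
```

Append `| oc apply -f -` to the cat command to create the secret directly instead of printing it.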
Create a TempoStack instance in the project that you created for it:

Note

You can create multiple TempoStack instances in separate projects on the same cluster.
-
Customize the TempoStack custom resource (CR):

Example TempoStack CR for AWS S3 and MinIO storage and two tenants

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoStack
metadata:
  name: simplest
  namespace: <permitted_project_of_tempostack_instance>
spec:
  storage:
    secret:
      name: <secret_name>
      type: <secret_provider>
  storageSize: <value>Gi
  resources:
    total:
      limits:
        memory: 2Gi
        cpu: 2000m
  tenants:
    mode: openshift
    authentication:
      - tenantName: dev
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa"
      - tenantName: prod
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb"
  template:
    gateway:
      enabled: true
    queryFrontend:
      jaegerQuery:
        enabled: true

- This CR creates a TempoStack deployment, which is configured to receive Jaeger Thrift over HTTP and the OpenTelemetry Protocol (OTLP).
- The project that you have chosen for the TempoStack deployment. Project names beginning with the openshift- prefix are not permitted.
- Red Hat supports only the custom resource options that are available in the Red Hat OpenShift Distributed Tracing Platform documentation.
- Specifies the storage for storing traces.
- The secret that you created in step 2 for the object storage that was set up as one of the prerequisites.
- The value of the name field in the metadata section of the secret. For example: minio.
- The accepted values are azure for Azure Blob Storage, gcs for Google Cloud Storage, and s3 for Amazon S3, MinIO, or Red Hat OpenShift Data Foundation. For example: s3.
- The size of the persistent volume claim for the Tempo write-ahead log (WAL). The default is 10Gi. For example: 1Gi.
- Optional.
- The value must be openshift.
- The list of tenants.
- The tenant name, which is used as the value for the X-Scope-OrgID HTTP header.
- The unique identifier of the tenant, which must remain unique throughout the lifecycle of the TempoStack deployment. The Distributed Tracing Platform uses this ID to prefix objects in the object storage. You can reuse the value of the UUID or the tempoName field.
- Enables a gateway that performs authentication and authorization.
- Exposes the Jaeger UI, which visualizes the data, via a route at http://<gateway_ingress>/api/traces/v1/<tenant_name>/search.
-
Apply the customized CR by running the following command:

$ oc apply -f - << EOF
<tempostack_cr>
EOF
-
-
Verify that the status of all TempoStack components is Running and the conditions are type: Ready by running the following command:

$ oc get tempostacks.tempo.grafana.com simplest -o yaml
Verify that all the TempoStack component pods are running by running the following command:

$ oc get pods
Access the Tempo console:
-
Query the route details by running the following command:
$ oc get route -
Open https://<route_from_previous_step> in a web browser.

Note

The Tempo console initially shows no trace data following installation.
-
Installing a TempoMonolithic instance
Important
The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can install a TempoMonolithic instance by using the web console or command line.
The TempoMonolithic custom resource (CR) creates a Tempo deployment in monolithic mode.
All components of the Tempo deployment, such as the compactor, distributor, ingester, querier, and query frontend, are contained in a single container.
A TempoMonolithic instance supports storing traces in in-memory storage, a persistent volume, or object storage.
Tempo deployment in monolithic mode is suited to small deployments, demonstrations, and testing.
Note
The monolithic deployment of Tempo does not scale horizontally.
If you require horizontal scaling, use the TempoStack CR for a Tempo deployment in microservices mode.
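For a quick demonstration that needs no object storage at all, the storage options described above can be reduced to the in-memory backend. The following is a minimal sketch of such a CR; the instance name demo is an assumption, and the data is lost whenever the pod stops:

```yaml
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoMonolithic
metadata:
  name: demo
spec:
  storage:
    traces:
      backend: memory  # tmpfs-backed; not for production use
      size: 2Gi
```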
Installing a TempoMonolithic instance by using the web console
Important
The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can install a TempoMonolithic instance from the Administrator view of the web console.
-
You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role.
-
For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role.
-
You have defined one or more tenants and configured the read and write permissions. For more information, see "Configuring the read permissions for tenants" and "Configuring the write permissions for tenants".
-
Go to Home → Projects → Create Project to create a permitted project of your choice for the TempoMonolithic instance that you will create in a subsequent step. Project names beginning with the openshift- prefix are not permitted.
-
Decide which type of supported storage to use for storing traces: in-memory storage, a persistent volume, or object storage.

Object storage is not included with the Distributed Tracing Platform and requires setting up an object store by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage.

Additionally, opting for object storage requires creating a secret for your object storage bucket in the project that you created for the TempoMonolithic instance. You can do this in Workloads → Secrets → Create → From YAML. For more information, see "Object storage setup".

Example secret for Amazon S3 and MinIO storage

apiVersion: v1
kind: Secret
metadata:
  name: minio-test
stringData:
  endpoint: http://minio.minio.svc:9000
  bucket: tempo
  access_key_id: tempo
  access_key_secret: <secret>
type: Opaque
Create a TempoMonolithic instance:

Note

You can create multiple TempoMonolithic instances in separate projects on the same cluster.
-
Go to Ecosystem → Installed Operators.
-
Select TempoMonolithic → Create TempoMonolithic → YAML view.
-
In the YAML view, customize the TempoMonolithic custom resource (CR).

Example TempoMonolithic CR

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoMonolithic
metadata:
  name: <metadata_name>
  namespace: <permitted_project_of_tempomonolithic_instance>
spec:
  storage:
    traces:
      backend: <supported_storage_type>
      size: <value>Gi
      s3:
        secret: <secret_name>
        tls:
          enabled: true
          caName: <ca_certificate_configmap_name>
  jaegerui:
    enabled: true
    route:
      enabled: true
  resources:
    total:
      limits:
        memory: <value>Gi
        cpu: <value>m
  multitenancy:
    enabled: true
    mode: openshift
    authentication:
      - tenantName: dev
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa"
      - tenantName: prod
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb"

- This CR creates a TempoMonolithic deployment with trace ingestion over the OTLP protocol.
- The project that you have chosen for the TempoMonolithic deployment. Project names beginning with the openshift- prefix are not permitted.
- Red Hat supports only the custom resource options that are available in the Red Hat OpenShift Distributed Tracing Platform documentation.
- Specifies the storage for storing traces.
- Type of storage for storing traces: in-memory storage, a persistent volume, or object storage. The value for a persistent volume is pv. The accepted values for object storage are s3, gcs, or azure, depending on the used object store type. The default value is memory for the tmpfs in-memory storage, which is only appropriate for development, testing, demonstrations, and proof-of-concept environments because the data does not persist when the pod is shut down.
- Memory size: For in-memory storage, this is the size of the tmpfs volume, where the default is 2Gi. For a persistent volume, this is the size of the persistent volume claim, where the default is 10Gi. For object storage, this is the size of the persistent volume claim for the Tempo write-ahead log (WAL), where the default is 10Gi.
- Optional: For object storage, the type of object storage. The accepted values are s3, gcs, and azure, depending on the used object store type.
- Optional: For object storage, the value of the name field in the metadata section of the storage secret. The storage secret must be in the same namespace as the TempoMonolithic instance and contain the fields specified in "Table 1. Required secret parameters" in the section "Object storage setup".
- Optional.
- Optional: Name of a ConfigMap object that contains a CA certificate.
- Exposes the Jaeger UI, which visualizes the data, via a route at http://<gateway_ingress>/api/traces/v1/<tenant_name>/search.
- Enables creation of the route for the Jaeger UI.
- Optional.
- Lists the tenants.
- The tenant name, which is used as the value for the X-Scope-OrgID HTTP header.
- The unique identifier of the tenant, which must remain unique throughout the lifecycle of the TempoMonolithic deployment. This ID is added as a prefix to the objects in the object storage. You can reuse the value of the UUID or the tempoName field.
-
Select Create.
-
-
Use the Project: dropdown list to select the project of the TempoMonolithic instance.
-
Go to Ecosystem → Installed Operators to verify that the Status of the TempoMonolithic instance is Condition: Ready.
-
Go to Workloads → Pods to verify that the pod of the TempoMonolithic instance is running.
-
Access the Jaeger UI:
-
Go to Networking → Routes and press Ctrl+F to search for jaegerui.

Note

The Jaeger UI uses the tempo-<metadata_name_of_TempoMonolithic_CR>-jaegerui route.
-
In the Location column, open the URL to access the Jaeger UI.
-
-
When the pod of the TempoMonolithic instance is ready, you can send traces to the tempo-<metadata_name_of_TempoMonolithic_CR>:4317 (OTLP/gRPC) and tempo-<metadata_name_of_TempoMonolithic_CR>:4318 (OTLP/HTTP) endpoints inside the cluster.

The Tempo API is available at the tempo-<metadata_name_of_TempoMonolithic_CR>:3200 endpoint inside the cluster.
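As a smoke test, a single span can be posted to the OTLP/HTTP endpoint with curl. The sketch below assumes an instance named sample and must be run from a pod inside the cluster; the payload follows the OTLP/JSON encoding, and the trace and span IDs are arbitrary hex values:

```shell
# A one-span OTLP/JSON payload; validate it locally, then POST it to the
# in-cluster OTLP/HTTP endpoint of a TempoMonolithic instance named "sample".
PAYLOAD='{"resourceSpans":[{"resource":{"attributes":[{"key":"service.name","value":{"stringValue":"smoke-test"}}]},"scopeSpans":[{"spans":[{"traceId":"5b8efff798038103d269b633813fc60c","spanId":"eee19b7ec3c1b174","name":"ping","kind":1,"startTimeUnixNano":"1700000000000000000","endTimeUnixNano":"1700000001000000000"}]}]}]}'
echo "${PAYLOAD}" | python3 -m json.tool > /dev/null && echo "payload OK"
# From a pod inside the cluster:
# curl -X POST http://tempo-sample:4318/v1/traces \
#   -H 'Content-Type: application/json' -d "${PAYLOAD}"
```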
Installing a TempoMonolithic instance by using the CLI
Important
The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can install a TempoMonolithic instance from the command line.
-
An active OpenShift CLI (oc) session by a cluster administrator with the cluster-admin role.
Tip
-
Ensure that your OpenShift CLI (oc) version is up to date and matches your OpenShift Container Platform version.
-
Run the oc login command:

$ oc login --username=<your_username>
-
-
You have defined one or more tenants and configured the read and write permissions. For more information, see "Configuring the read permissions for tenants" and "Configuring the write permissions for tenants".
-
Run the following command to create a permitted project of your choice for the TempoMonolithic instance that you will create in a subsequent step:

$ oc apply -f - << EOF
apiVersion: project.openshift.io/v1
kind: Project
metadata:
  name: <permitted_project_of_tempomonolithic_instance>
EOF

- Project names beginning with the openshift- prefix are not permitted.
- Project names beginning with the
-
Decide which type of supported storage to use for storing traces: in-memory storage, a persistent volume, or object storage.

Object storage is not included with the Distributed Tracing Platform and requires setting up an object store by a supported provider: Red Hat OpenShift Data Foundation, MinIO, Amazon S3, Azure Blob Storage, or Google Cloud Storage.

Additionally, opting for object storage requires creating a secret for your object storage bucket in the project that you created for the TempoMonolithic instance. You can do this by running the following command:

$ oc apply -f - << EOF
<object_storage_secret>
EOF

For more information, see "Object storage setup".

Example secret for Amazon S3 and MinIO storage

apiVersion: v1
kind: Secret
metadata:
  name: minio-test
stringData:
  endpoint: http://minio.minio.svc:9000
  bucket: tempo
  access_key_id: tempo
  access_key_secret: <secret>
type: Opaque
Create a TempoMonolithic instance in the project that you created for it.

Tip

You can create multiple TempoMonolithic instances in separate projects on the same cluster.
-
Customize the TempoMonolithic custom resource (CR).

Example TempoMonolithic CR

apiVersion: tempo.grafana.com/v1alpha1
kind: TempoMonolithic
metadata:
  name: <metadata_name>
  namespace: <permitted_project_of_tempomonolithic_instance>
spec:
  storage:
    traces:
      backend: <supported_storage_type>
      size: <value>Gi
      s3:
        secret: <secret_name>
        tls:
          enabled: true
          caName: <ca_certificate_configmap_name>
  jaegerui:
    enabled: true
    route:
      enabled: true
  resources:
    total:
      limits:
        memory: <value>Gi
        cpu: <value>m
  multitenancy:
    enabled: true
    mode: openshift
    authentication:
      - tenantName: dev
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa"
      - tenantName: prod
        tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb"

- This CR creates a TempoMonolithic deployment with trace ingestion over the OTLP protocol.
- The project that you have chosen for the TempoMonolithic deployment. Project names beginning with the openshift- prefix are not permitted.
- Red Hat supports only the custom resource options that are available in the Red Hat OpenShift Distributed Tracing Platform documentation.
- Specifies the storage for storing traces.
- Type of storage for storing traces: in-memory storage, a persistent volume, or object storage. The value for a persistent volume is pv. The accepted values for object storage are s3, gcs, or azure, depending on the used object store type. The default value is memory for the tmpfs in-memory storage, which is only appropriate for development, testing, demonstrations, and proof-of-concept environments because the data does not persist when the pod is shut down.
- Memory size: For in-memory storage, this is the size of the tmpfs volume, where the default is 2Gi. For a persistent volume, this is the size of the persistent volume claim, where the default is 10Gi. For object storage, this is the size of the persistent volume claim for the Tempo write-ahead log (WAL), where the default is 10Gi.
- Optional: For object storage, the type of object storage. The accepted values are s3, gcs, and azure, depending on the used object store type.
- Optional: For object storage, the value of the name field in the metadata section of the storage secret. The storage secret must be in the same namespace as the TempoMonolithic instance and contain the fields specified in "Table 1. Required secret parameters" in the section "Object storage setup".
- Optional.
- Optional: Name of a ConfigMap object that contains a CA certificate.
- Exposes the Jaeger UI, which visualizes the data, via a route at http://<gateway_ingress>/api/traces/v1/<tenant_name>/search.
- Enables creation of the route for the Jaeger UI.
- Optional.
- Lists the tenants.
- The tenant name, which is used as the value for the X-Scope-OrgID HTTP header.
- The unique identifier of the tenant, which must remain unique throughout the lifecycle of the TempoMonolithic deployment. This ID is added as a prefix to the objects in the object storage. You can reuse the value of the UUID or the tempoName field.
-
Apply the customized CR by running the following command:

$ oc apply -f - << EOF
<tempomonolithic_cr>
EOF
-
-
Verify that the status of all TempoMonolithic components is Running and the conditions are type: Ready by running the following command:

$ oc get tempomonolithic.tempo.grafana.com <metadata_name_of_tempomonolithic_cr> -o yaml
Run the following command to verify that the pod of the TempoMonolithic instance is running:

$ oc get pods
Access the Jaeger UI:
-
Query the route details for the tempo-<metadata_name_of_tempomonolithic_cr>-jaegerui route by running the following command:

$ oc get route
Open https://<route_from_previous_step> in a web browser.
-
-
When the pod of the TempoMonolithic instance is ready, you can send traces to the tempo-<metadata_name_of_tempomonolithic_cr>:4317 (OTLP/gRPC) and tempo-<metadata_name_of_tempomonolithic_cr>:4318 (OTLP/HTTP) endpoints inside the cluster.

The Tempo API is available at the tempo-<metadata_name_of_tempomonolithic_cr>:3200 endpoint inside the cluster.
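The Tempo API can be used for in-cluster health checks. The sketch below builds probe URLs for an instance named sample (an assumed name); the curl calls are commented out because they only work from a pod inside the cluster. Both /ready and /api/echo are standard Tempo API endpoints:

```shell
# Construct health-check URLs for the in-cluster Tempo API.
TEMPO_API="tempo-sample:3200"
for path in /ready /api/echo; do
  echo "probe: http://${TEMPO_API}${path}"
  # curl -sf "http://${TEMPO_API}${path}"
done
```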