Monitoring Kubernetes
This guide shows you how to collect logs and metrics from your Kubernetes cluster and send them to ClickStack for visualization and analysis. For demo data, we optionally use the ClickStack fork of the official OpenTelemetry demo.
Prerequisites
This guide requires you to have:
- A Kubernetes cluster (v1.20+ recommended) with at least 32 GiB of RAM and 100 GB of disk space available on one node for ClickHouse.
- Helm v3+
- kubectl, configured to interact with your cluster
Deployment options
You can follow this guide using either of the following deployment options:
- Open Source ClickStack: Deploy ClickStack entirely within your Kubernetes cluster, including:
- ClickHouse
- HyperDX
- MongoDB (used for dashboard state and configuration)
- Managed ClickStack, with ClickHouse and the ClickStack UI (HyperDX) managed in ClickHouse Cloud. This eliminates the need to run ClickHouse or HyperDX inside your cluster.
To simulate application traffic, you can optionally deploy the ClickStack fork of the OpenTelemetry Demo Application. This generates telemetry data including logs, metrics, and traces. If you already have workloads running in your cluster, you can skip this step and monitor existing pods, nodes, and containers.
Install cert-manager (Optional)
If your setup needs TLS certificates, install cert-manager using Helm:
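A minimal sketch of the installation, assuming the standard jetstack chart repository and a dedicated cert-manager namespace:
```shell
# Add the jetstack repository and install cert-manager with its CRDs
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --set installCRDs=true   # newer chart versions also accept crds.enabled=true
```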
Deploy the OpenTelemetry Demo (Optional)
This step is optional and intended for users who have no existing pods to monitor. Users with existing services deployed in their Kubernetes environment can skip it, but the demo includes instrumented microservices that generate trace and session replay data, allowing you to explore all features of ClickStack.
The following deploys the ClickStack fork of the OpenTelemetry Demo application stack within a Kubernetes cluster, tailored for observability testing and showcasing instrumentation. It includes backend microservices, load generators, telemetry pipelines, supporting infrastructure (e.g., Kafka, Redis), and SDK integrations with ClickStack.
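As a sketch, the demo can be deployed with kubectl; the manifest path below is an assumption based on the fork's repository layout, so verify it against the ClickStack fork's README before applying:
```shell
# Create the demo namespace and deploy the ClickStack fork of the OpenTelemetry demo
# (manifest URL is an assumption - confirm the path in the ClickHouse/opentelemetry-demo repository)
kubectl create namespace otel-demo
kubectl apply --namespace otel-demo \
  -f https://raw.githubusercontent.com/ClickHouse/opentelemetry-demo/main/kubernetes/opentelemetry-demo.yaml
```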
All services are deployed to the otel-demo namespace. Each deployment includes:
- Automatic instrumentation with OTel and ClickStack SDKs for traces, metrics, and logs.
- All services send their instrumentation to a my-hyperdx-hdx-oss-v2-otel-collector OpenTelemetry collector (not deployed by the demo itself).
- Forwarding of resource tags to correlate logs, metrics, and traces via the environment variable OTEL_RESOURCE_ATTRIBUTES.
On deployment of the demo, confirm all pods have been successfully created and are in the Running state:
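For example:
```shell
# List the demo pods and check that all of them report Running
kubectl get pods -n otel-demo
```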
Demo Architecture
The demo is composed of microservices written in different programming languages that talk to each other over gRPC and HTTP and a load generator that uses Locust to fake user traffic. The original source code for this demo has been modified to use ClickStack instrumentation.
Credit: https://opentelemetry.io/docs/demo/architecture/
Further details on the demo can be found in the official OpenTelemetry demo documentation: https://opentelemetry.io/docs/demo/
Add the ClickStack Helm chart repository
To deploy ClickStack, we use the official Helm chart.
This requires us to add the HyperDX Helm repository:
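For example (the repository URL below is the published HyperDX charts location; confirm it against the current ClickStack documentation):
```shell
# Add the HyperDX chart repository and refresh the local chart index
helm repo add hyperdx https://hyperdxio.github.io/helm-charts
helm repo update
```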
Deploy ClickStack
With the Helm chart repository added, you can deploy ClickStack to your cluster. You can either run all components, including ClickHouse and HyperDX, within your Kubernetes environment, or deploy just the collector and rely on Managed ClickStack for ClickHouse and the HyperDX UI.
ClickStack Open Source (self-managed)
The following command installs ClickStack to the otel-demo namespace. The helm chart deploys:
- A ClickHouse instance
- HyperDX
- The ClickStack distribution of the OTel collector
- MongoDB for storage of HyperDX application state
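A sketch of the install command, assuming the release name my-hyperdx (which matches the collector service name referenced above) and the hdx-oss-v2 chart; the full command in the official documentation may include additional --set values (for example for the storage class) not reproduced here:
```shell
# Install ClickStack (ClickHouse, HyperDX, OTel collector, MongoDB) into the otel-demo namespace
helm install my-hyperdx hyperdx/hdx-oss-v2 \
  --namespace otel-demo --create-namespace
```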
You might need to adjust the storageClassName according to your Kubernetes cluster configuration.
Users not deploying the OTel demo can modify this command, selecting an appropriate namespace.
This chart also installs ClickHouse and the OTel collector. For production, it is recommended that you use the ClickHouse and OTel collector operators and/or use Managed ClickStack.
To disable ClickHouse and the OTel collector, set the following values:
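A hypothetical values snippet; the key names assume the chart exposes simple enabled toggles for these sub-components, so check the chart's values.yaml for the exact keys:
```yaml
# values snippet - key names are assumptions, verify against the hdx-oss-v2 chart's values.yaml
clickhouse:
  enabled: false
otel:
  enabled: false
```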
Managed ClickStack
If you'd rather use Managed ClickStack, you can deploy ClickStack and disable the included ClickHouse.
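For example, a sketch that installs the chart with the bundled ClickHouse disabled (the value key is an assumption; connection settings for your ClickHouse Cloud service are configured per the chart's values.yaml and are not shown here):
```shell
# Deploy ClickStack without the bundled ClickHouse; clickhouse.enabled is an assumed value key
helm install my-hyperdx hyperdx/hdx-oss-v2 \
  --namespace otel-demo --create-namespace \
  --set clickhouse.enabled=false
```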
The chart currently always deploys both HyperDX and MongoDB. While these components offer an alternative access path, they are not integrated with ClickHouse Cloud authentication. These components are intended for administrators in this deployment model, providing access to the secure ingestion key needed to ingest through the deployed OTel collector, but should not be exposed to end users.
To verify the deployment status, run the following command and confirm all components are in the Running state. Note that ClickHouse will be absent if you're using Managed ClickStack:
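```shell
# All components should report Running (ClickHouse will be absent with Managed ClickStack)
kubectl get pods -n otel-demo
```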
Access the HyperDX UI
Even when using Managed ClickStack, the local HyperDX instance deployed in the Kubernetes cluster is still required. It provides an ingestion key, managed by the OpAMP server bundled with HyperDX, which secures ingestion through the deployed OTel collector - a capability not currently available in Managed ClickStack.
For security, the service uses ClusterIP and is not exposed externally by default.
To access the HyperDX UI, port forward the HyperDX service from port 3000 to local port 8080.
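For example (the service name below is an assumption derived from the my-hyperdx release name; confirm it with kubectl get services -n otel-demo):
```shell
# Forward local port 8080 to the HyperDX UI listening on port 3000
kubectl port-forward -n otel-demo service/my-hyperdx-hdx-oss-v2-app 8080:3000
```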
Navigate to http://localhost:8080 to access the HyperDX UI.
Create a user, providing a username and password that meets the complexity requirements.
Retrieve ingestion API key
Ingestion into the OTel collector deployed by the ClickStack Helm chart is secured with an ingestion API key.
Navigate to Team Settings and copy the Ingestion API Key from the API Keys section. This API key ensures data ingestion through the OpenTelemetry collector is secure.
Create API Key Kubernetes Secret
Create a new Kubernetes secret containing the Ingestion API Key, and a config map containing the location of the OTel collector deployed with the ClickStack Helm chart. Later components will use these to ingest into that collector:
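A sketch of the two objects; the secret, config map, and key names here are illustrative (whatever names you choose must match what the later collector configurations reference), and the endpoint assumes the my-hyperdx release with OTLP over HTTP on port 4318:
```shell
# Store the ingestion API key in a secret (replace <YOUR_INGESTION_API_KEY>)
kubectl create secret generic hyperdx-secret \
  --namespace otel-demo \
  --from-literal=HYPERDX_API_KEY=<YOUR_INGESTION_API_KEY>

# Record the ClickStack OTel collector's OTLP endpoint in a config map
kubectl create configmap otel-config-vars \
  --namespace otel-demo \
  --from-literal=YOUR_OTEL_COLLECTOR_ENDPOINT=http://my-hyperdx-hdx-oss-v2-otel-collector:4318
```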
Restart the OpenTelemetry demo application pods so that they pick up the Ingestion API Key.
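For example:
```shell
# Restart all demo deployments so their pods pick up the new secret and config map
kubectl rollout restart deployment -n otel-demo
```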
Trace and log data from demo services should now begin to flow into HyperDX.
Add the OpenTelemetry Helm repo
To collect Kubernetes metrics, we will deploy a standard OTel collector, configuring this to send data securely to our ClickStack collector using the above ingestion API key.
This requires us to install the OpenTelemetry Helm repo:
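```shell
# Add the official OpenTelemetry chart repository
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update
```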
Deploy Kubernetes collector components
To collect logs and metrics from both the cluster itself and each node, we'll need to deploy two separate OpenTelemetry collectors, each with its own manifest. The two manifests provided - k8s_deployment.yaml and k8s_daemonset.yaml - work together to collect comprehensive telemetry data from your Kubernetes cluster.
- k8s_deployment.yaml deploys a single OpenTelemetry Collector instance responsible for collecting cluster-wide events and metadata. It gathers Kubernetes events and cluster metrics, and enriches telemetry data with pod labels and annotations. This collector runs as a standalone deployment with a single replica to avoid duplicate data.
- k8s_daemonset.yaml deploys a DaemonSet-based collector that runs on every node in your cluster. It collects node-level and pod-level metrics, as well as container logs, using components like kubeletstats, hostmetrics, and Kubernetes attribute processors. These collectors enrich logs with metadata and send them to HyperDX using the OTLP exporter.
Together, these manifests enable full-stack observability across the cluster, from infrastructure to application-level telemetry, and send the enriched data to ClickStack for centralized analysis.
First, install the collector as a deployment:
k8s_deployment.yaml
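The full file is not reproduced here. As a sketch, assuming k8s_deployment.yaml is used as a values file for the open-telemetry/opentelemetry-collector chart added above (if it is instead a raw Kubernetes manifest, apply it with kubectl apply -f), and using an illustrative release name:
```shell
# Install a single-replica, deployment-mode collector using the values in k8s_deployment.yaml
helm install otel-collector-deployment open-telemetry/opentelemetry-collector \
  --namespace otel-demo \
  --values k8s_deployment.yaml
```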
Next, deploy the collector as a DaemonSet for node and pod-level metrics and logs:
k8s_daemonset.yaml
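Again as a sketch, under the same assumption that the file is a values file for the opentelemetry-collector chart, with an illustrative release name:
```shell
# Install the node-level collector as a DaemonSet using the values in k8s_daemonset.yaml
helm install otel-collector-daemonset open-telemetry/opentelemetry-collector \
  --namespace otel-demo \
  --values k8s_daemonset.yaml
```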
Explore Kubernetes data in HyperDX
Navigate to your HyperDX UI - either using your Kubernetes-deployed instance or via Managed ClickStack.
Managed ClickStack
If using Managed ClickStack, log in to your ClickHouse Cloud service and select "ClickStack" from the left menu. You will be automatically authenticated and will not need to create a user. Data sources for logs, metrics, and traces will be pre-created for you. To access the locally deployed HyperDX instead, you can port forward using the command above and access HyperDX at http://localhost:8080. In production, we recommend using an ingress with TLS if you are not using Managed ClickStack. For example:
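A hypothetical ingress sketch, assuming an NGINX ingress controller, a cert-manager ClusterIssuer named letsencrypt-prod, and the my-hyperdx-hdx-oss-v2-app service name; the hostname and TLS secret are placeholders:
```yaml
# Illustrative ingress for the HyperDX UI - host, issuer, secret, and service names are assumptions
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hyperdx-ui
  namespace: otel-demo
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - hyperdx.example.com
      secretName: hyperdx-tls
  rules:
    - host: hyperdx.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-hyperdx-hdx-oss-v2-app
                port:
                  number: 3000
```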
ClickStack Open Source
To explore the Kubernetes data, navigate to the dedicated preset dashboard at /kubernetes, e.g. http://localhost:8080/kubernetes.
Each of the tabs, Pods, Nodes, and Namespaces, should be populated with data.