A few weeks ago we introduced the Telemetry Controller, a controller that lets you manage telemetry event streams – logs, metrics, and traces – through Kubernetes resources. It provides a robust multi-tenant API on top of OpenTelemetry that allows you to describe what telemetry data you need, and where it should be forwarded.

Think of the Telemetry Controller as an endpoint agent: it is responsible for collecting the telemetry data from the log source and forwarding it either to the destination of that data – for example, a storage or analytics solution like OpenObserve – or to an aggregator that the Logging operator provides. The big benefit is that the Telemetry Controller provides isolation and access control for telemetry data at the endpoint, so you can route data from multi-tenant endpoints to tenant-specific destinations primarily based on the Kubernetes namespace of the workload or any other metadata attached to the telemetry signal. 

Until now, you could perform this kind of data separation only in the aggregation layer, so the data of different tenants was mixed on its way from the endpoint to the aggregator. Also, unless you deployed a separate collector for every tenant, a misconfiguration in one tenant's log collection could break log collection for the other tenants.

The Telemetry Controller provides an interface where you can create tenants to isolate teams or customers based on the namespaces they are allowed to use. Your users can create subscriptions which add dynamic selection and forwarding rules for their Kubernetes workload logs based on namespaces, label selectors, and other Kubernetes metadata like container or host name.

Telemetry Controller flow in multi-tenant scenarios

The Telemetry Controller supports OTLP destinations out of the box, and can also send logs to an aggregator to transform, enrich, or filter telemetry data further in a scalable manner. This is also how Kubernetes workloads can process and forward telemetry data with Axoflow, our advanced telemetry pipeline solution.

Why Loki?

Grafana Loki is one of the most popular log aggregation systems in the cloud-native world, and one of the most used destinations of the Logging operator. This blog post shows how you can send telemetry data directly to Loki from your Telemetry Controller-managed log sources. This is worth pointing out, because the Logging operator, a CNCF Sandbox Project of which Axoflow is the core maintainer, already supports sending logs to Loki, but only through an aggregator (Fluentd or syslog-ng).

Setting up Loki and the Telemetry Controller

In this post we show you the steps needed to send all or some of a tenant's logs to Loki, without having to understand the details of how the OpenTelemetry Collector should be configured to achieve this efficiently.
In the demo, we set up telemetry forwarding for a cluster where a single tenant, called dev, resides. The workloads and subscriptions of the dev tenant are in the same namespace. Data will be routed to a Grafana Loki instance.

Send Kubernetes logs to Grafana Loki with Telemetry Controller

Prerequisites

  • Create a KinD cluster (a minimal sketch follows this list)
  • Install cert-manager:
    helm install cert-manager jetstack/cert-manager --namespace cert-manager --create-namespace --set installCRDs=true --version v1.13.3
  • Install the OpenTelemetry Operator:
    kubectl apply -f https://github.com/open-telemetry/opentelemetry-operator/releases/download/v0.96.0/opentelemetry-operator.yaml
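
For reference, a minimal way to satisfy the first two prerequisites. The cluster name is illustrative, and the jetstack Helm repository has to be added before the cert-manager chart can be installed from it:

kind create cluster --name telemetry-demo   # requires the kind CLI
helm repo add jetstack https://charts.jetstack.io
helm repo update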

Set up Loki

Create a free trial account for Grafana Cloud.

Log in, then select Add new connection > Hosted logs.

Grafana Loki Hosted logs

Generate an ingestion token and copy the URL.

Grafana Loki create ingestion token

Save the URL into an environment variable.

LOKI_URL="insert your url here"

Install the Telemetry Controller

The following steps show you how to install and configure the Telemetry Controller.

Deploy the Telemetry Controller

kubectl apply -k 'github.com/kube-logging/telemetry-controller/config/default?ref=0.0.3'

Create namespaces for the example scenario

kubectl create ns collector # Namespace where the collector DaemonSet will be deployed
kubectl create ns dev # Namespace for the dev environment

Deploy the log pipeline

First, set up a Collector:

kubectl apply -f - <<EOF
apiVersion: telemetry.kube-logging.dev/v1alpha1
kind: Collector
metadata:
  name: example-collector
spec:
  # Since the Collector is cluster-scoped, controlNamespace specifies the namespace where the DaemonSet will be deployed 
  controlNamespace: collector
  # The tenantSelector LabelSelector is used to select the Tenants that are assigned to this Collector
  tenantSelector:
    matchLabels:
      collectorLabel: example
EOF

IMPORTANT: The collector does not specify any routing by default. You need to configure at least one Tenant and one Subscription to forward any logs.

In the example scenario, a single tenant is present on the cluster, and all of its logs are routed to the Loki instance.

kubectl apply -f - <<EOF
apiVersion: telemetry.kube-logging.dev/v1alpha1
kind: Tenant
metadata:
  labels:
    # This label ensures that the tenant is selected by the Collector
    collectorLabel: example
  name: dev
spec:
  # The subscriptionNamespaceSelectors field specifies the namespaces that are considered when listing Subscriptions
  subscriptionNamespaceSelectors:
    - matchLabels:
        kubernetes.io/metadata.name: dev
  # The logSourceNamespaceSelectors field specifies the namespaces that are considered when selecting telemetry data for the tenant
  logSourceNamespaceSelectors:
    - matchLabels:
        kubernetes.io/metadata.name: dev
EOF

Now you, as a member of the development team (one of the tenants), have to define a Subscription and an OtelOutput to select the logs created in your tenant's namespaces and forward them to Loki. Outputs can be configured and selected in any namespace, which is useful in multi-tenant scenarios. For example, your operations team can pre-configure log sink endpoints, and the developer teams (which can be split into multiple tenants) can use them in their Subscriptions.

Create a Subscription.

kubectl apply -f - <<EOF
apiVersion: telemetry.kube-logging.dev/v1alpha1
kind: Subscription
metadata:
  name: subscription-dev
  namespace: dev
spec:
  # This statement will select all messages from the namespaces specified as log source for the tenant
  ottl: 'route()'
  # Multiple outputs can be set for a Subscription. In this case, logs will be sent to a Loki instance
  outputs:
    - name: loki
      namespace: dev
EOF
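
The route() statement above forwards every log record from the tenant's log source namespaces. The ottl field also accepts a routing condition if you only need a subset of the logs. A sketch, assuming the collector attaches the usual Kubernetes resource attributes to each record (the attribute name and the condition syntax are assumptions, verify them against your Telemetry Controller version):

  # Hypothetical: only forward logs from pods whose name starts with "log-generator"
  ottl: 'route() where IsMatch(attributes["k8s.pod.name"], "log-generator.*")'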

Create an Output. Make sure to reference the URL you’ve saved earlier in the endpoint field.

kubectl apply -f - <<EOF
apiVersion: telemetry.kube-logging.dev/v1alpha1
kind: OtelOutput
metadata:
  name: loki
  namespace: dev
spec:
  loki:
    # Selecting the Loki endpoint in Grafana Cloud
    endpoint: ${LOKI_URL}
    tls:
      insecure: true
EOF
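
As mentioned earlier, the output does not have to live in the tenant's namespace. A sketch of the same setup where a hypothetical ops namespace owns the Loki endpoint and the dev team only references it (the ops namespace is illustrative):

kubectl create ns ops
kubectl apply -f - <<EOF
apiVersion: telemetry.kube-logging.dev/v1alpha1
kind: OtelOutput
metadata:
  name: loki
  namespace: ops
spec:
  loki:
    endpoint: ${LOKI_URL}
    tls:
      insecure: true
EOF

The Subscription of the dev tenant would then reference it across namespaces:

  outputs:
    - name: loki
      namespace: ops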

Deploy your workload

With the telemetry pipeline in place, you need applications that produce the log messages. In a real environment these are your actual workloads; for this demo we use our log-generator tool, which creates random web server logs. Install the log-generator:

helm install --generate-name --wait --namespace dev oci://ghcr.io/kube-logging/helm-charts/log-generator --version 0.7.0
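
Optionally, verify that the generator and the collector pods are running (the exact pod names depend on your cluster):

kubectl -n dev get pods
kubectl -n collector get pods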

Since your telemetry pipeline is already set up, it automatically sends the log messages to Grafana Loki. Navigate to the Explore tab in the Grafana Cloud UI, and select your Loki data source.

Telemetry Controller and Grafana Loki

You can use the common Kubernetes labels to narrow the query down to a specific group of logs. To see the logs of the dev tenant, filter on the tenant label. Besides the tenant, you can also filter on pod name and namespace name, since the Telemetry Controller automatically adds these labels to the messages.

Telemetry Controller and Grafana Loki

You should see the logs of the log generator.
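
If you prefer typing a query instead of clicking through the label browser, a LogQL filter along these lines should return the same logs. The exact label names depend on how the labels are exported, so treat them as assumptions and use the label browser to check the real names:

{tenant="dev"}
{tenant="dev"} |= "GET"

The second query narrows the results down to lines that contain GET, which the fake web server logs of the log-generator include.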

Summary

As you can see from this blog post, and also from our introduction to the Telemetry Controller, using the Telemetry Controller comes with several benefits:

  • No need to deeply understand how the Collector should be configured to forward data based on some very common filters, like pod labels in Kubernetes.
  • No need to complicate the configuration to cover tenants and isolation.
  • No need to understand how the Collector configuration changes from one release to the next. The Telemetry Controller aims to always implement and follow the OpenTelemetry community best practices.

 
