We recently released our custom, cloud-ready images of syslog-ng. Continuing the cloud native integration of syslog-ng, this article explains how to apply these images to solve an important Kubernetes use case: collecting Kubernetes metadata.
syslog-ng has a long history of being a trusted tool that provides unprecedented performance, stability, and flexibility, whether deployed in a collector or an aggregator role. This Kubernetes integration bridges the gap between cloud native and enterprise deployments, especially when they operate side by side.
This post shows you how to install this syslog-ng image using the AxoSyslog Helm charts, and how to use it to send Kubernetes logs to OpenSearch to demonstrate how it all works.
If you are planning to try AxoSyslog in production, we recommend doing that via the Logging operator, which supports syslog-ng as an aggregator agent since version 4.0.
syslog-ng kubernetes() source
In the syslog-ng configuration, the entities that syslog-ng reads or receives logs from are called sources. The kubernetes() source collects logs from the Kubernetes cluster where syslog-ng is running. It follows the design of CRI logging, reading container logs from /var/log/containers or /var/log/pods.
source s_kubernetes {
    kubernetes(
        base-dir("/var/log/containers") # Optional. Where to look for the containers' log files. Default: "/var/log/containers"
        cluster-name("k8s")             # Optional. The name of the cluster, used in formatting the HOST field. Default: "k8s"
        prefix(".k8s.")                 # Optional. Prefix for the metadata name-value pairs' names. Default: ".k8s."
        key-delimiter(".")              # Optional. Delimiter for multi-depth name-value pairs' names. Default: "."
    );
};
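If you want to try the source on its own before using the Helm chart described later, a minimal sketch like the following prints the collected name-value pairs to stdout (this log path and template are only an illustration, not part of the chart's generated configuration):
log {
    source(s_kubernetes);
    # Print every name-value pair as JSON so the .k8s.* metadata is visible
    destination { file("/dev/stdout" template("$(format-json --scope nv-pairs)\n")); };
};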
Thanks to syslog-ng’s versatile Python binding support, after reading the logs from the container log files, the kubernetes() source enriches them with various metadata, acquired via the Kubernetes Python client. syslog-ng 4.1.1 retrieves the following metadata fields:
| syslog-ng name-value pair | source |
| --- | --- |
| .k8s.namespace_name | Container log file name |
| .k8s.pod_name | Container log file name |
| .k8s.pod_uuid | Container log file name or python kubernetes.client.CoreV1Api |
| .k8s.container_name | Container log file name or python kubernetes.client.CoreV1Api |
| .k8s.container_id | Container log file name |
| .k8s.container_image | python kubernetes.client.CoreV1Api |
| .k8s.container_hash | python kubernetes.client.CoreV1Api |
| .k8s.docker_id | python kubernetes.client.CoreV1Api |
| .k8s.labels.* | python kubernetes.client.CoreV1Api |
| .k8s.annotations.* | python kubernetes.client.CoreV1Api |
You can find more details on how these are calculated in the source code of syslog-ng’s kubernetes SCL, KubernetesAPIEnrichment, and kubernetes-client’s CoreV1Api. We are working on extending this list with even more fields, like namespace labels, so stay tuned!
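Because these are ordinary syslog-ng name-value pairs, you can use them in filters, templates, and rewrite rules like any other macro. As a sketch, assuming a hypothetical staging namespace and an illustrative output path, you could route the logs of a single namespace to a dedicated file:
# The namespace name and the output path below are made up for illustration.
filter f_staging {
    "${.k8s.namespace_name}" eq "staging"
};
log {
    source(s_kubernetes);
    filter(f_staging);
    destination { file("/var/log/staging-pods.log"); };
};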
axosyslog-collector Helm chart
Helm is the package manager for Kubernetes. It helps you define, install, and upgrade even the most complex Kubernetes applications. Axoflow’s axosyslog-charts repository houses the axosyslog-collector Helm chart, which we will use to demonstrate the kubernetes() source. The chart uses the Alpine-based syslog-ng docker image built by Axoflow. The axosyslog-collector is a rudimentary chart for collecting Kubernetes logs; it can:
- forward logs conforming to RFC3164 via TCP/UDP connections, or
- send them to OpenSearch.
If you need a complete Kubernetes logging solution, we recommend checking out the Logging operator.
Demo: Sending Kubernetes logs to OpenSearch
The following tutorial shows you how to send Kubernetes logs to OpenSearch.
Prerequisites
You will need a Kubernetes cluster and Helm. We used minikube with the docker driver on an Ubuntu 22.04 (amd64) machine, but any system that can run minikube should work (2 CPUs, 2 GB of free memory, and 20 GB of free disk space).
The OpenSearch service needs a relatively large mmap count setting, so set it to at least 262144, for example:
sysctl -w vm.max_map_count=262144
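To keep the setting across reboots (assuming a systemd-based Linux host where /etc/sysctl.d is read at boot), you could also persist it; the file name below is just an example:
# Check the current value
sysctl vm.max_map_count
# Persist the setting and reload it
echo "vm.max_map_count=262144" | sudo tee /etc/sysctl.d/99-opensearch.conf
sudo sysctl --system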
Generate logs
Install the kube-logging/log-generator chart to generate logs:
helm repo add kube-logging https://kube-logging.github.io/helm-charts
helm repo update
helm install --generate-name --wait kube-logging/log-generator
Check that it’s running:
kubectl get pods
The output should look like:
NAME READY STATUS RESTARTS AGE
log-generator-1681984863-5946c559b9-ftrrn 1/1 Running 0 8s
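Optionally, peek at a few of the generated log lines (the pod name is taken from the example output above; substitute your own):
kubectl logs log-generator-1681984863-5946c559b9-ftrrn --tail=5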
Set up OpenSearch
Install an OpenSearch cluster with Helm:
helm repo add opensearch https://opensearch-project.github.io/helm-charts/
helm repo update
helm install --generate-name --wait opensearch/opensearch
helm install --generate-name --wait opensearch/opensearch-dashboards
Now you should have 5 pods. Check that they exist:
kubectl get pods
The output should look like:
NAME READY STATUS RESTARTS AGE
log-generator-1681984863-5946c559b9-ftrrn 1/1 Running 0 3m39s
opensearch-cluster-master-0 1/1 Running 0 81s
opensearch-cluster-master-1 1/1 Running 0 81s
opensearch-cluster-master-2 1/1 Running 0 81s
opensearch-dashboards-1681999620-59f64f98f7-bjwwh 1/1 Running 0 44s
Forward port 5601 of the OpenSearch Dashboards pod to local port 8080 (replace the pod name with your own):
kubectl port-forward opensearch-dashboards-1681999620-59f64f98f7-bjwwh 8080:5601
The output should look like:
Forwarding from 127.0.0.1:8080 -> 5601
Forwarding from [::1]:8080 -> 5601
You can log in to the dashboard at http://localhost:8080 with admin/admin. You will soon create an Index Pattern here, but first you have to send some logs from syslog-ng.
Set up axosyslog-collector
First, add the AxoSyslog Helm repository:
helm repo add axosyslog https://axoflow.github.io/axosyslog-charts
helm repo update
Create a YAML file (axoflow-demo.yaml) with the following content to configure the collector:
image:
  # Use the nightly build of https://github.com/axoflow/axosyslog-docker
  tag: nightly

config:
  version: "4.1"
  sources:
    kubernetes:
      # Collect kubernetes logs
      enabled: true
  destinations:
    # Send logs to OpenSearch
    opensearch:
      - address: "opensearch-cluster-master"
        index: "test-axoflow-index"
        user: "admin"
        password: "admin"
        tls:
          # Do not validate the server's TLS certificate.
          peerVerify: false
        # Send the syslog fields + the metadata from .k8s.* in JSON format
        template: "$(format-json --scope rfc5424 --exclude DATE --key ISODATE @timestamp=${ISODATE} k8s=$(format-json .k8s.* --shift-levels 2 --exclude .k8s.log))"
Check what the syslog-ng.conf file will look like with your custom values:
helm template -f axoflow-demo.yaml -s templates/config.yaml axosyslog/axosyslog-collector
The output should look like:
# Source: axosyslog-collector/templates/config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    helm.sh/chart: axosyslog-collector-0.1.0
    app.kubernetes.io/name: axosyslog-collector
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "4.1.1"
    app.kubernetes.io/managed-by: Helm
  name: release-name-axosyslog-collector
data:
  syslog-ng.conf: |
    @version: 4.1
    @include "scl.conf"
    options {
        stats(
            level(1)
        );
    };
    log {
        source { kubernetes(); };
        destination {
            elasticsearch-http(
                url("https://opensearch-cluster-master:9200/_bulk")
                index("test-axoflow-index")
                type("")
                template("$(format-json --scope rfc5424 --exclude DATE --key ISODATE @timestamp=${ISODATE} k8s=$(format-json .k8s.* --shift-levels 2 --exclude .k8s.log))")
                user("admin")
                password("admin")
                tls(
                    peer-verify(no)
                )
            );
        };
    };
Install the axosyslog-collector chart:
helm install --generate-name --wait -f axoflow-demo.yaml axosyslog/axosyslog-collector
The output should look like:
NAME: axosyslog-collector-1682002179
LAST DEPLOYED: Thu Apr 20 16:49:39 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Watch the axosyslog-collector-1682002179 container start.
  kubectl get pods --namespace=default -l app=axosyslog-collector-1682002179 -w
Check your pods:
kubectl get pods
The output should look like:
NAME READY STATUS RESTARTS AGE
log-generator-1681984863-5946c559b9-ftrrn 1/1 Running 0 13m
opensearch-cluster-master-0 1/1 Running 0 11m
opensearch-cluster-master-1 1/1 Running 0 11m
opensearch-cluster-master-2 1/1 Running 0 11m
opensearch-dashboards-1681999620-59f64f98f7-bjwwh 1/1 Running 0 10m
axosyslog-collector-1682002179-pjlkn 1/1 Running 0 6s
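If the collector pod is running but you are unsure whether it is shipping logs, checking its own output is a quick sanity check (again, substitute your pod name):
kubectl logs axosyslog-collector-1682002179-pjlkn --tail=20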
Check the logs in OpenSearch
Head back to OpenSearch Dashboards, and create an Index Pattern called “test-axoflow-index”: http://localhost:8080/app/management/opensearch-dashboards/indexPatterns. At Step 2, set the Time field to “@timestamp”.
Now you can see your logs on the Discover view: http://localhost:8080/app/discover. Opening the detailed view for a log entry shows you the fields sent to OpenSearch.
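If you prefer the command line over the dashboard, you can also verify that documents have arrived via the OpenSearch REST API. This sketch assumes the default opensearch-cluster-master service name and the demo admin/admin credentials used above; the -k flag skips certificate validation because the demo cluster uses a self-signed certificate:
# In a separate terminal, forward the OpenSearch HTTP port
kubectl port-forward svc/opensearch-cluster-master 9200:9200
# Count the documents in the demo index
curl -sk -u admin:admin "https://localhost:9200/test-axoflow-index/_count?pretty"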
Summary
From this post you have learned:
- How to install a syslog-ng log collector using our AxoSyslog charts, which is one of the first elements of our new open source project, the AxoSyslog cloud native syslog-ng distribution.
- How to configure it to send your logs into OpenSearch.
- How AxoSyslog enriches your log messages with Kubernetes metadata.
Follow Our Progress!
We are excited to be realizing our vision above with a full Axoflow product suite.