From this blog post, you’ll learn how to:
- Install OpenObserve on a Kubernetes cluster
- Install the Logging operator on the cluster
- Configure the Logging operator to send logs to OpenObserve
Install OpenObserve
Let’s install OpenObserve using the official Kubernetes manifest.
Create a namespace for OpenObserve.
kubectl create namespace openobserve
Deploy OpenObserve into the namespace.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: openobserve
  namespace: openobserve
spec:
  clusterIP: None
  selector:
    app: openobserve
  ports:
    - name: http
      port: 5080
      targetPort: 5080
---
# create statefulset
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: openobserve
  namespace: openobserve
  labels:
    name: openobserve
spec:
  serviceName: openobserve
  replicas: 1
  selector:
    matchLabels:
      name: openobserve
      app: openobserve
  template:
    metadata:
      labels:
        name: openobserve
        app: openobserve
    spec:
      securityContext:
        fsGroup: 2000
        runAsUser: 10000
        runAsGroup: 3000
        runAsNonRoot: true
      # terminationGracePeriodSeconds: 0
      containers:
        - name: openobserve
          image: public.ecr.aws/zinclabs/openobserve:v0.7.2
          env:
            - name: ZO_ROOT_USER_EMAIL
              value: root@example.com
            - name: ZO_ROOT_USER_PASSWORD
              value: Complexpass#123
            - name: ZO_DATA_DIR
              value: /data
          # command: ["/bin/bash", "-c", "while true; do sleep 1; done"]
          imagePullPolicy: Always
          resources:
            limits:
              cpu: 4096m
              memory: 2048Mi
            requests:
              cpu: 256m
              memory: 50Mi
          ports:
            - containerPort: 5080
              name: http
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes:
          - ReadWriteOnce
        # storageClassName: default
        # NOTE: You can increase the storage size
        resources:
          requests:
            storage: 10Gi
EOF
Check the openobserve namespace to make sure that everything was deployed correctly.
kubectl get pods -n openobserve
Expected output:
NAME READY STATUS RESTARTS AGE
openobserve-0 1/1 Running 0 24s
Forward OpenObserve’s UI port so that you can access it locally at http://localhost:5080:
kubectl -n openobserve port-forward svc/openobserve 5080:5080 &
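Before opening the browser, you can run a quick sanity check against the forwarded port. This assumes curl is available on your machine; any HTTP status code in the response means the UI is reachable.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:5080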
Install the Log generator
Install the log generator application to generate sample logs that the Logging operator can collect and send to our OpenObserve instance.
helm install --generate-name --wait --namespace generator --create-namespace oci://ghcr.io/kube-logging/helm-charts/log-generator --version 0.7.0
Check the generator namespace to verify that everything was deployed properly.
kubectl -n generator get pod
The output should be similar to:
NAME READY STATUS RESTARTS AGE
log-generator-1705325564-864d7f5d5-l8g7x 1/1 Running 0 15s
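To peek at the sample messages before wiring up the pipeline, you can tail the generator pod directly. The label selector below assumes that the chart labels its pod with app.kubernetes.io/name=log-generator, the same label the flow matches on later in this post.
kubectl -n generator logs -l app.kubernetes.io/name=log-generator --tail=5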
Install the Logging operator
Install the Logging operator Helm chart. You need at least chart version 4.5.0 for OpenObserve support.
helm upgrade --install logging-operator --namespace logging-operator --create-namespace oci://ghcr.io/kube-logging/helm-charts/logging-operator --version 4.5.0
Check the pods to make sure that everything was deployed properly:
kubectl -n logging-operator get pods
Expected output:
NAME READY STATUS RESTARTS AGE
logging-operator-76794d6cdf-zfdk9 1/1 Running 0 57s
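You can also verify that the operator has registered its custom resource definitions; the Logging, FluentbitAgent, and syslog-ng resources used later in this post all belong to the logging.banzaicloud.io API group.
kubectl get crd | grep logging.banzaicloud.io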
If everything is in its place, it’s time to define the log pipeline.
Define the log pipeline
To receive logs in OpenObserve we have to:
- Collect the logs of the log generator.
- Send them to the OpenObserve endpoint.
Set up the logging resource to deploy the syslog-ng aggregator
The Logging custom resource needs an aggregator, which is going to be syslog-ng.
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: logging
spec: {}
---
kind: Logging
apiVersion: logging.banzaicloud.io/v1beta1
metadata:
  name: logging
spec:
  controlNamespace: logging
  syslogNG:
    jsonKeyDelim: '#'
    sourceDateParser: {}
In this case, we set the JSON key delimiter character to '#'. Otherwise, JSON-structured log messages with keys containing the '.' character (such as Kubernetes label keys) would be parsed incorrectly.
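To see why this matters, consider the Kubernetes label key app.kubernetes.io/name, which itself contains '.' characters. A rough illustration of the flattened key paths (the '#' form is the one the flow in the next step matches on):
# with jsonKeyDelim: '.', the separators and the dots inside the label key become indistinguishable:
json.kubernetes.labels.app.kubernetes.io/name
# with jsonKeyDelim: '#', the label key stays intact and the path is unambiguous:
json#kubernetes#labels#app.kubernetes.io/name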
Define the flow to select logs of log-generator
Let’s specify a filter that selects the logs of pods labeled as log-generator, and reference the output where these messages will be forwarded.
kind: SyslogNGClusterFlow
apiVersion: logging.banzaicloud.io/v1beta1
metadata:
  name: log-generator
  namespace: logging
spec:
  match:
    regexp:
      value: json#kubernetes#labels#app.kubernetes.io/name
      pattern: log-generator
      type: string
  globalOutputRefs:
    - openobserve-output
Define the output to send logs to the OpenObserve instance
Extract the default password using the following command:
kubectl get pods --namespace openobserve openobserve-0 -o jsonpath='{.spec.containers[*].env[?(@.name=="ZO_ROOT_USER_PASSWORD")].value}'; echo
Output:
Complexpass#123
Use this password to create the secret for the OpenObserve output of the Logging operator:
kubectl create secret --namespace logging generic openobserve \
--from-literal=password='Complexpass#123'
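If you want to double-check the stored value, you can decode the secret with kubectl and base64:
kubectl -n logging get secret openobserve -o jsonpath='{.data.password}' | base64 -d; echo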
This credential will be used in the output configuration, like so:
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGClusterOutput
metadata:
  name: openobserve-output
  namespace: logging
spec:
  openobserve:
    url: "http://openobserve.openobserve.svc.cluster.local"
    organization: "default"
    stream: "default"
    user: "root@example.com"
    password:
      valueFrom:
        secretKeyRef:
          name: openobserve
          key: password
For details regarding the different configuration options, see the Logging operator documentation.
Deploy the log pipeline
Save all the snippets above to a file named openobserve_logging.yaml, separated with YAML document separators (---), and apply all pipeline components to the cluster. The file should look like this:
# Complete YAML file
apiVersion: v1
kind: Namespace
metadata:
  name: logging
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: FluentbitAgent
metadata:
  name: logging
spec: {}
---
kind: Logging
apiVersion: logging.banzaicloud.io/v1beta1
metadata:
  name: logging
spec:
  controlNamespace: logging
  syslogNG:
    jsonKeyDelim: '#'
    sourceDateParser: {}
---
kind: SyslogNGClusterFlow
apiVersion: logging.banzaicloud.io/v1beta1
metadata:
  name: log-generator
  namespace: logging
spec:
  match:
    regexp:
      value: json#kubernetes#labels#app.kubernetes.io/name
      pattern: log-generator
      type: string
  globalOutputRefs:
    - openobserve-output
---
apiVersion: logging.banzaicloud.io/v1beta1
kind: SyslogNGClusterOutput
metadata:
  name: openobserve-output
  namespace: logging
spec:
  openobserve:
    url: "http://openobserve.openobserve.svc.cluster.local"
    organization: "default"
    stream: "default"
    user: "root@example.com"
    password:
      valueFrom:
        secretKeyRef:
          name: openobserve
          key: password
And you can deploy it by running:
kubectl -n logging apply -f openobserve_logging.yaml
After deploying the pipeline, the Fluent Bit and syslog-ng pods should come up once the configuration check succeeds.
kubectl -n logging get pod
Expected output:
NAME READY STATUS RESTARTS AGE
logging-fluentbit-bkpsp 1/1 Running 0 88s
logging-syslog-ng-0 2/2 Running 0 31s
You can also check that all resources are active and have no issues by running:
kubectl get logging-all -n logging
Expected output:
NAME AGE
fluentbitagent.logging.banzaicloud.io/logging 2m3s
NAME LOGGINGREF CONTROLNAMESPACE WATCHNAMESPACES PROBLEMS
logging.logging.banzaicloud.io/logging logging
NAME ACTIVE PROBLEMS
syslogngclusterflow.logging.banzaicloud.io/log-generator true
NAME ACTIVE PROBLEMS
syslogngclusteroutput.logging.banzaicloud.io/openobserve-output true
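If any of these resources reports a problem, describe it to see the details in its status, for example:
kubectl -n logging describe syslogngclusterflow log-generator
kubectl -n logging describe syslogngclusteroutput openobserve-output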
Query logs using the OpenObserve web UI
Log entries are now flowing from the log generator to the OpenObserve instance. To access the web UI, use the default user email and password, which you can retrieve with the following commands:
kubectl get pods --namespace openobserve openobserve-0 -o jsonpath='{.spec.containers[*].env[?(@.name=="ZO_ROOT_USER_EMAIL")].value}'; echo; \
kubectl get pods --namespace openobserve openobserve-0 -o jsonpath='{.spec.containers[*].env[?(@.name=="ZO_ROOT_USER_PASSWORD")].value}'; echo
Expected output for the default credentials:
root@example.com
Complexpass#123
After logging in, click on Logs on the left, then explore the default stream. You should see the log messages of the log generator:
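If you prefer the command line, you can also push a test record into the same organization and stream through the port-forward to confirm that the credentials and the endpoint work. This is only a sketch: it assumes the default credentials above and OpenObserve's JSON ingestion endpoint (/api/{organization}/{stream}/_json); double-check the path against the OpenObserve version you run.
curl -s -u 'root@example.com:Complexpass#123' \
  -H 'Content-Type: application/json' \
  -d '[{"message":"hello from curl"}]' \
  http://localhost:5080/api/default/default/_json
The test record should then show up in the default stream next to the messages of the log generator.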

Summary
As you have seen, you can forward your Kubernetes logs to OpenObserve with the Logging operator and syslog-ng, using AxoSyslog, the cloud-native syslog-ng distribution.
At Axoflow, we are working on an end-to-end observability pipeline solution that simplifies the control of your telemetry infrastructure with a vendor-agnostic approach – for example, it would make routing your logs to a new OpenObserve destination easier. Read more about Axoflow!
