Axoflow Platform
Axoflow is a security data curation pipeline that automatically delivers high-quality, reduced-volume security data, without coding. Better quality data allows your organization and SOC team to detect and respond to threats faster, use AI, and reduce compliance breaches and costs.
Collect security data from anywhere
Axoflow allows you to collect data from any source, including:
- cloud services
- cloud-native sources like OpenTelemetry or Kubernetes
- traditional sources like Microsoft Windows endpoints (WEC) and Linux servers (syslog)
- networking appliances like firewalls
- applications.
Shift left
Axoflow deals with tasks like data classification, parsing, curation, reduction, and routing. Except for routing, these tasks are commonly performed in the SIEM, often manually. Axoflow automates all of these processing steps and shifts them left into the data pipeline, so:
- it’s automatically applied to all your security data,
- all destinations benefit from it (multi-SIEM and storage+SIEM scenarios),
- data format is optimized for the specific destination (for example, Splunk, Azure Sentinel, Google Pub/Sub),
- unneeded and redundant data can be dropped before having to pay for it, reducing data volume and storage costs.
Curation
Axoflow can automatically classify, curate, enrich, and optimize data right at the edge – limiting excess network overhead and costs as well as downstream manual data-engineering in the analytics tools. Axoflow has over 100 zero-maintenance connectors that automatically identify and classify the incoming data, and apply automatic, device- and source-specific curation steps.

Policy-based routing
Axoflow can configure your aggregator and edge devices to intelligently route data based on the high-level flows you configure. Flows can use labels and other properties of the transferred data as well as your inventory to automatically map your business and compliance policies into configuration files, without coding.
Management
Axoflow gives you a vendor-agnostic management plane for best-in-class visibility into your data pipeline. You can:
- automatically discover and identify existing logging infrastructure,
- visualize the complete edge-to-edge flow of security data to understand the contribution of sources to the data pipeline, and
- monitor the elements of the pipeline and the data flow.

Try Axoflow
Would you like to try Axoflow? Request a zero-commitment demo or a sandbox environment to experience the power of our platform.
Ready to get started? Go to our Getting started section for the first steps.
1 - Introduction
2 - Axoflow architecture
Axoflow provides an end-to-end pipeline that automates the collection, management, and loading of your security data in a vendor-agnostic way. The following figure highlights the Axoflow data flow:

Axoflow architecture
The architecture of Axoflow is comprised of two main elements: the Axoflow Console and the Data Plane.
- The Axoflow Console is primarily concerned with highlighting the metadata of each event. This includes the source from which it originated, the size in bytes (and event count over time), its destination, and any other element which describes the data.
- The Data Plane includes collector agents and processing engines (like AxoRouter) that collect, classify, filter, transform, and deliver telemetry data to its proper destinations (SIEMs, storage), and provide metrics to the Axoflow Console. The components of the Data Plane can be managed from the Axoflow Console, or can be independent.
Pipeline components
A telemetry pipeline consists of the following high-level components:
- Data Sources: Data sources are the endpoints of the pipeline that generate the logs and other telemetry data you want to collect. For example, firewalls and other appliances, Kubernetes clusters, application servers, and so on can all be data sources. Data sources send their data either directly to a destination, or to a router.
  Axoflow provides several log collecting agents and solutions to collect data in different environments, including connectors for cloud services, Kubernetes clusters, Linux servers, and Windows servers.
- Routers: Routers (also called relays or aggregators) collect the data from a set of data sources and transport it to the destinations.
  AxoRouter can collect, curate, and enrich the data: it automatically identifies your log sources and fixes common errors in the incoming data. It also converts the data into a format that best suits the destination to optimize ingestion speed and data quality.
- Destinations: Destinations are your SIEM and storage solutions where the telemetry pipeline delivers your security data.
Your telemetry pipeline can consist of managed and unmanaged components. You can deploy and configure managed components from the Axoflow Console. Axoflow provides several managed components that help you collect or fetch data from your various data sources, or act as routers.
Axoflow Console
Axoflow Console is the data visualization and management UI of Axoflow. Available both as a SaaS and an on-premises solution, it collects and visualizes the metrics received from the pipeline components to provide insight into the details of your telemetry pipeline and the data it processes. It also allows you to deploy and configure AxoRouter instances, configure data flows, and tap into the log flow.
AxoRouter
AxoRouter is a router (aggregator) and data curation engine: it collects all kinds of telemetry and security data and has all the low-level functions you would expect of log-forwarding agents and routers. AxoRouter can also curate and enrich the collected data: it automatically identifies your log sources and fixes common errors in the incoming data: for example, it corrects missing hostnames, invalid timestamps, formatting errors, and so on.
AxoRouter also has a range of zero-maintenance connectors for various networking and security products (for example, switches, firewalls, and web gateways), so it can classify the incoming data (by recognizing the product that is sending it), and apply various data curation and enrichment steps to reduce noise and improve data quality.
Before sending your data to its destination, AxoRouter automatically converts the data into a format that best suits the destination to optimize ingestion speed and data quality. For example, when sending data to Splunk, setting the proper sourcetype and index is essential.
In addition to curating and formatting your data, AxoRouter also collects detailed metrics about the processed data. These real-time metrics give you insight into the status of the telemetry pipeline and its components, providing end-to-end observability into your data flows.

Axolet
Axolet is a monitoring and management agent that integrates with the local log collector (like AxoSyslog, Splunk Connect for Syslog, or syslog-ng) that runs on the data source and provides detailed metrics about the host and its data traffic to the Axoflow Console.
3 - Deployment scenarios
Thanks to its flexible deployment modes, you can quickly insert Axoflow and its processing node (called AxoRouter) transparently into your data pipeline and gain instant benefits:
- Data reduction
- Improved SIEM accuracy
- Configuration UI for routing data
- Automatic data classification
- Metrics about log ingestion, processing, and data drops
- Analytics about the transported data
- Health check, highlighting anomalies
After the first step, you can integrate your pipeline further by deploying Axoflow agents to collect and manage your security data, or by onboarding your existing log collector agents (for example, syslog-ng). Let’s see what these scenarios look like in detail.
3.1 - Axoflow Console deployment
You can use Axoflow Console as a SaaS, or deploy it on premises. We also support hybrid environments.
The Axoflow Console provides the UI for accessing metrics and analytics, deploying and configuring AxoRouter instances, configuring data flows, and so on.
3.2 - Transparent router mode
AxoRouter is a powerful aggregator and data processing engine that can receive data from a wide variety of sources, including:
- OpenTelemetry,
- syslog,
- Windows Event Forwarding, or HTTP.
In transparent mode, you deploy AxoRouter in front of your SIEM and configure your sources or aggregators to send the logs to AxoRouter instead of the SIEM. The transparent deployment method is a quick, minimally invasive way to get instant benefits and value from Axoflow.
Axoflow Console as SaaS

Axoflow Console on premises

3.3 - Router and edge deployment
Axoflow provides agents to collect data from all kinds of sources:
- Kubernetes clusters,
- cloud sources,
- security appliances,
- Linux servers,
- Microsoft Windows hosts.
(If you’d prefer to keep using your existing syslog infrastructure instead of the Axoflow agents, see Onboard existing syslog infrastructure).
You can deploy the AxoRouter data aggregator on Linux and Kubernetes.

Using the Axoflow collector agents gives you:
- Reliable transport: Between its components, Axoflow transports security data using the reliable OpenTelemetry protocol (OTLP) for high performance, and to avoid losing messages.
- Managed components: You can configure, manage, monitor, and troubleshoot all these components from the Axoflow Console. For example, you can sample the data flowing through each component.
- Metrics: Detailed metrics from every collector provide unprecedented insight into the status of your data pipeline.
3.4 - Onboard existing syslog infrastructure
If your organization already has a syslog architecture in place, Axoflow provides ways to reuse it. This allows you to integrate your existing infrastructure with Axoflow, and optionally – in a later phase – replace your log collectors with the agents provided by Axoflow.
Managed AxoRouter deployments

This deployment mode is similar to the previous one, but instead of writing configurations manually, you use the centralized management UI of Axoflow Console to manage your AxoRouter instances. This provides the tightest integration and the most benefits, beyond those of the unmanaged use case.
Unmanaged AxoRouter deployments

In this mode, you install AxoRouter on the data source to replace its local collector agent, and manage it manually. That way you get the functional benefits of using AxoRouter (our aggregator and data curation engine) to collect and classify your data but can manage its configuration as you see fit. This gives you all the benefits of the read-only mode (since AxoRouter includes Axolet as well), and in addition, it provides:
- Advanced and more detailed metrics about the log ingestion, processing, data drops, delays
- More detailed analytics about the transported data
- Access to the FilterX processing engine
- Ability to receive OpenTelemetry data
- Acts as a Windows Event Collector server, allowing you to collect Windows events
- Optimized output for the specific SIEMs
- Data reduction
Read-only mode

In this scenario, you install Axolet on the data source. Axolet is a monitoring (and management) agent that integrates with the local log collector, like AxoSyslog, Splunk Connect for Syslog, or syslog-ng, and sends detailed metrics about the host and its data traffic to the Axoflow Console, where you can monitor them.
4 - Try for free
To see Axoflow in action, you have a number of options:
- Watch some of the product videos on our blog.
- Request a demo sandbox environment. This pre-deployed environment contains a demo pipeline complete with destinations, routers, and sources that generate traffic, and you can check the metrics, use the analytics, modify the flows, and so on to get a feel of using a real Axoflow deployment.
- Request a free evaluation version. That’s an empty cloud deployment you can use for testing and PoC: you can onboard your own pipeline elements, add sources and destinations, and see how Axoflow performs with your data!
- Request a live demo where our engineers show you how Axoflow works, and can discuss your specific use cases in detail.
5 - Getting started
This guide shows you how to get started with Axoflow. You’re going to install AxoRouter, and configure or create a source to send data to AxoRouter. You’ll also configure AxoRouter to forward the received data to your destination SIEM or storage provider. The resulting topology will look something like this:

Why use Axoflow
Using the Axoflow security data pipeline automatically corrects and augments the security data you collect, resulting in high-quality, curated, SIEM-optimized data. It also removes redundant data to reduce storage and SIEM costs. In addition, it automates pipeline configuration and provides metrics and alerts for your telemetry data flows.
Prerequisites
You’ll need:
- An Axoflow subscription, access to a free evaluation version, or an on-premises deployment.
- A data source. This can be any host that you can configure to send syslog or OpenTelemetry data to the AxoRouter instance that you’ll install. If you don’t want to change the configuration of an existing device, you can use a virtual machine or a Docker container on your local computer.
- A host that you’ll install AxoRouter on. This can be a separate Linux host, or a virtual machine running on your local computer. AxoRouter should work on most Red Hat and Debian compatible Linux distributions. For production environments, we recommend using Red Hat 9.
- Access to a supported SIEM or storage provider, like Splunk or Amazon S3. For a quick test of Axoflow, you can use a free Splunk or OpenObserve account as well.
Network access
The hosts must be able to access the following domains related to the Axoflow Console:
Log in to the Axoflow Console
Verify that you have access to the Axoflow Console.
- Open https://<your-tenant-id>.axoflow.io/ in your browser.
- Log in using Google Authentication.
Add a destination
Add the destination where you’re sending your data. For a quick test, you can use a free Splunk or OpenObserve account.
Add a Splunk Cloud destination. For other destinations, see Destinations.
Prerequisites
- Enable the HTTP Event Collector (HEC) on your Splunk deployment if needed. On Splunk Cloud Platform deployments, HEC is enabled by default.
- Create a token for Axoflow to use in the destination. When creating the token, use the syslog source type. For details, see Set up and use HTTP Event Collector in Splunk Web.
- If you’re using AxoRouter, create the indexes where Axoflow sends the log data. Which indexes are needed depends on the sources you have, but create at least the following event indexes: axoflow, infraops, netops, netfw, osnix (for unclassified messages). Check your sources in the Sources section for a detailed list of the indexes their data is sent to.
Steps
- Create a new destination.
  - Open the Axoflow Console.
  - Select Topology.
  - Select + > Destination.
- Configure the destination.
  - Select Splunk.
  - Select Dynamic. This will allow you to set a default index, source, and source type for messages that aren’t automatically identified.
  - Enter your Splunk URL into the Hostname field, for example, <your-splunk-tenant-id>.splunkcloud.com for Splunk Cloud Platform instances.
  - Enter the name of the Default Index. The data will be sent into this index if no other index is set during the processing of the message (based on automatic classification, or by the processing steps of the Flow). Make sure that the index exists in Splunk.
  - Enter the Default Source and Default Source Type. These will be assigned to messages that have no source or source type set during the processing of the message (based on automatic classification, or by the processing steps of the Flow).
  - Enter the token you’ve created into the Token field.
  - Disable the Verify server certificate option unless your deployment has a valid, non-self-signed certificate. Free Splunk Cloud accounts have self-signed certificates.
  - (Optional) You can set other options as needed for your environment. For details, see Splunk Cloud.
  - Select Create.
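Optionally, you can verify the token before using it in Axoflow. The following is an illustrative check using Splunk’s standard HEC endpoint (replace the hostname and token with your own; the -k flag skips certificate verification for self-signed certificates):
curl -k "https://<your-splunk-tenant-id>.splunkcloud.com:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "Axoflow HEC connectivity test", "sourcetype": "syslog"}'
If the token and index are set up correctly, Splunk responds with a Success message.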
Deploy an AxoRouter instance
Deploy an AxoRouter instance that will route, curate, and enrich your log data.
Deploy AxoRouter on Linux. For other platforms, see AxoRouter.
- Select Provisioning > Select type and platform.
- Select the type (AxoRouter) and platform (Linux). The one-liner installation command is displayed.
  If needed, set the Advanced options (for example, proxy settings) to modify the installation parameters. Usually, you don’t have to use advanced options unless the Axoflow support team instructs you to do so.
- Open a terminal on the host where you want to install AxoRouter.
- Run the one-liner, then follow the on-screen instructions.
  Note
  Running the provisioning command with sudo would mask the environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don’t add the sudo command to the provisioning command.
  Example output:
  Do you want to install AxoRouter now? [Y]
  y
  % Total % Received % Xferd Average Speed Time Time Time Current
  Dload Upload Total Spent Left Speed
  100 4142 100 4142 0 0 19723 0 --:--:-- --:--:-- --:--:-- 19818
  Verifying packages...
  Preparing packages...
  axorouter-0.40.0-1.aarch64
  % Total % Received % Xferd Average Speed Time Time Time Current
  Dload Upload Total Spent Left Speed
  100 31.6M 100 31.6M 0 0 2092k 0 0:00:15 0:00:15 --:--:-- 2009k
  Verifying packages...
  Preparing packages...
  axolet-0.40.0-1.aarch64
  Created symlink /etc/systemd/system/multi-user.target.wants/axolet.service → /usr/lib/systemd/system/axolet.service.
  Now continue with onboarding the host on the Axoflow web UI.
- Register the host.
  - Reload the Provisioning page. There should be a registration request for the new AxoRouter deployment. Select ✓.
  - Select Register to register the host. You can add a description and labels (in label:value format) to the host.
  - Select the Topology page. The new AxoRouter instance is displayed.
Add a source
Configure a host to send data to AxoRouter.
Configure a generic syslog host. For sources that are specifically supported by Axoflow, see Sources.
- Log in to your device. You need administrator privileges to perform the configuration.
- If needed, enable syslog forwarding on the device.
- Set AxoRouter as the syslog server. Typically, you can configure the following parameters:
  - Name or IP Address of the syslog server: Set the address of your AxoRouter.
  - Protocol: If possible, set TCP or TLS.
  - Syslog Format: If possible, set RFC5424 (or equivalent), otherwise leave the default.
  - Port: Set a port appropriate for the protocol and syslog format you have configured. By default, AxoRouter accepts data on the following ports:
    - 514 TCP and UDP for RFC3164 (BSD-syslog) formatted traffic.
    - 601 TCP for RFC5424 (IETF-syslog) formatted traffic.
    - 6514 TCP for TLS-encrypted syslog traffic.
    - 4317 TCP for OpenTelemetry log data.
    To receive data on other ports or other protocols, configure the source connectors of the AxoRouter host.
Make sure to enable the ports you’re using on the firewall of your host.
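To quickly check that the device can reach AxoRouter, you can send a hand-crafted test message from the source host. This is a minimal sketch assuming nc is available and 10.0.0.10 stands in for your AxoRouter address; it uses the RFC5424 port listed above:
echo '<134>1 2024-10-07T14:56:47Z my-device myapp - - - AxoRouter connectivity test' | nc -v 10.0.0.10 601
The test message should then show up when you tap the input log flow of your AxoRouter instance (see Tap into the log flow).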
- Add the appliance to Axoflow Console.
- Open the Axoflow Console and select Topology.
- Select + > Source.
- Select your source.
- Enter the parameters of the source, like IP address and FQDN.
- Select Create.
Note
If your syslog source is running syslog-ng, Splunk Connect for Syslog (SC4S), or AxoSyslog as its log forwarder agent, consider installing Axolet on the host and instrumenting the configuration of the log forwarder to receive detailed metrics about the host and the processed data. For details, see Manage and monitor the pipeline.
Add a path
Create a path between the source and the AxoRouter instance.
- Select Topology > + > Path.
- Select your data source in the Source host field.
- Select the target router or aggregator this source is sending its data to in the Target host field, for example, axorouter.
- Select the Target connector. The connector determines how the destination receives the data (for example, using which protocol or port).
- Select Create. The new path appears on the Topology page.

Create a flow
Create a flow to route the traffic from your AxoRouter instance to your destination.
- Select Flows.
- Select Create New Flow.
- Enter a name for the flow, for example, my-test-flow.
- In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
- Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
  By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.
- (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
  - Add a Reduce step to automatically remove redundant and empty fields from your data.
  - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
  - Save the processing steps.
- Select Create.
- The new flow appears in the Flows list.

Check the metrics on the Topology page
Open the Topology page and verify that your AxoRouter instance is connected both to the source and the destination. If it isn’t, select + > Path to connect them.
If you have traffic flowing from the source to your AxoRouter instance, the Topology page shows the amount of data flowing on the path. Click the AxoRouter instance, then select Analytics to visualize the data flow.
Tap into the log flow
Log tapping in Axoflow samples the log flow. You can use labels to filter for specific messages (like ones with parse errors) and tap only those messages. So that you don’t get overwhelmed with events, Axoflow automatically samples the output: if many messages match the selected filter, only a subset is shown (about 1 message per second). Using log tapping, you can quickly troubleshoot both parsing/curation errors and destination ingest (API) errors, and check:
- What was in the original message?
- What is sent in the final payload to the destination?
Tap into the log flow.
- Click your AxoRouter instance on the Topology page, then select ⋮ > Tap log flow.
- Tap into the log flow.
  - To see the input data, select Input log flow > Start.
  - To see the output data, select Output log flow > Start.
  You can use labels to filter the messages and sample only the matching ones.
- When the logs you’re interested in show up, click Stop Log Tap, then click a log message to see its details.
- If you don’t know what the message means, select AI Analytics to ask our AI to interpret it.
Troubleshooting
In case you run into problems, or you’re not getting any data in Splunk, check the logs of your AxoRouter instance:
- Select Topology, then select your AxoRouter instance.
- Select ⋮ > Tap agent logs > Start. Axoflow displays the log messages of AxoRouter. Check the logs for error messages. Some common errors include:
  - Redirected event for unconfigured/disabled/deleted index=netops with source="source::axo" host="host::axosyslog-almalinux" sourcetype="sourcetype::fortigate_event" into the LastChanceIndex. So far received events from 1 missing index(es).: The Splunk index where AxoRouter is trying to send data doesn’t exist. Check which index is missing in the error message and create it in Splunk. (For a list of recommended indices, see the Splunk destination prerequisites.)
  - http: error sending HTTP request; url='https://prd-p-sp2id.splunkcloud.com:8088/services/collector/event/1.0?index=&source=&sourcetype=', error='SSL peer certificate or SSH remote key was not OK', worker_index='0', driver='splunk--flow-axorouter4-almalinux#0', location='/usr/share/syslog-ng/include/scl/splunk/splunk.conf:104:3': Your Splunk deployment uses an invalid or self-signed certificate, and the Verify server certificate option is enabled in the Splunk destination of Axoflow. Either fix the certificate in Splunk, or: select Topology > <your-splunk-destination>, disable Verify server certificate, then select Update.
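To confirm whether the certificate is self-signed, you can inspect what the HEC endpoint presents using the standard openssl client from any host that can reach it (the hostname is illustrative, taken from the error above):
openssl s_client -connect prd-p-sp2id.splunkcloud.com:8088 </dev/null 2>/dev/null | openssl x509 -noout -issuer -subject -dates
If the issuer and subject are identical, the certificate is self-signed.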
6 - Concepts
This section describes the main concepts of Axoflow.
6.1 - Automatic data processing
When you forward data to your SIEM, poor-quality data and malformed logs that lack critical fields like timestamps or hostnames need to be fixed. The usual solutions fix the problem in the SIEM, and involve complex regular expressions, which are difficult to create and maintain. SIEM users often rely on vendors to manage these rules, but support is limited, especially for less popular devices. Axoflow offers a unique solution by automatically processing, curating, and classifying data before it’s sent to the SIEM, ensuring accurate, structured, and optimized data, reducing ingestion costs and improving SIEM performance.
The problem
The main issue related to classification is that many devices send malformed messages: missing timestamp, missing hostname, invalid message format, and so on. Such errors can cause different kinds of problems:
- Log messages are often routed to different destinations based on the sender hostname. Missing or invalid hostnames mean that the message is not attributed to the right host, and often doesn’t arrive at its intended destination.
- Incorrect timestamp or timezone hampers investigations during an incident, resulting in potentially critical data failing to show up (or extraneous data appearing) in queries for a particular period.
- Invalid data can lead to memory leaks or resource overload in the processing software (and to unusable monitoring dashboards) when a sequence number or other rapidly varying field is mistakenly parsed as the hostname, program name, or other low cardinality field.
Overall, they decrease the quality of security data you’re sending to your SIEM tools, which increases false positives, requires secondary data processing to clean, and increases query time – all of which ends up costing firms a lot more.
For instance, this is a log message from a SonicWall firewall appliance:
<133> id=firewall sn=C0EFE33057B0 time="2024-10-07 14:56:47 UTC" fw=172.18.88.37 pri=6 c=1024 m=537 msg="Connection Closed" f=2 n=316039228 src=192.0.0.159:61254:X1: dst=10.0.0.7:53:X3:SIMILDC01 proto=udp/dns sent=59 rcvd=134 vpnpolicy="ELG Main"
A well-formed syslog message should look like this:
<priority>timestamp hostname application: message body
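For comparison, a well-formed RFC3164-style message matching this pattern would look like the following (the values are illustrative):
<134>Oct  7 14:56:47 fw01 sshd[1234]: Accepted password for admin from 10.0.0.5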
As you can see, the SonicWall format is completely invalid after the initial <priority> field. Instead of the timestamp, hostname, and application name comes the free-form part of the message (in this case a whitespace-separated key=value list). Unless you extract the hostname and timestamp from the content of this malformed message, you won’t be able to reconstruct the course of events during a security incident.
Axoflow provides data processing, curation, and classification intelligence that’s built into the data pipeline, so it processes and fixes the data before it’s sent to the SIEM.
Our solution
Our data engine and database solution automatically processes the incoming data: AxoRouter recognizes and classifies the incoming data, applies device-specific fixes for the errors, then enriches and optimizes the formatting for the specific destination (SIEM). This approach has several benefits:
- Cost reduction: All data is processed before it’s sent to the SIEM. That way, we can automatically reduce the amount of data sent to the SIEM (for example, by removing empty and redundant fields), cutting your data ingestion costs.
- Structured data: Axoflow recognizes the format of the incoming data payload (for example, JSON, CSV, LEEF, free text), and automatically parses the payload into a structured map. This allows us to have detailed, content-based metrics and alerts, and also makes it easy for you to add custom transformations if needed.
- SIEM-independent: The structured data representation allows us to support multiple SIEMs (and other destinations) and optimize the data for every destination.
- Performance: Compared to the commonly used regular expressions, it’s more robust and has better performance, allowing you to process more data with fewer resources.
- Maintained by Axoflow: We maintain the database; you don’t have to work with it. This includes updates for new product versions and adding new devices. We proactively monitor and check the new releases of main security devices for logging-related changes and update our database. (If something’s not working as expected, you can easily submit log samples and we’ll fix it ASAP.) Currently, we have over 80 application adapters in our database.
The automatic classification and curation also adds labels and metadata that can be used to make decisions and route your data. Messages with errors are also tagged with error-specific tags. For example, you can easily route all firewall and security logs to your Splunk deployment, and exclude logs (like debug logs) that have no security relevance and shouldn’t be sent to the SIEM.
Axoflow Console allows you to quickly drill down to find log flows with issues, and to tap into the log flow and see samples of the specific messages that are processed, along with the related parsing information, like tags that describe the errors of invalid messages.
- message.utf8_sanitized: The message is not valid UTF-8.
- syslog.missing_timestamp: The message has no timestamp.
- syslog.invalid_hostname: The hostname field doesn’t seem to be valid, for example, it contains invalid characters.
- syslog.missing_pri: The priority (PRI) field is missing from the message.
- syslog.unexpected_framing: An octet count was found in front of the message, suggesting invalid framing.
- syslog.rfc3164_missing_header: The date and the host are missing from the message – practically that’s the entire header of RFC3164-formatted messages.
- syslog.rfc5424_unquoted_sdata_value: The message contains an incorrectly quoted RFC5424 SDATA field.
- message.parse_error: Some other parsing error occurred.

6.2 - Host attribution and inventory
Axoflow’s built-in inventory solution enriches your security data with critical metadata (like the origin host) so you can pinpoint the exact source of every data entry, enabling precise, label-based routing and more informed security decisions.
Enterprises and organizations collect security data (like syslog and other event data) from various data sources, including network devices, security devices, servers, and so on. When looking at a particular log entry during a security incident, it’s not always trivial to determine what generated it. Was it an appliance or an application? Which team is the owner of the application or device? If it was a network device like a switch or a Wi-Fi access point (of which even medium-sized organizations have dozens), can you tell which one it was, and where is it located?
Cloud-native environments, like Kubernetes have addressed this issue using resource metadata. You can attach labels to every element of the infrastructure: containers, pods, nodes, and so on, to include region, role, owner or other custom metadata. When collecting log data from the applications running in containers, the log collector agent can retrieve these labels and attach them to the log data as metadata. This helps immensely to associate routing decisions and security conclusions with the source systems.
In non-cloud environments, like traditionally operated physical or virtual machine clusters, the logs of applications, appliances, and other data sources lack such labels. To identify the log source, you’re stuck with the sender’s IP address, the sender’s hostname, and in some cases the path and filename of the log file, plus whatever information is available in the log message.
Why isn’t the IP address or hostname enough?
- Source IP addresses are poor identifiers, as they aren’t strongly tied to the host, especially when they’re dynamically allocated using DHCP or when a group of senders are behind a NAT and have the same address.
- Some organizations encode metadata into the hostname or DNS record. However, compared to the volume of log messages, DNS resolving is slow. Also, the DNS record is available at the source, but might not be available at the log aggregation device or the log server. As a result, this data is often lost.
- Many data sources and devices omit important information from their messages. For example, the log messages of Cisco routers by default omit the hostname. (They can be configured to send their hostname, but unfortunately, often they aren’t.) You can resolve their IP address to obtain the hostname of the device, but that leads back to the problem in the previous point.
- In addition, all the above issues are rooted in human configuration practices, which tend to be the main cause of anomalies in the system.
Correlating data to data source
Some SIEMs (like IBM QRadar) rely on the sender IP address to identify the data source. Others, like Splunk, delegate the task of providing metadata to the collector agent, but log sources often don’t supply metadata, and even if they do, the information is based on IP address or DNS.

Certain devices, like SonicWall firewalls, include a unique device ID in the content of their log messages. Having access to this ID makes attribution straightforward, but logging solutions rarely extract such information. AxoRouter does.
Axoflow builds an inventory to match the messages in your data flow to the available data sources, based on data including:
- IP address, host name, and DNS records of the source (if available),
- serial number, device ID, and other unique identifiers extracted from the messages, and
- metadata based on automatic classification of the log messages, like product and vendor name.
Axoflow (or more precisely, AxoRouter) classifies the processed log data to identify common vendors and products, and automatically applies labels to attach this information.
You can also integrate the telemetry pipeline with external asset management systems to further enrich your security data with custom labels and additional contextual information about your data sources.
6.3 - Policy-based routing
Axoflow’s host attribution gives you unprecedented possibilities to route your data. You can tell exactly what kind of data you want to send where. Not only based on technical parameters like sender IP or application name, but using a much more fine-grained inventory that includes information about the devices, their roles, and their location. For example:
- The vendor and the product (for example, a SonicWall firewall)
- The physical location of the device (geographical location, rack number)
- Owner of the device (organization/department/team)

Axoflow automatically adds metadata labels based on information extracted from the data payload, but you can also add static custom labels to hosts manually, or dynamically using flows.
Paired with the information from the content of the messages, all this metadata allows you to formulate high-level, declarative policies and alerts using the metadata labels, for example:
- Route the logs of devices to the owner’s Splunk index.
- Route every firewall and security log to a specific index.
- Let the different teams manage and observe their own devices.
- Granular access control based on relevant log content (for example, identify a proxy log-stream based on the upstream address), instead of just device access.
- Ensure that specific logs won’t cross geographic borders, violating GDPR or other compliance requirements.

6.4 - Reliable transport
Between its components, Axoflow transports data using the reliable OpenTelemetry protocol (OTLP) for high performance, and to avoid losing messages. Under the hood, OpenTelemetry uses gRPC, and has superior performance compared to other log transport protocols, consumes less bandwidth (because it uses protocol buffers), and also provides features like:
- on-the-wire compression
- authentication
- application layer acknowledgement
- batched data sending
- multi-worker scalability on the client and the server.
7 - Manage and monitor the pipeline
7.1 - Provision pipeline elements
The following sections describe how to register a logging host into the Axoflow Console.
Hosts with supported collectors
If the host is running one of the following log collector agents, you can install Axolet on the host to receive detailed metrics about the host, the agent, and the data flow the agent processes.
- Install Axolet on the host. For details, see Axolet.
- Configure the log collector agent of the host to integrate with Axolet. For details, see the following pages:
7.1.1 - AxoRouter
7.1.1.1 - Install AxoRouter on Kubernetes
To install AxoRouter on a Kubernetes cluster, complete the following steps. For other platforms, see AxoRouter.
Prerequisites
Kubernetes version 1.29 and newer
Minimal resource requirements
- CPU: at least 100m
- Memory: 256MB
- Storage: 8Gi
Network access
The hosts must be able to access the following domains related to the Axoflow Console:
Install AxoRouter
- Open the Axoflow Console.
- Select Provisioning.
- Select the Host type > AxoRouter > Kubernetes. The one-liner installation command is displayed.
- Open a terminal and set your Kubernetes context to the cluster where you want to install AxoRouter.
- Run the one-liner, and follow the on-screen instructions.
  Current kubernetes context: minikube
  Server Version: v1.28.3
  Installing to new namespace: axorouter
  Do you want to install AxoRouter now? [Y]
- Register the host.
  - Reload the Provisioning page. There should be a registration request for the new AxoRouter deployment. Select ✓.
  - Select Register to register the host. You can add a description and labels (in label:value format) to the host.
  - Select the Topology page. The new AxoRouter instance is displayed.
Create a flow
- If you haven’t already done so, create a new destination.
- Create a flow to connect the new AxoRouter to the destination.
  - Select Flows.
  - Select Create New Flow.
  - Enter a name for the flow, for example, my-test-flow.
  - In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
  - Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
    By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.
  - (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
    - Add a Reduce step to automatically remove redundant and empty fields from your data.
    - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
    - Save the processing steps.
  - Select Create.
- The new flow appears in the Flows list.
Send logs to AxoRouter
By default, AxoRouter accepts data on the following ports:
- 514 TCP and UDP for RFC3164 (BSD-syslog) formatted traffic.
- 601 TCP for RFC5424 (IETF-syslog) formatted traffic.
- 6514 TCP for TLS-encrypted syslog traffic.
- 4317 TCP for OpenTelemetry log data.
To receive data on other ports or other protocols, configure the source connectors of the AxoRouter host.
Make sure to enable the ports you’re using on the firewall of your host.
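If you don’t have a source ready yet, you can send a quick test message from inside the cluster. This is a minimal sketch: the axorouter namespace matches the installer default, but the Service name is an assumption, so check what the Helm chart actually created:
kubectl -n axorouter get svc   # look up the AxoRouter Service name
kubectl -n axorouter run syslog-test --rm -it --restart=Never --image=busybox -- \
  sh -c 'echo "<134>1 2024-10-07T14:56:47Z test-host myapp - - - test message" | nc <axorouter-service> 601'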
7.1.1.1.1 - Advanced installation options
When installing AxoRouter, you can set a number of advanced options if needed for your environment. Setting the advanced options in the Axoflow Console automatically updates the one-liner command that you can copy and run.

Alternatively, before running the one-liner you can use one of the following methods:
Proxy settings
Use the http_proxy=, https_proxy=, and no_proxy= parameters to configure HTTP proxy settings for the installer. To avoid using the proxy for the Axolet service, enable the AXOLET_AVOID_PROXY parameter as well. Lowercase variable names are preferred because they work universally.
Installation options
You can pass the following parameters to the installation script as environment variables, or as URL parameters.
Note
Running the provisioning command with sudo would mask the environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don’t add the sudo command to the provisioning command.
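For example, to pass parameters as environment variables, export them in the shell before running the installer. This is a sketch; the values are illustrative, and the final line stands for the command copied from the Axoflow Console:
export NAMESPACE=logging
export HELM_RELEASE_NAME=axorouter-test
# ...now paste and run the one-liner copied from the Axoflow Console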
AxoRouter image override
- Default value: empty string
- Environment variable: IMAGE
- URL parameter: image
- Description: Deploy the specified AxoRouter image.

Helm chart
- Default value: oci://us-docker.pkg.dev/axoflow-registry-prod/axoflow/charts/axorouter-syslog
- Environment variable: HELM_CHART
- URL parameter: helm_chart
- Description: The path or URL of the AxoRouter Helm chart.

Helm chart version
- Default value: Current Axoflow version
- Environment variable: HELM_CHART_VERSION
- URL parameter: helm_chart_version
- Description: Deploy the specified version of the Helm chart.

Helm extra arguments
- Default value: empty string
- Environment variable: HELM_EXTRA_ARGS
- URL parameter: helm_extra_args
- Description: Additional arguments passed to Helm during the installation.

Helm release name
- Default value: axorouter
- Environment variable: HELM_RELEASE_NAME
- URL parameter: helm_release_name
- Description: Name of the Helm release.

Image repository
- Default value: us-docker.pkg.dev/axoflow-registry-prod/axoflow/axorouter
- Environment variable: IMAGE_REPO
- URL parameter: image_repo
- Description: Deploy AxoRouter from a custom image repository.

Image version
- Default value: Current Axoflow version
- Environment variable: IMAGE_VERSION
- URL parameter: image_version
- Description: Deploy the specified AxoRouter version.

Namespace
- Default value: axorouter
- Environment variable: NAMESPACE
- URL parameter: namespace
- Description: The namespace where AxoRouter is installed.

Axolet parameters

API server host
- URL parameter: api_server_host
- Description: Override the host part of the API endpoint for the host.

Axolet executable path
- Environment variable: AXOLET_EXECUTABLE
- URL parameter: axolet_executable
- Description: Path to the Axolet executable.

Axolet image override
- Default value: empty string
- Environment variable: AXOLET_IMAGE
- URL parameter: axolet_image
- Description: Deploy the specified Axolet image.

Axolet image repository
- Default value: us-docker.pkg.dev/axoflow-registry-prod/axoflow/axolet
- Environment variable: AXOLET_IMAGE_REPO
- URL parameter: axolet_image_repo
- Description: Deploy Axolet from a custom image repository.

Axolet image version
- Default value: Current Axoflow version
- Environment variable: AXOLET_IMAGE_VERSION
- URL parameter: axolet_image_version
- Description: Deploy the specified Axolet version.

Initial GUID
- URL parameter: initial_guid
- Description: Set a static GUID.
7.1.1.2 - Install AxoRouter on Linux
AxoRouter is a key building block of Axoflow that collects, aggregates, transforms and routes all kinds of telemetry and security data automatically. AxoRouter for Linux includes a Podman container running AxoSyslog, Axolet, and other components.
To install AxoRouter on a Linux host, complete the following steps. For other platforms, see AxoRouter.
What the install script does
The AxoRouter installer first installs a software package that deploys a systemd service. When you deploy AxoRouter, you run a command that installs the required software packages, configures them, and sets up the connection with Axoflow.
The installer script installs the axorouter packages, then executes the configure-axorouter command with the right parameters. (If you’ve installed the packages in advance, the installer script only executes the configure-axorouter command.)
The configure-axorouter command is designed to be run as root (sudo), but you can configure AxoRouter to run as a non-root user. The configure-axorouter command is executed with a configuration snippet on its standard input which contains a token required for registering into the management platform.
The script performs the following main steps:
- Generates a unique identifier (GUID).
- Initiates a cryptographic handshake process to Axoflow.
- Creates the initial configuration file for AxoRouter under /etc/axorouter/.
- Installs a statically linked executable to /usr/local/bin/axorouter.
- Creates the systemd service unit file /etc/systemd/system/axorouter.service, then enables and starts that service.
- The service waits for an approval on Axoflow. Once you approve the host registration request, Axoflow issues a client certificate to AxoRouter.
- AxoRouter starts to send telemetry data to Axoflow, and keeps sending it as long as the agent is registered and the certificate is valid.
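After the script finishes, you can verify the artifacts it created with standard tools (a quick sketch based on the paths listed above):
ls /etc/axorouter/                      # initial configuration files
ls -l /usr/local/bin/axorouter          # statically linked executable
systemctl --no-pager status axorouter   # systemd service created by the installer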
Prerequisites
Minimal resource requirements
- CPU: at least 100m
- Memory: 256MB
- Storage: 8Gi
Network access
The hosts must be able to access the following domains related to the Axoflow Console:
Install AxoRouter
- Select Provisioning > Select type and platform.
- Select the type (AxoRouter) and platform (Linux). The one-liner installation command is displayed.
  If needed, set the Advanced options (for example, proxy settings) to modify the installation parameters. Usually, you don’t have to use advanced options unless the Axoflow support team instructs you to do so.
- Open a terminal on the host where you want to install AxoRouter.
- Run the one-liner, then follow the on-screen instructions.
  Note
  Running the provisioning command with sudo would mask the environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don’t add the sudo command to the provisioning command.
  Example output:
  Do you want to install AxoRouter now? [Y]
  y
  % Total % Received % Xferd Average Speed Time Time Time Current
  Dload Upload Total Spent Left Speed
  100 4142 100 4142 0 0 19723 0 --:--:-- --:--:-- --:--:-- 19818
  Verifying packages...
  Preparing packages...
  axorouter-0.40.0-1.aarch64
  % Total % Received % Xferd Average Speed Time Time Time Current
  Dload Upload Total Spent Left Speed
  100 31.6M 100 31.6M 0 0 2092k 0 0:00:15 0:00:15 --:--:-- 2009k
  Verifying packages...
  Preparing packages...
  axolet-0.40.0-1.aarch64
  Created symlink /etc/systemd/system/multi-user.target.wants/axolet.service → /usr/lib/systemd/system/axolet.service.
  Now continue with onboarding the host on the Axoflow web UI.
- Register the host.
  - Reload the Provisioning page. There should be a registration request for the new AxoRouter deployment. Select ✓.
  - Select Register to register the host. You can add a description and labels (in label:value format) to the host.
  - Select the Topology page. The new AxoRouter instance is displayed.
Create a flow
- If you haven’t already done so, create a new destination.
- Create a flow to connect the new AxoRouter to the destination.
  - Select Flows.
  - Select Create New Flow.
  - Enter a name for the flow, for example, my-test-flow.
  - In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
  - Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
    By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.
  - (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
    - Add a Reduce step to automatically remove redundant and empty fields from your data.
    - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
    - Save the processing steps.
  - Select Create.
- The new flow appears in the Flows list.
Send logs to AxoRouter
Configure your hosts to send data to AxoRouter.
- For appliances that are specifically supported by Axoflow, see Sources.
- For other appliances and generic Linux devices, see Generic tips.
- For a quick test without an actual source, you can also do the following (requires nc to be installed on the AxoRouter host):
  - Open the Axoflow Console, select Topology, then select the AxoRouter instance you’ve deployed.
  - Select ⋮ > Tap log flow > Input log flow. Select Start.
  - Open a terminal on your AxoRouter host.
  - Run the following command to send 120 test messages (2 per second) in a loop to AxoRouter:
for i in `seq 1 120`; do echo "<165> fortigate date=$(date -u +%Y-%m-%d) time=$(date -u +"%H:%M:%S%Z") devname=us-east-1-dc1-a-dmz-fw devid=FGT60D4614044725 logid=0100040704 type=event subtype=system level=notice vd=root logdesc=\"System performance statistics\" action=\"perf-stats\" cpu=2 mem=35 totalsession=61 disk=2 bandwidth=158/138 setuprate=2 disklograte=0 fazlograte=0 msg=\"Performance statistics: average CPU: 2, memory: 35, concurrent sessions: 61, setup-rate: 2\""; sleep 0.5; done | nc -v 127.0.0.1 514
Alternatively, you can send logs in an endless loop:
while true; do echo "<165> fortigate date=$(date -u +%Y-%m-%d) time=$(date -u +"%H:%M:%S%Z") devname=us-east-1-dc1-a-dmz-fw devid=FGT60D4614044725 logid=0100040704 type=event subtype=system level=notice vd=root logdesc=\"System performance statistics\" action=\"perf-stats\" cpu=2 mem=35 totalsession=61 disk=2 bandwidth=158/138 setuprate=2 disklograte=0 fazlograte=0 msg=\"Performance statistics: average CPU: 2, memory: 35, concurrent sessions: 61, setup-rate: 2\""; sleep 1; done | nc -v 127.0.0.1 514
Manage AxoRouter
This section describes how to start, stop and check the status of the AxoRouter service on Linux.
Start AxoRouter
To start AxoRouter, execute the following command:
systemctl start axorouter
If the service starts successfully, no output is displayed.
The following message indicates that AxoRouter cannot start (see Check AxoRouter status):
Job for axorouter.service failed because the control process exited with error code. See `systemctl status axorouter.service` and `journalctl -xe` for details.
Stop AxoRouter
To stop AxoRouter:
- Execute the following command.
  systemctl stop axorouter
- Check the status of the AxoRouter service (see Check AxoRouter status).
Restart AxoRouter
To restart AxoRouter, execute the following command.
systemctl restart axorouter
Reload the configuration without restarting AxoRouter
To reload the configuration file without restarting AxoRouter, execute the following command.
systemctl reload axorouter
Check the status of AxoRouter service
To check the status of the AxoRouter service:
- Execute the following command.
  systemctl --no-pager status axorouter
- Check the Active: field, which shows the status of the AxoRouter service. The following statuses are possible:
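For example, the output for a healthy service looks similar to the following (abridged; the unit file path matches the one created by the installer):
● axorouter.service - AxoRouter
     Loaded: loaded (/etc/systemd/system/axorouter.service; enabled)
     Active: active (running) since Mon 2024-10-07 14:56:47 UTC; 2h ago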
7.1.1.2.1 - Advanced installation options
When installing AxoRouter, you can set a number of advanced options if needed for your environment. Setting the advanced options in the Axoflow Console automatically updates the one-liner command that you can copy and run.

Alternatively, before running the one-liner you can use one of the following methods:
Proxy settings
Use the HTTP proxy, HTTPS proxy, and No proxy parameters to configure HTTP proxy settings for the installer. To avoid using the proxy for the Axolet service, enable the Avoid proxy parameter as well. Lowercase variable names are preferred because they work universally.
Installation options
You can pass the following parameters to the installation script as environment variables, or as URL parameters.
Note
Running the provisioning command with sudo would mask the environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don’t add the sudo command to the provisioning command.
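For example, to run the installer through a proxy, you can export the documented variables before running the one-liner. This is a sketch; the proxy address is illustrative, and the final line stands for the command copied from the Axoflow Console:
export AXO_HTTPS_PROXY=http://proxy.example.com:3128
export AXO_NO_PROXY=localhost,127.0.0.1
# ...now paste and run the one-liner copied from the Axoflow Console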
AxoRouter capabilities
|
|
Default value: |
- Default value: CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SYSLOG CAP_BPF
- Environment variable: AXO_AXOROUTER_CAPS
- URL parameter: axorouter_caps

Description: Capabilities added to the AxoRouter container.

AxoRouter config mount path

- Default value: /etc/axorouter/user-config
- Environment variable: AXO_AXOROUTER_CONFIG_MOUNT_INSIDE
- URL parameter: axorouter_config_mount_inside

Description: Mount path for custom user configuration.

AxoRouter image override

- Default value: us-docker.pkg.dev/axoflow-registry-prod/axoflow/axorouter
- Environment variable: AXO_IMAGE
- URL parameter: image

Description: Deploy the specified AxoRouter image.

Extra arguments

- Default value: empty string
- Environment variable: AXO_PODMAN_ARGS
- URL parameter: extra_args

Description: Additional arguments passed to the AxoRouter container.

Image repository

- Default value: us-docker.pkg.dev/axoflow-registry-prod/axoflow/axorouter
- Environment variable: AXO_IMAGE_REPO
- URL parameter: image_repo

Description: Deploy AxoRouter from a custom image repository.

Image version

- Default value: Current Axoflow version
- Environment variable: AXO_IMAGE_VERSION
- URL parameter: image_version

Description: Deploy the specified AxoRouter version.

Install package

- Default value: auto
- Available values: auto, deb, rpm, tar, none
- Environment variable: AXO_INSTALL_PACKAGE
- URL parameter: install_package

Description: File format of the installer package.

Start router

- Default value: true
- Available values: true, false
- Environment variable: AXO_START_ROUTER
- URL parameter: start_router

Description: Start AxoRouter after installation.

Axolet parameters

API server host

- Default value:
- Environment variable:
- URL parameter: api_server_host

Description: Override the host part of the API endpoint for the host.

Avoid proxy

- Default value: false
- Available values: true, false
- Environment variable: AXO_AVOID_PROXY
- URL parameter: avoid_proxy

Description: Do not use a proxy for the Axolet process.

Axolet capabilities

- Default value: CAP_SYS_PTRACE CAP_SYS_CHROOT
- Environment variable: AXO_CAPS
- URL parameter: caps

Description: Capabilities added to the Axolet service.

Configuration directory

- Default value: /etc/axolet
- Environment variable: AXO_CONFIG_DIR
- URL parameter: config_dir

Description: The directory where the configuration files are stored.

HTTP proxy

- Default value: empty string
- Environment variable: AXO_HTTP_PROXY
- URL parameter: http_proxy

Description: Use a proxy to access the Axoflow Console from the host.

HTTPS proxy

- Default value: empty string
- Environment variable: AXO_HTTPS_PROXY
- URL parameter: https_proxy

Description: Use a proxy to access the Axoflow Console from the host.

No proxy

- Default value: empty string
- Environment variable: AXO_NO_PROXY
- URL parameter: no_proxy

Description: Comma-separated list of hosts that shouldn't use a proxy to access the Axoflow Console from the host.

Overwrite config

- Default value: false
- Available values: true, false
- Environment variable: AXO_CONFIG_OVERWRITE
- URL parameter: config_overwrite

Description: Overwrite the configuration when reinstalling the service.

Service group

- Default value: root
- Environment variable: AXO_GROUP
- URL parameter: group

Description: The group running the Axolet service.

Service user

- Default value: root
- Environment variable: AXO_USER
- URL parameter: user

Description: The user running the Axolet service.

Start service

- Default value: true
- Available values: true, false
- Environment variable: AXO_START
- URL parameter: start

Description: Start the Axolet service after installation.

WEC parameters

These parameters are related to the Windows Event Collector server that can be run on AxoRouter. For details, see Windows Event Collector (WEC).

WEC image repository

- Default value: us-docker.pkg.dev/axoflow-registry-prod/axoflow/axorouter-wec
- Environment variable: AXO_WEC_IMAGE_REPO
- URL parameter: wec_image_repo

Description: Deploy the Windows Event Collector server from a custom image repository.

WEC image version

- Default value: Current Axoflow version
- Environment variable: AXO_WEC_IMAGE_VERSION
- URL parameter: wec_image_version

Description: Deploy the specified Windows Event Collector server version.
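For example, you can set these parameters as environment variables in a root shell before running the provisioning one-liner, or append them as URL parameters to the installer URL (a sketch with hypothetical values; copy the actual one-liner from the Provisioning page):

# As environment variables, exported before the one-liner:
export AXO_IMAGE_VERSION=1.2.3
export AXO_START_ROUTER=false
curl ... # run the one-liner copied from the Provisioning page

# Or as URL parameters appended to the installer URL in the one-liner:
#   ...?image_version=1.2.3&start_router=false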
7.1.1.2.2 - Run AxoRouter as non-root
To run AxoRouter as a non-root user, set the AXO_USER and AXO_GROUP environment variables to the user's username and group name on the host where you want to deploy AxoRouter. For details, see Advanced installation options.
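For example, to run AxoRouter as a hypothetical syslogng user and group, export the variables in the (root) shell before running the provisioning one-liner:

export AXO_USER=syslogng
export AXO_GROUP=syslogng
curl ... # run the one-liner copied from the Provisioning page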
Operators must have access to the following commands:

- /usr/bin/systemctl * axolet.service: Controls the axolet.service systemd unit. Usually * is start, stop, restart, enable, or status. Used by the operators for troubleshooting.
- /usr/local/bin/configure-axolet: Creates the initial axolet configuration and enables/starts the axolet service. Executed by the bootstrap script.
- Command to install and upgrade the axolet package. Executed by the bootstrap script if the packages aren't already installed.
  - On RPM-based Linux distributions: /usr/bin/rpm -Uv axo*.rpm
  - On DEB-based Linux distributions: /usr/bin/dpkg -i axo*.deb
- /usr/bin/systemctl * axorouter.service: Controls the axorouter.service systemd unit. Usually * is start, stop, restart, enable, or status. Used by the operators for troubleshooting.
- /usr/local/bin/configure-axorouter: Creates the initial axorouter configuration and enables/starts the axorouter service. Executed by the bootstrap script.
- Command to install and upgrade the axorouter and the axolet packages. Executed by the bootstrap script if the packages aren't already installed.
  - On RPM-based Linux distributions: /usr/bin/rpm -Uv axo*.rpm
  - On DEB-based Linux distributions: /usr/bin/dpkg -i axo*.deb
You can permit the syslogng user to run these commands by creating the appropriate sudoers entries for your package manager. On RPM-based distributions:

sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for rpm installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/rpm -Uv axo*.rpm
A

sudo tee /etc/sudoers.d/configure-axorouter <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axorouter
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axorouter.service
# for rpm installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/rpm -Uv axo*.rpm
A

On DEB-based distributions:

sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for deb installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/dpkg -i axo*.deb
A

sudo tee /etc/sudoers.d/configure-axorouter <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axorouter
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axorouter.service
# for deb installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/dpkg -i axo*.deb
A
7.1.2 - AxoSyslog
Onboarding allows you to collect metrics about the host, display the host on the Topology page, and to tap into the log flow.
Onboarding requires you to modify the host and the configuration of the logging agent running on the host.
- Level 1: Install Axolet on the host. Axolet collects metrics from the host and sends them to the Axoflow Console, so you can check host-level metrics on the Metrics & Health page of the host. It also displays the host on the Topology page.
- Level 2: Instrument the configuration of the logging agent to provide detailed metrics about the traffic flow. This allows you to display data about the host on the Analytics page.
- Level 3: Instrument the configuration of the logging agent to allow you to access the logs of the logging agent and to tap into the log flow from the Axoflow Console. The exact steps for this integration step depend on the configuration of your logging agent. Contact us so our professional services can help you with the integration.
To onboard an existing AxoSyslog instance into Axoflow, complete the following steps.
-
Install Axolet on the host, then approve its registration on the Provisioning page of the Axoflow Console.
-
The AxoSyslog host is now visible on the Topology page of the Axoflow Console as a source.
-
If the AxoRouter instance or the destination that this host sends data to is already added to the Axoflow Console, add a path to connect the host to it.
-
Select Topology > + > Path.

-
Select your data source in the Source host field.

-
Select the target router or aggregator this source is sending its data to in the Target host field, for example, axorouter
.
-
Select the Target connector. The connector determines how the destination receives the data (for example, using which protocol or port).
-
Select Create. The new path appears on the Topology page.

-
Access the AxoSyslog host and edit the configuration of AxoSyslog. Set the statistics-related global options like this (if the options block already exists, add these lines to the bottom of the block):

options {
  stats(
    level(2)
    freq(0) # Inhibit statistics output to stdout
  );
};
-
(Optional) To get detailed metrics and analytics about the traffic that flows through the host, instrument your AxoSyslog configuration as follows:
Note
You can use Axolet with an un-instrumented AxoSyslog configuration file, but that limits the available metrics to host statistics (for example, disk, memory, and queue information); you won't have access to data about the actual traffic flowing through the host. To collect traffic-related metrics, instrument the configuration with metrics-probe() stanzas. The example below shows how to instrument the configuration to highlight common macros such as $HOST and $PROTOCOL. If you want to customize the collected metrics or need help with the instrumentation, contact us.
-
Download the following configuration snippet to the AxoSyslog host, for example, as /etc/syslog-ng/conf.d/axoflow-instrumentation.conf
.
-
Include it at the top of your configuration file:
@version: current
@include "axoflow-instrumentation.conf"
-
Edit every destination statement to include a parser { metrics-output(destination(<custom-ID-for-the-destination>)); }; line, for example:

destination d_file {
  channel {
    parser { metrics-output(destination(my-file-destination)); };
    destination { file("/dev/null" persist-name("d_s3")); };
  };
};
-
Reload the configuration of AxoSyslog.
systemctl reload syslog-ng
7.1.3 - Splunk Connect for Syslog (SC4S)
Onboarding allows you to collect metrics about the host, display the host on the Topology page, and to tap into the log flow.
Onboarding requires you to modify the host and the configuration of the logging agent running on the host.
- Level 1: Install Axolet on the host. Axolet collects metrics from the host and sends them to the Axoflow Console, so you can check host-level metrics on the Metrics & Health page of the host. It also displays the host on the Topology page.
- Level 2: Instrument the configuration of the logging agent to provide detailed metrics about the traffic flow. This allows you to display data about the host on the Analytics page.
- Level 3: Instrument the configuration of the logging agent to allow you to access the logs of the logging agent and to tap into the log flow from the Axoflow Console. The exact steps for this integration step depend on the configuration of your logging agent. Contact us so our professional services can help you with the integration.
To generate metrics for the Axoflow platform from an existing Splunk Connect for Syslog (SC4S) instance, you need to configure SC4S to generate these metrics. Complete the following steps.
-
If you haven’t already done so, install Axolet on the host, then approve its registration on the Provisioning page of the Axoflow Console.
-
Download the following code snippet as axoflow-instrumentation.conf
.
-
If you are running SC4S under Podman or Docker, copy the file into the /opt/sc4s/local/config/destinations directory. For other deployment methods the path might differ; check the SC4S documentation for details.
-
Check if the metrics are appearing, for example, run the following command on the SC4S host:
syslog-ng-ctl stats prometheus | grep classified
7.1.4 - syslog-ng
Onboarding allows you to collect metrics about the host, display the host on the Topology page, and to tap into the log flow.
Onboarding requires you to modify the host and the configuration of the logging agent running on the host.
- Level 1: Install Axolet on the host. Axolet collects metrics from the host and sends them to the Axoflow Console, so you can check host-level metrics on the Metrics & Health page of the host. It also displays the host on the Topology page.
- Level 2: Instrument the configuration of the logging agent to provide detailed metrics about the traffic flow. This allows you to display data about the host on the Analytics page.
- Level 3: Instrument the configuration of the logging agent to allow you to access the logs of the logging agent and to tap into the log flow from the Axoflow Console. The exact steps for this integration step depend on the configuration of your logging agent. Contact us so our professional services can help you with the integration.
To onboard an existing syslog-ng instance into Axoflow, complete the following steps.
-
Install Axolet on the host, then approve its registration on the Provisioning page of the Axoflow Console.
-
The syslog-ng host is now visible on the Topology page of the Axoflow Console as a source.
-
If the AxoRouter instance or the destination that this host sends data to is already added to the Axoflow Console, add a path to connect the host to it.
-
Select Topology > + > Path.

-
Select your data source in the Source host field.

-
Select the target router or aggregator this source is sending its data to in the Target host field, for example, axorouter
.
-
Select the Target connector. The connector determines how the destination receives the data (for example, using which protocol or port).
-
Select Create. The new path appears on the Topology page.

-
Access the syslog-ng host and edit the configuration of syslog-ng. Set the statistics-related global options like this (if the options block already exists, add these lines to the bottom of the block):

options {
  stats-level(2);
  stats-freq(0); # Inhibit statistics output to stdout
};
-
(Optional) To get detailed metrics and analytics about the traffic that flows through the host, instrument your syslog-ng configuration as follows:
Note
You can use Axolet with an un-instrumented syslog-ng configuration file, but that limits the available metrics to host statistics (for example, disk, memory, and queue information); you won't have access to data about the actual traffic flowing through the host. To collect traffic-related metrics, instrument the configuration with metrics-probe() stanzas. The example below shows how to instrument the configuration to highlight common macros such as $HOST and $PROTOCOL. If you want to customize the collected metrics or need help with the instrumentation, contact us.
-
Download the following configuration snippet to the syslog-ng host, for example, as /etc/syslog-ng/conf.d/axoflow-instrumentation.conf
.
-
Include it at the top of your configuration file:
@version: current
@include "axoflow-instrumentation.conf"
-
Edit every destination statement to include a parser { metrics-output(destination(<custom-ID-for-the-destination>)); }; line, for example:

destination d_file {
  channel {
    parser { metrics-output(destination(my-file-destination)); };
    destination { file("/dev/null" persist-name("d_s3")); };
  };
};
-
Reload the configuration of syslog-ng.
systemctl reload syslog-ng
7.1.5 - Axolet
Axolet is the agent software for Axoflow. Its primary purpose is to discover the log collector services that are running on the host (for example, AxoSyslog or SC4S), and report their operating statistics (counters) to Axoflow.
7.1.5.1 - Install Axolet
Axolet is the agent software for Axoflow. Its primary purpose is to discover the log collector services that are running on the host (for example, AxoSyslog or SC4S), and report their operating statistics (counters) to Axoflow.
It is simple to install Axolet on individual hosts, or collectively via orchestration tools such as Chef or Puppet.
What the install script does
When you deploy Axolet, you run a command that installs the required software packages, configures them and sets up the connection with Axoflow.
The installer script installs the axolet packages, then executes the configure-axolet command with the right parameters. (If you've installed the packages in advance, the installer script only executes the configure-axolet command.)
The configure-axolet command is designed to be run as root (sudo), but you can configure axolet to run as a non-root user. The configure-axolet command is executed with a configuration snippet on its standard input, which contains a token required for registering with the management platform.
The script performs the following main steps:
- Generates a unique identifier (GUID).
- Initiates a cryptographic handshake process to Axoflow.
- Creates the initial configuration file for Axolet under /etc/axolet/.
- Installs a statically linked executable to /usr/local/bin/axolet.
- Creates the systemd service unit file /etc/systemd/system/axolet.service, then enables and starts that service.
- The service waits for approval on Axoflow. Once you approve the host registration request, Axoflow issues a client certificate to Axolet.
- Axolet starts to send telemetry data to Axoflow, and keeps sending it as long as the agent is registered and the certificate is valid.

Prerequisites
Axolet should work on most Red Hat- and Debian-compatible Linux distributions. For production environments, we recommend using Red Hat Enterprise Linux 9.
Network access
The hosts must be able to access the following domains related to the Axoflow Console:
Install Axolet
To install Axolet on a host and onboard it to Axoflow, complete the following steps. If you need to reinstall Axolet for some reason, see Reinstall Axolet.
-
Open the Axoflow Console at https://<your-tenant-id>.cloud.axoflow.io/
.
-
Select Provisioning > Select type and platform.

-
Select Edge > Linux.

The curl command can be run manually or inserted into a template in any common software deployment package. When run, it downloads a script that sets up the Axolet process to run automatically at boot time via systemd. For advanced installation options, see Advanced installation options.
-
Copy the deployment one-liner and run it on the host you are onboarding into Axoflow.
Note
Running the provisioning command with sudo would mask environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don't add the sudo command to the provisioning command.
Example output:
Do you want to install Axolet now? [Y]
y
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 31.6M 100 31.6M 0 0 2127k 0 0:00:15 0:00:15 --:--:-- 2075k
Verifying packages...
Preparing packages...
axolet-0.40.0-1.aarch64
Created symlink /etc/systemd/system/multi-user.target.wants/axolet.service → /usr/lib/systemd/system/axolet.service.
Now continue with onboarding the host on the Axoflow web UI.
-
On the Axoflow Console, reload the Provisioning page. A registration request for the new host should be displayed. Accept it.
-
Axolet starts sending metrics from the host. Check the Topology page to see the new host.
-
Continue to onboard the host as described for your specific log collector agent.
Manage Axolet
This section describes how to start, stop and check the status of the Axolet service on Linux.
Start Axolet
To start Axolet, execute the following command. For example:
systemctl start axolet
If the service starts successfully, no output will be displayed.
The following message indicates that Axolet cannot start (see Check Axolet status):
Job for axolet.service failed because the control process exited with error code. See `systemctl status axolet.service` and `journalctl -xe` for details.
Stop Axolet
To stop Axolet
-
Execute the following command.
systemctl stop axolet
-
Check the status of the Axolet service (see Check Axolet status).
Restart Axolet
To restart Axolet, execute the following command.
systemctl restart axolet
Reload the configuration without restarting Axolet
To reload the configuration file without restarting Axolet, execute the following command.
systemctl reload axolet
Check the status of the Axolet service
To check the status of the Axolet service
-
Execute the following command.
systemctl --no-pager status axolet
-
Check the Active: field, which shows the status of the Axolet service. The following statuses are possible:
Upgrade Axolet
To upgrade Axolet, re-run the installation script.
Run axolet as non-root
You can run Axolet as a non-root user, but operators must have access to the following commands:
- /usr/bin/systemctl * axolet.service: Controls the axolet.service systemd unit. Usually * is start, stop, restart, enable, or status. Used by the operators for troubleshooting.
- /usr/local/bin/configure-axolet: Creates the initial axolet configuration and enables/starts the axolet service. Executed by the bootstrap script.
- Command to install and upgrade the axolet package. Executed by the bootstrap script if the packages aren't already installed.
  - On RPM-based Linux distributions: /usr/bin/rpm -Uv axo*.rpm
  - On DEB-based Linux distributions: /usr/bin/dpkg -i axo*.deb
You can permit the syslogng user to run these commands by running one of the following, depending on your package manager:
sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for rpm installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/rpm -Uv axo*.rpm
A
sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for deb installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/dpkg -i axo*.deb
A
7.1.5.2 - Advanced installation options
When installing Axolet, you can set a number of advanced options if needed for your environment. Setting the advanced options in the Axoflow Console automatically updates the one-liner command that you can copy and run.

Alternatively, before running the one-liner, you can use one of the following methods:
Proxy settings
Use the HTTP proxy, HTTPS proxy, and No proxy parameters to configure HTTP proxy settings for the installer. To avoid using the proxy for the Axolet service itself, enable the Avoid proxy parameter as well. Lowercase proxy variable names are preferred because they work universally.
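For example, to route the installer through a proxy, you could export the lowercase variables before running the one-liner (hypothetical proxy address; unless Avoid proxy is enabled, the Axolet service also uses these values):

export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
export no_proxy=localhost,127.0.0.1
curl ... # run the one-liner copied from the Provisioning page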
Installation options
You can pass the following parameters to the installation script as environment variables, or as URL parameters.
Note
Running the provisioning command with sudo would mask environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don't add the sudo command to the provisioning command.
API server host

- Default value:
- Environment variable:
- URL parameter: api_server_host

Description: Override the host part of the API endpoint for the host.

Avoid proxy

- Default value: false
- Available values: true, false
- Environment variable: AXOLET_AVOID_PROXY
- URL parameter: avoid_proxy

Description: If set to true, the value of the *_proxy variables is only used for downloading the installer, but not for the axolet service itself. If set to false, the Axolet service uses the variables from the installer.

Capabilities

- Default value: CAP_SYS_PTRACE
- Available values: Whitespace-separated list of capability names with the CAP_ prefix.
- Environment variable: AXOLET_CAPS
- URL parameter: caps

Description: Ambient Linux capabilities the axolet service will use.

Configuration directory

- Default value: /etc/axolet
- Environment variable: AXOLET_CONFIG_DIR
- URL parameter: config_dir

Description: The directory where the configuration files are stored.

HTTP proxy

- Default value: empty string
- Environment variable: AXOLET_HTTP_PROXY
- URL parameter: http_proxy

Description: Use a proxy to access the Axoflow Console from the host.

HTTPS proxy

- Default value: empty string
- Environment variable: AXOLET_HTTPS_PROXY
- URL parameter: https_proxy

Description: Use a proxy to access the Axoflow Console from the host.

No proxy

- Default value: empty string
- Environment variable: AXOLET_NO_PROXY
- URL parameter: no_proxy

Description: Comma-separated list of hosts that shouldn't use a proxy to access the Axoflow Console from the host.

Overwrite config

- Default value: false
- Available values: true, false
- Environment variable: AXOLET_CONFIG_OVERWRITE
- URL parameter: config_overwrite

Description: If set to true, the configuration process overwrites the existing configuration (/etc/axolet/config.json). This means that the agent gets a new GUID and requires approval on the Axoflow Console.

Install package

- Default value: auto
- Available values: auto, deb, rpm, tar, none
- Environment variable: AXOLET_INSTALL_PACKAGE
- URL parameter: install_package

Description: File format of the installer package.

Service group

- Default value: root
- Environment variable: AXOLET_GROUP
- URL parameter: group

Description: Name of the group Axolet will be running as. It should be either root or the group syslog-ng is running as.

Service user

- Default value: root
- Environment variable: AXOLET_USER
- URL parameter: user

Description: Name of the user Axolet will be running as. It should be either root or the user syslog-ng is running as. See also Run axolet as non-root.

Start service

- Default value: true
- Available values: true, false
- Environment variable: AXOLET_START
- URL parameter: start

Description: Start the axolet agent at the end of the installation. Use false when preparing golden images; in this case, axolet will generate a new GUID on the first boot after cloning the image.
If you are preparing a host for cloning with Axolet already installed, set the following environment variable in your (root) shell session before running the one-liner command. For example:

export AXOLET_START=false
curl ... # Run the command copied from the Provisioning page

This way, Axolet will only start and initialize after the first reboot.
7.1.5.3 - Run axolet as non-root
If the log collector agent (AxoSyslog or syslog-ng) is running as a non-root user, you may want to configure the Axolet agent to run as the same user.
To do that, set the AXOLET_USER and AXOLET_GROUP environment variables to the user's username and group name. For details, see Advanced installation options.
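For example, to run Axolet as a hypothetical syslogng user and group, export the variables in the (root) shell before running the provisioning one-liner:

export AXOLET_USER=syslogng
export AXOLET_GROUP=syslogng
curl ... # run the one-liner copied from the Provisioning page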
Operators will need to have access to the following commands:
- /usr/bin/systemctl * axolet.service: Controls the axolet.service systemd unit. Usually * is start, stop, restart, enable, or status. Used by the operators for troubleshooting.
- /usr/local/bin/configure-axolet: Creates the initial axolet configuration and enables/starts the axolet service. Executed by the bootstrap script.
- Command to install and upgrade the axolet package. Executed by the bootstrap script if the packages aren't already installed.
  - On RPM-based Linux distributions: /usr/bin/rpm -Uv axo*.rpm
  - On DEB-based Linux distributions: /usr/bin/dpkg -i axo*.deb
For example, you can permit the syslogng user to run these commands by running the following commands:
sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for rpm installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/rpm -Uv axo*.rpm
A
sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for deb installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/dpkg -i axo*.deb
A
7.1.5.4 - Reinstall Axolet
Reinstall Axolet to a (cloned) machine
To reinstall Axolet on a (cloned) machine that has an earlier installation running, complete the following steps.
-
Log in to the host as root and execute the following commands:
systemctl stop axolet
rm -r /etc/axolet/
-
Follow the regular installation steps described in Axolet.
Recover Axolet after the root CA certificate was rotated
-
Log in to the host as root and execute the following commands:
export AXOLET_KEEP_GUID=y AXOLET_CONFIG_OVERWRITE=y PATH=$PATH:/usr/local/bin
-
Run the one-liner from the Axoflow Console Provisioning page in the same shell.
-
The installation may result in an error message, but Axolet should eventually recover. You can check by running:
journalctl -b -u axolet -n 20 -f
7.1.6 - Data sources
7.1.7 - Destinations
7.1.8 - Windows host - agent based solution
Axoflow provides a customized OpenTelemetry Collector distribution to collect data from Microsoft Windows hosts.
Prerequisites
The Axoflow OpenTelemetry Collector supports the following Windows versions:
- Windows Server 2025
- Windows Server 2022
- Windows 11
- Windows 10
Installation
-
Download the installation package for your platform from the Assets section of the release. We provide MSI installers and binary releases for amd64 and arm64 architectures.
-
Run the installer. The installer installs:
- the collector agent (by default to C:\Program Files\Axoflow\OpenTelemetry Collector\axoflow-otel-collector.exe), and
- a default configuration file (C:\Program Files\Axoflow\OpenTelemetry Collector\config.yaml) that must be edited before use.
Configuration
If you have already installed the agent, complete the following steps to configure it.
-
Open the configuration file (C:\Program Files\Axoflow\OpenTelemetry Collector\config.yaml
).
-
Set the IP address and port of the AxoRouter host where you want to send data from this Windows host. Use the IP address and port of the AxoRouter OpenTelemetry connector (for example, 10.0.2.2:4317). Here's how to find the IP address of your AxoRouter. (By default, every AxoRouter has an OpenTelemetry connector enabled.)
exporters:
  otlp/axorouter:
    endpoint: 10.0.2.2:4317
    tls:
      insecure: true
-
(Optional) Customize the Event log sources. The default configuration collects data from the following channels: application, security, system.
To include additional channels:
-
Add a new windowseventlog receiver under the receivers section, like this:

receivers:
  windowseventlog/<CHANNEL_NAME>:
    channel: <CHANNEL_NAME>
    raw: true
-
Include the new receiver in a pipeline under the service.pipelines section, for example:

service:
  pipelines:
    logs/eventlog:
      receivers: [windowseventlog/application, windowseventlog/system, windowseventlog/security, windowseventlog/<CHANNEL_NAME>]
      processors: [resource/agent, resourcedetection/system]
      exporters: [otlp/axorouter]
-
(Optional) Configure collecting DNS logs from the host.
-
Check the path of the DNS log file by running the following PowerShell command:
(Get-DnsServerDiagnostics).LogFilePath
-
Enter the path into the receivers.filelog/windows_dns_debug_log.include section of the configuration file. Note that you have to escape the backslashes in the path, for example, C:\\Windows\\System32\\DNS\\dns.log.

receivers:
  filelog/windows_dns_debug_log:
    include: ['<ESCAPED_DNS_LOGFILE_PATH>']
    ...
-
(Optional) Configure collecting DHCP logs from the host.
-
Check the path of the DHCP log files by running the following PowerShell command:
(Get-DhcpServerAuditLog).Path
DHCP server log files usually start with the DhcpSrvLog (for IPv4) or the DhcpV6SrvLog (for IPv6) prefix.
-
Enter the path of the IPv4 log files without the filename into the receivers.filelog/windows_dhcp_server_v4_auditlog.include section of the configuration file, and the path of the IPv6 log files into the receivers.filelog/windows_dhcp_server_v6_auditlog.include section. Note that you have to escape the backslashes in the paths, for example, C:\\Windows\\System32\\DHCP\\.

receivers:
  filelog/windows_dhcp_server_v4_auditlog:
    include: ['<ESCAPED_DHCP_SERVER_LOGS_PATH>\\DhcpSrvLog*']
    ...
  filelog/windows_dhcp_server_v6_auditlog:
    include: ['<ESCAPED_DHCPV6_SERVER_LOGS_PATH>\\DhcpV6SrvLog*']
    ...
-
Save the file.
-
Restart the service.
Restart-Service axoflow-otel-collector
The agent starts sending data to the configured AxoRouter.
-
Add the Windows host where you’ve installed the OpenTelemetry Collector to Axoflow Console as a data source.
-
Open the Axoflow Console.
-
Select Topology > + > Source.

-
Select Microsoft Windows as the type of the source.

-
Set the IP address and the host name (FQDN) of the host.
-
Select Create.
-
Create a flow between the data source and the OpenTelemetry connector of AxoRouter. You can use the Select messages processing step (with the meta.connector.type = otlp and meta.product =* windows query) to route only the Windows data received by the AxoRouter OpenTelemetry connector to your destination.

The AxoRouter connector adds the following fields to the meta variable:

- meta.connector.type: otlp
- meta.connector.name: <name of the connector>
- meta.product: windows
7.2 - Topology
Based on the collected metrics, Axoflow visualizes the topology of your security data pipeline. The topology allows you to get a semi-real-time view of how your edge-to-edge data flows, and drill down to the details and metrics of your pipeline elements to quickly find data transport issues.

Select the name of a source host or a router to show the details and metrics of that pipeline element.
If a host has active alerts, it’s indicated on the topology as well.

Traffic volume
Select bps or eps in the top bar to show the volume of the data flow on the topology paths in bytes per second or events per second.
Filter hosts
To find or display only specific hosts, you can use the filter bar.
- Free Text mode searches in the following fields of the host: Name, IP Address, GUID, and FQDN.
- AQL mode allows you to search in specific labels of the hosts. It also makes more complex filtering possible, using the Equal, Contains, and Match operators. When using AQL mode, Axoflow Console autocompletes the built-in host labels and field names, but doesn’t autocomplete custom labels.

If a filter is active, by default the Axoflow Console will display the matching hosts and the elements that are downstream from the matching hosts. For example, if an aggregator like AxoRouter matches the filter, only the aggregators and destinations where the matching host is sending data are displayed, the sources that are sending data to the matching aggregator aren’t shown. To display only the matching host without the downstream pipeline elements, select Filter strategy > Strict.
Grouping hosts
You can select labels to group hosts in the Group by field, so the visualization of large topologies with lots of devices remains useful. Axoflow Console automatically adds names to the groups. The following example shows hosts grouped by their region based on their location labels.

You can select multiple labels to group the hosts based on different parameters.
Queues
Select queue in the top bar to show the status of the memory and disk queues of the hosts. Select the status indicators of a host to display the details of the queues.

The following details about the queue help you diagnose which connections of the host are having throughput or connectivity issues:
- capacity: The total capacity of the buffer in bytes.
- usage: How much of the total capacity is currently in use.
- driver: The type of the AxoSyslog driver used for the connection the queue belongs to.
- id: The identifier of the connection.
Disk-based queues also have the following information:
- path: The location of the disk-buffer file on the host.
- reliable: Indicates whether the disk-buffer is set to be reliable or not.
- worker: The ID of the AxoSyslog worker thread the queue belongs to.
Both memory-based and disk-based queues have additional details that depend on the destination.
7.3 - Hosts
Axoflow collects and shows you a wealth of information about the hosts of your security data pipeline.
7.3.1 - Find a host
To find a specific host, you have the following options:
-
Open the Topology page, then click the name of the host you're interested in. If you have many hosts and it's difficult to find the one you need, use filtering or grouping.

-
Open the Hosts page and find the host you’re interested in.

To find or display only specific hosts, you can use the filter bar.
- Free Text mode searches in the following fields of the host: Name, IP Address, GUID, and FQDN.
- AQL mode allows you to search in specific labels of the hosts. It also makes more complex filtering possible, using the Equal, Contains, and Match operators. When using AQL mode, Axoflow Console autocompletes the built-in host labels and field names, but doesn’t autocomplete custom labels.
7.3.2 - Host information
The Hosts page contains a quick overview of every data source and data processor node. The exact information depends on the type of the host: hosts managed by Axoflow provide more information than external hosts.

The following information is displayed:
- Hostname or IP address
- Metadata labels. These include labels added automatically during the Axoflow curation process (like product name and vendor labels), as well as any custom labels you’ve assigned to the host.
For hosts that have the Axoflow agent (Axolet) installed:
- The version of the agent
- The name and version of the log collector running on the host (for example, AxoSyslog or Splunk Connect for Syslog)
- Operating system, version, and architecture
- Resource information: CPU and memory usage, disk buffer usage. Click on a resource to open its resource history on the Metrics & health page of the host.
- Traffic information: volume of the incoming and outgoing data traffic on the host.
For AxoRouter hosts:
For more details about the host, select the hostname, or click ⋮.

7.3.3 - Custom labels and metadata
To add custom labels to a host, complete the following steps. Note that these are static labels. To add labels dynamically based on the contents of the processed data, use the processing steps in data flows.
-
Find the host on the Hosts or the Topology page, and click on its hostname. The overview of the host is displayed.

-
Select Edit.

You can add custom labels in <label-name>:<value> format (for example, the group or department a source device belongs to), or a generic description about the host. You can use the labels for quickly finding the host on the Hosts page, and also for filtering when configuring Flows.
When using labels in filters, processing steps, or search bars, note that:
- Labels added to AxoRouter hosts get the axo_host_ prefix.
- Labels added to data sources get the host_ prefix. For example, if you add a rack label to an edge host, it'll be added to the data received from the host as host_rack.
On other pages, like the Host Overview page, the labels are displayed without the prefixes.
-
Select Save.
7.3.4 - Services
The Services page of a host shows information about the data collector or router services running on the host.
This page is only available for managed pipeline elements.
The following information is displayed:
- Name: Name of the service
- Version: Version number of the service
- Type: Type of the service (which data collector or processor it’s running)
- Supervisor: The type of the supervisor process. For example, Splunk Connect for Syslog (sc4s) runs a syslog-ng process under the hood.
Icons and colors indicate the status of the service: running, stopped, or not registered.

Service configuration
To check the configuration of the service, select Configuration. This shows the list of related environment variables and configuration files. Select a file or environment variable to display its value. For example:

To display other details of the service (for example, the location of the configuration file or the binary), select Details.

Manage service
To reload a registered service, select Reload.
7.4 - Log tapping
Log tapping in Axoflow samples the log flow. You can use labels to filter for specific messages (like ones with parse errors) and tap only those messages. So that you don't get overwhelmed with events, Axoflow automatically samples the output: if many messages match the selected filter, only a subset is shown (about 1 message per second). Using log tapping, you can quickly troubleshoot both parsing/curation errors and destination ingest (API) errors, and check:
- What was in the original message?
- What is sent in the final payload to the destination?
To see log tapping in action, check this blog post. To tap into your log flow, or to display the logs of the log collector agent, complete the following steps.
Note
Neither AxoRouter nor Axoflow Console records any log data. The Flow tapping and Log tapping features only send a sample of the data to your browser while the tapping is active.
-
Open the Topology page, then select the host where you want to tap the logs, for example, an AxoRouter deployment.
-
Select ⋮ > Tap log flow.

-
Tap into the log flow.
- To see the input data, select Input log flow > Start.
- To see the output data, select Output log flow > Start.
- To see the logs of the log collector agent running on the host, select Agent logs.
You can use labels to filter the messages and sample only the matching ones.

-
When the logs you’re interested in show up, click Stop Log Tap, then click a log message to see its details.

-
If you don’t know what the message means, select AI Analytics to ask our AI to interpret it.

Filter the messages
You can add labels to the Filter By Label field to sample only messages matching the filter. If you specify multiple labels, only messages that match all filters will be sampled. For example, the following filter selects messages from a specific source IP, sent to a specific destination IP.

7.5 - Alerts
Axoflow raises alerts for a number of events in your security data pipeline, for example:
- a destination becomes unavailable or isn’t accepting messages,
- a host is dropping packets or messages,
- a host becomes unavailable,
- the disk queues of a host are filling up,
- abandoned (orphaned) disk buffer files are on the host,
- the configuration of a managed host failed to synchronize or reload,
- traffic of a flow has unexpectedly dropped.
Alerts are indicated at a number of places:
-
On the Alerting page. Select an alert to display its details.

-
On the Topology page:

-
On the Overview page of the host:

-
On the Metrics & Health page of the host the alerts are shown as an overlay to the metrics:

7.6 - Private connections
7.6.1 - Google Private Service Connect
If you want your hosts in Google Cloud to access the Axoflow Console without leaving the Google network, we recommend that you use Google Cloud Private Service Connect (PSC) to secure the connection from your VPC to Axoflow.
Prerequisites
Contact Axoflow and provide the list of projects so we can set up an endpoint for your PSC. You will receive information from us that you’ll need to properly configure your connection.
You will also need to allocate a dedicated IP address for the connection in a subnet that’s accessible for the hosts.
Steps
After you have received the details of your target endpoint from Axoflow, complete the following steps to configure Google Cloud Private Service Connect from your VPC to Axoflow Console.
-
Open Google Cloud Console and navigate to Private Service Connect > Connected endpoints.
-
Select the project you want to connect to Axoflow.
-
Navigate to Connect endpoint and complete the following steps.
- Select Target > Published service.
- Set Target service to the service name you’ve received from Axoflow. The service name should be similar to:
projects/axoflow-shared/regions/<region-code>/serviceAttachments/<your-tenant-ID>
- Set Endpoint name to the name you prefer, or the one recommended by Axoflow. The recommended service name is similar to:
psc-axoflow-<your-tenant-ID>
- Select your VPC in the Network field.
- Set Subnet where the endpoint should appear. Since subnets are regional resources, select a subnet in the region you received from Axoflow.
- Select Create IP address and allocate an address for the endpoint. Save the address, you’ll need it later to verify that the connection is working.
- Select Enable global access.
- There is no need to enable the directory API even if it’s offered by Google.
- Select Add endpoint.
-
Test the connection.
-
Log in to a machine where you want to use the PSC using SSH.
-
Test the connection. Run the following command using the IP address you’ve allocated for the endpoint.
curl -vk https://<IP-address-allocated-for-the-endpoint>
If the connection is established, you’ll receive an HTTP 404 response.
-
If the connection is established, configure DNS resolution on the hosts. Complete the following steps.
Setting up selected machines to use the PSC
-
Add the following entry to the /etc/hosts
file of the machine.
<IP-address-allocated-for-the-endpoint> <your-tenant-id>.cloud.axoflow.io kcp.<your-tenant-id>.cloud.axoflow.io telemetry.<your-tenant-id>.cloud.axoflow.io
-
Run the following command to test DNS resolution:
curl -v https://<your-tenant-id>.cloud.axoflow.io
It should load an HTML page from the IP address of the endpoint.
-
If the host is running axolet, restart it by running:
sudo systemctl restart axolet.service
Check the axolet logs to verify that there’re no errors:
sudo journalctl -fu axolet
-
Deploy the changes of the /etc/hosts
file to all your VMs.
Setting up whole VPC networks to use the PSC
-
Open Google Cloud Console and in the Cloud DNS service navigate to the Create a DNS zone page.
-
Create a new private zone with the zone name <your-tenant-id>.cloud.axoflow.io
, and select the networks you want to use the PSC in.
-
Add the following three A records, all of which target the <IP-address-allocated-for-the-endpoint>:
<your-tenant-id>.cloud.axoflow.io
kcp.<your-tenant-id>.cloud.axoflow.io
telemetry.<your-tenant-id>.cloud.axoflow.io
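If you prefer the command line, the same records can also be created with gcloud (a sketch; the zone name is hypothetical, and each record targets the allocated IP address):

gcloud dns record-sets create <your-tenant-id>.cloud.axoflow.io. \
  --zone=<private-zone-name> --type=A --ttl=300 \
  --rrdatas=<IP-address-allocated-for-the-endpoint>

Repeat the command for the kcp. and telemetry. record names.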
8 - Data management
This section describes how Axoflow processes your security data, and how to configure and monitor your data flows in Axoflow.
8.1 - Processing elements
Axoflow processes the data transported in your security data pipeline in the following stages:
-
Sources: Data enters the pipeline from a data source. A data source can be an external appliance or application, or a log collector agent managed by Axoflow.
-
Custom metadata on the source: You can configure Axoflow to automatically add custom metadata to the data received from a source.
-
Router: The AxoRouter data aggregator processes the data it receives from the sources:
- Connector: AxoRouter hosts receive data using source connectors. The different connectors are responsible for different protocols (like syslog or OpenTelemetry). Some metadata labels are added to the data based on the connector it was received on.
- Metadata: AxoRouter classifies and identifies the incoming messages and adds metadata, for example, the vendor and product of the identified source.
- Data extraction: AxoRouter extracts the relevant information from the content of the messages, and makes it available as structured data.
The router can perform other processing steps, as configured in the flows that apply to the specific router (see next step).
-
Flow: You can configure flows in the Axoflow Console that Axoflow uses to configure the AxoRouter instances to filter, route, and process the security data. Flows also allow you to automatically remove unneeded or redundant information from the messages, reducing data volume and SIEM and storage costs.
-
Destination: The router sends data to the specified destination in a format optimized for the specific destination.
8.2 - Flow management
Axoflow uses flows to manage the routing and processing of security data.
8.2.1 - Flow overview
Axoflow uses flows to manage the routing and processing of security data. A flow applies to one or more AxoRouter instances. The Flows page lists the configured flows, and also highlights if any alerts apply to a flow.

Each flow consists of the following main elements:
- A Router selector that specifies the AxoRouter instances the flow applies to. Multiple flows can apply to a single AxoRouter instance.
- Processing steps that filter and select the messages to process, set/unset message fields, and perform different data transformation and data reduction.
- A Destination where the AxoRouter instances of the flow deliver the data. Destinations can be external destinations (for example, a SIEM), or other AxoRouter instances.
Based on the flows, Axoflow Console automatically generates and deploys the configuration of the AxoRouter instances. Click ⋮ or the name of the flow to display the details of the flow.

Filter flows
To find or display only specific flows, you can use the filter bar.
- Free Text mode searches in the following fields of the flow: Name, Destination, Description.
- AQL mode allows you to search in specific fields of the flows. It also makes more complex filtering possible, using the Equal, Contains, and Match operators. When using AQL mode, Axoflow Console autocompletes the built-in labels and field names, but doesn’t autocomplete custom labels.

Disable flow
If needed, you can disable a flow without deleting it by clicking the toggle on the right of the flow name.
CAUTION:
Disabling a flow immediately stops log forwarding for the flow. Any data that’s not forwarded using another flow can be irrevocably lost.

8.2.2 - Create flow
To create a new flow, open the Axoflow Console, then complete the following steps:
-
Select Flows.
-
Select Create New Flow.
-
Enter a name for the flow, for example, my-test-flow
.

-
In the Router Selector field, enter an expression that matches the router(s) you want the flow to apply to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
-
Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.

-
(Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
- Add a Reduce step to automatically remove redundant and empty fields from your data.
- To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the
meta.vendor = fortinet + meta.product = fortigate
query.
- Save the processing steps.

-
Select Create.
-
The new flow appears in the Flows list.

8.2.3 - Processing steps
Flows can include a list of processing steps to select and transform the messages. All log messages received by the routers of the flow are fed into the pipeline constructed from the processing steps, then the filtered and processed messages are forwarded to the specified destination.
CAUTION:
Processing steps are executed in the order they appear in the flow. Make sure to arrange the steps in the proper order to avoid problems.
To add processing steps to an existing flow, complete the following steps.
-
Open the Flows page and select the flow you want to modify.
-
Select Process > Add new Processing Step.
-
Select the type of the processing step to add.

The following types of processing steps are available:
- Select Messages: Filter the messages using a query. Only the matching messages will be processed in subsequent steps.
- Set Field: Set the value of a field on the message object.
- Unset Field: Unset the specified field of the message object.
- Reduce: Automatically remove redundant content from the messages.
- Regex: Execute a regular expression on the message.
- FilterX: Execute an arbitrary FilterX snippet.
- Probe: A measurement point that provides metrics about the throughput of the flow.
-
Configure the processing step as needed for your environment.
-
(Optional) To apply the processing step only to specific messages from the flow, set the Condition field of the step. Conditions act as filters, but apply only to this step. Messages that don’t match the condition will be processed by the subsequent steps.
-
(Optional) If needed, drag-and-drop the step to change its location in the flow.
-
Select Save.
FilterX
Execute an arbitrary FilterX snippet. The following example checks if the meta.host.labels.team field exists, and sets the meta.openobserve variable to a JSON value that contains stream as a key with the value of the meta.host.labels.team field.

Probe
Set a measurement point that provides metrics about the throughput of the flow. Note that you can't name your probes input or output, as these names are reserved.

For details, see Flow metrics.
Reduce
Automatically remove redundant content from the messages.

Regex
Use a regular expression to process the message.
- Input is an AxoSyslog template string, for example, ${MESSAGE}, that the regular expression will be matched against.
- Pattern is a Perl Compatible Regular Expression (PCRE). If you want to match the pattern against the whole string, use the ^ and $ anchors at the beginning and end of the pattern. You can use https://regex101.com/ with the ECMAScript flavor to validate your regex patterns. (When denoting named capture groups, the ?P<name> syntax is not supported; use ?<name>.)

Named or anonymous capture groups denote the parts of the pattern that will be extracted into the Field field. Named capture groups can be specified like (?<group_name>pattern), and will be available as field_name["group_name"]. Anonymous capture groups are numbered: for example, the second group of (first.*) (bar|BAR) will be available as field_name["2"]. When specifying variables, note the following points:
- If you want to include the results in the outgoing message or in its metadata for other processing steps, you must:
  - reference an existing variable or field (this field will be updated with the matches),
  - for new variables, use names that start with the $ character, or
  - declare them explicitly. For details, see the FilterX data model in the AxoSyslog documentation.
- The field will be set to an empty dictionary ({}) when the pattern doesn't match.
For example, the following processing step parses an unclassified log message from a firewall into a structured format in the log.body field:

- Input: ${RAWMSG}
- Pattern: ^(?<priority><\d+>)(?<version>\d+)\s(?<timestamp>[\d\-T:.Z]+)\s(?<hostname>[\w\-.]+)\s(?<appname>[\w\-]+)\s(?<event_id>\d+|\-)\s(?<msgid>\-)\s(?<unused>\-)\s(?<rule_id>[\w\d]+)\s(?<network>\w+)\s(?<action>\w+)\s(?<result>\w+)\s(?<rule_seq>\d+)\s(?<direction>\w+)\s(?<length>\d+)\s(?<protocol>\w+)\s(?<src_ip>\d+\.\d+\.\d+\.\d+)\/(?<src_port>\d+)->(?<dest_ip>\d+\.\d+\.\d+\.\d+)\/(?<dest_port>\d+)\s(?<tcp_flags>\w+)\s(?<policy>[\w\-.]+)
- Field: log.body

Select messages
Filter the messages using a query. Only the matching messages will be processed in subsequent steps.
The following example selects messages that have the resource.attributes["service.name"] label set to nginx.

You can also select messages using various other metadata about the connection and the host that sent the data, the connector that received the data, the classification results, and also custom labels, for example:

- a specific connector: meta.connector.name = axorouter-soup
- a type of connector: meta.connector.type = otlp
- a sender IP address: meta.connection.src_ip = 192.168.1.1
- a specific product: meta.product = fortigate (see Vendors for the metadata of a particular product)
- a custom label you've added to the host: meta.host.labels.location = eu-west-1
In addition to exact matches, you can use the following operators:

- != : not equal (string match)
- =* : contains (substring match)
- !* : doesn't contain
- =~ : matches regular expression
- !~ : doesn't match regular expression
- ==~ : matches case-sensitive regular expression
- !=~ : doesn't match case-sensitive regular expression
You can combine multiple expressions using the AND (+) operator. (To delete an expression, hit SHIFT+Backspace.)
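For example, the following query combines an exact match on the connector type with a substring match on the product label (both expressions must match, because + is a logical AND; the values are taken from the examples above):

meta.connector.type = otlp + meta.product =* windows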
Note
To check the metadata of a particular message, you can use Log tapping. The metadata associated with the event is under the Event > meta section.

Set field
Set a specific field of the message. You can use static values, and also dynamic values that were extracted from the message automatically by Axoflow or manually in a previous processing step. If the field doesn't exist, it's automatically created. The following example sets the resource.attributes.format field to parsed.

Unset field
Unset a specific field of the message.

8.2.4 - Flow metrics
The Metrics page of a flow shows the amount of data (in events per second or bytes per second) processed by the flow. By default, the flow provides metrics for the input and the output of the flow. To get more information about the data flow, add Probe steps to the flow. For example, you can add a probe after:
- Select Messages steps to check the amount of data that match the filter, or
- Reduce steps to check the ratio of data saved.

8.2.5 - Flow analytics
The Analytics page of a flow allows you to analyze the data throughput of the flow using Sankey and Sunburst diagrams. For details on using these analytics, see Analytics.

8.2.6 - Paths
Create a path
To create a path, open the Axoflow Console and complete the following steps.
-
Select Topology > + > Path.

-
Select your data source in the Source host field.

-
Select the target router or aggregator this source is sending its data to in the Target host field, for example, axorouter
.
-
Select the Target connector. The connector determines how the destination receives the data (for example, using which protocol or port).
-
Select Create. The new path appears on the Topology page.

8.2.7 - Flow tapping
Flow tapping allows you to sample the data flowing in a Flow and see how it processes the logs. It’s useful to test the flow when you’re configuring or adjusting Processing Steps.
Note
Neither AxoRouter nor Axoflow Console records any log data. The
Flow tapping and
Log tapping features only send a sample of the data to your browser while the tapping is active.
Prerequisites
Flow tapping works only on flows that have been deployed to an AxoRouter instance. This means that if you change a processing step in a flow, you must first Save the changes (or select Save and Test).
If you want to experiment with a flow, we recommend that you clone the flow and configure it to use the /dev/null destination, then apply the finalized processing steps to the original flow.
Tap into a flow
To tap into a Flow, complete the following steps.
-
Open Axoflow Console and select Flows.
-
Select the flow you want to tap into, then select Process > Test.

-
Select the router where you want to tap into the flow, then click Start Log Tapping.

-
Axoflow Console displays a sample of the logs from the data flow in matched pairs for the input and the output of the flow.
The input point is after the source connector finished processing the incoming message. For the Syslog (autodetect and classify) connector, this means that the input messages are already classified.
The output shows the messages right before they are formatted according to the specific destination’s requirements.

-
You can open the details of the messages to compare fields, values, and so on. By default, the messages are linked: opening the details of a message in the output automatically opens the details of the message input window, and vice versa (provided the message you clicked exists in both windows).

If you want to compare a message with other messages, click the link icon to disable linking.
-
If the flow contains one or more Probe steps, you can select the probe as the tapping point. That way you can check how the different processing steps modify your messages. You can change the first and the second tapping point by clicking
and
, respectively.

9 - Sources
The following chapters show you how to configure specific appliances, applications, and other sources to send their log data to Axoflow.
9.1 - Generic tips
The following section gives you a generic overview of how to configure a source to send its log data to Axoflow. If there is a specific section for your source, follow that instead of the generic procedure. For details, see the documentation of your source.
CAUTION:
Make sure to set data forwarding on your appliances/servers as described in this guide. Different settings like alternate message formats or ports might be valid, but can result in data loss or incorrect parsing.
Default connectors
By default, an AxoRouter deployment has the Syslog (autodetect and classify) and OpenTelemetry connectors configured.
Open ports
By default, AxoRouter accepts data on the following ports:
- 514 TCP and UDP for RFC3164 (BSD-syslog) formatted traffic.
- 601 TCP for RFC5424 (IETF-syslog) formatted traffic.
- 6514 TCP for TLS-encrypted syslog traffic.
- 4317 TCP for OpenTelemetry log data.
To receive data on other ports or other protocols, configure the source connectors of the AxoRouter host.
Make sure to enable the ports you’re using on the firewall of your host.
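To verify that a port is open and reachable, you can send a test message from a Linux host that can reach AxoRouter. A minimal sketch using the util-linux logger tool (the AxoRouter address is a placeholder for your own):
# Send an RFC5424-formatted test message over TCP to port 601.
logger --server <axorouter-ip-address> --tcp --port 601 --rfc5424 "test message from $(hostname)"
You can then use Log tapping on the AxoRouter host to confirm that the message arrived.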
Prerequisites
To configure a source to send data to Axoflow, make sure that:
Steps
- Log in to your device. You need administrator privileges to perform the configuration.
- If needed, enable syslog forwarding on the device.
- Set AxoRouter as the syslog server. Typically, you can configure the following parameters:
  - Name or IP Address of the syslog server: Set the address of your AxoRouter.
  - Protocol: If possible, set TCP or TLS.
  - Syslog Format: If possible, set RFC5424 (or equivalent), otherwise leave the default.
  - Port: Set a port appropriate for the protocol and syslog format you have configured. By default, AxoRouter accepts data on the following ports:
    - 514 TCP and UDP for RFC3164 (BSD-syslog) formatted traffic.
    - 601 TCP for RFC5424 (IETF-syslog) formatted traffic.
    - 6514 TCP for TLS-encrypted syslog traffic.
    - 4317 TCP for OpenTelemetry log data.
    To receive data on other ports or other protocols, configure the source connectors of the AxoRouter host. Make sure to enable the ports you're using on the firewall of your host.
- Add the appliance to Axoflow Console.
  - Open the Axoflow Console and select Topology.
  - Select + > Source.
  - Select your source.
  - Enter the parameters of the source, like IP address and FQDN.
  - Select Create.
Note
If your syslog source is running syslog-ng, Splunk Connect for Syslog (SC4S), or AxoSyslog as its log forwarder agent, consider installing Axolet on the host and instrumenting the configuration of the log forwarder to receive detailed metrics about the host and the processed data. For details, see Manage and monitor the pipeline.
9.2 - OpenTelemetry
Receive logs, metrics, and traces from OpenTelemetry clients over the OpenTelemetry Protocol (OTLP/gRPC).
Add new OpenTelemetry connector
To add a new connector to an AxoRouter host, complete the following steps:
- Create a new connector.
  - Find the host.
  - Select Connectors. The list of connectors available on the host is displayed.
  - Select add, then select the type of connector you want to create.
- Enter a Name for the connector. This name must be unique on the host.
- (Optional) Add custom labels to the connector.
  You can also modify the product and vendor labels of the connector. In that case, Axoflow treats the incoming messages as if they were received and classified as data from the specified product. This is useful if you want to send data from a specific product to a dedicated port.
  These labels and other parameters of the connector are available under the meta.connector key as metadata for the messages received via the connector, and can be used in routing decisions and processing steps. You can check the metadata of the messages using log tapping.
- If needed, configure the port number where you want to receive data.
- Select Create.
- Make sure to enable the ports you've configured in the connector on the firewall of the AxoRouter host, and on other firewalls between the AxoRouter host and your data sources.
Labels
| label | value |
|-------|-------|
| connector.type | otlp |
| connector.name | The Name of the connector |
| connector.port | The port number where the connector receives data |
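To send a quick batch of test log records to the connector, you can use the telemetrygen utility from the OpenTelemetry Collector contrib project. This is a sketch, not part of Axoflow; flag names can differ between telemetrygen versions, so verify them with telemetrygen logs --help:
# Generate five test log records over OTLP/gRPC without TLS.
telemetrygen logs --otlp-endpoint <axorouter-ip-address>:4317 --otlp-insecure --logs 5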
9.3 - Syslog
The Syslog connector can receive all kinds of syslog messages. You can configure it to receive data on specific ports, but it doesn’t apply classification and enrichment to the messages (apart from standard syslog parsing).
Add new syslog connector
To add a new connector to an AxoRouter host, complete the following steps:
- Create a new connector.
  - Find the host.
  - Select Connectors. The list of connectors available on the host is displayed.
  - Select add, then select the type of connector you want to create.
- Select the template to use one of the standard syslog ports and networking protocols, for example, UDP 514 for the RFC3164 syslog protocol.
  To configure a different port, or to specify the protocol elements manually, select Custom.
- Enter a Name for the connector. This name must be unique on the host.
- (Optional) Add custom labels to the connector.
- Select the protocol to use for receiving syslog data: TCP, UDP, or TLS.
  When using TLS, set the paths for the certificates and keys used for the TLS-encrypted communication with the clients.
  You can use absolute paths (for example, /etc/axorouter/user-config/tls-key.pem). The key and the certificate must be in PEM format. You must manually copy these files to their place on the AxoRouter host, currently you can't distribute them from Axoflow Console.
  - CA certificate path: The CA certificate that AxoRouter uses to authenticate the clients.
  - Server certificate path: The certificate that AxoRouter shows to the clients.
  - Server private key path: The private key of the server certificate.
- (Optional) If explicitly needed for your use case, you can configure Framing manually. Otherwise, leave it on Auto. Enable framing (On) if the payload contains the length of the message, as specified in RFC6587 3.4.1. Disable it (Off) for non-transparent framing (RFC6587 3.4.2).
- Set the Port of the connector. The port number must be unique on the AxoRouter host.
- (Optional) If needed for your environment, set the protocol-specific connector options.
  You can also modify the product and vendor labels of the connector. In that case, Axoflow treats the incoming messages as if they were received and classified as data from the specified product. This is useful if you want to send data from a specific product to a dedicated port.
  These labels and other parameters of the connector are available under the meta.connector key as metadata for the messages received via the connector, and can be used in routing decisions and processing steps. You can check the metadata of the messages using log tapping.
- Select Create.
- Make sure to enable the ports you've configured in the connector on the firewall of the AxoRouter host, and on other firewalls between the AxoRouter host and your data sources.
Protocol-specific connector options
- Encoding: The character set of the messages, for example, UTF-8.
- Maximum connections: The maximum number of simultaneous connections the connector can receive.
- Socket buffer size: The size of the socket buffer (in bytes).
TCP options
- TCP Keepalive Time Interval: The interval (in seconds) between subsequent keepalive probes, regardless of the traffic exchanged in the connection.
- TCP Keepalive Probes: The number of unacknowledged probes to send before considering the connection dead.
- TCP Keepalive Time: The interval (in seconds) between the last data packet sent and the first keepalive probe.
TLS options
For TLS, you can use the TCP-specific options, and also the following:
- Require MTLS: If enabled, the clients sending data to the connector must have a TLS certificate, otherwise AxoRouter rejects the connection.
- Verify client certificate: If enabled, AxoRouter verifies the certificate of the client, and rejects connections with invalid certificates.
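To check a TLS-enabled syslog connector from a client host, you can use openssl. A minimal sketch, assuming the connector listens on port 6514 and the certificate file names are placeholders for your own files:
# Verify the certificate that AxoRouter presents to syslog clients.
openssl s_client -connect <axorouter-ip-address>:6514 -CAfile ca.pem
# When Require MTLS is enabled, also present a client certificate and key.
openssl s_client -connect <axorouter-ip-address>:6514 -CAfile ca.pem -cert client.pem -key client.key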
Labels
The AxoRouter syslog connector adds the following meta labels:
| label | value |
|-------|-------|
| connector.type | syslog |
| connector.name | The Name of the connector |
| connector.port | The port number where the connector receives data |
9.4 - Syslog (autodetect and classify)
The Syslog (autodetect and classify) connector receives all kinds of syslog data, automatically recognizing the type and format (RFC3164, RFC5424) of the protocol used. It also automatically parses and classifies the incoming messages, recognizing and enriching over 100 data sources.
Note
If you want to create a syslog connector without classification, see Syslog.
The Syslog (autodetect and classify) connector receives data on the following ports:
- 514 TCP and UDP for RFC3164 (BSD-syslog) formatted traffic.
- 601 TCP for RFC5424 (IETF-syslog) formatted traffic.
- 6514 TCP for TLS-encrypted syslog traffic.
Add new syslog connector
To add a new connector to an AxoRouter host, complete the following steps:
- Create a new connector.
  - Find the host.
  - Select Connectors. The list of connectors available on the host is displayed.
  - Select add, then select the type of connector you want to create.
- Enter a Name for the connector. This name must be unique on the host.
- (Optional) Add custom labels to the connector.
  You can also modify the product and vendor labels of the connector. In that case, Axoflow treats the incoming messages as if they were received and classified as data from the specified product. This is useful if you want to send data from a specific product to a dedicated port.
  These labels and other parameters of the connector are available under the meta.connector key as metadata for the messages received via the connector, and can be used in routing decisions and processing steps. You can check the metadata of the messages using log tapping.
- (Optional) If you're creating a multi-level AxoRouter architecture and you want to forward the data received by this connector to another AxoRouter, set the Address Override option to the address of that AxoRouter.
- Select Create.
- Make sure to enable the ports you've configured in the connector on the firewall of the AxoRouter host, and on other firewalls between the AxoRouter host and your data sources.
Labels
The Syslog (autodetect and classify) connector adds the following meta labels:
| label | value |
|-------|-------|
| connector.type | soup |
| connector.name | The Name of the connector |
9.5 - Webhook
Webhook connectors of AxoRouter can be used to receive log events through HTTP(S) POST requests.
You can specify static and dynamic URLs to receive the data. AxoRouter automatically parses the JSON payload of the request, and adds it to the log.body field of the message as a JSON object. Other types of payload (including invalid JSON objects) are added to the log.body field as a string. Note that you can add further parsing to the body using processing steps in Flows, for example, using FilterX or Regex processing steps.
Prerequisites
To receive data via HTTPS, you’ll need a key and a certificate that the connector will show to the clients.
- Key: The key file must contain an unencrypted private key in PEM or DER format, suitable as a TLS key. The Axoflow application uses this private key and the matching certificate to encrypt the communication with the client.
- Certificate: The file must contain an X.509 certificate (or a certificate chain) in PEM or DER format, suitable as a TLS certificate, matching the private key set in the Key option. If the file contains a certificate chain, the file must begin with the certificate of the AxoRouter host, followed by the CA certificate that signed the certificate of the host, and any other signing CAs in order.
Add new webhook connector
To add a new connector to an AxoRouter host, complete the following steps:
- Create a new connector.
  - Find the host.
  - Select Connectors. The list of connectors available on the host is displayed.
  - Select add, then select the type of connector you want to create.
- Enter a Name for the connector. This name must be unique on the host.
- (Optional) Add custom labels to the connector.
  You can also modify the product and vendor labels of the connector. In that case, Axoflow treats the incoming messages as if they were received and classified as data from the specified product. This is useful if you want to send data from a specific product to a dedicated port.
  These labels and other parameters of the connector are available under the meta.connector key as metadata for the messages received via the connector, and can be used in routing decisions and processing steps. You can check the metadata of the messages using log tapping.
- Select the protocol you want to use: HTTPS or HTTP.
- Set the port number where the webhook will receive the POST requests, for example, 8080.
- Set the endpoints where the webhook will receive data in the Paths field. You can use static paths, or regular expressions. In regular expressions you can use named capture groups to automatically set macro values in AxoRouter. For example, the /events/(?P<HOST>.*) path sets the hostname for the data received in the request based on the second part of the URL: a request to the /events/my-example-host URL sets the host field of that message to my-example-host. (See the example at the end of this section.)
  By default, the /events and /events/(?P<HOST>.*) paths are active.
- For HTTPS endpoints, set the path to the Key and the Certificate files. AxoRouter uses these to encrypt the TLS channel. You can use absolute paths (for example, /etc/axorouter/user-config/tls-key.pem). The key and the certificate must be in PEM format.
  You must manually copy these files to their place, currently you can't distribute them from Axoflow.
- Select Create.
- Make sure to enable the ports you've configured in the connector on the firewall of the AxoRouter host, and on other firewalls between the AxoRouter host and your data sources.
- Send a test request to the webhook.
  - Open the Overview tab of the AxoRouter host where you've created the webhook connector and check the IP address or FQDN of the host.
  - Open a terminal on your computer and send a POST request to the path you've configured in the webhook.
    curl -X POST -H 'Content-Type: application/json' --data '{"host":"myhost", "body": "sample webhook message" }' <router-IP-address>:<webhook-port-number>/<webhook-path>/
    For example, if you've used the default values of the webhook connector you can use:
    curl -X POST -H 'Content-Type: application/json' --data '{"host":"myhost", "body": "sample webhook message" }' <router-IP-address>/events/
Note
You can also use Log tapping on the AxoRouter host to see the incoming messages.
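To try the named capture groups of the default paths, you can send a request to a URL that ends with the hostname you want to assign. For example, with the default /events/(?P<HOST>.*) path (the address and port are placeholders):
curl -X POST -H 'Content-Type: application/json' --data '{"body": "sample webhook message"}' <router-IP-address>:<webhook-port-number>/events/my-example-host
The host field of the resulting message is set to my-example-host.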
9.6 - Windows Event Collector (WEC)
The AxoRouter Windows Events connector can receive Windows Event Logs by running a Windows Event Collector (WEC) server. After enabling the Windows Events connector, you can configure your Microsoft Windows hosts to forward their event logs to AxoRouter using Windows Event Forwarding (WEF).
Windows Event Forwarding (WEF) reads any operational or administrative event logged on a Windows host and forwards the events you choose to a Windows Event Collector (WEC) server - in this case, AxoRouter.
Prerequisites
When using TLS authentication, you'll need:
- A CA certificate (in PEM format) that AxoRouter uses to authenticate the clients.
- A certificate and the matching private key (in PEM format) that AxoRouter shows to the clients.
These files must be available on the AxoRouter host, and readable by the axorouter service for the connector to work.
Add new Windows Event Log connector
To add a new connector to an AxoRouter host, complete the following steps.
Note
Currently you can configure only one Windows Events connector per AxoRouter instance.
- Create a new connector.
  - Find the host.
  - Select Connectors. The list of connectors available on the host is displayed.
  - Select add, then select the type of connector you want to create.
- Enter a Name for the connector. This name must be unique on the host.
- (Optional) Add custom labels to the connector.
  You can also modify the product and vendor labels of the connector. In that case, Axoflow treats the incoming messages as if they were received and classified as data from the specified product. This is useful if you want to send data from a specific product to a dedicated port.
  These labels and other parameters of the connector are available under the meta.connector key as metadata for the messages received via the connector, and can be used in routing decisions and processing steps. You can check the metadata of the messages using log tapping.
- Configure the protocol-level settings of the connector.
  - Set the Hostname field. The clients will address this hostname. Note that:
    - The Common Name of the server's certificate (set in the following steps) must contain this hostname, otherwise the clients will reject the connection.
    - You'll have to use this hostname when configuring the Subscription Manager address in the Group Policy Editor.
  - (Optional) If for some reason you don't want to run the connection on the default port (5986), adjust the Port field.
  - Set the paths for the certificates and keys used for the TLS-encrypted communication with the clients (see the verification sketch after this procedure).
    Use absolute paths (for example, /etc/axorouter/user-config/tls-key.pem). The key and the certificate must be in PEM format. You have to make sure that these files are available on the AxoRouter host, currently you can't distribute them from Axoflow Console.
    - CA certificate path: The CA certificate that AxoRouter uses to authenticate the clients. If you want to limit which clients are accepted, set the More options > Certificate subject filter field.
    - Server certificate path: The certificate that AxoRouter shows to the clients.
    - Server private key path: The private key of the server certificate.
- Configure the subscriptions of the connector.
  - Select Add new Subscription.
  - (Optional) Set a name for the subscription. If you leave it empty, Axoflow Console automatically generates a name.
  - Enter the event filter query into the Query field. This query specifies which events are collected by the subscription. For details on the query syntax, see the Microsoft documentation.
    A single query can retrieve events from a maximum of 256 different channels.
    For example, the following query retrieves every event from the Security, System, Application, and Setup channels.
    <Query Id="0">
      <Select Path="Application">*</Select>
      <Select Path="Security">*</Select>
      <Select Path="Setup">*</Select>
      <Select Path="System">*</Select>
    </Query>
- (Optional) If needed, you can configure other low-level options in the More options section. For details, see Additional options.
- Select Create.
- Make sure to enable the ports you've configured in the connector on the firewall of the AxoRouter host, and on other firewalls between the AxoRouter host and your data sources.
- Configure Windows Event Forwarding (WEF) on your clients to forward their events to the AxoRouter WEC connector.
  When configuring the Subscription Manager address in the Group Policy Editor, use the hostname you've set in the connector.

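Before configuring the clients, you can confirm that the certificate files referenced in the connector are valid and that the Common Name matches the configured hostname. A minimal sketch using openssl on the AxoRouter host (the file paths are examples, use your own):
# Print the subject (including the Common Name) of the server certificate.
openssl x509 -in /etc/axorouter/user-config/tls-cert.pem -noout -subject
# Check that the server certificate validates against the CA certificate.
openssl verify -CAfile /etc/axorouter/user-config/ca.pem /etc/axorouter/user-config/tls-cert.pem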
Additional options
You can set the following options of the WEC connector under Subscriptions > More options.
- Certificate subject filter: A simple string to filter the clients based on the Common Name of their certificate. You can use the * and ? wildcard characters.
- UUID: A unique ID for the subscription. If empty, Axoflow Console automatically generates it.
- Heartbeat interval: The number of seconds before the client sends a heartbeat message. The client sends heartbeat messages if it has no new events to send. Default value: 3600s
- Connection retry interval: Time between reconnection attempts. Default value: 60s
- Connection retry count: Number of times the client will attempt to reconnect if AxoRouter is unreachable. Default value: 10
- Max time: The maximum number of seconds the client aggregates new events before sending them in a batch. Default value: 30s
- Max elements: The maximum number of events that the client aggregates before sending them in a batch. By default it's empty, meaning that only the Max time and Max envelope size options limit the aggregation. Default value: empty
- Max envelope size: The maximum number of bytes in the SOAP envelope used to deliver the events. Default value: 512000 bytes
- Locale: The language in which rendering information is expected, for example, en-US. Default value: Client choose
- Data locale: The language in which numerical data is expected to be formatted, for example, en-US. Default value: Client choose
- Read existing events: If enabled (Yes), the event source sends all existing events that match the filter, and any events that subsequently occur for that event source. If disabled (No), existing events are ignored. Default value: No
- Ignore channel error: Subscription queries that result in errors terminate the processing of the clients. Enable this option to ignore such errors. Default value: Yes
- Content format: Determines whether to include rendering information (RenderedText) with events or not (Raw). Default value: Raw
The AxoRouter Windows Events connector adds the following fields to the meta variable:
| field | value |
|-------|-------|
| meta.connector.type | windowsEvents |
| meta.connector.name | <name of the connector> |
| meta.connector.port | <port of the connector> |
9.7 - Vendors
To onboard a source that is specifically supported by Axoflow, complete the following steps. Onboarding allows you to collect metrics about the host, and display the host on the Topology page.
- Open the Axoflow Console.
- Select Topology.
- Select + > Source.
- If the source is already sending logs to an AxoRouter instance that is registered in the Axoflow Console, select Detected, then select the source.
  Otherwise, select the type of the source you want to onboard, and follow the on-screen instructions.
- Connect the source to the destination or AxoRouter instance it's sending logs to.
  - Select Topology > + > Path.
  - Select your data source in the Source host field.
  - Select the target router or aggregator this source is sending its data to in the Target host field, for example, axorouter.
  - Select the Target connector. The connector determines how the destination receives the data (for example, using which protocol or port).
  - Select Create. The new path appears on the Topology page.
- Configure the appliance to send logs to an AxoRouter instance. Specific instructions regarding individual vendors are listed below, along with default metadata (labels) and specific metadata for Splunk.
Note
Unless instructed otherwise, configure your appliance to send the logs to the Syslog (autodetect and classify) connector of AxoRouter, using the appropriate port. Use RFC5424 if the appliance supports it.
- 514 TCP and UDP for RFC3164 (BSD-syslog) formatted traffic.
- 601 TCP for RFC5424 (IETF-syslog) formatted traffic.
- 6514 TCP for TLS-encrypted syslog traffic.
9.7.1 - Amazon
9.7.1.1 - CloudWatch
Axoflow can collect data from your Amazon CloudWatch. At a high level, the process looks like this:
- Deploy an Axoflow Cloud Connector that will collect the data from your CloudWatch. Axoflow Cloud Connector is a simple container that you can deploy into AWS, another cloud provider, or on-prem.
- The connector forwards the collected data to the OpenTelemetry connector of an AxoRouter instance. This AxoRouter can be deployed within AWS, another cloud provider, or on-prem.
- Configure a Flow on Axoflow Console that processes and routes the collected data to your destination (for example, Splunk or another SIEM).
Prerequisites
Steps
To collect data from AWS CloudWatch, complete the following steps.
- Deploy an Axoflow Cloud Connector.
  - Access the Kubernetes node or virtual machine where you want to deploy Axoflow Cloud Connector.
  - Set the following environment variable to the IP address of the AxoRouter where you want to forward the data from CloudWatch. This IP address must be accessible from the connector. You can find the IP address of AxoRouter on the Hosts > AxoRouter > Overview page.
    export AXOROUTER_ENDPOINT=<AxoRouter-IP-address>
  - (Optional) By default, the connector stores positional and other persistence-related data in the /etc/axoflow-otel-collector/storage directory. If you want to use a different directory, set the STORAGE_DIRECTORY environment variable.
  - Configure the authentication that the Axoflow Cloud Connector will use to access CloudWatch. Set the environment variables for the authentication method you want to use.
    - AWS Profile with a configuration file: Set the AWS_PROFILE and AWS_REGION environment variables.
      export AWS_PROFILE=""
      export AWS_REGION=""
    - AWS Credentials: To use AWS access keys, set an access key and a matching secret.
      export AWS_ACCESS_KEY_ID=""
      export AWS_SECRET_ACCESS_KEY=""
      export AWS_REGION=""
    - EC2 instance profile: Set only the AWS_REGION environment variable; the connector picks up the credentials of the instance profile automatically.
  - Deploy the Axoflow Cloud Connector. The exact command depends on the authentication method:
    - AWS Profile with a configuration file:
      docker run --rm \
        -e AWS_PROFILE="${AWS_PROFILE}" \
        -e AWS_REGION="${AWS_REGION}" \
        -e AWS_SDK_LOAD_CONFIG=1 \
        -e AXOROUTER_ENDPOINT="${AXOROUTER_ENDPOINT}" \
        -e STORAGE_DIRECTORY="${STORAGE_DIRECTORY}" \
        -v "${HOME}/.aws:/cloudconnectors/.aws:ro" \
        ghcr.io/axoflow/axocloudconnectors:latest
    - AWS Credentials:
      docker run --rm \
        -e AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}" \
        -e AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}" \
        -e AWS_REGION="${AWS_REGION}" \
        -e AXOROUTER_ENDPOINT="${AXOROUTER_ENDPOINT}" \
        -e STORAGE_DIRECTORY="${STORAGE_DIRECTORY}" \
        ghcr.io/axoflow/axocloudconnectors:latest
    - EC2 instance profile:
      docker run --rm \
        -e AWS_REGION="${AWS_REGION}" \
        -e AXOROUTER_ENDPOINT="${AXOROUTER_ENDPOINT}" \
        -e STORAGE_DIRECTORY="${STORAGE_DIRECTORY}" \
        ghcr.io/axoflow/axocloudconnectors:latest
The Axoflow Cloud Connector starts forwarding logs to the AxoRouter instance.
- Add the appliance to Axoflow Console.
  - Open the Axoflow Console and select Topology.
  - Select + > Source.
  - Select AWS CloudWatch.
  - Enter the IP address and the FQDN of the Axoflow Cloud Connector instance.
  - Select Create.
- Create a Flow to route the data from the AxoRouter instance to a destination. You can use the Labels of this source to select messages from this source.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | amazon |
| product | aws-cloudwatch |
| format | otlp |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| aws:cloudwatchlogs | aws-activity |
9.7.2 - Cisco
9.7.2.1 - Adaptive Security Appliance (ASA)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | asa |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cisco:asa | netfw |
9.7.2.2 - Application Control Engine (ACE)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | ace |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cisco:ace | netops |
9.7.2.3 - Cisco IOS
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | ios |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cisco:ios | netops |
9.7.2.4 - Digital Network Architecture (DNA)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | dna |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cisco:dna | netops |
9.7.2.5 - Email Security Appliance (ESA)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | esa |
| format | text-plain or cef |
Note that the device can be configured to send plain syslog text or CEF-formatted output.
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, index, and source settings:
| sourcetype | index | source |
|------------|-------|--------|
| cisco:esa:http | email | esa:http |
| cisco:esa:textmail | email | esa:textmail |
| cisco:esa:amp | email | esa:amp |
| cisco:esa:antispam | email | esa:antispam |
| cisco:esa:system_logs | email | esa:system_logs |
| cisco:esa:system_logs | email | esa:euq_logs |
| cisco:esa:system_logs | email | esa:service_logs |
| cisco:esa:system_logs | email | esa:reportd_logs |
| cisco:esa:system_logs | email | esa:sntpd_logs |
| cisco:esa:system_logs | email | esa:smartlicense |
| cisco:esa:error_logs | email | esa:error_logs |
| cisco:esa:error_logs | email | esa:updater_logs |
| cisco:esa:content_scanner | email | esa:content_scanner |
| cisco:esa:authentication | email | esa:authentication |
| cisco:esa | email | program: <variable> |
| cisco:esa:cef | email | esa:consolidated |
Tested with: Splunk Add-on for Cisco ESA
9.7.2.6 - Firepower
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | firepower |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cisco:firepower:syslog | netids |
9.7.2.7 - Firepower Threat Defence (FTD)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | ftd |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cisco:ftd | netfw |
9.7.2.8 - Firewall Services Module (FWSM)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | fwsm |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cisco:fwsm | netfw |
9.7.2.9 - HyperFlex (HX, UCSH)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | ucsh |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cisco:ucsh:hx | infraops |
9.7.2.10 - Integrated Management Controller (IMC)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | cimc |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cisco:cimc | infraops |
9.7.2.11 - IOS XR
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | xr |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cisco:xr | netops |
9.7.2.12 - Meraki MX
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | meraki |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cisco:meraki | netfw |
Tested with: TA-meraki
9.7.2.13 - Private Internet eXchange (PIX)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | pix |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cisco:pix | netfw |
9.7.2.14 - TelePresence Video Communication Server (VCS)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | tvcs |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cisco:tvcs | main |
9.7.2.15 - Unified Computing System Manager (UCSM)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | ucsm |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cisco:ucs | infraops |
9.7.2.16 - Unified Communications Manager (UCM)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | ucm |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cisco:ucm | netops |
9.7.2.17 - Viptela
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cisco |
| product | viptela |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cisco:viptela | netops |
9.7.3 - Citrix
9.7.3.1 - Netscaler
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | citrix |
| product | netscaler |
| format | text-plain |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| citrix:netscaler:appfw:cef | netfw |
| citrix:netscaler:syslog | netfw |
| citrix:netscaler:appfw | netfw |
9.7.4 - CyberArk
9.7.4.1 - Privileged Threat Analytics (PTA)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cyberark |
| product | pta |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cyberark:pta:cef | main |
9.7.4.2 - Vault
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | cyberark |
| product | vault |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| cyberark:epv:cef | netauth |
9.7.5 - F5 Networks
9.7.5.1 - BIG-IP
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | f5 |
| product | bigip |
| format | text-plain, JSON, or kv |
Note that the device can be configured to send plain syslog text, JSON, or key-value pairs.
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| f5:bigip:syslog | netops |
| f5:bigip:ltm:access_json | netops |
| f5:bigip:asm:syslog | netops |
| f5:bigip:apm:syslog | netops |
| f5:bigip:ltm:ssl:error | netops |
| f5:bigip:ltm:tcl:error | netops |
| f5:bigip:ltm:traffic | netops |
| f5:bigip:ltm:log:error | netops |
| f5:bigip:gtm:dns:request:irule | netops |
| f5:bigip:gtm:dns:response:irule | netops |
| f5:bigip:ltm:http:irule | netops |
| f5:bigip:ltm:failed:irule | netops |
| nix:syslog | netops |
Tested with: Splunk Add-on for F5 BIG-IP
9.7.6 - FireEye
9.7.7 - Fortinet
9.7.7.1 - FortiGate firewalls
The following sections show you how to configure FortiGate Next-Generation Firewalls (NGFW) to send their log data to Axoflow.
CAUTION:
Make sure to set data forwarding on your appliances/servers as described in this guide. Different settings like alternate message formats or ports might be valid, but can result in data loss or incorrect parsing.
Prerequisites
Steps
Note: The steps involving the FortiGate user interface are just for your convenience; for details, see the official FortiGate documentation.
- Log in to your FortiGate device. You need administrator privileges to perform the configuration.
- Register the address of your AxoRouter as an Address Object.
- Select Log & Report > Log Settings > Global Settings.
- Configure the following settings:
  - Event Logging: Click All.
  - Local traffic logging: Click All.
  - Syslog logging: Enable this option.
  - IP address/FQDN: Enter the address of your AxoRouter: %axorouter-ip%
- Click Apply.
- Add the appliance to Axoflow Console.
  - Open the Axoflow Console and select Topology.
  - Select + > Source.
  - Select your source.
  - Enter the parameters of the source, like IP address and FQDN.
  - Select Create.
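If you prefer the FortiOS CLI over the web UI, the equivalent configuration looks roughly like the following sketch. This is an illustration only (assuming a recent FortiOS release; the available format and mode values vary between versions, so verify them against your FortiGate documentation):
config log syslogd setting
    set status enable
    set server "<axorouter-ip>"
    set mode reliable
    set port 601
    set format rfc5424
end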
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | fortinet |
| product | fortigate |
| format | kv |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| fortigate_event | netops |
| fortigate_traffic | netfw |
| fortigate_utm | netfw |
Tested with: Fortinet FortiGate Add-On for Splunk technical add-on
9.7.7.2 - FortiMail
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | fortinet |
| product | fortimail |
| format | kv |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| fml:log | email |
Tested with: FortiMail Add-on for Splunk technical add-on
9.7.7.3 - FortiWeb
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | fortinet |
| product | fortiweb |
| format | kv |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| fwb_log | netops |
| fwb_attack | netids |
| fwb_event | netops |
| fwb_traffic | netfw |
Tested with: Fortinet FortiWeb Add-on for Splunk technical add-on
9.7.8 - Fortra
9.7.8.1 - Powertech SIEM Agent for IBM i
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | forta |
| product | powertech-siem-agent |
| format | cef |
| format | leef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
|------------|--------|-------|
| PowerTech:SIEMAgent:cef | PowerTech:SIEMAgent | netops |
| PowerTech:SIEMAgent:leef | PowerTech:SIEMAgent | netops |
Earlier name/vendor
Powertech Interact
9.7.9 - Imperva
9.7.9.1 - Incapsula
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | imperva |
| product | incapsula |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
|------------|--------|-------|
| cef | Imperva:Incapsula | netwaf |
9.7.9.2 - SecureSphere
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | imperva |
| product | securesphere |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| imperva:waf:firewall:cef | netwaf |
| imperva:waf:security:cef | netwaf |
| imperva:waf | netwaf |
9.7.10 - Infoblox
9.7.10.1 - NIOS
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | infoblox |
| product | nios |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, index, and source settings:
| sourcetype | index | source |
|------------|-------|--------|
| infoblox:threatprotect | netids | Infoblox:NIOS |
| infoblox:dns | netids | Infoblox:NIOS |
Tested with: Splunk Add-on for Infoblox
9.7.11 - Ivanti
9.7.11.1 - Connect Secure
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | ivanti |
| product | connect-secure |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
|------------|--------|-------|
| pulse:connectsecure |  | netfw |
| pulse:connectsecure:web |  | netproxy |
Earlier name/vendor
Pulse Connect Secure
9.7.12 - Juniper
9.7.12.1 - Junos OS
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | juniper |
| product | junos |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| juniper:junos:aamw:structured | netfw |
| juniper:junos:firewall | netfw |
| juniper:junos:firewall | netids |
| juniper:junos:firewall:structured | netfw |
| juniper:junos:firewall:structured | netids |
| juniper:junos:idp | netids |
| juniper:junos:idp:structured | netids |
| juniper:legacy | netops |
| juniper:junos:secintel:structured | netfw |
| juniper:junos:snmp | netops |
| juniper:structured | netops |
Tested with: Splunk Add-on for Juniper
9.7.13 - Kaspersky
9.7.13.1 - Endpoint Security
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | kaspersky |
| product | endpoint_security |
| format | text-plain, cef, or leef |
Note that the device can be configured to send plain syslog text, LEEF, or CEF-formatted output.
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| kaspersky:cef | epav |
| kaspersky:es | epav |
| kaspersky:gnrl | epav |
| kaspersky:klau | epav |
| kaspersky:klbl | epav |
| kaspersky:klmo | epav |
| kaspersky:klna | epav |
| kaspersky:klpr | epav |
| kaspersky:klsr | epav |
| kaspersky:leef | epav |
| kaspersky:sysl | epav |
9.7.14 - MicroFocus
9.7.15 - Microsoft
9.7.15.1 - Azure Event Hubs
Axoflow can collect data from your Azure Event Hubs. At a high level, the process looks like this:
- Deploy an Axoflow Cloud Connector that will collect the data from your Event Hub. Axoflow Cloud Connector is a simple container that you can deploy into Azure, another cloud provider, or on-prem.
- The connector forwards the collected data to the OpenTelemetry connector of an AxoRouter instance. This AxoRouter can be deployed within Azure, another cloud provider, or on-prem.
- Configure a Flow on Axoflow Console that processes and routes the collected data to your destination (for example, Splunk or another SIEM).
Prerequisites
Steps
To collect data from Azure Event Hubs, complete the following steps.
- Deploy an Axoflow Cloud Connector into Azure.
  - Access the Kubernetes node or virtual machine.
  - Set the following environment variable to the IP address of the AxoRouter where you want to forward the data from Event Hubs. This IP address must be accessible from the connector. You can find the IP address of AxoRouter on the Hosts > AxoRouter > Overview page.
    export AXOROUTER_ENDPOINT=<AxoRouter-IP-address>
  - (Optional) By default, the connector stores positional and other persistence-related data in the /etc/axoflow-otel-collector/storage directory. If you want to use a different directory, set the STORAGE_DIRECTORY environment variable.
  - Set the AZURE_EVENTHUB_CONNECTION_STRING environment variable.
    export AZURE_EVENTHUB_CONNECTION_STRING="Endpoint=sb://<NamespaceName>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<KeyValue>;EntityPath=<EventHubName>"
  - Deploy the Axoflow Cloud Connector by running:
    docker run --rm \
      -v "${STORAGE_DIRECTORY}":"${STORAGE_DIRECTORY}" \
      -e AZURE_EVENTHUB_CONNECTION_STRING="${AZURE_EVENTHUB_CONNECTION_STRING}" \
      -e AXOROUTER_ENDPOINT="${AXOROUTER_ENDPOINT}" \
      -e STORAGE_DIRECTORY="${STORAGE_DIRECTORY}" \
      ghcr.io/axoflow/axocloudconnectors:latest
    The Axoflow Cloud Connector starts forwarding logs to the AxoRouter instance.
- Add the appliance to Axoflow Console.
  - Open the Axoflow Console and select Topology.
  - Select + > Source.
  - Select Azure Event Hubs.
  - Enter the IP address and the FQDN of the Axoflow Cloud Connector instance.
  - Select Create.
- Create a Flow to route the data from the AxoRouter instance to a destination. You can use the Labels of this source to select messages from this source.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | microsoft |
| product | azure-event-hubs |
| format | otlp |
Event Hubs Audit logs labels
| label | value |
|-------|-------|
| vendor | microsoft |
| product | azure-event-hubs-audit |
| format | otlp |
Event Hubs Provisioning logs labels
| label | value |
|-------|-------|
| vendor | microsoft |
| product | azure-event-hubs-provisioning |
| format | otlp |
Event Hubs Signin logs labels
| label | value |
|-------|-------|
| vendor | microsoft |
| product | azure-event-hubs-signin |
| format | otlp |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| mscs:azure:eventhub:log | azure-activity |
9.7.15.2 - Windows hosts
To collect event logs from Microsoft Windows hosts, Axoflow supports both agent-based and agentless methods.
Labels
Labels assigned to data received from Windows hosts depend on how AxoRouter receives the data. For details, see Windows host - agent based solution and Windows Event Collector (WEC).
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| windows:eventlog:snare | oswin |
| windows:eventlog:xml | oswin |
9.7.16 - MikroTik
9.7.16.1 - RouterOS
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | mikrotik |
| product | routeros |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| routeros | netfw |
| routeros | netops |
9.7.17 - Netgate
9.7.17.1 - pfSense
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | netgate |
| product | pfsense |
| format | csv or text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| pfsense:filterlog | netfw |
| pfsense:<program> | netops |
The pfsense:<program> variant is simply a generic Linux event generated by the underlying OS on the appliance.
Tested with: TA-pfsense
9.7.18 - Netmotion
9.7.18.1 - Netmotion
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | netmotion |
| product | netmotion |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| netmotion:reporting | netops |
| netmotion:mobilityserver:nm_mobilityanalyticsappdata | netops |
9.7.19 - NETSCOUT
9.7.19.1 - Arbor Edge Defense (AED)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | netscout |
| product | arbor-edge |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
|------------|--------|-------|
| netscout:aed | netscout:aed | netids |
9.7.20 - OpenText
9.7.20.1 - ArcSight
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | opentext |
| product | arcsight |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
|------------|--------|-------|
| cef | ArcSight:ArcSight | main |
Earlier name/vendor
MicroFocus ArcSight
9.7.21 - Palo Alto Networks
9.7.21.1 - Cortex XSOAR
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | palo-alto-networks |
| product | cortex-xsoar |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
|------------|--------|-------|
| cef | tim:cef | infraops |
Earlier name/vendor
Threat Intelligence Management (TIM)
9.7.21.2 - Palo Alto firewalls
The following sections show you how to configure Palo Alto Networks Next-Generation Firewall devices to send their log data to Axoflow.
CAUTION:
Make sure to set data forwarding on your appliances/servers as described in this guide. Different settings like alternate message formats or ports might be valid, but can result in data loss or incorrect parsing.
Prerequisites
Steps
Note: The steps involving the Palo Alto Networks Next-Generation Firewall user interface are just for your convenience; for details, see the official PAN-OS® documentation.
- Log in to your firewall device. You need administrator privileges to perform the configuration.
- Configure a Syslog server profile.
  - Select Device > Server Profiles > Syslog.
  - Click Add and enter a Name for the profile, for example, axorouter.
  - Configure the following settings:
    - Syslog Server: Enter the IP address of your AxoRouter: %axorouter-ip%
    - Transport: Select TCP or TLS.
    - Port: Set the port to 601. (This is needed for the recommended IETF log format. If for some reason you need to use the BSD format, set the port to 514.)
    - Format: Select IETF.
    - Syslog logging: Enable this option.
  - Click OK.
- Configure syslog forwarding for Traffic, Threat, and WildFire Submission logs. For details, see Configure Log Forwarding in the official PAN-OS® documentation.
  - Select Objects > Log Forwarding.
  - Click Add.
  - Enter a Name for the profile, for example, axoflow.
  - For each log type, severity level, or WildFire verdict, select the Syslog server profile.
  - Click OK.
  - Assign the log forwarding profile to a security policy to trigger log generation and forwarding.
    - Select Policies > Security and select a policy rule.
    - Select Actions, then select the Log Forwarding profile you created (for example, axoflow).
    - For Traffic logs, select one or both of the Log at Session Start and Log at Session End options.
    - Click OK.
- Configure syslog forwarding for System, Config, HIP Match, and Correlation logs.
  - Select Device > Log Settings.
  - For System and Correlation logs, select each Severity level, select the Syslog server profile, and click OK.
  - For Config, HIP Match, and Correlation logs, edit the section, select the Syslog server profile, and click OK.
- Click Commit.
- Add the appliance to Axoflow Console.
  - Open the Axoflow Console and select Topology.
  - Select + > Source.
  - Select your source.
  - Enter the parameters of the source, like IP address and FQDN.
  - Select Create.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
|-------|-------|
| vendor | pan |
| product | paloalto |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
|------------|-------|
| pan:audit | netops |
| pan:globalprotect | netfw |
| pan:hipmatch | epintel |
| pan:traffic | netfw |
| pan:threat | netproxy |
| pan:system | netops |
Tested with: Palo Alto Networks Add-on for Splunk technical add-on
9.7.22 - Powertech
9.7.23 - Riverbed
9.7.23.1 - SteelConnect
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
label |
value |
vendor |
riverbed |
product |
steelconnect |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| riverbed:syslog | netops |
9.7.23.2 - SteelHead
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ------- | --------- |
| vendor | riverbed |
| product | steelhead |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| riverbed:steelhead | netops |
9.7.24 - rsyslog
Axoflow treats rsyslog sources as generic syslog sources. To send data from rsyslog to Axoflow, configure rsyslog to forward its messages to an AxoRouter instance using the syslog protocol.
Note that even if rsyslog is acting as a relay (receiving data from other clients and forwarding them to AxoRouter), on the Topology page it will be displayed as a data source.
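For example, a minimal rsyslog forwarding rule might look like the following sketch (this is an illustration, not an official configuration; it assumes AxoRouter is listening on TCP port 601 for RFC5424-formatted messages, and %axorouter-ip% is a placeholder for its address):

# /etc/rsyslog.d/forward-axorouter.conf (example path)
# Forward all messages to AxoRouter using the IETF (RFC5424) format
*.* action(
    type="omfwd"
    target="%axorouter-ip%"
    port="601"
    protocol="tcp"
    template="RSYSLOG_SyslogProtocol23Format"
)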
Prerequisites
9.7.25 - SecureAuth
9.7.25.1 - Identity Platform
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ------- | ---------- |
| vendor | secureauth |
| product | idp |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| secureauth:idp | netops |
Tested with: SecureAuth IdP Splunk App
9.7.26 - SonicWall
9.7.26.1 - SonicWall
The following sections show you how to configure SonicWall firewalls to send their log data to Axoflow.
CAUTION:
Make sure to set data forwarding on your appliances/servers as described in this guide. Different settings like alternate message formats or ports might be valid, but can result in data loss or incorrect parsing.
Prerequisites
Steps for SonicOS 7.x
Note: The steps involving the SonicWall user interface are provided for your convenience; for details, see the official SonicWall documentation.
1. Log in to your SonicWall device. You need administrator privileges to perform the configuration.
2. Register the address of your AxoRouter as an Address Object.
   1. Select MENU > OBJECT.
   2. Select Match Objects > Addresses > Address objects.
   3. Click Add Address.
   4. Configure the following settings:
      - Name: Enter a name for the AxoRouter, for example, AxoRouter.
      - Zone Assignment: Select the correct zone.
      - Type: Select Host.
      - IP Address: Enter the IP address of your AxoRouter: %axorouter-ip%
   5. Click Save.
3. Set your AxoRouter as a syslog server.
   1. Navigate to Device > Log > Syslog.
   2. Select the Syslog Servers tab.
   3. Click Add.
   4. Configure the following options:
      - Name or IP Address: Select the Address Object of AxoRouter.
      - Server Type: Select Syslog Server.
      - Syslog Format: Select Enhanced.
      - If your syslog server does not use the default port 514, type the port number into the Port field.
        By default, AxoRouter accepts data on the following ports:
        - 514 TCP and UDP for RFC3164 (BSD-syslog) formatted traffic.
        - 601 TCP for RFC5424 (IETF-syslog) formatted traffic.
        - 6514 TCP for TLS-encrypted syslog traffic.
        - 4317 TCP for OpenTelemetry log data.
        To receive data on other ports or protocols, configure the source connectors of the AxoRouter host. Make sure to enable the ports you're using on the firewall of your host.
4. Add the appliance to Axoflow Console.
   1. Open the Axoflow Console and select Topology.
   2. Select + > Source.
   3. Select your source.
   4. Enter the parameters of the source, like IP address and FQDN.
   5. Select Create.
Steps for SonicOS 6.x
Note: The steps involving the SonicWall user interface are provided for your convenience; for details, see the official SonicWall documentation.
1. Log in to your SonicWall device. You need administrator privileges to perform the configuration.
2. Register the address of your AxoRouter as an Address Object.
   1. Select MANAGE > Policies > Objects > Address Objects.
   2. Click Add.
   3. Configure the following settings:
      - Name: Enter a name for the AxoRouter, for example, AxoRouter.
      - Zone Assignment: Select the correct zone.
      - Type: Select Host.
      - IP Address: Enter the IP address of your AxoRouter: %axorouter-ip%
   4. Click Add.
3. Set your AxoRouter as a syslog server.
   1. Navigate to MANAGE > Log Settings > SYSLOG.
   2. Click ADD.
   3. Configure the following options:
      - Syslog ID: Enter an ID for the firewall. This ID will be used as the hostname in the log messages.
      - Name or IP Address: Select the Address Object of AxoRouter.
      - Server Type: Select Syslog Server.
      - Enable the Enhanced Syslog Fields Settings.
   4. Click OK.
4. Add the appliance to Axoflow Console.
   1. Open the Axoflow Console and select Topology.
   2. Select + > Source.
   3. Select your source.
   4. Enter the parameters of the source, like IP address and FQDN.
   5. Select Create.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ------- | --------- |
| vendor | dell |
| product | sonicwall |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| dell:sonicwall | netfw |
Tested with: Dell SonicWall Add-on for Splunk technical add-on
9.7.27 - syslog-ng
By default, Axoflow treats syslog-ng sources as a generic syslog source.
- The easiest way to send data from syslog-ng to Axoflow is to configure it to send data to an AxoRouter instance using the syslog protocol.
- If you're using syslog-ng Open Source Edition version 4.4 or newer, you can use the syslog-ng-otlp() driver to send data to AxoRouter using the OpenTelemetry Protocol.
Note that even if syslog-ng is acting as a relay (receiving data from other clients and forwarding it to AxoRouter), it is displayed as a data source on the Topology page.
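For example, a minimal syslog-ng configuration for either option might look like the following sketch (the source name s_local and the %axorouter-ip% placeholder are illustrative):

# Option 1: IETF syslog protocol (RFC5424) over TCP port 601
destination d_axorouter { syslog("%axorouter-ip%" transport("tcp") port(601)); };

# Option 2: OpenTelemetry Protocol (syslog-ng OSE 4.4 or newer), TCP port 4317
destination d_axorouter_otlp { syslog-ng-otlp(url("%axorouter-ip%:4317")); };

log { source(s_local); destination(d_axorouter); };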
Prerequisites
9.7.28 - Thales
9.7.28.1 - Vormetric Data Security Platform
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ------- | ---------- |
| vendor | thales |
| product | vormetric |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| thales:vormetric | netauth |
9.7.29 - Trellix
9.7.29.1 - Central Management System (CMS)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ------- | ------- |
| vendor | trellix |
| product | cms |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | source | index |
| ---------- | ------ | ----- |
| trellix:cms | trellix:cms | netops |
9.7.29.2 - Endpoint Security (HX)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ------- | --------- |
| vendor | trellix |
| product | hx |
| format | text-json |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| hx_json | fireeye |
| fe_json | fireeye |
| hx_cef_syslog | fireeye |
Tested with: FireEye Add-on for Splunk Enterprise
Earlier name/vendor: FireEye Endpoint Security (HX)
9.7.29.3 - ETP
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ------- | ------- |
| vendor | trellix |
| product | etp |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| fe_etp | fireeye |
9.7.29.4 - MPS
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ------- | ------- |
| vendor | trellix |
| product | mps |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | source | index |
| ---------- | ------ | ----- |
| trellix:mps | trellix:mps | netops |
9.7.30 - Trend Micro
9.7.30.1 - Deep Security Agent
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ------- | ------------------- |
| vendor | trend-micro |
| product | deep-security-agent |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| deepsecurity | epintel |
| deepsecurity-system_events | epintel |
| deepsecurity-intrusion_prevention | epintel |
| deepsecurity-firewall | epintel |
| deepsecurity-antimalware | epintel |
| deepsecurity-integrity_monitoring | epintel |
| deepsecurity-log_inspection | epintel |
| deepsecurity-web_reputation | epintel |
| deepsecurity-app_control | epintel |
9.7.31 - Varonis
9.7.31.1 - DatAdvantage
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ------- | ------------ |
| vendor | varonis |
| product | datadvantage |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| varonis:ta | main |
9.7.32 - Vectra AI
Earlier name/vendor: Vectra Cognito
9.7.32.1 - X-Series
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ------- | -------- |
| vendor | vectra |
| product | x-series |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| vectra:cognito:detect | main |
| vectra:cognito:accountdetect | main |
| vectra:cognito:accountscoring | main |
| vectra:cognito:audit | main |
| vectra:cognito:campaigns | main |
| vectra:cognito:health | main |
| vectra:cognito:hostscoring | main |
| vectra:cognito:accountlockdown | main |
9.7.33 - Zscaler appliances
9.7.33.1 - Zscaler Nanolog Streaming Service
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ------- | ------- |
| vendor | zscaler |
| product | nss |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| zscalernss-alerts | netops |
| zscalernss-tunnel | netops |
| zscalernss-web | netproxy |
| zscalernss-web:leef | netproxy |
Tested with: Zscaler Technical Add-On for Splunk
9.7.33.2 - Zscaler Log Streaming Service
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ------- | ------- |
| vendor | zscaler |
| product | lss |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| zscalerlss-zpa-app | netproxy |
| zscalerlss-zpa-audit | netproxy |
| zscalerlss-zpa-auth | netproxy |
| zscalerlss-zpa-bba | netproxy |
| zscalerlss-zpa-connector | netproxy |
Tested with: Zscaler Technical Add-On for Splunk
9.8 - Cloud sources
9.8.1 - A10 Networks
9.8.1.1 - vThunder
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ------- | ----------- |
| vendor | a10networks |
| product | vthunder |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | source | index |
| ---------- | ------ | ----- |
| a10networks:vThunder:cef | a10networks:vThunder | netwaf |
9.8.2 - Imperva
9.8.2.1 - Incapsula
9.8.3 - Microsoft
9.8.3.1 - Cloud App Security (MCAS)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ------- | --------- |
| vendor | microsoft |
| product | cas |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | source | index |
| ---------- | ------ | ----- |
| cef | microsoft:cas | main |
10 - Destinations
The following chapters show you how to configure Axoflow to send your data to specific destinations, like SIEMs or object storage solutions.
10.1 - Amazon
10.1.1 - Amazon S3
To add an Amazon S3 destination to Axoflow, complete the following steps.
Prerequisites
- An existing S3 bucket configured for programmatic access, and the related ACCESS_KEY and SECRET_KEY of a user that can access it. The user needs the following permissions (see the example policy after this list):
  - kms:Decrypt
  - kms:Encrypt
  - kms:GenerateDataKey
  - s3:ListBucket
  - s3:ListBucketMultipartUploads
  - s3:AbortMultipartUpload
  - s3:ListMultipartUploadParts
  - s3:PutObject
- To configure Axoflow, you'll need the bucket name, the region (or URL), the access key, and the secret key of the bucket.
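For reference, a minimal IAM policy granting these permissions might look like the following sketch (the bucket name my-log-bucket and the KMS key ARN are placeholders; adjust them to your environment):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:ListBucketMultipartUploads"],
      "Resource": "arn:aws:s3:::my-log-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts"],
      "Resource": "arn:aws:s3:::my-log-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
      "Resource": "arn:aws:kms:<region>:<account-id>:key/<key-id>"
    }
  ]
}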
Steps
1. Create a new destination.
   1. Open the Axoflow Console.
   2. Select Topology.
   3. Select + > Destination.
2. Configure the destination.
   1. Select Amazon S3.
   2. Enter a name for the destination.
   3. Enter the name of the bucket you want to use.
   4. Enter the region code of the bucket into the Region field (for example, us-east-1), or select the Use custom endpoint URL option and enter the URL of the endpoint into the URL field.
   5. Enter the Access key and the Secret key for the account you want to use.
   6. Enter the Object key (or key name), which uniquely identifies the object in an Amazon S3 bucket, for example: my-logs/${HOSTNAME}/. You can use AxoSyslog macros in this field.
   7. Select the Object key timestamp format you want to use, or select Use custom object key timestamp and enter a custom template. For details on the available date-related macros, see the AxoSyslog documentation.
   8. Set the maximal size of the S3 object. If an object reaches this size, Axoflow appends an index ("-1", "-2", and so on) to the end of the object key and starts a new object after rotation.
   9. Select Create.
3. Create a flow to connect the new destination to an AxoRouter instance.
   1. Select Flows.
   2. Select Create New Flow.
   3. Enter a name for the flow, for example, my-test-flow.
   4. In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
   5. Select the Destination where you want to send your data. If you don't have any destination configured, see Destinations. By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.
   6. (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
      - Add a Reduce step to automatically remove redundant and empty fields from your data.
      - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
      - Save the processing steps.
   7. Select Create.

The new flow appears in the Flows list.

10.2 - Apache Kafka
Coming soon
If you’d like, we can send you an email when we update this page.
10.3 - Clickhouse
Coming soon
If you’d like, we can send you an email when we update this page.
10.4 - Crowdstrike
Coming soon
If you’d like, we can send you an email when we update this page.
10.5 - Elastic
10.5.1 - Elastic Cloud
To add an Elasticsearch destination to Axoflow, complete the following steps.
Prerequisites
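You can check that the Elasticsearch endpoint is reachable and the credentials work with a quick bulk-API test, for example (a sketch; the URL, credentials, and index name are placeholders):

curl -u "<username>:<password>" -H "Content-Type: application/x-ndjson" \
  "http://my-elastic-server:9200/_bulk" \
  --data-binary $'{"index":{"_index":"test"}}\n{"message":"connectivity test"}\n'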
Steps
1. Create a new destination.
   1. Open the Axoflow Console.
   2. Select Topology.
   3. Select + > Destination.
2. Configure the destination.
   1. Select Elasticsearch.
   2. Select which type of configuration you want to use:
      - Simple: Send all data into a single index.
      - Dynamic: Send data to an index based on the content or metadata of the incoming messages (or to a default index).
      - Advanced: Allows you to specify a custom URL endpoint.
   3. Enter a name for the destination.
   4. Configure the endpoint of the destination.
      - Advanced: Enter your Elasticsearch URL into the URL field, for example, http://my-elastic-server:9200/_bulk
      - Simple and Dynamic:
        - Select the HTTPS or HTTP protocol to use to access your destination.
        - Enter the Hostname and Port of the destination.
   5. Specify the Elasticsearch index to send the data to. You can use AxoSyslog macros in this field.
      - Simple: Enter the expression that specifies the Elasticsearch index to use into the Index field, for example: test-${YEAR}${MONTH}${DAY}. All data will be sent into this index.
      - Dynamic and Advanced: Enter the expression that specifies the default index. The data will be sent into this index if no other index is set during the processing of the message (for example, by the processing steps of the Flow).
   6. Enter the username and password for the account you want to use.
   7. (Optional) By default, Axoflow rejects connections to the destination server if the certificate of the server is invalid (for example, it's expired, signed by an unknown CA, or its CN and the name of the server are mismatched). If you want to accept invalid certificates (or no certificate) from the destination servers, disable the Verify server certificate option.
      Warning: When validating a certificate, the entire certificate chain must be valid, including the CA certificate. If any certificate of the chain is invalid, Axoflow rejects the connection.
   8. (Optional) Set other options as needed for your environment.
      - Timeout: The number of seconds to wait for a log-forwarding request to complete; if exceeded, AxoRouter attempts to reconnect to the server. The default (0) is unlimited. For more details, see the AxoSyslog documentation.
      - Batch Timeout: Maximal time in milliseconds to wait for a batch to be filled before sending it. The default value (-1) means that the batch should be full before sending, which may result in a long wait if the incoming traffic is low. For more details, see the AxoSyslog documentation.
      - Workers: Used for scaling the destination in case of high message load. Specifies the number of worker threads AxoRouter uses for sending messages to the destination. The default is 1. If high message throughput is not a concern, leave it on 1. For maximum performance, increase it up to the number of CPU cores available on AxoRouter. For more details, see the AxoSyslog documentation.
   9. Select Create.
3. Create a flow to connect the new destination to an AxoRouter instance.
   1. Select Flows.
   2. Select Create New Flow.
   3. Enter a name for the flow, for example, my-test-flow.
   4. In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
   5. Select the Destination where you want to send your data. If you don't have any destination configured, see Destinations. By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.
   6. (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
      - Add a Reduce step to automatically remove redundant and empty fields from your data.
      - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
      - Save the processing steps.
   7. Select Create.

The new flow appears in the Flows list.

10.5.2 - Elasticsearch
Coming soon
If you’d like, we can send you an email when we update this page.
10.6 - Generic
10.6.1 - /dev/null
This is a null destination that discards (drops) all data it receives, but reports that it has successfully received the data. This is sometimes useful for testing and performance measurements, for example, to find out whether a real destination is the bottleneck, or another element in the upstream pipeline is.
CAUTION:
Data that's sent only to the /dev/null destination is irrevocably lost.
Steps
1. Create a new destination.
   1. Open the Axoflow Console.
   2. Select Topology.
   3. Select + > Destination.
2. Configure the destination.
   1. Select /dev/null.
   2. Enter a name for the destination.
   3. (Optional) Add custom labels to the destination.
   4. Select Create.
3. Create a flow to connect the new destination to an AxoRouter instance.
   1. Select Flows.
   2. Select Create New Flow.
   3. Enter a name for the flow, for example, my-test-flow.
   4. In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
   5. Select the Destination where you want to send your data. If you don't have any destination configured, see Destinations. By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.
   6. (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
      - Add a Reduce step to automatically remove redundant and empty fields from your data.
      - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
      - Save the processing steps.
   7. Select Create.

The new flow appears in the Flows list.

10.7 - Google
10.7.1 - Google Cloud Pub/Sub
To add a Google Cloud Pub/Sub destination to Axoflow, complete the following steps.
Prerequisites
For details, see the Google Pub/Sub tutorial.
Steps
1. Create a new destination.
   1. Open the Axoflow Console.
   2. Select Topology.
   3. Select + > Destination.
2. Configure the destination.
   1. Select Pub/Sub.
   2. Select which type of configuration you want to use:
      - Simple: Send all data into a single project and topic.
      - Dynamic: Send data to a project and topic based on the content or metadata of the incoming messages (or to a default project/topic).
   3. Enter a name for the destination.
   4. Specify the project and topic to send the data to. You can use AxoSyslog macros in these fields.
      - Simple: Enter the ID of the GCP Project and the Topic. All data will be sent to this topic.
      - Dynamic: Enter the expressions that specify the default Project and Topic. The data will be sent there unless another project or topic is set during the processing of the message (for example, by the processing steps of the Flow).
   5. Configure the authentication method to access the GCP project.
      - Automatic (ADC): Use the service account attached to the cloud resource (VM) that hosts AxoRouter.
      - Service Account File: Specify the path where a service account key file is located (for example, /etc/axorouter/user-config/). You must manually copy that file to its place; currently you can't distribute it from Axoflow.
      - None: Disable authentication completely. Only available when the More options > Service Endpoint option is set.
        CAUTION: Do not disable authentication in production.
   6. (Optional) Set other options as needed for your environment.
      - Service Endpoint: Use a custom API endpoint. Leave it empty to use the default: https://pubsub.googleapis.com
      - Timeout: The number of seconds to wait for a log-forwarding request to complete; if exceeded, AxoRouter attempts to reconnect to the server. The default (0) is unlimited. For more details, see the AxoSyslog documentation.
      - Batch Timeout: Maximal time in milliseconds to wait for a batch to be filled before sending it. The default value (-1) means that the batch should be full before sending, which may result in a long wait if the incoming traffic is low. For more details, see the AxoSyslog documentation.
      - Workers: Used for scaling the destination in case of high message load. Specifies the number of worker threads AxoRouter uses for sending messages to the destination. The default is 1. If high message throughput is not a concern, leave it on 1. For maximum performance, increase it up to the number of CPU cores available on AxoRouter. For more details, see the AxoSyslog documentation.
   7. Select Create.
3. Create a flow to connect the new destination to an AxoRouter instance.
   1. Select Flows.
   2. Select Create New Flow.
   3. Enter a name for the flow, for example, my-test-flow.
   4. In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
   5. Select the Destination where you want to send your data. If you don't have any destination configured, see Destinations. By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.
   6. (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
      - Add a Reduce step to automatically remove redundant and empty fields from your data.
      - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
      - Save the processing steps.
   7. Select Create.

The new flow appears in the Flows list.

Pub/Sub attributes
You can embed custom attributes as metadata in Pub/Sub messages in the processing steps of the Flow that's sending data to the Pub/Sub destination. To do that:
1. Add a FilterX processing step to the Flow.
2. Edit the Statements field of the processing step:
   1. Add the meta.pubsub.attributes = json(); line to add an empty JSON object to the messages.
   2. Set your custom attributes under the meta.pubsub.attributes key. For example, if you want to include the timestamp as a custom attribute as well, you can use: meta.pubsub.attributes = {"timestamp": $S_ISODATE};
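Taken together, the Statements field of such a FilterX step might look like this sketch (the timestamp attribute is the example from the step above):

meta.pubsub.attributes = json();
meta.pubsub.attributes = {"timestamp": $S_ISODATE};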
10.8 - Grafana
10.8.1 - Grafana Loki
Coming soon
If you’d like, we can send you an email when we update this page.
10.9 - Microsoft
10.9.1 - Azure Monitor
Sending data to Azure Monitor is practically identical to using the Sentinel destination. Follow the procedure described in Microsoft Sentinel.
10.9.2 - Microsoft Sentinel
To add a Microsoft Sentinel or Azure Monitor destination to Axoflow, complete the following steps. Axoflow Console can configure your AxoRouters to send data to the built-in syslog table of Azure Monitor.
Prerequisites
Steps
1. Create a new destination.
   1. Open the Axoflow Console.
   2. Select Topology.
   3. Select + > Destination.
2. Configure the destination.
   1. Select Sentinel or Azure Monitor.
   2. Enter a name for the destination.
   3. Configure the credentials needed for authentication.
      - Tenant ID: The Directory (tenant) ID of the environment where you're sending the data. (Practically everything belongs to the tenant ID: the Entra ID application, the Log Analytics workspace, Sentinel, the DCE and the DCR, and so on.)
      - Application ID: The Application (client) ID of the Microsoft Entra ID application.
      - Application secret: The Client secret of the Microsoft Entra ID application.
      - Scope: The scope for the authentication token. You can usually leave it empty to use the default value (https://monitor.azure.com//.default).
   4. Specify the details of the table to send the data to.
   5. (Optional) Set other options as needed for your environment.
      - Timeout: The number of seconds to wait for a log-forwarding request to complete; if exceeded, AxoRouter attempts to reconnect to the server. The default (0) is unlimited. For more details, see the AxoSyslog documentation.
      - Batch Timeout: Maximal time in milliseconds to wait for a batch to be filled before sending it. The default value (-1) means that the batch should be full before sending, which may result in a long wait if the incoming traffic is low. For more details, see the AxoSyslog documentation.
      - Workers: Used for scaling the destination in case of high message load. Specifies the number of worker threads AxoRouter uses for sending messages to the destination. The default is 1. If high message throughput is not a concern, leave it on 1. For maximum performance, increase it up to the number of CPU cores available on AxoRouter. For more details, see the AxoSyslog documentation.
   6. Select Create.
3. Create a flow to connect the new destination to an AxoRouter instance.
   1. Select Flows.
   2. Select Create New Flow.
   3. Enter a name for the flow, for example, my-test-flow.
   4. In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
   5. Select the Destination where you want to send your data. If you don't have any destination configured, see Destinations. By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.
   6. (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
      - Add a Reduce step to automatically remove redundant and empty fields from your data.
      - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
      - Save the processing steps.
   7. Select Create.

The new flow appears in the Flows list.

10.10 - OpenObserve
To add an OpenObserve destination to Axoflow, complete the following steps.
Prerequisites
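You can verify that the OpenObserve ingestion endpoint is reachable with a quick test, for example (a sketch; the URL, organization, stream, and credentials are placeholders):

curl -u "<username>:<password>" -H "Content-Type: application/json" \
  "https://example.com/api/my-org/my-stream/_json" \
  -d '[{"message": "connectivity test"}]'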
Steps
1. Create a new destination.
   1. Open the Axoflow Console.
   2. Select Topology.
   3. Select + > Destination.
2. Configure the destination.
   1. Select OpenObserve.
   2. Select which type of configuration you want to use:
      - Simple: Send all data into a single stream of an organization.
      - Dynamic: Send data to an organization and stream based on the content or metadata of the incoming messages (or to the default stream of the default organization).
      - Advanced: Allows you to specify a custom URL endpoint.
   3. Enter a name for the destination.
   4. Configure the endpoint of the destination.
      - Advanced: Enter your OpenObserve URL into the URL field, for example, https://example.com/api/my-org/my-stream/_json
      - Simple and Dynamic:
        - Select the HTTPS or HTTP protocol to use to access your destination.
        - Enter the Hostname and Port of the destination.
   5. Specify the OpenObserve stream to send the data to. You can use AxoSyslog macros in these fields.
      - Simple: Enter the name of the Organization and the Stream where you want to send the data. All data will be sent into this stream.
      - Dynamic: Enter the expressions that specify the Default Organization and the Default Stream. The data will be sent into this stream if no other organization and stream is set during the processing of the message (for example, by the processing steps of the Flow).
   6. Enter the username and password for the account you want to use.
   7. (Optional) By default, Axoflow rejects connections to the destination server if the certificate of the server is invalid (for example, it's expired, signed by an unknown CA, or its CN and the name of the server are mismatched). If you want to accept invalid certificates (or no certificate) from the destination servers, disable the Verify server certificate option.
      Warning: When validating a certificate, the entire certificate chain must be valid, including the CA certificate. If any certificate of the chain is invalid, Axoflow rejects the connection.
   8. (Optional) Set other options as needed for your environment.
      - Timeout: The number of seconds to wait for a log-forwarding request to complete; if exceeded, AxoRouter attempts to reconnect to the server. The default (0) is unlimited. For more details, see the AxoSyslog documentation.
      - Batch Timeout: Maximal time in milliseconds to wait for a batch to be filled before sending it. The default value (-1) means that the batch should be full before sending, which may result in a long wait if the incoming traffic is low. For more details, see the AxoSyslog documentation.
      - Workers: Used for scaling the destination in case of high message load. Specifies the number of worker threads AxoRouter uses for sending messages to the destination. The default is 1. If high message throughput is not a concern, leave it on 1. For maximum performance, increase it up to the number of CPU cores available on AxoRouter. For more details, see the AxoSyslog documentation.
   9. Select Create.
3. Create a flow to connect the new destination to an AxoRouter instance.
   1. Select Flows.
   2. Select Create New Flow.
   3. Enter a name for the flow, for example, my-test-flow.
   4. In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
   5. Select the Destination where you want to send your data. If you don't have any destination configured, see Destinations. By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.
   6. (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
      - Add a Reduce step to automatically remove redundant and empty fields from your data.
      - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
      - Save the processing steps.
   7. Select Create.

The new flow appears in the Flows list.

10.11 - Splunk
10.11.1 - Splunk Cloud
To add a Splunk destination to Axoflow, complete the following steps.
Prerequisites
- Enable the HTTP Event Collector (HEC) on your Splunk deployment if needed. On Splunk Cloud Platform deployments, HEC is enabled by default.
- Create a token for Axoflow to use in the destination. When creating the token, use the syslog source type. For details, see Set up and use HTTP Event Collector in Splunk Web.
- If you're using AxoRouter, create the indexes where Axoflow sends the log data. Which indexes are needed depends on your sources, but create at least the following event indexes: axoflow, infraops, netops, netfw, osnix (for unclassified messages). Check your sources in the Sources section for a detailed list of which indexes their data is sent to.
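Before configuring the destination, you can verify that the token and the HEC endpoint work by sending a test event, for example (a sketch; the hostname, port, and token are placeholders, and -k skips certificate verification for self-signed certificates):

curl -k "https://<your-splunk-host>:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "Axoflow HEC connectivity test", "sourcetype": "syslog"}'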
Steps
1. Create a new destination.
   1. Open the Axoflow Console.
   2. Select Topology.
   3. Select + > Destination.
2. Configure the destination.
   1. Select Splunk.
   2. Select which type of configuration you want to use:
      - Simple: Send all data into a single index, with fixed source and source type settings.
      - Dynamic: Set index, source, and source type based on the content or metadata of the incoming messages.
      - Advanced: Allows you to specify a custom URL endpoint.
   3. Enter a name for the destination.
   4. Configure the endpoint of the destination.
      - Advanced: Enter your Splunk URL into the URL field, for example, https://<your-splunk-tenant-id>.splunkcloud.com:8088 for Splunk Cloud Platform free trials, or https://<your-splunk-tenant-id>.splunkcloud.com for Splunk Cloud Platform instances.
      - Simple and Dynamic:
        - Select the HTTPS or HTTP protocol to use to access your destination.
        - Enter the Hostname and Port of the destination.
   5. Specify the Splunk index to send the data to.
      - Simple: Enter the expression that specifies the Splunk index to use into the Index field, for example: netops. All data will be sent into this index.
      - Dynamic and Advanced:
        - Enter the name of the Default Index. The data will be sent into this index if no other index is set during the processing of the message (based on automatic classification, or by the processing steps of the Flow). Make sure that the index exists in Splunk.
        - Enter the Default Source and Default Source Type. These will be assigned to the messages that have no source or source type set during the processing of the message (based on automatic classification, or by the processing steps of the Flow).
   6. Enter the token you've created into the Token field.
   7. Disable the Verify server certificate option unless your deployment has a valid, non-self-signed certificate. Free Splunk Cloud accounts have self-signed certificates.
   8. (Optional) Set other options as needed for your environment.
      - Timeout: The number of seconds to wait for a log-forwarding request to complete; if exceeded, AxoRouter attempts to reconnect to the server. The default (0) is unlimited. For more details, see the AxoSyslog documentation.
      - Batch Timeout: Maximal time in milliseconds to wait for a batch to be filled before sending it. The default value (-1) means that the batch should be full before sending, which may result in a long wait if the incoming traffic is low. For more details, see the AxoSyslog documentation.
      - Workers: Used for scaling the destination in case of high message load. Specifies the number of worker threads AxoRouter uses for sending messages to the destination. The default is 1. If high message throughput is not a concern, leave it on 1. For maximum performance, increase it up to the number of CPU cores available on AxoRouter. For more details, see the AxoSyslog documentation.
   9. Select Create.
3. Create a flow to connect the new destination to an AxoRouter instance.
   1. Select Flows.
   2. Select Create New Flow.
   3. Enter a name for the flow, for example, my-test-flow.
   4. In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
   5. Select the Destination where you want to send your data. If you don't have any destination configured, see Destinations. By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.
   6. (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
      - Add a Reduce step to automatically remove redundant and empty fields from your data.
      - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
      - Save the processing steps.
   7. Select Create.

The new flow appears in the Flows list.

10.11.2 - Splunk Enterprise
Coming soon
If you’d like, we can send you an email when we update this page.
11 - Metrics and analytics
This chapter shows how to access the different metrics and analytics that Axoflow collects about your security data pipeline.
11.1 - Analytics
Axoflow allows you to analyze your data at various points in your pipeline using Sankey and Sunburst diagrams.
- The Analytics page allows you to analyze the data throughput of your pipeline.
- You can analyze the throughput of a flow on the Flows > <flow-to-analyze> > Analytics page.
- You can analyze the throughput of a single host on the Hosts > <host-to-analyze> > Analytics page.

The analytics charts
You can select what is displayed and how using the top bar and the Filter labels bar.

- Time period: Select the calendar_month icon to change the time period that's displayed on the charts. You can use absolute (calendar) time, or relative time (for example, the last 2 days).
- Select the insights icon to switch between Sankey and Sunburst diagrams.
- You can display the data throughput based on:
  - Output bytes
  - Output events
  - Input bytes
  - Input events
- Add and clear filters (filter_alt / filter_alt_off).
In the Filter labels bar, you can:
- Reorder the labels to adjust the diagram. On Sunburst diagrams, the left-most label is on the inside of the diagram.
- Add new labels to get more details about the data flow.
  - Labels added to AxoRouter hosts get the axo_host_ prefix.
  - Labels added to data sources get the host_ prefix. For example, if you add a rack label to an edge host, it'll be added to the data received from the host as host_rack.
  On other pages, like the Host Overview page, the labels are displayed without the prefixes.
- Remove unneeded labels from the diagram.
Click a segment of the diagram to drill down into the data. That's equivalent to selecting filter_alt and adding the label to the Analytics filters. To clear the filters, select filter_alt_off.
Hovering over a segment displays more details about it.
Sunburst diagrams
Sunburst diagrams (also known as ring charts or radial treemaps) visualize your data pipeline as a hierarchical dataset. They organize the data according to the labels displayed in the Filter labels field into concentric rings, where each ring corresponds to a level in the hierarchy. The left-most label is on the inside of the diagram.

For example, sunburst diagrams are great for visualizing:
- top talkers (the data sources that are sending the most data), or
- which team is sending the most data to the destination, if you've added custom labels that identify the owners of your data sources.
The following example groups the data sources that send data into a Splunk destination based on their custom host_label_team labels.

Sankey diagrams
The Sankey diagram of your data pipeline shows the flow of data between the elements of the pipeline, for example, from the source (host) to the destination. Sankey diagrams are especially suited to visualize the flow of data, and show how that flow is subdivided at each stage. That way, they help highlight bottlenecks, and show where and how much data is flowing.

The diagram consists of nodes (also called segments) that represent the different attributes or labels of the data flowing through the host. Nodes are shown as labeled columns, for example, the sender application (app), or a host. The thickness of the links between the nodes of the diagram shows the amount of data.
- Hover over a link to show the data throughput of this link between the edges of the diagram.
- Click a link to show its details: the labels that the link connects, and their data throughput. You can also tap into the log flow.
- Click a node to drill down into the diagram. (To undo, use the Back button of your browser, or the clear filters icon filter_alt_off.)
The following example shows a custom label that identifies the owner of the source host, thereby visualizing which team is sending the most data to the destination.

Sankey diagrams are a great way to:
- Visualize flows: add the flow label to the Filter labels field.
- Find unclassified messages that weren't recognized by the Axoflow database: add the app label to the Filter labels field, and look for the axo_fallback link. You can tap into the log flow to check these messages. Feel free to send us a sample so we can add them to the classification database.
- Visualize custom labels and their relation to data flows.
Tapping into the log flow
1. On the Sankey diagram, click a link to show its details.
2. Tap into the data traffic:
   - Select Tap with this label to tap into the log flow at either end of the link.
   - Select Tap both to tap into the data flowing through the link.
3. Select the host where you want to tap into the logs.
4. Select Start.
5. When the logs you're interested in show up, click Stop Log Tap, then click a log message to see its details.
6. If you don't know what the message means, select AI Analytics to ask our AI to interpret it.

11.2 - Host metrics
The Metrics & health page of a host shows the history of the various metrics Axoflow collects about the host.

Events of the hosts (for example, configuration reloads, or alerts affecting the host) are displayed over the metrics.

Interact with metrics
You can change which metrics are displayed and for which time period using the bar above the metrics.

- Metrics categories: Temporarily hide/show the metrics of that category, for example, System. The category for each metric is displayed under the name of the chart. Note that this change is just temporary: if you want to change the layout of the metrics, use the settings icon.
- calendar_month: Use the calendar icon to change the time period that's displayed on the charts. You can use absolute (calendar) time, or relative time (for example, the last 2 days). To quickly zoom in on a period, click and drag to select the period to display on any of the charts. Every chart will be updated for the selected period. (To return to the previous state, click the Back button of your browser.)
- Hide/show the alerts and other events (like configuration reloads) of the host. These events are overlaid on the charts by default.
- The alert counter shows the number of active alerts on the host for that period.
- The settings icon allows you to change the order of the metrics, or to hide metrics. These changes are persistent and stored in your profile.

Interact with a chart
In addition to the possibilities of the top bar, you can interact with the charts in the following ways:
- Hover over the info icon on a colored metric card to display the definition, details, and statistics of the metric.
- Click a colored card to hide the related metric from the chart. For example, you can hide unneeded sources on the Log input charts.
- Click an event (for example, an alert or configuration reload) to show its details.
- To quickly zoom in on a period, click and drag to select the period to display on any of the charts. Every chart will be updated for the selected period. (To return to the previous state, click the Back button of your browser.)
Metrics reference
For managed hosts, the following metrics are available:
- Connections: Number of active connections and their configured maximum number for each log source.
- CPU: Percentage of time a CPU core spent on average in a non-idle state within a window of 5 minutes.
- Disk: Effective storage space used and available on each device (an overlap may exist between devices identified).
- Dropped packets (total): This chart shows different metrics for packet loss:
  - Dropped UDP packets: Count of UDP packets dropped by the OS before processing per second, averaged within a time window of 5 minutes.
  - Dropped log events: Count of events dropped from event queues within a time window of 5 minutes.
- Log input (bytes): Incoming log messages processed by each log source, measured in bytes per second, averaged within a time window of 5 minutes.
- Log input (events): Count of incoming log messages processed by each log source per second, averaged within a time window of 5 minutes.
- Log output (bytes): Log messages sent to each log destination, measured in bytes per second, averaged within a time window of 5 minutes.
- Log output (events): Count of log messages sent to each log destination per second, averaged within a time window of 5 minutes.
- Log memory queue (bytes): Total bytes of data waiting in each memory queue.
- Log memory queue (events): Count of messages waiting in each memory queue by destination.
- Log disk queue (bytes): This chart shows the following metrics about disk queue usage:
  - Disk queue bytes: Total bytes of data waiting in each disk queue.
  - Disk queue memory cache bytes: Amount of memory used for caching disk-based queues.
- Log disk queue (events): Count of messages waiting in each disk queue by destination.
- Memory: Memory usage and capacity reported by the OS in bytes (including reclaimable caches and buffers).
- Network input (bytes): Incoming network traffic in bytes/second reported by the OS for each network interface, averaged within a time window of 5 minutes.
- Network input (packets): Count of incoming network packets per second reported by the OS for each network interface, averaged within a time window of 5 minutes.
- Network output (bytes): Outgoing network traffic in bytes/second reported by the OS for each network interface, averaged within a time window of 5 minutes.
- Network output (packets): Count of outgoing network packets per second reported by the OS for each network interface, averaged within a time window of 5 minutes.
- Event delay (seconds): Latency of outgoing messages.
11.3 - Flow analytics
11.4 - Flow metrics
12 - Deploy Axoflow Console
This chapter describes the installation and configuration of Axoflow Console in different scenarios.
Axoflow Console is available as:
- a SaaS service,
- an on-premises deployment, or
- a Kubernetes deployment.
12.1 - Air-gapped environment
12.2 - Kubernetes
12.3 - On-premise
This guide shows you how to install Axoflow Console on a local, on-premises virtual machine using k3s and Helm. Installation of other components like the NGINX ingress controller and cert-manager is also included. Since this is a single-instance deployment, we don't recommend using it in production environments. For additional steps and configurations needed in production environments, contact us.
Note
Axoflow Console is also available without installation as a SaaS service. You only have to install Axoflow Console on premises if you want to run and manage it locally.
At a high level, the deployment consists of the following steps:
- Preparing a virtual machine.
- Installing Kubernetes on the virtual machine.
- Configuring authentication. This guide shows you how to configure LDAP authentication. Other supported authentication methods include Google and GitHub.
- Preparing the hosts for AxoRouter: Before deploying AxoRouter describes the steps you have to complete on a host before deploying AxoRouter on it. These steps are specific to on-premises Axoflow Console deployments, and are not needed when using the SaaS Axoflow Console.
Prerequisites
To install Axoflow Console, you’ll need the following:
System requirements
Supported operating system: Ubuntu 24.04, Red Hat 9 and compatible (tested with AlmaLinux 9)
The virtual machine (VM) must have at least:
- 4 vCPUs
- 4 GB RAM
- 100 GB disk space
This setup can handle about 100 AxoRouter instances and 1000 data source hosts.
For reference, a real-life scenario that handles 100 AxoRouter and 3000 data source hosts with 30-day metric retention would need:
- 16 vCPU
- 16 GB RAM
- 250 GB disk space
For details on sizing, contact us.
You'll need to have access to a user with sudo privileges.
Network access
The hosts must be able to access the following domains related to the Axoflow Console:
Set up Kubernetes
Complete the following steps to install k3s and some dependencies on the virtual machine.
1. Run getenforce to check the status of SELinux.
2. Check the /etc/selinux/config file and make sure that the settings match the results of the getenforce output (for example, SELINUX=disabled). K3s supports SELinux, but the installation can fail if there is a mismatch between the configured and actual settings.
3. Install k3s.
   1. Run the following command:
      curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik --disable metrics-server" K3S_KUBECONFIG_MODE="644" sh -s -
   2. Check that the k3s service is running.
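For example, on systems using systemd:

sudo systemctl status k3s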
4. Install the NGINX ingress controller.
VERSION="v4.11.3"
sudo tee /var/lib/rancher/k3s/server/manifests/ingress-nginx.yaml > /dev/null <<EOF
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: ingress-nginx
  namespace: kube-system
spec:
  createNamespace: true
  chart: ingress-nginx
  repo: https://kubernetes.github.io/ingress-nginx
  version: $VERSION
  targetNamespace: ingress-nginx
  valuesContent: |-
    controller:
      extraArgs:
        enable-ssl-passthrough: true
      hostNetwork: true
      hostPort:
        enabled: true
      service:
        enabled: false
      admissionWebhooks:
        enabled: false
      updateStrategy:
        strategy:
          type: Recreate
EOF
5. Install the cert-manager controller.
VERSION="1.16.2"
sudo tee /var/lib/rancher/k3s/server/manifests/cert-manager.yaml > /dev/null <<EOF
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: cert-manager
  namespace: kube-system
spec:
  createNamespace: true
  chart: cert-manager
  version: $VERSION
  repo: https://charts.jetstack.io
  targetNamespace: cert-manager
  valuesContent: |-
    crds:
      enabled: true
    enableCertificateOwnerRef: true
EOF
Install the Axoflow Console
Complete the following steps to deploy the Axoflow Console Helm chart.
1. Set the following environment variables as needed for your deployment.
   - The domain name for your Axoflow Console deployment. This must point to the IP address of the VM you're installing Axoflow Console on.
     BASE_HOSTNAME=your-host.your-domain
     Note: To use Google authentication, Axoflow Console must be deployed on a publicly accessible domain name (the `$BASE_HOSTNAME` must end with a valid top-level domain, for example, `.com` or `.io`).
   - A default email address for the Axoflow Console administrator.
     ADMIN_EMAIL="your-email@your-domain"
   - The username of the Axoflow Console administrator, for example (placeholder value):
     ADMIN_USERNAME="your-admin-username"
   - The password of the Axoflow Console administrator.
     ADMIN_PASSWORD="your-admin-password"
   - A unique identifier of the administrator user. For example, you can generate one using uuidgen. (Note that on Ubuntu, you must first install the uuid-runtime package by running sudo apt install uuid-runtime.)
     ADMIN_UUID=$(uuidgen)
   - The bcrypt hash of the administrator password. For example, you can generate it with the htpasswd tool, which is part of the httpd-tools package on Red Hat (install it using sudo dnf install httpd-tools), and the apache2-utils package on Ubuntu (install it using sudo apt install apache2-utils).
     ADMIN_PASSWORD_HASH=$(echo $ADMIN_PASSWORD | htpasswd -BinC 10 $ADMIN_USERNAME | cut -d: -f2)
   - The version number of Axoflow Console you want to deploy, for example (placeholder value):
     VERSION="<axoflow-console-version>"
   - The internal IP address of the virtual machine. This is needed so the authentication service can communicate with the identity provider without relying on DNS.
     VM_IP_ADDRESS=$(hostname -I | cut -d' ' -f1)
     Check that the above command returned the correct IP; it might not be accurate if the host has multiple IP addresses.
   - The external IP address of the virtual machine. This is needed so external components (like AxoRouter and the Axolet instances running on other hosts) can access the Axoflow Console.
     EXTERNAL_IP="<external.ip.of.vm>"
   - The URL of the Axoflow Helm chart. You'll receive this URL from our team. You can request it using the contact form.
     AXOFLOW_HELM_CHART_URL="<the-chart-URL-received-from-Axoflow>"
2. Install Axoflow Console. The following configuration file enables the user set in ADMIN_USERNAME to log in with the static password set in ADMIN_PASSWORD. If this basic setup is working, you can configure another authentication provider. (For details on what the different services enabled in the YAML file do, see the service reference.)
sudo tee /var/lib/rancher/k3s/server/manifests/axoflow.yaml > /dev/null <<EOF
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: axoflow
  namespace: kube-system
spec:
  createNamespace: true
  targetNamespace: axoflow
  version: $VERSION
  chart: $AXOFLOW_HELM_CHART_URL
  failurePolicy: abort
  valuesContent: |-
    baseHostName: $BASE_HOSTNAME
    issuer: self-sign-issuer
    issuerKind: Issuer
    imageTagDefault: $VERSION
    selfSignedRoot:
      create: true
    chalco:
      enabled: true
    axoletDist:
      enabled: true
    controllerManager:
      enabled: true
    kcp:
      enabled: true
    kcpCertManager:
      enabled: true
    telemetryproxy:
      enabled: true
    prometheus:
      enabled: true
    pomerium:
      enabled: true
      authenticateServiceUrl: https://authenticate.$BASE_HOSTNAME
      authenticateServiceIngress: authenticate
      certificates:
        - authenticate
      certificateAuthority:
        secretName: dex-certificates
      policy:
        emails:
          - $ADMIN_EMAIL
        domains: []
        groups: []
        claim/groups: []
        readOnly:
          emails: []
          domains: []
          groups: []
          claim/groups: []
    ui:
      enabled: true
    dex:
      enabled: true
      localIP: $VM_IP_ADDRESS
      config:
        create: true
        # connectors:
        # ...
        staticPasswords:
          - email: $ADMIN_EMAIL
            hash: $ADMIN_PASSWORD_HASH
            username: $ADMIN_USERNAME
            userID: $ADMIN_UUID
EOF
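You can verify that k3s picked up the manifest; the HelmChart resource should appear within a few seconds:
sudo k3s kubectl get helmchart axoflow -n kube-system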
- Wait a few minutes until everything is installed. Check that every component has been installed and is running:
  sudo k3s kubectl get pods -A
  The output should be similar to:
NAMESPACE NAME READY STATUS RESTARTS AGE
axoflow axolet-dist-55ccbc4759-q7l7n 1/1 Running 0 2d3h
axoflow chalco-6cfc74b4fb-mwbbz 1/1 Running 0 2d3h
axoflow configure-kcp-1-x6js6 0/1 Completed 2 2d3h
axoflow configure-kcp-cert-manager-1-wvhhp 0/1 Completed 1 2d3h
axoflow configure-kcp-token-1-xggg7 0/1 Completed 1 2d3h
axoflow controller-manager-5ff8d5ffd-f6xtq 1/1 Running 0 2d3h
axoflow dex-5b9d7b88b-bv82d 1/1 Running 0 2d3h
axoflow kcp-0 1/1 Running 1 (2d3h ago) 2d3h
axoflow kcp-cert-manager-0 1/1 Running 0 2d3h
axoflow pomerium-56b8b9b6b9-vt4t8 1/1 Running 0 2d3h
axoflow prometheus-server-0 2/2 Running 0 2d3h
axoflow telemetryproxy-65f77dcf66-4xr6b 1/1 Running 0 2d3h
axoflow ui-6dc7bdb4c5-frn2p 2/2 Running 0 2d3h
cert-manager cert-manager-74c755695c-5fs6v 1/1 Running 5 (144m ago) 2d3h
cert-manager cert-manager-cainjector-dcc5966bc-9kn4s 1/1 Running 0 2d3h
cert-manager cert-manager-webhook-dfb76c7bd-z9kqr 1/1 Running 0 2d3h
ingress-nginx ingress-nginx-controller-bcc7fb5bd-n5htp 1/1 Running 0 2d3h
kube-system coredns-ccb96694c-fhk8f 1/1 Running 0 2d3h
kube-system helm-install-axoflow-b9crk 0/1 Completed 0 2d3h
kube-system helm-install-cert-manager-88lt7 0/1 Completed 0 2d3h
kube-system helm-install-ingress-nginx-qvkhf 0/1 Completed 0 2d3h
kube-system local-path-provisioner-5cf85fd84d-xdlnr 1/1 Running 0 2d3h
(For details on what the different services do, see the service reference.)
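If some components fail to start, checking the logs of the Helm installer jobs is a good first step, for example:
sudo k3s kubectl logs -n kube-system job/helm-install-axoflow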
Note
When using AxoRouter with an on-premises Axoflow Console deployment, you have to
prepare the hosts before deploying AxoRouter. These steps are specific to on-premises Axoflow Console deployments, and are not needed when using the SaaS Axoflow Console.
Log in to Axoflow Console
- If the domain name of Axoflow Console cannot be resolved from your desktop, add it to the /etc/hosts file in the following format. Use an IP address of Axoflow Console that can be accessed from your desktop.
  <AXOFLOW-CONSOLE-IP-ADDRESS> <your-host.your-domain> idp.<your-host.your-domain> authenticate.<your-host.your-domain>
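  For example, if the Axoflow Console VM is reachable at 192.0.2.10 and your domain is axoflow-console.example.com (hypothetical values), the entry looks like this:
  192.0.2.10 axoflow-console.example.com idp.axoflow-console.example.com authenticate.axoflow-console.example.com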
- Open the https://<your-host.your-domain> URL in your browser.
- The on-premises deployment of Axoflow Console uses a self-signed certificate. If your browser warns you about the related risks, accept the certificate.
- Use the email address and password you've configured to log in to Axoflow Console.
Axoflow Console service reference
| Service Name | Namespace | Purpose | Function |
| --- | --- | --- | --- |
| KCP | axoflow | Backend API | Kubernetes-like service with a built-in database; manages all the settings handled by the system |
| Chalco | axoflow | Frontend API | Serves the UI API calls and implements the business logic for the UI |
| Controller-Manager | axoflow | Backend Service | Reacts to state changes in the business entities and manages the business logic for the backend |
| Telemetry Proxy | axoflow | Backend API | Receives the telemetry of the agents |
| UI | axoflow | Dashboard | The frontend for Axoflow Console |
| Prometheus | axoflow | Backend API/Service | Monitoring component that stores time-series information, provides a query API, and manages alerting rules |
| Alertmanager | axoflow | Backend API/Service | Monitoring component that sends alerts based on the alerting rules |
| Dex | axoflow | Identity Connector/Proxy | Identity connector/proxy that allows customers to use their own identity provider (Google, LDAP, and so on) |
| Pomerium | axoflow | HTTP Proxy | Allows policy-based access to the Frontend API (Chalco) |
| NGINX Ingress Controller | ingress-nginx | HTTP Proxy | Routes the HTTP traffic between the Frontend/Backend APIs |
| Axolet Dist | axoflow | Backend API | Static artifact store that contains the agent binaries |
| Cert Manager | cert-manager | Automated certificate management tool | Manages certificates for Axoflow components (Backend API, HTTP Proxy) |
| Cert Manager (kcp) | axoflow | Automated certificate management tool | Manages certificates for the agents |
| External DNS | kube-system | Automated DNS management tool | Manages DNS records for Axoflow components |
12.3.1 - Authentication
These sections show you how to configure on-premises and Kubernetes deployments of Axoflow Console to use different authentication backends.
12.3.1.1 - GitHub
This section shows you how to use GitHub OAuth2 as an authentication backend for Axoflow Console. It is assumed that you already have a GitHub organization. Complete the following steps.
Prerequisites
Register a new OAuth GitHub application for your organization. (For testing, you can create it under your personal account.) Make sure to:
- Set the Homepage URL to the URL of your Axoflow Console deployment: https://<your-console-host.your-domain>, for example, https://axoflow-console.example.com.
- Set the Authorization callback URL to: https://auth.<your-console-host.your-domain>/callback, for example, https://auth.axoflow-console.example.com/callback.
- Save the Client ID of the application, you'll need it to configure Axoflow Console.
- Generate a Client secret and save it, you'll need it to configure Axoflow Console.
Configuration
- Configure authentication by editing the spec.dex.config section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.
- (Optional) If you've used our earlier example, delete the spec.dex.config.staticPasswords section.
- Add the spec.dex.config.connectors section to the file, like this:
  dex:
    enabled: true
    localIP: $VM_IP_ADDRESS
    config:
      create: true
      connectors:
        - type: github
          # Required field for connector id.
          id: github
          # Required field for connector name.
          name: GitHub
          config:
            # Credentials can be string literals or pulled from the environment.
            clientID: <ID-of-GitHub-application>
            clientSecret: <Secret-of-GitHub-application>
            redirectURI: <idp.your-host.your-domain/callback>
            orgs:
              - name: <your-GitHub-organization>
  Note that if you have a valid domain name that points to the VM, you can omit the localIP: $VM_IP_ADDRESS line.
- Edit the following fields. For details on the configuration parameters, see the Dex GitHub connector documentation.
  - connectors.config.clientID: The ID of the GitHub OAuth application.
  - connectors.config.clientSecret: The client secret of the GitHub OAuth application.
  - connectors.config.redirectURI: The callback URL of the GitHub OAuth application: https://auth.<your-console-host.your-domain>/callback, for example, https://auth.axoflow-console.example.com/callback.
  - connectors.config.orgs: The list of GitHub organizations whose members can access Axoflow Console. Restrict access to your organization.
- Configure authorization in the spec.pomerium.policy section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.
  - List the primary email addresses of the users who have read and write access to Axoflow Console under the emails section.
  - List the primary email addresses of the users who have read-only access to Axoflow Console under the readOnly.emails section.
  Note
  Use the primary GitHub email addresses of your users, otherwise the authorization will fail.
  policy:
    emails:
      - username@yourdomain.com
    domains: []
    groups: []
    claim/groups: []
    readOnly:
      emails:
        - readonly-username@yourdomain.com
      domains: []
      groups: []
      claim/groups: []
For details on authorization settings, see Authorization.
- Save the file.
- Restart the dex deployment after changing the connector:
  kubectl rollout restart deployment/dex -n axoflow
  Expected output:
  deployment.apps/dex restarted
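  To confirm that the restarted pods are ready before testing the login, you can wait for the rollout to finish:
  kubectl rollout status deployment/dex -n axoflow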
- Open the main page of your Axoflow Console deployment in your browser. You'll be redirected to the GitHub authentication page. After completing the GitHub authentication, you can access Axoflow Console.
Getting help
You can troubleshoot common errors by running kubectl logs -n axoflow <dex-container-name>.
If you run into problems setting up the authentication or authorization, contact us.
12.3.1.2 - Google
This section shows you how to use Google OpenID Connect as an authentication backend for Axoflow Console. It is assumed that you already have a Google organization and Google Cloud Console access. Complete the following steps.
Prerequisites
- To use Google authentication, Axoflow Console must be deployed on a publicly accessible domain name (the $BASE_HOSTNAME must end with a valid top-level domain, for example, .com or .io).
- Configure OpenID Connect and create an OpenID credential for a Web application.
  - Make sure to set the Authorized redirect URIs to: https://auth.<your-console-host.your-domain>/callback, for example, https://auth.axoflow-console.example.com/callback.
  - Save the Client ID and the Client secret of the application, you'll need them to configure Axoflow Console.
  For details on setting up OpenID Connect, see the official documentation.
Configuration
- Configure authentication by editing the spec.dex.config section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.
- (Optional) If you've used our earlier example, delete the spec.dex.config.staticPasswords section.
- Add the spec.dex.config.connectors section to the file, like this:
  connectors:
    - type: google
      id: google
      name: Google
      config:
        # Connector config values starting with a "$" will read from the environment.
        clientID: <ID-of-Google-application>
        clientSecret: <Secret-of-Google-application>
        # Dex's issuer URL + "/callback"
        redirectURI: <idp.your-host.your-domain/callback>
        # Set the value of the `prompt` query parameter in the authorization request.
        # The default value is "consent" when not set.
        promptType: consent
        # Google supports whitelisting allowed domains when using G Suite
        # (Google Apps). The following field can be set to a list of domains
        # that can log in:
        hostedDomains:
          - <your-domain>
- Edit the following fields. For details on the configuration parameters, see the Dex Google connector documentation.
  - connectors.config.clientID: The ID of the Google application.
  - connectors.config.clientSecret: The client secret of the Google application.
  - connectors.config.redirectURI: The callback URL of the Google application: https://auth.<your-console-host.your-domain>/callback, for example, https://auth.axoflow-console.example.com/callback.
  - connectors.config.hostedDomains: The domain where Axoflow Console is deployed, for example, example.com. Your users must have Google email addresses in this domain.
- Configure authorization in the spec.pomerium.policy section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.
  - List the email addresses of the users who have read and write access to Axoflow Console under the emails section.
  - List the email addresses of the users who have read-only access to Axoflow Console under the readOnly.emails section.
  Note
  The email addresses of your users must have the same domain as the one set under `connectors.config.hostedDomains`, otherwise the authorization will fail.
  policy:
    emails:
      - username@yourdomain.com
    domains: []
    groups: []
    claim/groups: []
    readOnly:
      emails:
        - readonly-username@yourdomain.com
      domains: []
      groups: []
      claim/groups: []
For details on authorization settings, see Authorization.
- Save the file.
- Restart the dex deployment after changing the connector:
  kubectl rollout restart deployment/dex -n axoflow
  Expected output:
  deployment.apps/dex restarted
- Open the main page of your Axoflow Console deployment in your browser. You'll be redirected to the Google authentication page. After completing the Google authentication, you can access Axoflow Console.
Getting help
You can troubleshoot common errors by running kubectl logs -n axoflow <dex-container-name>.
If you run into problems setting up the authentication or authorization, contact us.
12.3.1.3 - LDAP
This section shows you how to use LDAP as an authentication backend for Axoflow Console. The examples use the public FreeIPA demo service as the LDAP server. It is assumed that you already have an LDAP server in place. Complete the following steps.
- Configure authentication by editing the spec.dex.config section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.
- (Optional) If you've used our earlier example, delete the spec.dex.config.staticPasswords section.
- Add the spec.dex.config.connectors section to the file, like this:
  CAUTION:
  This example shows a simple configuration suitable for testing. In production environments, make sure to:
  - configure TLS encryption to access your LDAP server,
  - retrieve the bind password from a vault or an environment variable. Note that if the bind password contains the `$` character, you must set it in an environment variable and pass it like `bindPW: $LDAP_BINDPW`.
  dex:
    enabled: true
    localIP: $VM_IP_ADDRESS
    config:
      create: true
      connectors:
        - type: ldap
          name: OpenLDAP
          id: ldap
          config:
            host: ipa.demo1.freeipa.org
            insecureNoSSL: true
            # This would normally be a read-only user.
            bindDN: uid=admin,cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org
            bindPW: Secret123
            usernamePrompt: Email Address
            userSearch:
              baseDN: dc=demo1,dc=freeipa,dc=org
              filter: "(objectClass=person)"
              username: mail
              # "DN" (case sensitive) is a special attribute name. It indicates that
              # this value should be taken from the entity's DN not an attribute on
              # the entity.
              idAttr: uid
              emailAttr: mail
              nameAttr: cn
            groupSearch:
              baseDN: dc=demo1,dc=freeipa,dc=org
              filter: "(objectClass=groupOfNames)"
              userMatchers:
                # A user is a member of a group when their DN matches
                # the value of a "member" attribute on the group entity.
                - userAttr: DN
                  groupAttr: member
              # The group name should be the "cn" value.
              nameAttr: cn
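  A practical note on the caution above: the axoflow.yaml manifest is written through an unquoted heredoc, so the shell expands $-prefixed values when the file is written. A minimal sketch of the two options (how environment variables reach the dex pod is deployment-specific, and an assumption here):
  # Option 1: export the password before writing the manifest; the shell
  # substitutes the plain-text value into axoflow.yaml at write time:
  export LDAP_BINDPW='your-bind-password'
  # Option 2: write the heredoc line as bindPW: \$LDAP_BINDPW so the literal
  # "$LDAP_BINDPW" reference stays in the file and dex resolves it from its
  # own environment at startup.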
- Edit the following fields. For details on the configuration parameters, see the Dex LDAP connector documentation.
  - connectors.config.host: The hostname and optionally the port of the LDAP server in "host:port" format.
  - connectors.config.bindDN and connectors.config.bindPW: The DN and password for an application service account that the connector uses to search for users and groups.
  - connectors.config.userSearch.baseDN and connectors.config.groupSearch.baseDN: The base DNs for the user and group searches.
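  Before restarting dex, you can verify the bind credentials and the user search outside of dex, for example, with the ldapsearch tool (part of the openldap-clients package on Red Hat and the ldap-utils package on Ubuntu); a minimal sketch using the demo values above:
  ldapsearch -x -H ldap://ipa.demo1.freeipa.org \
    -D "uid=admin,cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org" \
    -w Secret123 \
    -b "dc=demo1,dc=freeipa,dc=org" "(objectClass=person)" mail cn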
- Configure authorization in the spec.pomerium.policy section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.
  - List the names of the LDAP groups whose members have read and write access to Axoflow Console under claim/groups. (Group managers in the example.)
  - List the names of the LDAP groups whose members have read-only access to Axoflow Console under readOnly.claim/groups. (Group employee in the example.)
  policy:
    emails: []
    domains: []
    groups: []
    claim/groups:
      - managers
    readOnly:
      emails: []
      domains: []
      groups: []
      claim/groups:
        - employee
For details on authorization settings, see Authorization.
- Save the file.
- Restart the dex deployment after changing the connector:
  kubectl rollout restart deployment/dex -n axoflow
  Expected output:
  deployment.apps/dex restarted
Getting help
You can troubleshoot common errors by running kubectl logs -n axoflow <dex-container-name>.
If you run into problems setting up the authentication or authorization, contact us.
12.3.2 - Authorization
These sections show you how to configure the authorization of Axoflow Console with different authentication backends.
You can configure authorization in the spec.pomerium.policy section of the Axoflow Console manifest. In on-premises deployments, the manifest is in the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.
You can list individual email addresses and user groups that have read and write access (using the keys under spec.pomerium.policy) and read-only access (using the keys under spec.pomerium.policy.readOnly) to Axoflow Console. Which key to use depends on the authentication backend configured for Axoflow Console:
- emails: Email addresses, used with static passwords and GitHub authentication. With GitHub authentication, use the primary GitHub email addresses of your users, otherwise the authorization will fail.
- claim/groups: LDAP groups, used with LDAP authentication. For example:
  policy:
    emails: []
    domains: []
    groups: []
    claim/groups:
      - managers
    readOnly:
      emails: []
      domains: []
      groups: []
      claim/groups:
        - employee
12.3.3 - Prepare AxoRouter hosts
When using AxoRouter with an on-premises Axoflow Console deployment, you have to complete the following steps on the hosts you want to deploy AxoRouter on. These steps are specific to on-premises Axoflow Console deployments, and are not needed when using the SaaS Axoflow Console.
- If the domain name of Axoflow Console cannot be resolved from the AxoRouter host, add it to the /etc/hosts file of the AxoRouter host in the following format. Use an IP address of Axoflow Console that can be accessed from the AxoRouter host.
  <AXOFLOW-CONSOLE-IP-ADDRESS> <AXOFLOW-CONSOLE-BASE-URL> kcp.<AXOFLOW-CONSOLE-BASE-URL> telemetry.<AXOFLOW-CONSOLE-BASE-URL> idp.<AXOFLOW-CONSOLE-BASE-URL> authenticate.<AXOFLOW-CONSOLE-BASE-URL>
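  For example, with the Console at 192.0.2.10 and the base hostname axoflow-console.example.com (hypothetical values), the entry would be:
  192.0.2.10 axoflow-console.example.com kcp.axoflow-console.example.com telemetry.axoflow-console.example.com idp.axoflow-console.example.com authenticate.axoflow-console.example.com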
- Import the Axoflow Console certificates to the AxoRouter hosts.
  - On the Axoflow Console host: Run the following commands to extract the certificates. The AxoRouter hosts need these certificates to download the installation binaries and to access management traffic.
    k3s kubectl get secret -n axoflow kcp-ca -o=jsonpath='{.data.ca\.crt}'|base64 -d > $BASE_HOSTNAME-kcp-ca.crt
    k3s kubectl get secret -n axoflow pomerium-certificates -o=jsonpath='{.data.ca\.crt}'|base64 -d > $BASE_HOSTNAME-pomerium-ca.crt
    This creates two files in the local folder. Copy them to the AxoRouter hosts.
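    For example, you can copy them with scp (the user and host names below are hypothetical):
    scp $BASE_HOSTNAME-kcp-ca.crt $BASE_HOSTNAME-pomerium-ca.crt admin@axorouter-host.example.com:/tmp/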
  - On the AxoRouter hosts: Install the certificate files extracted from the Axoflow Console host.
    - On Red Hat: Copy the files into the /etc/pki/ca-trust/source/anchors/ folder, then run sudo update-ca-trust extract. (If needed, install the ca-certificates package.)
    - On Ubuntu: Copy the files into the /usr/local/share/ca-certificates/ folder, then run sudo update-ca-certificates.
    To verify that the Axoflow Console endpoints are now trusted, run: curl https://kcp.<your-host.your-domain>
- Now you can deploy AxoRouter on the host.
13 - Legal notice
Copyright of Axoflow Inc. All rights reserved.
The Axoflow, AxoRouter, and AxoSyslog trademarks and logos are trademarks of Axoflow Inc. All other trademarks are property of their respective owners.