Axoflow Platform
Axoflow is a security data curation pipeline that helps you get high-quality security data at a reduced volume, automatically, without coding. Better quality data allows your organization and SOC team to detect and respond to threats faster, use AI, and reduce compliance breaches and costs.
Collect security data from anywhere
Axoflow allows you to collect data from any source, including:
- cloud services
- cloud-native sources like OpenTelemetry or Kubernetes
- traditional sources like Microsoft Windows endpoints (WEC) and Linux servers (syslog)
- networking appliances like firewalls
- applications.

Shift left
Axoflow deals with tasks like data classification, parsing, curation, reduction, and routing. Except for routing, these tasks are commonly performed in the SIEM, often manually. Axoflow automates all of these processing steps and shifts them left into the data pipeline, so:
- it’s automatically applied to all your security data,
- all destinations benefit from it (multi-SIEM and storage+SIEM scenarios),
- data format is optimized for the specific destination (for example, Splunk, Azure Sentinel, Google Pub/Sub),
- unneeded and redundant data can be dropped before having to pay for it, reducing data volume and storage costs.
Curation
Axoflow can automatically classify, curate, enrich, and optimize data right at the edge – limiting excess network overhead and costs as well as downstream manual data-engineering in the analytics tools. Axoflow has over 100 zero-maintenance connectors that automatically identify and classify the incoming data, and apply automatic, device- and source-specific curation steps.

Policy-based routing
Axoflow can configure your aggregator and edge devices to intelligently route data based on the high-level flows you configure. Flows can use labels and other properties of the transferred data as well as your inventory to automatically map your business and compliance policies into configuration files, without coding.
Management
Axoflow gives you a vendor-agnostic management plane for best-in-class visibility into your data pipeline. You can:
- automatically discover and identify existing logging infrastructure,
- visualize the complete edge-to-edge flow of security data to understand the contribution of sources to the data pipeline, and
- monitor the elements of the pipeline and the data flow.

Try Axoflow
Would you like to try Axoflow? Request a zero-commitment demo or a sandbox environment to experience the power of our platform.
Ready to get started? Go to our Getting started section for the first steps.
1 - Introduction
2 - Axoflow architecture
Axoflow provides an end-to-end pipeline that automates collecting, managing, and loading your security data in a vendor-agnostic way. The following figure highlights the Axoflow data flow:

Axoflow architecture
The architecture of Axoflow comprises two main elements: the Axoflow Console and the Data Plane.
- The Axoflow Console is primarily concerned with the metadata of each event: the source it originated from, its size in bytes (and event count over time), its destination, and any other element that describes the data.
- The Data Plane includes collector agents and processing engines (like AxoRouter) that collect, classify, filter, transform, and deliver telemetry data to its proper destinations (SIEMs, storage), and provide metrics to the Axoflow Console. The components of the Data Plane can be managed from the Axoflow Console, or can be independent.
Pipeline components
A telemetry pipeline consists of the following high-level components:
- Data Sources: Data sources are the endpoints of the pipeline that generate the logs and other telemetry data you want to collect. For example, firewalls and other appliances, Kubernetes clusters, application servers, and so on can all be data sources. Data sources send their data either directly to a destination, or to a router. Axoflow provides several log collecting agents and solutions to collect data in different environments, including connectors for cloud services, Kubernetes clusters, Linux servers, and Windows servers.
- Routers: Routers (also called relays or aggregators) collect the data from a set of data sources and transport it to the destinations. AxoRouter can collect, curate, and enrich the data: it automatically identifies your log sources and fixes common errors in the incoming data. It also converts the data into a format that best suits the destination to optimize ingestion speed and data quality.
- Destinations: Destinations are your SIEM and storage solutions where the telemetry pipeline delivers your security data.

Your telemetry pipeline can consist of managed and unmanaged components. You can deploy and configure managed components from the Axoflow Console. Axoflow provides several managed components that help you collect or fetch data from your various data sources, or act as routers.
Axoflow Console
Axoflow Console is the data visualization and management UI of Axoflow. Available both as a SaaS and an on-premises solution, it collects and visualizes the metrics received from the pipeline components to provide insight into the details of your telemetry pipeline and the data it processes. It also allows you to:
AxoRouter
AxoRouter is a router (aggregator) and data curation engine: it collects all kinds of telemetry and security data and has all the low-level functions you would expect of log-forwarding agents and routers. AxoRouter can also curate and enrich the collected data: it automatically identifies your log sources and fixes common errors in the incoming data: for example, it corrects missing hostnames, invalid timestamps, formatting errors, and so on.
AxoRouter also has a range of zero-maintenance connectors for various networking and security products (for example, switches, firewalls, and web gateways), so it can classify the incoming data (by recognizing the product that is sending it), and apply various data curation and enrichment steps to reduce noise and improve data quality.
Before sending your data to its destination, AxoRouter automatically converts the data into a format that best suits the destination to optimize ingestion speed and data quality. For example, when sending data to Splunk, setting the proper sourcetype and index is essential.
Note
Note that AxoRouter collects detailed, real-time metrics about the data-flows – giving you observability over the health of the security data pipeline and its components. Your security data remains in your self-managed cloud or in your on-prem instance where your sources, destinations, and AxoRouters are running, only metrics are forwarded to Axoflow Console.

Axolet
Axolet is a monitoring and management agent that integrates with the local log collector (like AxoSyslog, Splunk Connect for Syslog, or syslog-ng) that runs on the data source and provides detailed metrics about the host and its data traffic to the Axoflow Console.
3 - Deployment scenarios
Thanks to its flexible deployment modes, you can quickly insert Axoflow and its processing node (called AxoRouter) transparently into your data pipeline and gain instant benefits:
- Data reduction
- Improved SIEM accuracy
- Configuration UI for routing data
- Automatic data classification
- Metrics about log ingestion, processing, and data drops
- Analytics about the transported data
- Health check, highlighting anomalies
Note
Note that AxoRouter collects detailed, real-time metrics about the data-flows – giving you observability over the health of the security data pipeline and its components. Your security data remains in your self-managed cloud or in your on-prem instance where your sources, destinations, and AxoRouters are running, only metrics are forwarded to Axoflow Console.
After the first step, you can further integrate your pipeline by deploying Axoflow agents to collect and manage your security data, or by onboarding your existing log collector agents (for example, syslog-ng). Let’s see what these scenarios look like in detail.
3.1 - Axoflow Console deployment
You can use Axoflow Console as a SaaS solution, or deployed on premises.
We also support hybrid environments.
The Axoflow Console provides the UI for accessing metrics and analytics, deploying and configuring AxoRouter instances, configuring data flows, and so on.
3.2 - Transparent router mode
AxoRouter is a powerful aggregator and data processing engine that can receive data from a wide variety of sources, including:
- OpenTelemetry,
- syslog,
- Windows Event Forwarding, or HTTP.
In transparent mode, you deploy AxoRouter in front of your SIEM and configure your sources or aggregators to send the logs to AxoRouter instead of the SIEM. The transparent deployment method is a quick, minimally invasive way to get instant benefits and value from Axoflow, and:
Note
Note that AxoRouter collects detailed, real-time metrics about the data-flows – giving you observability over the health of the security data pipeline and its components. Your security data remains in your self-managed cloud or in your on-prem instance where your sources, destinations, and AxoRouters are running, only metrics are forwarded to Axoflow Console.
Axoflow Console as SaaS

Axoflow Console on premises

3.3 - Router and edge deployment
Axoflow provides agents to collect data from all kinds of sources:
- Kubernetes clusters,
- cloud sources,
- security appliances,
- Linux servers,
- Microsoft Windows hosts.
(If you’d prefer to keep using your existing syslog infrastructure instead of the Axoflow agents, see Onboard existing syslog infrastructure).
You can deploy the AxoRouter data aggregator on Linux and Kubernetes.

Using the Axoflow collector agents gives you:
- Reliable transport: Between its components, Axoflow transports security data using the reliable OpenTelemetry protocol (OTLP) for high performance, and to avoid losing messages.
- Managed components: You can configure, manage, monitor, and troubleshoot all these components from the Axoflow Console. For example, you can sample the data flowing through each component.
- Metrics: Detailed metrics from every collector provide unprecedented insight into the status of your data pipeline.
3.4 - Onboard existing syslog infrastructure
If your organization already has a syslog architecture in place, Axoflow provides ways to reuse it. This allows you to integrate your existing infrastructure with Axoflow, and optionally – in a later phase – replace your log collectors with the agents provided by Axoflow.
Managed AxoRouter deployments

In this deployment mode you use the centralized management UI of Axoflow Console to manage your AxoRouter instances. This provides the tightest integration and the most benefits, including:
Unmanaged AxoRouter deployments

In this mode, you install AxoRouter on the data source to replace its local collector agent, and manage it manually. That way you get the functional benefits of using AxoRouter as an aggregator and data curation engine to collect and classify your data, but can manage its configuration as you see fit. This gives you all the benefits of the read-only mode (since AxoRouter includes Axolet as well), and in addition, it provides:
Read-only mode

In this scenario, you install Axolet on the data source. Axolet is a monitoring (and management) agent that integrates with the local log collector, like AxoSyslog, Splunk Connect for Syslog, or syslog-ng, and sends detailed metrics about the host and its data traffic to the Axoflow Console. This allows you to use the Axoflow Console to:
4 - Try for free
To see Axoflow in action, you have a number of options:
- Watch some of the product videos on our blog.
- Request a demo sandbox environment. This pre-deployed environment contains a demo pipeline complete with destinations, routers, and sources that generate traffic, and you can check the metrics, use the analytics, modify the flows, and so on to get a feel of using a real Axoflow deployment.
- Request a free evaluation version. That’s an empty cloud deployment you can use for testing and PoC: you can onboard your own pipeline elements, add sources and destinations, and see how Axoflow performs with your data!
- Request a live demo where our engineers show you how Axoflow works, and can discuss your specific use cases in detail.
5 - Getting started
This guide shows you how to get started with Axoflow. You’re going to install AxoRouter, and configure or create a source to send data to AxoRouter. You’ll also configure AxoRouter to forward the received data to your destination SIEM or storage provider. The resulting topology will look something like this:

Why use Axoflow
Using the Axoflow security data pipeline automatically corrects and augments the security data you collect, resulting in high-quality, curated, SIEM-optimized data. It also removes redundant data to reduce storage and SIEM costs. In addition, it automates pipeline configuration and provides metrics and alerts for your telemetry data flows.
Prerequisites
You’ll need:
- An Axoflow subscription, access to a free evaluation version, or an on-premises deployment.
- A data source. This can be any host that you can configure to send syslog or OpenTelemetry data to the AxoRouter instance that you’ll install. If you don’t want to change the configuration of an existing device, you can use a virtual machine or a docker container on your local computer.
- A host that you’ll install AxoRouter on. This can be a separate Linux host, or a virtual machine running on your local computer. AxoRouter should work on most Red Hat and Debian compatible Linux distributions. For production environments, we recommend using Red Hat 9.
- Access to a supported SIEM or storage provider, like Splunk or Amazon S3. For a quick test of Axoflow, you can use a free Splunk or OpenObserve account as well.
Network access
The hosts must be able to access the following domains related to the Axoflow Console:
Log in to the Axoflow Console
Verify that you have access to the Axoflow Console.
- Open https://<your-tenant-id>.axoflow.io/ in your browser.
- Log in using Google Authentication.
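If the Console doesn’t load, a quick TLS probe can help rule out network restrictions. A minimal Python sketch (the host name is a placeholder; substitute your tenant URL and the required domains for your deployment):

```python
import socket
import ssl

# Placeholder: substitute your tenant URL and the other required domains.
HOSTS = ["your-tenant-id.axoflow.io"]

for host in HOSTS:
    with socket.create_connection((host, 443), timeout=5) as tcp:
        context = ssl.create_default_context()
        with context.wrap_socket(tcp, server_hostname=host) as tls:
            print(f"{host}: reachable, {tls.version()}")
```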
Add a destination
Add the destination where you’re sending your data. For a quick test, you can use a free Splunk or OpenObserve account.
Add a Splunk Cloud destination. For other destinations, see Destinations.
Prerequisites
- Enable the HTTP Event Collector (HEC) on your Splunk deployment if needed. On Splunk Cloud Platform deployments, HEC is enabled by default.
- Create a token for Axoflow to use in the destination. When creating the token, use the syslog source type. For details, see Set up and use HTTP Event Collector in Splunk Web.
- If you’re using AxoRouter, create the indexes where Axoflow sends the log data. Which indexes you need depends on your sources, but create at least the following event indexes: axoflow, infraops, netops, netfw, and osnix (for unclassified messages). Check your sources in the Sources section for a detailed list of the indexes their data is sent to. (For a scripted way to create them, see the sketch after this list.)
- If you’ve created any new indexes, make sure to add those indexes to the token’s Allowed Indexes.
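If you prefer to script the index creation, Splunk’s REST API accepts a POST per index on the management port (8089 by default). A hedged sketch using the third-party requests library; the host and credentials are placeholders:

```python
import requests

SPLUNK = "https://your-splunk-host:8089"  # management port; placeholder host
AUTH = ("admin", "your-password")         # placeholder credentials

for index in ("axoflow", "infraops", "netops", "netfw", "osnix"):
    response = requests.post(
        f"{SPLUNK}/services/data/indexes",
        auth=AUTH,
        data={"name": index},
        verify=False,  # trial deployments often use self-signed certificates
        timeout=30,
    )
    print(index, response.status_code)  # 201 = created, 409 = already exists
```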
Steps
- Create a new destination.
  - Open the Axoflow Console.
  - Select Topology.
  - Select + Create New Item > Destination.
- Configure the destination.
  - Select Splunk.
  - Select Dynamic. This will allow you to set a default index, source, and source type for messages that aren’t automatically identified.
  - Enter your Splunk URL into the Hostname field, for example, <your-splunk-tenant-id>.splunkcloud.com for Splunk Cloud Platform instances.
  - Enter the name of the Default Index. The data will be sent into this index if no other index is set during the processing of the message (based on automatic classification, or by the processing steps of the Flow). Make sure that the index exists in Splunk.
  - Enter the Default Source and Default Source Type. These will be assigned to the messages that have no source or source type set during the processing of the message (based on automatic classification, or by the processing steps of the Flow).
  - Enter the token you’ve created into the Token field.
  - Disable the Verify server certificate option unless your deployment has a valid, non-self-signed certificate. Free Splunk Cloud accounts have self-signed certificates.
  - (Optional) You can set other options as needed for your environment. For details, see Splunk.
- Select Create.
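Before wiring AxoRouter to the new destination, you can smoke-test the token directly against the HEC endpoint (port 8088 by default). A minimal sketch using the third-party requests library; the host and token are placeholders:

```python
import requests

HEC_URL = "https://your-splunk-host:8088/services/collector/event"  # placeholder
TOKEN = "your-hec-token"                                            # placeholder

response = requests.post(
    HEC_URL,
    headers={"Authorization": f"Splunk {TOKEN}"},
    json={"event": "Axoflow HEC smoke test", "sourcetype": "syslog"},
    verify=False,  # free Splunk Cloud accounts have self-signed certificates
    timeout=30,
)
print(response.status_code, response.text)  # expect 200 and a "Success" body
```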
Deploy an AxoRouter instance
Deploy an AxoRouter instance that will route, curate, and enrich your log data.
Note
Note that AxoRouter collects detailed, real-time metrics about the data-flows – giving you observability over the health of the security data pipeline and its components. Your security data remains in your self-managed cloud or in your on-prem instance where your sources, destinations, and AxoRouters are running, only metrics are forwarded to Axoflow Console.
Deploy AxoRouter on Linux. For other platforms, see AxoRouter.
- Select Provisioning > Select type and platform.
- Select the type (AxoRouter) and platform (Linux). The one-liner installation command is displayed.
  If needed, set the Advanced options (for example, proxy settings) to modify the installation parameters. Usually, you don’t have to use advanced options unless the Axoflow support team instructs you to do so.
- Open a terminal on the host where you want to install AxoRouter.
- Run the one-liner, then follow the on-screen instructions.
  Note: Running the provisioning command with sudo would mask environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don’t add the sudo command to the provisioning command.
Example output:
Do you want to install AxoRouter now? [Y]
y
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4142 100 4142 0 0 19723 0 --:--:-- --:--:-- --:--:-- 19818
Verifying packages...
Preparing packages...
axorouter-0.40.0-1.aarch64
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 31.6M 100 31.6M 0 0 2092k 0 0:00:15 0:00:15 --:--:-- 2009k
Verifying packages...
Preparing packages...
axolet-0.40.0-1.aarch64
Created symlink /etc/systemd/system/multi-user.target.wants/axolet.service → /usr/lib/systemd/system/axolet.service.
Now continue with onboarding the host on the Axoflow web UI.
- Register the host.
  - Reload the Provisioning page. There should be a registration request for the new AxoRouter deployment. Select ✓.
  - Select Register to register the host. You can add a description and labels (in label:value format) to the host.
- Select the Topology page. The new AxoRouter instance is displayed.
Add a source
Configure a host to send data to AxoRouter.
Configure a generic syslog host. For sources that are specifically supported by Axoflow, see Sources.
- Log in to your device. You need administrator privileges to perform the configuration.
- If needed, enable syslog forwarding on the device.
- Set AxoRouter as the syslog server. Typically, you can configure the following parameters:
  - Name or IP Address of the syslog server: Set the address of your AxoRouter.
  - Protocol: If possible, set TCP or TLS.
    Note: If you’re sending data over TLS, make sure to configure a TLS-enabled connector rule in Axoflow.
  - Syslog Format: If possible, set RFC5424 (or equivalent), otherwise leave the default.
  - Port: Set a port appropriate for the protocol and syslog format you have configured.
    By default, AxoRouter accepts data on the following ports (unless you’ve modified the default connector rules):
    - 514 UDP and TCP for RFC3164 (BSD-syslog) and RFC5424 (IETF-syslog) formatted traffic. AxoRouter automatically recognizes and handles both formats.
    - 601 TCP for RFC5424 (IETF-syslog) and RFC3164 (BSD-syslog) formatted traffic. AxoRouter automatically recognizes and handles both formats.
    - 6514 TCP for TLS-encrypted syslog traffic.
    - 4317 TCP for OpenTelemetry log data.
    To receive data on other ports or other protocols, configure other connector rules for the AxoRouter host.
    For TLS-encrypted syslog connections, create a new connector rule or edit an existing one, and configure the keys and certificates needed to encrypt the connections. For details, see Syslog. (To verify connectivity, see the test sender sketch after the note below.)
Note
Make sure to enable the ports you’re using on the firewall of your host.
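To verify that AxoRouter accepts traffic before you reconfigure a production device, you can hand-craft a test message. A minimal sketch that sends one RFC5424 message over TCP port 601 (the address is a placeholder; newline framing is assumed):

```python
import socket
from datetime import datetime, timezone

AXOROUTER = ("203.0.113.10", 601)  # placeholder: your AxoRouter's address

# Minimal RFC5424 message: <PRI>VERSION TIMESTAMP HOSTNAME APP PROCID MSGID SD MSG
timestamp = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
message = f"<134>1 {timestamp} test-host test-app - - - hello from the test sender\n"

with socket.create_connection(AXOROUTER, timeout=5) as conn:
    conn.sendall(message.encode())
```

If the message was accepted, it should show up when you tap the input log flow of the AxoRouter instance.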
- Add the source to Axoflow Console.
  - Open the Axoflow Console and select Topology.
  - Select Create New Item > Source.
    - If the source is actively sending data to an AxoRouter instance, select Detected, then select your source.
    - Otherwise, select the vendor and product corresponding to your source from the Predefined sources, then enter the parameters of the source, like IP address and FQDN.
    Note: During log tapping, you can add hosts that are actively sending data to an AxoRouter instance by clicking Register source.
  - (Optional) Add custom labels as needed.
  - Select Create.
Note
If your syslog source is running syslog-ng, Splunk Connect for Syslog (SC4S), or AxoSyslog as its log forwarder agent, consider installing Axolet on the host and instrumenting the configuration of the log forwarder to receive detailed metrics about the host and the processed data. For details, see Manage and monitor the pipeline.
Add a path
Create a path between the source and the AxoRouter instance.
- Select Topology > Create New Item > Path.
- Select your data source in the Source host field.
- Select the target router or aggregator this source is sending its data to in the Target host field, for example, axorouter.
- Select the Target connector. The connector determines how the destination receives the data (for example, using which protocol or port).
- Select Create. The new path appears on the Topology page.

Create a flow
Create a flow to route the traffic from your AxoRouter instance to your destination.
- Select Flows.
- Select Create New Flow.
- Enter a name for the flow, for example, my-test-flow.
- In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
  The selector also supports more complex filtering, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
  - To execute the search, click Search, or hit ESC then ENTER.
  - Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
  - You can use the AND and OR operators to combine expressions, and also parentheses if needed.
- Select the Destination where you want to send your data. If you don’t have any destinations configured, see Destinations.
  By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.
- (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
  - Add a Classify, a Parse, and a Reduce step, in that order, to automatically remove redundant and empty fields from your data.
  - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet AND meta.product = fortigate query.
  - Save the processing steps.
- Select Create.
- The new flow appears in the Flows list.

Check the metrics on the Topology page
Open the Topology page and verify that your AxoRouter instance is connected both to the source and the destination.
If you have traffic flowing from the source to your AxoRouter instance, the Topology page shows the amount of data flowing on the path. Click the AxoRouter instance, then select Analytics to visualize the data flow.

Tap into the log flow
Log tapping in Axoflow samples the log flow. You can use labels to filter for specific messages (like ones with parse errors) and tap only those messages. To avoid overwhelming you with events, Axoflow automatically samples the output: if many messages match the selected filter, only a subset is shown (about 1 message per second). Using log tapping, you can quickly troubleshoot both parsing/curation errors and destination ingest (API) errors, and check:
- What was in the original message?
- What is sent in the final payload to the destination?
Tap into the log flow.
- Click your AxoRouter instance on the Topology page, then select ⋮ > Tap log flow.
- Tap into the log flow.
  - To see the input data, select Input log flow > Start.
  - To see the output data, select Output log flow > Start.
  You can use labels to filter the messages and sample only the matching ones.
- When the logs you’re interested in show up, click Stop Log Tap, then click a log message to see its details.
- If you don’t know what the message means, select AI Analytics to ask our AI to interpret it.

Troubleshooting
In case you run into problems, or you’re not getting any data in Splunk, check the logs of your AxoRouter instance:
- Select Topology, then select your AxoRouter instance.
- Select ⋮ > Tap agent logs > Start. Axoflow displays the log messages of AxoRouter. Check the logs for error messages. Some common errors include:
  - Redirected event for unconfigured/disabled/deleted index=netops with source="source::axo" host="host::axosyslog-almalinux" sourcetype="sourcetype::fortigate_event" into the LastChanceIndex. So far received events from 1 missing index(es). – The Splunk index where AxoRouter is trying to send data doesn’t exist. Check which index is missing in the error message and create it in Splunk. (For a list of recommended indexes, see the Splunk destination prerequisites.)
  - http: error sending HTTP request; url='https://prd-p-sp2id.splunkcloud.com:8088/services/collector/event/1.0?index=&source=&sourcetype=', error='SSL peer certificate or SSH remote key was not OK', worker_index='0', driver='splunk--flow-axorouter4-almalinux#0', location='/usr/share/syslog-ng/include/scl/splunk/splunk.conf:104:3' – Your Splunk deployment uses an invalid or self-signed certificate, and the Verify server certificate option is enabled in the Splunk destination of Axoflow. Either fix the certificate in Splunk, or select Topology > <your-splunk-destination>, disable Verify server certificate, then select Update.
6 - Concepts
This section describes the main concepts of Axoflow.
6.1 - Automatic data processing
When forwarding data to your SIEM, poor quality data and malformed logs that lack critical fields like timestamps or hostnames need to be fixed. The usual solutions fix the problem in the SIEM, and involve complex regular expressions, which are difficult to create and maintain. SIEM users often rely on vendors to manage these rules, but support is limited, especially for less popular devices. Axoflow offers a unique solution by automatically processing, curating, and classifying data before it’s sent to the SIEM, ensuring accurate, structured, and optimized data, reducing ingestion costs and improving SIEM performance.
The problem
The main issue related to classification is that many devices send malformed messages: missing timestamp, missing hostname, invalid message format, and so on. Such errors can cause different kinds of problems:
- Log messages are often routed to different destinations based on the sender hostname. Missing or invalid hostnames mean that the message is not attributed to the right host, and often doesn’t arrive at its intended destination.
- Incorrect timestamp or timezone hampers investigations during an incident, resulting in potentially critical data failing to show up (or extraneous data appearing) in queries for a particular period.
- Invalid data can lead to memory leaks or resource overload in the processing software (and to unusable monitoring dashboards) when a sequence number or other rapidly varying field is mistakenly parsed as the hostname, program name, or other low cardinality field.
Overall, they decrease the quality of security data you’re sending to your SIEM tools, which increases false positives, requires secondary data processing to clean, and increases query time – all of which ends up costing firms a lot more.
For instance, this is a log message from a SonicWall firewall appliance:
<133> id=firewall sn=C0EFE33057B0 time="2024-10-07 14:56:47 UTC" fw=172.18.88.37 pri=6 c=1024 m=537 msg="Connection Closed" f=2 n=316039228 src=192.0.0.159:61254:X1: dst=10.0.0.7:53:X3:SIMILDC01 proto=udp/dns sent=59 rcvd=134 vpnpolicy="ELG Main"
A well-formed syslog message should look like this:
<priority>timestamp hostname application: message body
As you can see, the SonicWall format is completely invalid after the initial <priority> field. Instead of the timestamp, hostname, and application name comes the free-form part of the message (in this case a whitespace-separated key=value list). Unless you extract the hostname and timestamp from the content of this malformed message, you won’t be able to reconstruct the course of events during a security incident.
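To illustrate the kind of fix-up this requires (a toy sketch, not Axoflow’s actual parser), the timestamp and hostname can be recovered from the key=value content – assuming, as the sample suggests, that the fw= field identifies the sender:

```python
import re
from datetime import datetime, timezone

RAW = ('<133> id=firewall sn=C0EFE33057B0 time="2024-10-07 14:56:47 UTC" '
       'fw=172.18.88.37 pri=6 c=1024 m=537 msg="Connection Closed"')

# Split the free-form body into key=value pairs (values may be quoted).
pairs = re.findall(r'(\w+)=("[^"]*"|\S+)', RAW)
fields = {key: value.strip('"') for key, value in pairs}

# Recover the header fields the device failed to send in syslog format.
timestamp = datetime.strptime(fields["time"], "%Y-%m-%d %H:%M:%S %Z")
timestamp = timestamp.replace(tzinfo=timezone.utc)  # the sample is in UTC
hostname = fields["fw"]

print(timestamp.isoformat(), hostname, fields["msg"])
```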
Axoflow provides data processing, curation, and classification intelligence that’s built into the data pipeline, so it processes and fixes the data before it’s sent to the SIEM.
Our solution
Our data engine and database solution automatically processes the incoming data: AxoRouter recognizes and classifies the incoming data, applies device-specific fixes for the errors, then enriches and optimizes the formatting for the specific destination (SIEM). This approach has several benefits:
- Cost reduction: All data is processed before it’s sent to the SIEM. That way, we can automatically reduce the amount of data sent to the SIEM (for example, by removing empty and redundant fields), cutting your data ingestion costs.
- Structured data: Axoflow recognizes the format of the incoming data payload (for example, JSON, CSV, LEEF, free text), and automatically parses the payload into a structured map. This allows us to have detailed, content-based metrics and alerts, and also makes it easy for you to add custom transformations if needed.
- SIEM-independent: The structured data representation allows us to support multiple SIEMs (and other destinations) and optimize the data for every destination.
- Performance: Compared to the commonly used regular expressions, it’s more robust and has better performance, allowing you to process more data with fewer resources.
- Maintained by Axoflow: We maintain the database; you don’t have to work with it. This includes updates for new product versions and adding new devices. We proactively monitor and check the new releases of main security devices for logging-related changes and update our database. (If something’s not working as expected, you can easily submit log samples and we’ll fix it ASAP.) Currently, we have over 80 application adapters in our database.
The automatic classification and curation also adds labels and metadata that can be used to make decisions and route your data. Messages with errors are also tagged with error-specific tags. For example, you can easily route all firewall and security logs to your Splunk deployment, and exclude logs (like debug logs) that have no security relevance and shouldn’t be sent to the SIEM.
Message fixup
Axoflow Console allows you to quickly drill down to find log flows with issues, and to tap into the log flow and see samples of the specific messages that are processed, along with the related parsing information, like tags that describe the errors of invalid messages:
- message.utf8_sanitized: The message is not valid UTF-8.
- syslog.missing_timestamp: The message has no timestamp.
- syslog.invalid_hostname: The hostname field doesn’t seem to be valid, for example, it contains invalid characters.
- syslog.missing_pri: The priority (PRI) field is missing from the message.
- syslog.unexpected_framing: An octet count was found in front of the message, suggesting invalid framing.
- syslog.rfc3164_missing_header: The date and the host are missing from the message – practically the entire header of RFC3164-formatted messages.
- syslog.rfc5424_unquoted_sdata_value: The message contains an incorrectly quoted RFC5424 SDATA field.
- message.parse_error: Some other parsing error occurred.

You can get an overview of such problems in your pipeline on the Analytics page by adding issue to the Filter labels field.

6.2 - Host attribution and inventory
Axoflow’s built-in inventory solution enriches your security data with critical metadata (like the origin host) so you can pinpoint the exact source of every data entry, enabling precise, label-based routing and more informed security decisions.
Enterprises and organizations collect security data (like syslog and other event data) from various data sources, including network devices, security devices, servers, and so on. When looking at a particular log entry during a security incident, it’s not always trivial to determine what generated it. Was it an appliance or an application? Which team is the owner of the application or device? If it was a network device like a switch or a Wi-Fi access point (of which even medium-sized organizations have dozens), can you tell which one it was, and where is it located?
Cloud-native environments, like Kubernetes have addressed this issue using resource metadata. You can attach labels to every element of the infrastructure: containers, pods, nodes, and so on, to include region, role, owner or other custom metadata. When collecting log data from the applications running in containers, the log collector agent can retrieve these labels and attach them to the log data as metadata. This helps immensely to associate routing decisions and security conclusions with the source systems.
In non-cloud environments, like traditionally operated physical or virtual machine clusters, the logs of applications, appliances, and other data sources lack such labels. To identify the log source, you’re stuck with the sender’s IP address, the sender’s hostname, and in some cases the path and filename of the log file, plus whatever information is available in the log message.
Why isn’t the IP address or hostname enough?
- Source IP addresses are poor identifiers, as they aren’t strongly tied to the host, especially when they’re dynamically allocated using DHCP or when a group of senders are behind a NAT and have the same address.
- Some organizations encode metadata into the hostname or DNS record. However, compared to the volume of log messages, DNS resolving is slow. Also, the DNS record is available at the source, but might not be available at the log aggregation device or the log server. As a result, this data is often lost.
- Many data sources and devices omit important information from their messages. For example, the log messages of Cisco routers by default omit the hostname. (They can be configured to send their hostname, but unfortunately, often they aren’t.) You can resolve their IP address to obtain the hostname of the device, but that leads back to the problem in the previous point.
- In addition, all the above issues are rooted in human configuration practices, which tend to be the main cause of anomalies in the system.
Correlating data to data source
Some SIEMs (like IBM QRadar) rely on the sender IP address to identify the data source. Others, like Splunk, delegate the task of providing metadata to the collector agent, but log sources often don’t supply metadata, and even if they do, the information is based on IP address or DNS.

Certain devices, like SonicWall firewalls, include a unique device ID in the content of their log messages. Having access to this ID makes attribution straightforward, but logging solutions rarely extract such information. AxoRouter does.
Axoflow builds an inventory to match the messages in your data flow to the available data sources, based on data including:
- IP address, host name, and DNS records of the source (if available),
- serial number, device ID, and other unique identifiers extracted from the messages, and
- metadata based on automatic classification of the log messages, like product and vendor name.
Axoflow (or more precisely, AxoRouter) classifies the processed log data to identify common vendors and products, and automatically applies labels to attach this information.
You can also integrate the telemetry pipeline with external asset management systems to further enrich your security data with custom labels and additional contextual information about your data sources.
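For illustration, device-ID-based attribution boils down to a lookup like the following toy sketch (the inventory is invented; the sn= field is taken from the SonicWall example earlier):

```python
# Invented inventory, keyed by the serial number embedded in the messages.
INVENTORY = {
    "C0EFE33057B0": {"host": "fw-nyc-01", "owner": "netops", "location": "rack-12"},
}

def attribute(fields: dict) -> dict:
    """Attach inventory metadata to a parsed message.

    Prefer the unique device ID; fall back to the reported IP address.
    """
    device = INVENTORY.get(fields.get("sn", ""))
    if device is None:
        device = {"host": fields.get("fw", "unknown")}
    return {**fields, **device}

print(attribute({"sn": "C0EFE33057B0", "fw": "172.18.88.37"}))
```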
6.3 - Policy based routing
Axoflow’s host attribution gives you unprecedented possibilities to route your data. You can tell exactly what kind of data you want to send where. Not only based on technical parameters like sender IP or application name, but using a much more fine-grained inventory that includes information about the devices, their roles, and their location. For example:
- The vendor and the product (for example, a SonicWall firewall)
- The physical location of the device (geographical location, rack number)
- Owner of the device (organization/department/team)

Axoflow automatically adds metadata labels based on information extracted from the data payload, but you can also add static custom labels to hosts manually, or dynamically using flows.
Paired with the information from the content of the messages, all this metadata allows you to formulate high-level, declarative policies and alerts using the metadata labels, for example:
- Route the logs of devices to the owner’s Splunk index.
- Route every firewall and security log to a specific index.
- Let the different teams manage and observe their own devices.
- Granular access control based on relevant log content (for example, identify a proxy log-stream based on the upstream address), instead of just device access.
- Ensure that specific logs won’t cross geographic borders, violating GDPR or other compliance requirements.
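In code terms, such policies reduce to decisions over metadata labels rather than IP addresses. A hypothetical sketch (the label names and destination names are invented):

```python
def route(labels: dict) -> str:
    """Map a message's metadata labels to a destination (invented policy)."""
    if labels.get("region") == "eu":
        return "splunk-eu"       # EU data stays in the EU (compliance)
    if labels.get("role") == "firewall":
        return "splunk-netfw"    # all firewall and security logs to one index
    return "s3-archive"          # everything else goes to cheap storage

print(route({"vendor": "sonicwall", "role": "firewall", "region": "us"}))
# -> splunk-netfw
```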

6.4 - Classify and reduce security data
Axoflow provides a robust classification system that actually verifies the data it receives, instead of relying on using dedicated ports. This approach results in automatic data labeling, high-quality data, and volume reduction - out of the box, automated, without coding.
Classify the incoming data
Verifying which device or service a certain message belongs to is difficult: even a single data source (like an appliance) can have different kinds of messages, and you have to be able to recognize each one, uniquely. This requires:
- deep, device and vendor-specific understanding of the data, and also
- understanding of the syslog data formats and protocols, because oftentimes the data sources send invalid messages that you have to recognize and fix as part of the classification process.
Also, classification needs to be both reliable and performant. A naive implementation using regexps is neither; nevertheless, that’s the solution you find at the core of today’s ingestion pipelines. We at Axoflow understand that creating and maintaining such a classification database is difficult, which is why we decided to make classification a core functionality of the Axoflow Platform, so you will never need to write another parsing regexp. At the moment, Axoflow supports over 90 data sources of well-known vendors.
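To make the contrast concrete, here is the naive regex approach described above, reduced to a toy sketch – workable for two illustrative patterns, but exactly the thing that becomes unmaintainable at the scale of hundreds of devices and message types:

```python
import re

# Verify the message content instead of trusting the port it arrived on.
# These two patterns are illustrative only, not Axoflow's database.
PATTERNS = {
    ("sonicwall", "firewall"): re.compile(r"\bid=firewall\b.*\bsn=[0-9A-F]+"),
    ("paloalto", "firewall"): re.compile(r",TRAFFIC,(start|end),"),
}

def classify(message: str) -> dict:
    for (vendor, product), pattern in PATTERNS.items():
        if pattern.search(message):
            return {"meta.vendor": vendor, "meta.product": product}
    return {"meta.product": "unclassified"}

print(classify('id=firewall sn=C0EFE33057B0 msg="Connection Closed"'))
```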
Classification and the ability to process your security data in the pipeline also allows you to:
- Fix the incoming data (like the malformed firewall messages shown above) to add missing information, like hostname or timestamp.
- Identify the source host,
- Parse the log to access the information contained within,
- Redact sensitive information before it gets sent to a SIEM or storage, like PII information,
- Reduce the data volume and as a result, storage and SIEM costs,
- Enrich the data with contextual information, like adding labels based on the source or content of the data,
- Use all the above to route the data to the appropriate destinations, and finally
- Transform the data into an optimized format that the destination can reliably and effortlessly consume. This includes mapping your data to multiple different schemas if you use multiple analytic tools.
Reduce data volume
Classifying and parsing the incoming data allows you to remove the parts that aren’t needed, for example, to:
- drop entire messages if they are redundant or not relevant from a security perspective, or
- remove parts of individual messages, like fields that are non-empty even if they do not convey information (for example, that contain values such as “N/A” or “0”).
As this data reduction happens in the pipeline, before the data arrives in the SIEM or storage, it can save you significant costs, and also improves the quality of the data your detection engineers get to work with.
Palo Alto log reduction example
Let’s see an example on how data reduction works in Axoflow. Here is a log message from a Palo Alto firewall:
<165>Mar 26 18:41:06 us-east-1-dc1-b-edge-fw 1,2025/03/26 18:41:06,007200001056,TRAFFIC,end,1,2025/03/26 18:41:06,192.168.41.30,192.168.41.255,10.193.16.193,192.168.41.255,allow-all,,,netbios-ns,vsys1,Trust,Untrust,ethernet1/1,ethernet1/2,To-Panorama,2025/03/26 18:41:06,8720,1,137,137,11637,137,0x400000,udp,allow,276,276,0,3,2025/03/26 18:41:06,2,any,0,2345136,0x0,192.168.0.0-192.168.255.255,192.168.0.0-192.168.255.255,0,3,0
Here’s what you can drop from this particular message:
- Redundant timestamps: Palo Alto log messages contain up to five practically identical timestamps (see the Receive time, Generated time, and High resolution timestamp fields in the Traffic Log Fields documentation):
  - the syslog timestamp in the header (Mar 26 18:41:06),
  - the time Panorama (the management plane of Palo Alto firewalls) collected the message (2025/03/26 18:41:06), and
  - the time when the event was generated (2025/03/26 18:41:06).
  The sample log message has five timestamps. Leaving only one timestamp can reduce the message size by up to 15%.
- The priority field (<165>) is identical in most messages and has no information value. While it takes up only about 1% of the size of the message, on high-traffic firewalls even this small change adds up to significant data savings.
- Several fields contain default or empty values that provide no information, for example, default internal IP ranges like 192.168.0.0-192.168.255.255. Removing such fields yields over 10% size reduction.
Note that when removing fields, we can delete only the value of the field, because the message format (CSV) relies on having a fixed order of columns for each message type. This also means that we have to individually check what can be removed from each of the 17 Palo Alto log types.
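A minimal sketch of this value-blanking (the column rules below are invented for a short sample row; real rules must be curated per log type):

```python
RAW = ("allow-all,,,netbios-ns,vsys1,Trust,Untrust,0x400000,udp,allow,"
       "276,0,N/A,192.168.0.0-192.168.255.255")

# Invented per-column rules: values that may be blanked in each column.
DROPPABLE = {11: {"0"}, 12: {"N/A"}, 13: {"192.168.0.0-192.168.255.255"}}

columns = RAW.split(",")
reduced = ",".join(
    "" if value in DROPPABLE.get(i, ()) else value
    for i, value in enumerate(columns)
)
print(reduced)  # the commas stay in place, so the column order is intact
```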
Palo Alto firewalls send this specific message when a connection is closed. They also send a message when a new connection is started, but that doesn’t contain any information that’s not available in the ending message, so it’s completely redundant and can be dropped. As every connection has a beginning and an end, this alone almost halves the size of the data stored per connection. For example:
Connection start message:
<113>Apr 11 10:58:18 us-east-1-dc1-b-edge-fw 1,10:58:18.421048,007200001056,TRAFFIC,end,1210,10:58:18.421048,192.168.41.30,192.168.41.255,10.193.16.193,192.168.41.255,allow-all,,,ssl,vsys1,trust-users,untrust,ethernet1/2.30,ethernet1/1,To-Panorama,2020/10/09 17:43:54,36459,1,39681,443,32326,443,0x400053,tcp,allow,43135,24629,18506,189,2020/10/09 16:53:27,3012,laptops,0,1353226782,0x8000000000000000,10.0.0.0-10.255.255.255,United States,0,90,99,tcp-fin,16,0,0,0,,testhost,from-policy,,,0,,0,,N/A,0,0,0,0,ace432fe-a9f2-5a1e-327a-91fdce0077da,0
Connection end message:
<113>Apr 11 10:58:18 us-east-1-dc1-b-edge-fw 1,10:58:18.421048,007200001056,TRAFFIC,end,1210,10:58:18.421048,192.168.41.30,192.168.41.255,10.193.16.193,192.168.41.255,allow-all,,,ssl,vsys1,trust-users,untrust,ethernet1/2.30,ethernet1/1,To-Panorama,2020/10/09 17:43:54,36459,1,39681,443,32326,443,0x400053,tcp,allow,43135,24629,18506,189,2020/10/09 16:53:27,3012,laptops,0,1353226782,0x8000000000000000,10.0.0.0-10.255.255.255,United States,0,90,99,tcp-fin,16,0,0,0,,testhost,from-policy,,,0,,0,,N/A,0,0,0,0,ace432fe-a9f2-5a1e-327a-91fdce0077da,0
You can enable data reduction in your data flows using the Reduce processing step, and see the amount of data received and transferred in the flow on the Metrics page of the flow.
6.5 - Reliable transport
Between its components, Axoflow transports data using the reliable OpenTelemetry protocol (OTLP) for high performance, and to avoid losing messages. Under the hood, OpenTelemetry uses gRPC, and has superior performance compared to other log transport protocols, consumes less bandwidth (because it uses protocol buffers), and also provides features like:
- on-the-wire compression
- authentication
- application layer acknowledgement
- batched data sending
- multi-worker scalability on the client and the server.
7 - Manage and monitor the pipeline
7.1 - Provision pipeline elements
The following sections describe how to register a logging host into the Axoflow Console.
Hosts with supported collectors
If the host is running one of the following log collector agents, you can install Axolet on the host to receive detailed metrics about the host, the agent, and the data flow the agent processes.
- Install Axolet on the host. For details, see Axolet.
- Configure the log collector agent of the host to integrate with Axolet. For details, see the following pages:
7.1.1 - AxoRouter
7.1.1.1 - Install AxoRouter on Kubernetes
To install AxoRouter on a Kubernetes cluster, complete the following steps. For other platforms, see AxoRouter.
Note
Note that AxoRouter collects detailed, real-time metrics about the data-flows – giving you observability over the health of the security data pipeline and its components. Your security data remains in your self-managed cloud or in your on-prem instance where your sources, destinations, and AxoRouters are running, only metrics are forwarded to Axoflow Console.
Prerequisites
Kubernetes version 1.29 and newer
Minimal resource requirements
- CPU: at least 100m
- Memory: 256MB
- Storage: 8Gi
Network access
The hosts must be able to access the following domains related to the Axoflow Console:
Install AxoRouter
- Open the Axoflow Console.
- Select Provisioning.
- Select the Host type > AxoRouter > Kubernetes. The one-liner installation command is displayed.
- Open a terminal and set your Kubernetes context to the cluster where you want to install AxoRouter.
- Run the one-liner, and follow the on-screen instructions.
Current kubernetes context: minikube
Server Version: v1.28.3
Installing to new namespace: axorouter
Do you want to install AxoRouter now? [Y]
- Register the host.
  - Reload the Provisioning page. There should be a registration request for the new AxoRouter deployment. Select ✓.
  - Select Register to register the host. You can add a description and labels (in label:value format) to the host.
- Select the Topology page. The new AxoRouter instance is displayed.
Create a flow
- If you haven’t already done so, create a new destination.
- Create a flow to connect the new AxoRouter to the destination.
  - Select Flows.
  - Select Create New Flow.
  - Enter a name for the flow, for example, my-test-flow.
  - In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
    The selector also supports more complex filtering, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
    - To execute the search, click Search, or hit ESC then ENTER.
    - Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
    - You can use the AND and OR operators to combine expressions, and also parentheses if needed.
  - Select the Destination where you want to send your data. If you don’t have any destinations configured, see Destinations.
    By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.
  - (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
    - Add a Classify, a Parse, and a Reduce step, in that order, to automatically remove redundant and empty fields from your data.
    - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet AND meta.product = fortigate query.
    - Save the processing steps.
  - Select Create.
  - The new flow appears in the Flows list.
Send logs to AxoRouter
By default, AxoRouter accepts data on the following ports (unless you’ve modified the default connector rules):
- 514 UDP and TCP for RFC3164 (BSD-syslog) and RFC5424 (IETF-syslog) formatted traffic. AxoRouter automatically recognizes and handles both formats.
- 601 TCP for RFC5424 (IETF-syslog) and RFC3164 (BSD-syslog) formatted traffic. AxoRouter automatically recognizes and handles both formats.
- 6514 TCP for TLS-encrypted syslog traffic.
- 4317 TCP for OpenTelemetry log data.
To receive data on other ports or other protocols, configure other connector rules for the AxoRouter host.
For TLS-encrypted syslog connections, create a new connector rule or edit an existing one, and configure the keys and certificates needed to encrypt the connections. For details, see Syslog.
Note
Make sure to enable the ports you’re using on the firewall of your host.
Upgrade AxoRouter
Axoflow Console raises an alert for the host when a new AxoRouter version is available. To upgrade to the new version, re-run the one-liner installation command you used to install AxoRouter, or select Provisioning > Select type and platform to create a new one.
7.1.1.1.1 - Advanced installation options
When installing AxoRouter, you can set a number of advanced options if needed for your environment. Setting the advanced options in the Axoflow Console automatically updates the one-liner command that you can copy and run.

Alternatively, before running the one-liner you can use one of the following methods:
Proxy settings
Use the http_proxy=, https_proxy=, and no_proxy= parameters to configure HTTP proxy settings for the installer. To configure the Axolet service to use the proxy settings, enable the AXOLET_AVOID_PROXY parameter as well. Lowercase variable names are preferred because they work universally.
Installation options
You can pass the following parameters to the installation script as environment variables, or as URL parameters.
Note
Running the provisioning command with sudo would mask environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don’t add the sudo command to the provisioning command.
AxoRouter image override
Default value: empty string
Environment variable: IMAGE
URL parameter: image
Description: Deploy the specified AxoRouter image.
Helm chart
Default value: oci://us-docker.pkg.dev/axoflow-registry-prod/axoflow/charts/axorouter-syslog
Environment variable: HELM_CHART
URL parameter: helm_chart
Description: The path or URL of the AxoRouter Helm chart.
Helm chart version
Default value: Current Axoflow version
Environment variable: HELM_CHART_VERSION
URL parameter: helm_chart_version
Description: Deploy the specified version of the Helm chart.
Helm extra arguments
Default value: empty string
Environment variable: HELM_EXTRA_ARGS
URL parameter: helm_extra_args
Description: Additional arguments passed to Helm during the installation.
Helm release name
Default value: axorouter
Environment variable: HELM_RELEASE_NAME
URL parameter: helm_release_name
Description: Name of the Helm release.
Image repository
Default value: us-docker.pkg.dev/axoflow-registry-prod/axoflow/axorouter
Environment variable: IMAGE_REPO
URL parameter: image_repo
Description: Deploy AxoRouter from a custom image repository.
Image version
Default value: Current Axoflow version
Environment variable: IMAGE_VERSION
URL parameter: image_version
Description: Deploy the specified AxoRouter version.
Namespace
Default value: axorouter
Environment variable: NAMESPACE
URL parameter: namespace
Description: The namespace where AxoRouter is installed.
Axolet parameters
API server host
Default value: (none)
Environment variable: (none)
URL parameter: api_server_host
Description: Override the host part of the API endpoint for the host.
Axolet executable path
Default value: (none)
Environment variable: AXOLET_EXECUTABLE
URL parameter: axolet_executable
Description: Path to the Axolet executable.
Axolet image override
Default value: empty string
Environment variable: AXOLET_IMAGE
URL parameter: axolet_image
Description: Deploy the specified Axolet image.
Axolet image repository
Default value: us-docker.pkg.dev/axoflow-registry-prod/axoflow/axolet
Environment variable: AXOLET_IMAGE_REPO
URL parameter: axolet_image_repo
Description: Deploy Axolet from a custom image repository.
Axolet image version
Default value: Current Axoflow version
Environment variable: AXOLET_IMAGE_VERSION
URL parameter: axolet_image_version
Description: Deploy the specified Axolet version.
Initial GUID
Default value: (none)
Environment variable: (none)
URL parameter: initial_guid
Description: Set a static GUID.
7.1.1.2 - Install AxoRouter on Linux
AxoRouter is a key building block of Axoflow that collects, aggregates, transforms and routes all kinds of telemetry and security data automatically. AxoRouter for Linux includes a Podman container running AxoSyslog, Axolet, and other components.
To install AxoRouter on a Linux host, complete the following steps. For other platforms, see AxoRouter.
Note
Note that AxoRouter collects detailed, real-time metrics about the data-flows – giving you observability over the health of the security data pipeline and its components. Your security data remains in your self-managed cloud or in your on-prem instance where your sources, destinations, and AxoRouters are running, only metrics are forwarded to Axoflow Console.
What the install script does
When you deploy AxoRouter, you run a command that installs the required software packages, configures them and sets up the connection with Axoflow.
The installer script installs the axolet packages, then executes the configure-axolet command with the right parameters. (If the packages are already installed, the installer will update them unless the none package format is selected when generating the provisioning command.)
The installer script performs the following main steps:
- Executes prerequisite checks:
  - Tests the network connection with the console endpoints.
  - Checks if the operating system is supported.
  - Checks if podman is installed.
- Downloads and installs the axolet RPM or DEB package. The package contains the axolet and configure-axolet commands, and the axolet.service systemd unit file.
- The configure-axolet command is executed with a configuration snippet on its standard input which contains a token required for registering into the management platform. The command:
  - Writes the initial /etc/axolet/config.json configuration file. Note: if the file already exists, it will only be overwritten if the Overwrite config option is enabled when generating the provisioning command.
  - Enables and starts the axolet service.
axolet performs the following main steps on its first execution:
- Generates and persists a unique identifier (GUID).
- Initiates a cryptographic handshake process to Axoflow Console.
- Axoflow Console issues a client certificate to AxoRouter, which will be stored in the above-mentioned config.json file.
- The service waits for an approval on Axoflow Console. Once you approve the host registration request, axolet starts to manage the local services and send telemetry data to Axoflow Console. It keeps doing so as long as the agent is registered.
Prerequisites
Minimal resource requirements
- CPU: at least 100m
- Memory: 256MB
- Storage: 8Gi
Network access
The hosts must be able to access the following domains related to the Axoflow Console:
Install AxoRouter
- Select Provisioning > Select type and platform.
- Select the type (AxoRouter) and platform (Linux). The one-liner installation command is displayed.
  If needed, set the Advanced options (for example, proxy settings) to modify the installation parameters. Usually, you don’t have to use the advanced options unless the Axoflow support team instructs you to do so.
- Open a terminal on the host where you want to install AxoRouter.
- Run the one-liner, then follow the on-screen instructions.
  Note: Running the provisioning command with sudo would mask the environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don’t prepend sudo to the provisioning command.
Example output:
Do you want to install AxoRouter now? [Y]
y
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4142 100 4142 0 0 19723 0 --:--:-- --:--:-- --:--:-- 19818
Verifying packages...
Preparing packages...
axorouter-0.40.0-1.aarch64
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 31.6M 100 31.6M 0 0 2092k 0 0:00:15 0:00:15 --:--:-- 2009k
Verifying packages...
Preparing packages...
axolet-0.40.0-1.aarch64
Created symlink /etc/systemd/system/multi-user.target.wants/axolet.service → /usr/lib/systemd/system/axolet.service.
Now continue with onboarding the host on the Axoflow web UI.
- Register the host.
  - Reload the Provisioning page. There should be a registration request for the new AxoRouter deployment. Select ✓.
  - Select Register to register the host. You can add a description and labels (in label:value format) to the host.
- Select the Topology page. The new AxoRouter instance is displayed.
Create a flow
- If you haven’t already done so, create a new destination.
- Create a flow to connect the new AxoRouter to the destination.
  - Select Flows.
  - Select Create New Flow.
  - Enter a name for the flow, for example, my-test-flow.
  - In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname (see the example expressions after these steps).
    The selector also makes more complex filtering possible, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
    - To execute the search, click Search, or hit ESC then ENTER.
    - Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
    - You can use the AND and OR operators to combine expressions, and also parentheses if needed.
  - Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
    By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.
  - (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
    - Add a Classify, a Parse, and a Reduce step, in that order, to automatically remove redundant and empty fields from your data.
    - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet AND meta.product = fortigate query.
    - Save the processing steps.
  - Select Create.
- The new flow appears in the Flows list.
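For reference, here are two illustrative Router Selector expressions. The label names are assumptions for the sake of the example; use the labels that exist in your own inventory:

name = my-axorouter-hostname
label.region = eu AND (label.zone = central-1 OR label.zone = west-1)

The first expression selects a single router by name; the second combines label matches with the AND and OR operators and parentheses.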

Send logs to AxoRouter
Configure your hosts to send data to AxoRouter.
- For appliances that are specifically supported by Axoflow, see Sources.
- For other appliances and generic Linux devices, see Generic tips.
- For a quick test without an actual source, you can also do the following (requires nc to be installed on the AxoRouter host):
  - Open the Axoflow Console, select Topology, then select the AxoRouter instance you’ve deployed.
  - Select ⋮ > Tap log flow > Input log flow. Select Start.
  - Open a terminal on your AxoRouter host.
  - Run the following command to send 120 test messages (2 per second) in a loop to AxoRouter:
for i in `seq 1 120`; do echo "<165> fortigate date=$(date -u +%Y-%m-%d) time=$(date -u +"%H:%M:%S%Z") devname=us-east-1-dc1-a-dmz-fw devid=FGT60D4614044725 logid=0100040704 type=event subtype=system level=notice vd=root logdesc=\"System performance statistics\" action=\"perf-stats\" cpu=2 mem=35 totalsession=61 disk=2 bandwidth=158/138 setuprate=2 disklograte=0 fazlograte=0 msg=\"Performance statistics: average CPU: 2, memory: 35, concurrent sessions: 61, setup-rate: 2\""; sleep 0.5; done | nc -v 127.0.0.1 514
Alternatively, you can send logs in an endless loop:
while true; do echo "<165> fortigate date=$(date -u +%Y-%m-%d) time=$(date -u +"%H:%M:%S%Z") devname=us-east-1-dc1-a-dmz-fw devid=FGT60D4614044725 logid=0100040704 type=event subtype=system level=notice vd=root logdesc=\"System performance statistics\" action=\"perf-stats\" cpu=2 mem=35 totalsession=61 disk=2 bandwidth=158/138 setuprate=2 disklograte=0 fazlograte=0 msg=\"Performance statistics: average CPU: 2, memory: 35, concurrent sessions: 61, setup-rate: 2\""; sleep 1; done | nc -v 127.0.0.1 514
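The commands above send test traffic from the AxoRouter host itself. As a sketch, you can also send a single test message from another host by pointing nc at the syslog connector of your AxoRouter instead of the loopback address (the address below is a placeholder):

echo "<165> fortigate test message" | nc -v <axorouter-ip-address> 514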
Manage AxoRouter
This section describes how to start, stop, and check the status of the AxoRouter service on Linux.
Start AxoRouter
To start AxoRouter, execute the following command:
systemctl start axorouter
If the service starts successfully, no output is displayed.
The following message indicates that AxoRouter cannot start (see Check AxoRouter status):
Job for axorouter.service failed because the control process exited with error code. See `systemctl status axorouter.service` and `journalctl -xe` for details.
Stop AxoRouter
To stop AxoRouter:
- Execute the following command:
  systemctl stop axorouter
- Check the status of the AxoRouter service (see Check AxoRouter status).
Restart AxoRouter
To restart AxoRouter, execute the following command:
systemctl restart axorouter
Reload the configuration without restarting AxoRouter
To reload the configuration file without restarting AxoRouter, execute the following command:
systemctl reload axorouter
Check the status of the AxoRouter service
To check the status of the AxoRouter service:
- Execute the following command:
  systemctl --no-pager status axorouter
- Check the Active: field, which shows the status of the AxoRouter service. The following statuses are possible:
Upgrade AxoRouter
Axoflow Console raises an alert for the host when a new AxoRouter version is available. To upgrade to the new version, re-run the one-liner installation command you used to install AxoRouter, or select Provisioning > Select type and platform to create a new one.
7.1.1.2.1 - Advanced installation options
When installing AxoRouter, you can set a number of advanced options if needed for your environment. Setting the advanced options in the Axoflow Console automatically updates the one-liner command that you can copy and run.

Alternatively, before running the one-liner, you can use one of the following methods.
Proxy settings
Use the HTTP proxy, HTTPS proxy, and No proxy parameters to configure HTTP proxy settings for the installer. To avoid using the proxy for the Axolet service, enable the Avoid proxy parameter as well. Lowercase variable names are preferred because they work universally.
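For example, a minimal sketch of setting the proxy parameters as environment variables in a root shell before running the one-liner (the proxy address is a placeholder):

export http_proxy=http://proxy.example.com:3128
export https_proxy=http://proxy.example.com:3128
export no_proxy=localhost,127.0.0.1
curl ... # paste the one-liner copied from the Provisioning page here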
Installation options
You can pass the following parameters to the installation script as environment variables, or as URL parameters.
Note: Running the provisioning command with sudo would mask the environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don’t prepend sudo to the provisioning command.
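For example, to pin the deployed AxoRouter version (the version number is a placeholder), you can either export the environment variable before running the one-liner, or append the URL parameter to the download URL inside the one-liner:

# As an environment variable, in a root shell:
export AXO_IMAGE_VERSION=1.2.3
curl ... # paste the one-liner copied from the Provisioning page here

# As a URL parameter: append ?image_version=1.2.3 (or &image_version=1.2.3
# if the URL already has parameters) to the URL in the one-liner.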
AxoRouter capabilities
- Default value: CAP_NET_BIND_SERVICE CAP_NET_BROADCAST CAP_NET_RAW CAP_SYSLOG CAP_BPF
- Environment variable: AXO_AXOROUTER_CAPS
- URL parameter: axorouter_caps
- Description: Capabilities added to the AxoRouter container.
AxoRouter config mount path
- Default value: /etc/axorouter/user-config
- Environment variable: AXO_AXOROUTER_CONFIG_MOUNT_INSIDE
- URL parameter: axorouter_config_mount_inside
- Description: Mount path for custom user configuration.
AxoRouter image override
- Default value: us-docker.pkg.dev/axoflow-registry-prod/axoflow/axorouter
- Environment variable: AXO_IMAGE
- URL parameter: image
- Description: Deploy the specified AxoRouter image.
Extra container arguments
- Default value: empty string
- Environment variable: AXO_PODMAN_ARGS
- URL parameter: extra_args
- Description: Additional arguments passed to the AxoRouter container.
Image repository
- Default value: us-docker.pkg.dev/axoflow-registry-prod/axoflow/axorouter
- Environment variable: AXO_IMAGE_REPO
- URL parameter: image_repo
- Description: Deploy AxoRouter from a custom image repository.
Image version
- Default value: Current Axoflow version
- Environment variable: AXO_IMAGE_VERSION
- URL parameter: image_version
- Description: Deploy the specified AxoRouter version.
Install package
- Default value: auto
- Available values: auto, dep, rpm, tar, none
- Environment variable: AXO_INSTALL_PACKAGE
- URL parameter: install_package
- Description: File format of the installer package.
Start router
- Default value: true
- Available values: true, false
- Environment variable: AXO_START_ROUTER
- URL parameter: start_router
- Description: Start AxoRouter after installation.
Axolet parameters
API server host
- Default value: empty
- Environment variable: none
- URL parameter: api_server_host
- Description: Override the host part of the API endpoint for the host.
Avoid proxy
- Default value: false
- Available values: true, false
- Environment variable: AXO_AVOID_PROXY
- URL parameter: avoid_proxy
- Description: Do not use the proxy for the Axolet process.
Axolet capabilities
- Default value: CAP_SYS_PTRACE CAP_SYS_CHROOT
- Environment variable: AXO_CAPS
- URL parameter: caps
- Description: Capabilities added to the Axolet service.
Configuration directory
- Default value: /etc/axolet
- Environment variable: AXO_CONFIG_DIR
- URL parameter: config_dir
- Description: The directory where the configuration files are stored.
HTTP proxy
- Default value: empty string
- Environment variable: AXO_HTTP_PROXY
- URL parameter: http_proxy
- Description: Use a proxy to access Axoflow Console from the host.
HTTPS proxy
- Default value: empty string
- Environment variable: AXO_HTTPS_PROXY
- URL parameter: https_proxy
- Description: Use a proxy to access Axoflow Console from the host.
No proxy
- Default value: empty string
- Environment variable: AXO_NO_PROXY
- URL parameter: no_proxy
- Description: Comma-separated list of hosts that shouldn’t use a proxy to access Axoflow Console from the host.
Overwrite config
- Default value: false
- Available values: true, false
- Environment variable: AXO_CONFIG_OVERWRITE
- URL parameter: config_overwrite
- Description: Overwrite the configuration when reinstalling the service.
Service group
- Default value: root
- Environment variable: AXO_GROUP
- URL parameter: group
- Description: The group running the Axolet service.
Service user
- Default value: root
- Environment variable: AXO_USER
- URL parameter: user
- Description: The user running the Axolet service.
Start service
- Default value: true
- Available values: true, false
- Environment variable: AXO_START
- URL parameter: start
- Description: Start the Axolet service after installation.
WEC parameters
These parameters are related to the Windows Event Collector server that can be run on AxoRouter. For details, see Windows Event Collector (WEC).
WEC Image repository
- Default value: us-docker.pkg.dev/axoflow-registry-prod/axoflow/axorouter-wec
- Environment variable: AXO_WEC_IMAGE_REPO
- URL parameter: wec_image_repo
- Description: Deploy the Windows Event Collector server from a custom image repository.
WEC Image version
- Default value: Current Axoflow version
- Environment variable: AXO_WEC_IMAGE_VERSION
- URL parameter: wec_image_version
- Description: Deploy the specified Windows Event Collector server version.
7.1.1.2.2 - Run AxoRouter as non-root
To run AxoRouter as a non-root user, set the AXO_USER and AXO_GROUP environment variables to the user’s username and group name on the host where you want to deploy AxoRouter. For details, see Advanced installation options.
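For example, a minimal sketch using the syslogng account (the user and group names are examples; use the account that fits your environment):

export AXO_USER=syslogng
export AXO_GROUP=syslogng
curl ... # paste the one-liner copied from the Provisioning page here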
Operators must have access to the following commands:
- /usr/bin/systemctl * axolet.service: Controls the axolet.service systemd unit. Usually * is start, stop, restart, enable, or status. Used by the operators for troubleshooting.
- /usr/local/bin/configure-axolet: Creates the initial axolet configuration and enables/starts the axolet service. Executed by the bootstrap script.
- Command to install and upgrade the axolet package. Executed by the bootstrap script if the packages aren’t already installed.
  - On RPM-based Linux distributions: /usr/bin/rpm -Uv axo*.rpm
  - On DEB-based Linux distributions: /usr/bin/dpkg -i axo*.deb
- /usr/bin/systemctl * axorouter.service: Controls the axorouter.service systemd unit. Usually * is start, stop, restart, enable, or status. Used by the operators for troubleshooting.
- /usr/local/bin/configure-axorouter: Creates the initial axorouter configuration and enables/starts the axorouter service. Executed by the bootstrap script.
- Command to install and upgrade the axorouter and axolet packages. Executed by the bootstrap script if the packages aren’t already installed.
  - On RPM-based Linux distributions: /usr/bin/rpm -Uv axo*.rpm
  - On DEB-based Linux distributions: /usr/bin/dpkg -i axo*.deb
You can permit the syslogng user to run these commands by running one of the following (choose the variants matching your package format):
sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for rpm installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/rpm -Uv axo*.rpm
A
sudo tee /etc/sudoers.d/configure-axorouter <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axorouter
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axorouter.service
# for rpm installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/rpm -Uv axo*.rpm
A
sudo tee /etc/sudoers.d/configure-axorouter <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axorouter
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axorouter.service
# for deb installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/dpkg -i axo*.deb
A
sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for deb installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/dpkg -i axo*.deb
A
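To verify that the rules are in effect, you can list the sudo privileges granted to the user, for example:

sudo -l -U syslogng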
7.1.2 - AxoSyslog
Onboarding allows you to collect metrics about the host, display the host on the Topology page, and to tap into the log flow.
Onboarding requires you to modify the host and the configuration of the logging agent (AxoSyslog) running on the host.
- Level 1: Install Axolet on the host where AxoSyslog is running. Axolet collects metrics from the host and sends them to the Axoflow Console, so you can check host-level metrics on the Metrics & Health page of the host, and displays the host on the Topology page.
- Level 2: Instrument the configuration of the logging agent to provide detailed metrics about the traffic flow. This allows you to display data about the host on the Analytics page.
- Level 3: Instrument the configuration of the logging agent to allow you to access the logs of the logging agent and to tap into the log flow from the Axoflow Console. The exact steps for this integration depend on the configuration of your logging agent. We provide basic instrumentation instructions for getting started in this guide, but we strongly recommend contacting us so our professional services can help you with a production integration.
To onboard an existing AxoSyslog instance into Axoflow, complete the following steps.
- Install Axolet on the host, then approve its registration on the Provisioning page of the Axoflow Console.
- The AxoSyslog host is now visible on the Topology page of the Axoflow Console as a source.
- If you've already added the AxoRouter instance or the destination where this host is sending data to the Axoflow Console, add a path to connect the host to the AxoRouter or the destination.
  - Select Topology > Create New Item > Path.
  - Select your data source in the Source host field.
  - Select the target router or aggregator this source is sending its data to in the Target host field, for example, axorouter.
  - Select the Target connector. The connector determines how the destination receives the data (for example, which protocol or port it uses).
  - Select Create. The new path appears on the Topology page.
- Access the AxoSyslog host and edit the configuration of AxoSyslog. Set the statistics-related global options like this (if the options block already exists, add these lines to the bottom of the block):
  options {
    stats(
      level(2)
      freq(0) # Inhibit statistics output to stdout
    );
  };
- (Optional) To get detailed metrics and analytics about the traffic that flows through the host, instrument your AxoSyslog configuration as follows.
  Note: You can use Axolet with an un-instrumented AxoSyslog configuration file, but that limits the available metrics to host statistics (for example, disk, memory, and queue information). You won’t be able to access data about the actual traffic flowing through the host. To collect traffic-related metrics, instrument the configuration with metrics-probe() stanzas. The example below shows how to instrument the configuration to highlight common macros such as $HOST and $PROTOCOL. If you want to customize the collected metrics or need help with the instrumentation, contact us.
  - Download the following configuration snippet to the AxoSyslog host, for example, as /etc/syslog-ng/conf.d/axoflow-instrumentation.conf.
  - Include it at the top of your configuration file:
    @version: current
    @include "axoflow-instrumentation.conf"
  - Edit every destination statement to include a parser { metrics-output(destination(my-destination)); }; line (making sure to include the channel construct if your destination block does not already contain it):
    destination my-destination {
      channel {
        parser { metrics-output(destination(my-destination)); };
        destination { file("/dev/null"); };
      };
    };
  - Reload the configuration of AxoSyslog:
    systemctl reload syslog-ng
- To enable log tapping for the traffic that flows through the host, add the d_axotap destination to your log paths. This allows the Axolet agent to collect rate-limited log samples from that specific point in the pipeline.
  Note: There are multiple ways to add tapping to your configuration, which in some cases can introduce unwanted side effects. Contact us to help you create a safe tapping configuration for your use case!
  - Find a log path that you want to tap into and add the tap destination in a safe way, for example, by adding it in a separate inner log path:
    log {
      source(s_udp514);
      log { destination(d_udp514); };
      log { destination(d_axotap); };
    };
  - Reload the configuration of AxoSyslog:
    systemctl reload syslog-ng
- Unregister the service so that auto service registration can discover the new tapping point.
- Test log tapping from the top right menu.
7.1.3 - Splunk Connect for Syslog (SC4S)
Onboarding allows you to collect metrics about the host, display the host on the Topology page, and to tap into the log flow.
Onboarding requires you to modify the host and the configuration of the logging agent (SC4S) running on the host.
- Level 1: Install Axolet on the host where SC4S is running. Axolet collects metrics from the host and sends them to the Axoflow Console, so you can check host-level metrics on the Metrics & Health page of the host, and displays the host on the Topology page.
- Level 2: Instrument the configuration of the logging agent to provide detailed metrics about the traffic flow. This allows you to display data about the host on the Analytics page.
- Level 3: Instrument the configuration of the logging agent to allow you to access the logs of the logging agent and to tap into the log flow from the Axoflow Console. The exact steps for this integration depend on the configuration of your logging agent. We provide basic instrumentation instructions for getting started in this guide, but we strongly recommend contacting us so our professional services can help you with a production integration.
To generate metrics for the Axoflow platform from an existing Splunk Connect for Syslog (SC4S) instance, you need to configure SC4S to generate these metrics. Complete the following steps.
- If you haven’t already done so, install Axolet on the host, then approve its registration on the Provisioning page of the Axoflow Console.
- Download the following code snippet as axoflow-instrumentation.conf.
- If you are running SC4S under podman or docker, copy the file into the /opt/sc4s/local/config/destinations directory. For other deployment methods the location might differ; check the SC4S documentation for details.
- Check if the metrics are appearing, for example, by running the following command on the SC4S host:
  syslog-ng-ctl stats prometheus | grep classified
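Putting the steps together, a minimal sketch for a podman- or docker-based deployment (assuming SC4S runs as the sc4s systemd service; adjust the service name to your setup):

cp axoflow-instrumentation.conf /opt/sc4s/local/config/destinations/
systemctl restart sc4s
syslog-ng-ctl stats prometheus | grep classified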
7.1.4 - syslog-ng
Onboarding allows you to collect metrics about the host, display the host on the Topology page, and to tap into the log flow.
Onboarding requires you to modify the host and the configuration of the logging agent (syslog-ng) running on the host.
- Level 1: Install Axolet on the host where syslog-ng is running. Axolet collects metrics from the host and sends them to the Axoflow Console, so you can check host-level metrics on the Metrics & Health page of the host, and displays the host on the Topology page.
- Level 2: Instrument the configuration of the logging agent to provide detailed metrics about the traffic flow. This allows you to display data about the host on the Analytics page.
- Level 3: Instrument the configuration of the logging agent to allow you to access the logs of the logging agent and to tap into the log flow from the Axoflow Console. The exact steps for this integration depend on the configuration of your logging agent. We provide basic instrumentation instructions for getting started in this guide, but we strongly recommend contacting us so our professional services can help you with a production integration.
To onboard an existing syslog-ng instance into Axoflow, complete the following steps.
- Install Axolet on the host, then approve its registration on the Provisioning page of the Axoflow Console.
- The syslog-ng host will now be visible on the Topology page of the Axoflow Console as a source.
- If you've already added the AxoRouter instance or other destination where this newly-registered host is sending data to the Axoflow Console, add a path to connect the host to the AxoRouter or the destination.
  - Select Topology > Create New Item > Path.
  - Select your data source in the Source host field.
  - Select the target router or aggregator this source is sending its data to in the Target host field, for example, axorouter.
  - Select the Target connector. The connector determines how the destination receives the data (for example, which protocol or port it uses).
  - Select Create. The new path appears on the Topology page.
- Access the syslog-ng host and edit the configuration of syslog-ng. Set the statistics-related global options like this (if the options block already exists, add these lines to the bottom of the block):
  options {
    stats-level(2);
    stats-freq(0); # Inhibit statistics output to stdout
  };
- (Optional) To get detailed metrics and analytics about the traffic that flows through the host, instrument your syslog-ng configuration as follows.
  Note: You can use Axolet with an un-instrumented syslog-ng configuration file, but that limits the available metrics to host statistics (for example, disk, memory, and queue information). You won’t be able to access data about the actual traffic flowing through the host. To collect traffic-related metrics, instrument the configuration with metrics-probe() stanzas. The example below shows how to instrument the configuration to highlight common macros such as $HOST and $PROTOCOL. If you want to customize the collected metrics or need help with the instrumentation, contact us.
  - Download the following configuration snippet to the syslog-ng host, for example, as /etc/syslog-ng/conf.d/axoflow-instrumentation.conf for syslog-ng, or /opt/syslog-ng/etc/axoflow-instrumentation.conf for syslog-ng PE.
  - Include it at the top of your configuration file:
    @version: current
    @include "axoflow-instrumentation.conf"
  - Edit every destination statement to include a parser { metrics-output(destination(my-destination)); }; line (making sure to include the channel construct if your destination block does not already contain it):
    destination my-destination {
      channel {
        parser { metrics-output(destination(my-destination)); };
        destination { file("/dev/null"); };
      };
    };
  - Reload the configuration of syslog-ng:
    systemctl reload syslog-ng
- To enable log tapping for the traffic that flows through the host, add the d_axotap destination to your log paths. This allows the Axolet agent to collect rate-limited log samples from that specific point in the pipeline.
  Note: There are multiple ways to add tapping to your configuration, which in some cases can introduce unwanted side effects. Contact us to help you create a safe tapping configuration for your use case!
  - Find a log path that you want to tap into and add the tap destination in a safe way, for example, by adding it in a separate inner log path:
    log {
      source(s_udp514);
      log { destination(d_udp514); };
      log { destination(d_axotap); };
    };
  - Reload the configuration of syslog-ng:
    systemctl reload syslog-ng
- Unregister the service so that auto service registration can discover the new tapping point.
- Test log tapping from the top right menu.
7.1.5 - Axolet
Axolet is the agent software for Axoflow. Its primary purpose is to discover the log collector services that are running on the host (for example, AxoSyslog or SC4S), and report their operating statistics (counters) to Axoflow.
7.1.5.1 - Install Axolet
Axolet is the agent software for Axoflow. Its primary purpose is to discover the log collector services that are running on the host (for example, AxoSyslog or SC4S), and report their operating statistics (counters) to Axoflow.
It is simple to install Axolet on individual hosts, or collectively via orchestration tools such as Chef or Puppet.
What the install script does
When you deploy Axolet, you run a command that installs the required software packages, configures them and sets up the connection with Axoflow.
The installer script installs the axolet packages, then executes the configure-axolet command with the right parameters. (If the packages are already installed, the installer updates them, unless the none package format was selected when generating the provisioning command.)
The install script is designed to be run as root (sudo), but you can configure Axolet to run as a non-root user.
The installer script performs the following main steps:
- Executes prerequisite checks:
  - Tests the network connection with the console endpoints.
  - Checks if the operating system is supported.
- Downloads and installs the axolet RPM or DEB package. The package contains the axolet and configure-axolet commands, and the axolet.service systemd unit file.
- Executes the configure-axolet command with a configuration snippet on its standard input, which contains a token required for registering into the management platform. The command:
  - Writes the initial /etc/axolet/config.json configuration file. Note: if the file already exists, it is only overwritten if the Overwrite config option is enabled when generating the provisioning command.
  - Enables and starts the axolet service.
On its first execution, axolet performs the following main steps:
- Generates and persists a unique identifier (GUID).
- Initiates a cryptographic handshake with Axoflow Console.
- Receives a client certificate issued by Axoflow Console, which is stored in the above-mentioned config.json file.
- Waits for approval on Axoflow Console. Once you approve the host registration request, axolet starts to send telemetry data to Axoflow Console. It keeps doing so as long as the agent is registered.
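After the one-liner finishes, you can check the state of the agent locally while you wait for (or after) the approval, for example:

systemctl --no-pager status axolet
journalctl -u axolet -n 20 --no-pager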

Prerequisites
Axolet should work on most Red Hat and Debian compatible Linux distributions. For production environments, we recommend using Red Hat Enterprise Linux 9.
Network access
The hosts must be able to access the following domains related to the Axoflow Console:
Install Axolet
To install Axolet on a host and onboard it to Axoflow, complete the following steps. If you need to reinstall Axolet for some reason, see Reinstall Axolet.
- Open the Axoflow Console at https://<your-tenant-id>.cloud.axoflow.io/.
- Select Provisioning > Select type and platform.
- Select Edge > Linux.
  The curl command can be run manually or inserted into a template in any common software deployment package. When run, it downloads a script that sets up the Axolet process to run automatically at boot time via systemd. For advanced installation options, see Advanced installation options.
- Copy the deployment one-liner and run it on the host you are onboarding into Axoflow.
  Note: Running the provisioning command with sudo would mask the environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don’t prepend sudo to the provisioning command.
Example output:
Do you want to install Axolet now? [Y]
y
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 31.6M 100 31.6M 0 0 2127k 0 0:00:15 0:00:15 --:--:-- 2075k
Verifying packages...
Preparing packages...
axolet-0.40.0-1.aarch64
Created symlink /etc/systemd/system/multi-user.target.wants/axolet.service → /usr/lib/systemd/system/axolet.service.
Now continue with onboarding the host on the Axoflow web UI.
- On the Axoflow Console, reload the Provisioning page. A registration request for the new host should be displayed. Accept it.
- Axolet starts sending metrics from the host. Check the Topology page to see the new host.
- Continue to onboard the host as described for your specific log collector agent.
Manage Axolet
This section describes how to start, stop, and check the status of the Axolet service on Linux.
Start Axolet
To start Axolet, execute the following command:
systemctl start axolet
If the service starts successfully, no output is displayed.
The following message indicates that Axolet cannot start (see Check Axolet status):
Job for axolet.service failed because the control process exited with error code. See `systemctl status axolet.service` and `journalctl -xe` for details.
Stop Axolet
To stop Axolet:
- Execute the following command:
  systemctl stop axolet
- Check the status of the Axolet service (see Check Axolet status).
Restart Axolet
To restart Axolet, execute the following command:
systemctl restart axolet
Reload the configuration without restarting Axolet
To reload the configuration file without restarting Axolet, execute the following command:
systemctl reload axolet
Check the status of the Axolet service
To check the status of the Axolet service:
- Execute the following command:
  systemctl --no-pager status axolet
- Check the Active: field, which shows the status of the Axolet service. The following statuses are possible:
For AxoRouter hosts, you can check the Axolet logs from the Axoflow Console using Log tapping for Agent logs.
Upgrade Axolet
Axoflow Console raises an alert for the host when a new Axolet version is available. To upgrade to the new version, re-run the one-liner installation command you used to install Axolet, or select Provisioning > Select type and platform to create a new one.
Run axolet as non-root
You can run Axolet as a non-root user, but operators must have access to the following commands:
- /usr/bin/systemctl * axolet.service: Controls the axolet.service systemd unit. Usually * is start, stop, restart, enable, or status. Used by the operators for troubleshooting.
- /usr/local/bin/configure-axolet: Creates the initial axolet configuration and enables/starts the axolet service. Executed by the bootstrap script.
- Command to install and upgrade the axolet package. Executed by the bootstrap script if the packages aren’t already installed.
  - On RPM-based Linux distributions: /usr/bin/rpm -Uv axo*.rpm
  - On DEB-based Linux distributions: /usr/bin/dpkg -i axo*.deb
You can permit the syslogng user to run these commands by running one of the following (choose the variant matching your package format):
sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for rpm installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/rpm -Uv axo*.rpm
A
sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for deb installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/dpkg -i axo*.deb
A
7.1.5.2 - Advanced installation options
When installing Axolet, you can set a number of advanced options if needed for your environment. Setting the advanced options in the Axoflow Console automatically updates the one-liner command that you can copy and run.

Alternatively, before running the one-liner, you can use one of the following methods.
Proxy settings
Use the HTTP proxy, HTTPS proxy, and No proxy parameters to configure HTTP proxy settings for the installer. To avoid using the proxy for the Axolet service, enable the Avoid proxy parameter as well. Lowercase variable names are preferred because they work universally.
Installation options
You can pass the following parameters to the installation script as environment variables, or as URL parameters.
Note: Running the provisioning command with sudo would mask the environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don’t prepend sudo to the provisioning command.
API server host
- Default value: empty
- Environment variable: none
- URL parameter: api_server_host
- Description: Override the host part of the API endpoint for the host.
Avoid proxy
- Default value: false
- Available values: true, false
- Environment variable: AXOLET_AVOID_PROXY
- URL parameter: avoid_proxy
- Description: If set to true, the value of the *_proxy variables is only used for downloading the installer, but not for the axolet service itself. If set to false, the Axolet service uses the variables from the installer.
Capabilities
- Default value: CAP_SYS_PTRACE
- Available values: Whitespace-separated list of capability names with the CAP_ prefix.
- Environment variable: AXOLET_CAPS
- URL parameter: caps
- Description: Ambient Linux capabilities the axolet service will use.
Configuration directory
- Default value: /etc/axolet
- Environment variable: AXOLET_CONFIG_DIR
- URL parameter: config_dir
- Description: The directory where the configuration files are stored.
HTTP proxy
- Default value: empty string
- Environment variable: AXOLET_HTTP_PROXY
- URL parameter: http_proxy
- Description: Use a proxy to access Axoflow Console from the host.
HTTPS proxy
- Default value: empty string
- Environment variable: AXOLET_HTTPS_PROXY
- URL parameter: https_proxy
- Description: Use a proxy to access Axoflow Console from the host.
No proxy
- Default value: empty string
- Environment variable: AXOLET_NO_PROXY
- URL parameter: no_proxy
- Description: Comma-separated list of hosts that shouldn’t use a proxy to access Axoflow Console from the host.
Overwrite config
- Default value: false
- Available values: true, false
- Environment variable: AXOLET_CONFIG_OVERWRITE
- URL parameter: config_overwrite
- Description: If set to true, the configuration process overwrites the existing configuration (/etc/axolet/config.json). This means that the agent gets a new GUID and requires approval on the Axoflow Console again.
Install package
- Default value: auto
- Available values: auto, dep, rpm, tar, none
- Environment variable: AXOLET_INSTALL_PACKAGE
- URL parameter: install_package
- Description: File format of the installer package.
Service group
- Default value: root
- Environment variable: AXOLET_GROUP
- URL parameter: group
- Description: Name of the group Axolet will be running as. It should be either root or the group syslog-ng is running as.
Service user
- Default value: root
- Environment variable: AXOLET_USER
- URL parameter: user
- Description: Name of the user Axolet will be running as. It should be either root or the user syslog-ng is running as. See also Run axolet as non-root.
Start service
- Default value: true
- Available values: true, false
- Environment variable: AXOLET_START
- URL parameter: start
- Description: Start the axolet agent at the end of the installation. Use false when preparing golden images: in this case, axolet generates a new GUID on the first boot after cloning the image.
If you are preparing a host for cloning with Axolet already installed, set the AXOLET_START environment variable in your (root) shell session before running the one-liner command. For example:
export AXOLET_START=false
curl ... # Run the command copied from the Provisioning page
This way, Axolet only starts and initializes after the first reboot.
7.1.5.3 - Run axolet as non-root
If the log collector agent (AxoSyslog or syslog-ng) is running as a non-root user, you may want to configure the Axolet agent to run as the same user.
To do that, set the AXOLET_USER and AXOLET_GROUP environment variables to the user’s username and group name. For details, see Advanced installation options.
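For example, a minimal sketch using the syslogng account (use the account your log collector runs as):

export AXOLET_USER=syslogng
export AXOLET_GROUP=syslogng
curl ... # paste the one-liner copied from the Provisioning page here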
Operators will need access to the following commands:
- /usr/bin/systemctl * axolet.service: Controls the axolet.service systemd unit. Usually * is start, stop, restart, enable, or status. Used by the operators for troubleshooting.
- /usr/local/bin/configure-axolet: Creates the initial axolet configuration and enables/starts the axolet service. Executed by the bootstrap script.
- Command to install and upgrade the axolet package. Executed by the bootstrap script if the packages aren’t already installed.
  - On RPM-based Linux distributions: /usr/bin/rpm -Uv axo*.rpm
  - On DEB-based Linux distributions: /usr/bin/dpkg -i axo*.deb
For example, you can permit the syslogng user to run these commands by running the following commands (choose the variant matching your package format):
sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for rpm installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/rpm -Uv axo*.rpm
A
sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for deb installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/dpkg -i axo*.deb
A
7.1.5.4 - Reinstall Axolet
Reinstall Axolet on a (cloned) machine
To reinstall Axolet on a (cloned) machine that has an earlier installation running, complete the following steps.
- Log in to the host as root and execute the following commands:
  systemctl stop axolet
  rm -r /etc/axolet/
- Follow the regular installation steps described in Axolet.
Recover Axolet after the root CA certificate was rotated
- Log in to the host as root and execute the following command:
  export AXOLET_KEEP_GUID=y AXOLET_CONFIG_OVERWRITE=y PATH=$PATH:/usr/local/bin
- Run the one-liner from the Axoflow Console Provisioning page in the same shell.
- The installation may result in an error message, but Axolet should eventually recover. You can check by running:
  journalctl -b -u axolet -n 20 -f
7.1.6 - Data sources
7.1.7 - Destinations
7.1.8 - Windows host - agent based solution
Axoflow provides a customized OpenTelemetry Collector distribution to collect data from Microsoft Windows hosts.
Prerequisites
The Axoflow OpenTelemetry Collector supports the following Windows versions:
- Windows Server 2025
- Windows Server 2022
- Windows 11
- Windows 10
Installation
- Download the installation package for your platform from the Assets section of the release. We provide MSI installers and binary releases for amd64 and arm64 architectures.
- Run the installer. The installer installs:
  - the collector agent (by default to C:\Program Files\Axoflow\OpenTelemetry Collector\axoflow-otel-collector.exe), and
  - a default configuration file (C:\Program Files\Axoflow\OpenTelemetry Collector\config.yaml) that must be edited before use.
Configuration
If you have already installed the agent, complete the following steps to configure it.
- Open the configuration file (C:\Program Files\Axoflow\OpenTelemetry Collector\config.yaml).
- Set the IP address and port of the AxoRouter host where you want to send data from this Windows host. Use the IP address and port of the AxoRouter OpenTelemetry connector (for example, 10.0.2.2:4317). Here’s how to find the IP address of your AxoRouter. (By default, every AxoRouter has an OpenTelemetry connector enabled.)
  exporters:
    otlp/axorouter:
      endpoint: 10.0.2.2:4317
      tls:
        insecure: true
- (Optional) Customize the Event log sources. The default configuration collects data from the following channels: application, security, system.
To include additional channels:
- Add a new windowseventlog receiver under the receivers section, like this:
  receivers:
    windowseventlog/<CHANNEL_NAME>:
      channel: <CHANNEL_NAME>
      raw: true
- Include the new receiver in a pipeline under the service.pipelines section, for example:
  service:
    pipelines:
      logs/eventlog:
        receivers: [windowseventlog/application, windowseventlog/system, windowseventlog/security, windowseventlog/<CHANNEL_NAME>]
        processors: [resource/agent, resourcedetection/system]
        exporters: [otlp/axorouter]
- (Optional) Configure collecting DNS logs from the host.
  - Check the path of the DNS log file by running the following PowerShell command:
    (Get-DnsServerDiagnostics).LogFilePath
  - Enter the path into the receivers.filelog/windows_dns_debug_log.include section of the configuration file. Note that you have to escape the backslashes in the path, for example, C:\\Windows\\System32\\DNS\\dns.log.
    receivers:
      filelog/windows_dns_debug_log:
        include: ['<ESCAPED_DNS_LOGFILE_PATH>']
        ...
- (Optional) Configure collecting DHCP logs from the host.
  - Check the path of the DHCP log files by running the following PowerShell command:
    (Get-DhcpServerAuditLog).Path
    DHCP server log files usually start with the DhcpSrvLog (for IPv4) or the DhcpV6SrvLog (for IPv6) prefix.
  - Enter the path of the IPv4 log files without the filename into the receivers.filelog/windows_dhcp_server_v4_auditlog.include section of the configuration file. Note that you have to escape the backslashes in the path, for example, C:\\Windows\\System32\\DHCP\\.
    receivers:
      filelog/windows_dhcp_server_v4_auditlog:
        include: ['<ESCAPED_DHCP_SERVER_LOGS_PATH>\\DhcpSrvLog*']
        ...
      filelog/windows_dhcp_server_v6_auditlog:
        include: ['<ESCAPED_DHCPV6_SERVER_LOGS_PATH>\\DhcpV6SrvLog*']
        ...
  - Enter the path of the IPv6 log files without the filename into the receivers.filelog/windows_dhcp_server_v6_auditlog.include section of the configuration file. Note that you have to escape the backslashes in the path, for example, C:\\Windows\\System32\\DHCP\\.
- Save the file.
- Restart the service:
  Restart-Service axoflow-otel-collector
  The agent starts sending data to the configured AxoRouter.
- Add the Windows host where you’ve installed the OpenTelemetry Collector to Axoflow Console as a data source.
  - Open the Axoflow Console.
  - Select Topology > Create New Item > Source.
  - Select Microsoft Windows as the type of the source.
  - Set the IP address and the host name (FQDN) of the host.
  - Select Create.
- Create a flow between the data source and the OpenTelemetry connector of AxoRouter. You can use the Select messages processing step (with the meta.connector.type = otlp and meta.product =* windows query) to route only the Windows data received by the AxoRouter OpenTelemetry connector to your destination.
- (Optional) If you want the Windows host to collect data from other hosts without installing separate agents on them, you can configure remote event log collection.
Collect event logs of remote sources
Once you’ve deployed the Axoflow OpenTelemetry Collector on a Windows host, you can configure it to collect event logs from remote Windows hosts and forward them to AxoRouter, without having to install it on these hosts. Complete the following steps.
- Open the configuration file of the Axoflow OpenTelemetry Collector (C:\Program Files\Axoflow\OpenTelemetry Collector\config.yaml).
- Add a remote section to your windowseventlog receiver, like this:
  receivers:
    windowseventlog:
      channel: application
      remote:
- Add the credentials that the Axoflow OpenTelemetry Collector can use to access the remote Windows host.
  receivers:
    windowseventlog:
      channel: application
      remote:
        - credentials:
            username: "my-log-collector-user"
            password: "my-password"
            domain: "authentication-domain"
- Add the address of the remote host under the servers key. If the same credentials can be used to access multiple hosts, you can list them.
  receivers:
    windowseventlog:
      channel: application
      remote:
        - credentials:
            username: "my-log-collector-user"
            password: "my-password"
            domain: "authentication-domain"
          servers:
            - "remote-server-1"
            - "remote-server-2"
- You can repeat the credentials block to configure other remote hosts with other credentials, for example:
  receivers:
    windowseventlog:
      channel: application
      remote:
        - credentials:
            username: "my-log-collector-user"
            password: "my-password"
            domain: "authentication-domain"
          servers:
            - "remote-server-1"
            - "remote-server-2"
        - credentials:
            username: "my-other-log-collector-user"
            password: "my-other-password"
            domain: "authentication-domain"
          servers:
            - "remote-server-3"
            - "remote-server-4"
- Save the file.
- Restart the service:
  Restart-Service axoflow-otel-collector
  The agent starts sending data to the configured AxoRouter.
- Add the newly configured hosts to the Axoflow Console. Open the Topology page, select + Source > Detected. The new sources will appear here when AxoRouter starts receiving their logs. Alternatively, you can start log tapping on the AxoRouter that receives the event logs, find the new hosts, and click Register source.
The AxoRouter connector adds the following fields to the meta variable:
- meta.connector.type: otlp
- meta.connector.name: <name of the connector>
- meta.product: windows
7.2 - Topology
Based on the collected metrics, Axoflow visualizes the topology of your security data pipeline. The topology allows you to get a semi-real-time view of how your edge-to-edge data flows, and drill down to the details and metrics of your pipeline elements to quickly find data transport issues.

Select the name of a source host or a router to show the details and metrics of that pipeline element.
If a host has active alerts, it’s indicated on the topology as well.

Traffic volume
Select bps or eps in the top bar to show the volume of the data flow on the topology paths in bytes per second or events per second.
Filter hosts
To find or display only specific hosts, you can use the filter bar.
- Basic Search mode searches in the values of the following fields of the host: Name, IP Address, GUID, FQDN, and the labels of the host. Basic Search is case insensitive. Adding multiple keywords searches for matches in any of the previous fields. This is equivalent to the @ANY =* keyword1 AND @ANY =* keyword2 AQL query.
- AQL Query Search mode allows you to search in specific labels of the hosts. It also makes more complex filtering possible, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
  - To execute the search, click Search, or hit ESC then ENTER.
  - Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
  - You can use the AND and OR operators to combine expressions, and also parentheses if needed.

If a filter is active, by default the Axoflow Console will display the matching hosts and the elements that are downstream from the matching hosts. For example, if an aggregator like AxoRouter matches the filter, only the aggregators and destinations where the matching host is sending data are displayed, the sources that are sending data to the matching aggregator aren’t shown. To display only the matching host without the downstream pipeline elements, select Filter strategy > Strict.
Grouping hosts
You can select labels to group hosts in the Group by field, so the visualization of large topologies with lots of devices remains useful. Axoflow Console automatically adds names to the groups. The following example shows hosts grouped by their region based on their location labels.

You can select one or more features to group hosts by. When multiple features are selected, groups are formed by combining the values of these features. This can be useful when your information is split into multiple features. For example, if you have region (with values eu, us, and so on) and zone (with values central-1, east-1, east-2, west-1, and so on) labels on your hosts, and you select the label.region and label.zone features for grouping, the groups will be the following:
- A cross-product of the selected feature values: eu,central-1; eu,east-1; eu,east-2; eu,west-1; us,central-1; us,east-1; and so on. Obviously, in this example, several of these groups (like eu,east-1 or us,central-1) will be empty, unless your hosts are mislabeled.
- Groups for the individual values, for hosts that have only one of the selected features: eu, us, central-1, east-1, east-2, west-1, and so on.
Queues
Select queue in the top bar to show the status of the memory and disk queues of the hosts. Select the status indicators of a host to display the details of the queues.

The following details about the queue help you diagnose which connections of the host are having throughput or connectivity issues:
- capacity: The total capacity of the buffer in bytes.
- usage: How much of the total capacity is currently in use.
- driver: The type of the AxoSyslog driver used for the connection the queue belongs to.
- id: The identifier of the connection.
Disk-based queues also have the following information:
- path: The location of the disk-buffer file on the host.
- reliable: Indicates whether the disk-buffer is set to be reliable or not.
- worker: The ID of the AxoSyslog worker thread the queue belongs to.
Both memory-based and disk-based queues have additional details that depend on the destination.
7.3 - Hosts
Axoflow collects and shows you a wealth of information about the hosts of your security data pipeline.
7.3.1 - Find a host
To find a specific host, you have the following options:
- Open the Topology page, then click the name of the host you’re interested in. If you have many hosts and it’s difficult to find the one you need, use filtering or grouping.

- Open the Hosts page and find the host you’re interested in.

To find or display only specific hosts, you can use the filter bar.
- Basic Search mode searches in the values of the following fields of the host: Name, IP Address, GUID, FQDN, and the labels of the host. Basic Search is case insensitive. Adding multiple keywords searches for matches in any of the previous fields. This is equivalent to the @ANY =* keyword1 AND @ANY =* keyword2 AQL query.
- AQL Query Search mode allows you to search in specific labels of the hosts. It also makes more complex filtering possible, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
  - To execute the search, click Search, or hit ESC then ENTER.
  - Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
  - You can use the AND and OR operators to combine expressions, and also parentheses if needed.
You can also select one or more features (for example, label.location) to group hosts by in the Group by field. For details on how grouping works, see Grouping hosts.

7.3.2 - Host information
The Hosts page contains a quick overview of every data source and data processor node. The exact information depends on the type of the host: hosts managed by Axoflow provide more information than external hosts.

The following information is displayed:
- Hostname or IP address
- Metadata labels. These include labels added automatically during the Axoflow curation process (like product name and vendor labels), as well as any custom labels you’ve assigned to the host.
For hosts that have the Axoflow agent (Axolet) installed:
- The version of the agent
- The name and version of the log collector running on the host (for example, AxoSyslog or Splunk Connect for Syslog)
- Operating system, version, and architecture
- Resource information: CPU and memory usage, disk buffer usage. Click on a resource to open its resource history on the Metrics & health page of the host.
- Traffic information: volume of the incoming and outgoing data traffic on the host.
- Cloud-related labels: If the host is running in the cloud, the provider, region, zone labels are automatically available.
In addition, for AxoRouter hosts the following information is also displayed:
- The connectors (for example, OpenTelemetry, Syslog) configured on the host, based on the Connector rules that match this host. Note that you cannot directly edit the connectors, only the connector rules used to create the connectors.
For more details about the host, select the hostname, or click ⋮.

7.3.3 - Custom labels and metadata
To add custom labels to a host, complete the following steps. Note that these are static labels. To add labels dynamically based on the contents of the processed data, use the processing steps in data flows.
-
Find the host on the Hosts or the Topology page, and click on its hostname. The overview of the host is displayed.

-
Select Edit.

You can add custom labels in <label-name>:<value> format (for example, the group or department a source device belongs to), or a generic description about the host. You can use the labels for quickly finding the host on the Hosts page, and also for filtering when configuring Flows.
When using labels in filters, processing steps, or search bars, note that:
- Labels added to AxoRouter hosts get the axo_host_ prefix.
- Labels added to data sources get the host_ prefix. For example, if you add a rack label to an edge host, it’ll be added to the data received from the host as host_rack.
On other pages, like the Host Overview page, the labels are displayed without the prefixes.
- Select Save.
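For example, assuming you added a rack:r42 label to an edge host and a location:eu label to an AxoRouter (both hypothetical), the following illustrative queries match them in flow filters and search bars:

host_rack = r42
axo_host_location =* eu

The first query is an exact match on the prefixed source label; the second uses the Contains (=*) operator on the prefixed AxoRouter label.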
7.3.4 - Services
The Services page of a host shows information about the data collector or router services running on the host.
This page is only available for managed pipeline elements.
The following information is displayed:
- Name: Name of the service.
- Version: Version number of the service.
- Type: Type of the service (which data collector or processor it’s running).
- Supervisor: The type of the supervisor process. For example, Splunk Connect for Syslog (sc4s) runs a syslog-ng process under the hood.
Icons and colors indicate the status of the service: running, stopped, or not registered.

Service configuration
To check the configuration of the service, select Configuration. This shows the list of related environment variables and configuration files. Select a file or environment variable to display its value. For example:

To display other details of the service (for example, the location of the configuration file or the binary), select Details.

Manage service
To reload a registered service, select Reload.
7.4 - Log tapping
Log tapping in Axoflow samples the log flow. You can use labels to filter for specific messages (like ones with parse errors) and tap only those messages. So that you aren't overwhelmed with events, Axoflow automatically samples the output: if many messages match the selected filter, only a subset is shown (about 1 message per second). Using log tapping, you can quickly troubleshoot both parsing/curation errors and destination ingest (API) errors, and check:
- What was in the original message?
- What is sent in the final payload to the destination?
To see log tapping in action, check this blog post. To tap into your log flow, or to display the logs of the log collector agent, complete the following steps.
Note
Neither AxoRouter nor Axoflow Console records any log data. The Flow tapping and Log tapping features only send a sample of the data to your browser while the tapping is active.
-
Open the Topology page, then select the host where you want to tap the logs, for example, an AxoRouter deployment.
-
Select ⋮ > Tap log flow.

-
Tap into the log flow.
- To see the input data, select Input log flow > Start.
- To see the output data, select Output log flow > Start.
- To see the logs of the log collector agent and Axolet running on the host, select Agent logs.
You can use labels to filter the messages and sample only the matching ones.

-
When the logs you’re interested in show up, click Stop Log Tap, then click a log message to see its details.

Note
For sources that are not yet registered in the Axoflow Console host database, the Register source button is shown at the end of the message. Click it to add the source to Axoflow Console: this allows Axoflow to correctly attribute the logs coming from the given source, enriching the messages, the analytics data, and the Topology page.

-
If you don’t know what the message means, select AI Analytics to ask our AI to interpret it.

Filter the messages
You can add labels to the Filter By Label field to sample only messages matching the filter. If you specify multiple labels, only messages that match all filters will be sampled. For example, the following filter selects messages from a specific source IP, sent to a specific destination IP.
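A hypothetical example of such a filter (the label names are illustrative; use the labels shown in the details of your messages under Event > meta):
meta.connection.src_ip = 192.168.1.1
meta.connection.dst_ip = 10.0.0.5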

7.5 - Alerts
Axoflow raises alerts for a number of events in your security data pipeline, for example:
- a destination becomes unavailable or isn’t accepting messages,
- a host is dropping packets or messages,
- a host becomes unavailable,
- the disk queues of a host are filling up,
- abandoned (orphaned) disk buffer files are on the host,
- the configuration of a managed host failed to synchronize or reload,
- traffic of a flow has unexpectedly dropped,
- a new AxoRouter or Axolet version is available.
Alerts are indicated at a number of places:
-
On the Alerting page. Select an alert to display its details.

-
On the Topology page:

-
On the Overview page of the host:

-
On the Metrics & Health page of the host, the alerts are shown as an overlay on the metrics:

7.6 - Private connections
7.6.1 - Google Private Service Connect
If you want your hosts in Google Cloud to access the Axoflow Console without leaving the Google network, we recommend that you use Google Cloud Private Service Connect (PSC) to secure the connection from your VPC to Axoflow.
Prerequisites
Contact Axoflow and provide the list of projects so we can set up an endpoint for your PSC. You will receive information from us that you’ll need to properly configure your connection.
You will also need to allocate a dedicated IP address for the connection in a subnet that’s accessible for the hosts.
Steps
After you have received the details of your target endpoint from Axoflow, complete the following steps to configure Google Cloud Private Service Connect from your VPC to Axoflow Console.
-
Open Google Cloud Console and navigate to Private Service Connect > Connected endpoints.
-
Select the project you want to connect to Axoflow.
-
Navigate to Connect endpoint and complete the following steps.
- Select Target > Published service.
- Set Target service to the service name you’ve received from Axoflow. The service name should be similar to:
projects/axoflow-shared/regions/<region-code>/serviceAttachments/<your-tenant-ID>
- Set Endpoint name to the name you prefer, or the one recommended by Axoflow. The recommended service name is similar to:
psc-axoflow-<your-tenant-ID>
- Select your VPC in the Network field.
- Set Subnet where the endpoint should appear. Since subnets are regional resources, select a subnet in the region you received from Axoflow.
- Select Create IP address and allocate an address for the endpoint. Save the address, you’ll need it later to verify that the connection is working.
- Select Enable global access.
- There is no need to enable the directory API even if it’s offered by Google.
- Select Add endpoint.
-
Test the connection.
-
Using SSH, log in to a machine where you want to use the PSC.
-
Test the connection. Run the following command using the IP address you’ve allocated for the endpoint.
curl -vk https://<IP-address-allocated-for-the-endpoint>
If the connection is established, you’ll receive an HTTP 404 response.
-
If the connection is established, configure DNS resolution on the hosts. Complete the following steps.
Setting up selected machines to use the PSC
-
Add the following entry to the /etc/hosts file of the machine.
<IP-address-allocated-for-the-endpoint> <your-tenant-id>.cloud.axoflow.io kcp.<your-tenant-id>.cloud.axoflow.io telemetry.<your-tenant-id>.cloud.axoflow.io
-
Run the following command to test DNS resolution:
curl -v https://<your-tenant-id>.cloud.axoflow.io
It should load an HTML page from the IP address of the endpoint.
-
If the host is running axolet, restart it by running:
sudo systemctl restart axolet.service
Check the axolet logs to verify that there are no errors:
sudo journalctl -fu axolet
-
Deploy the changes of the /etc/hosts file to all your VMs.
Setting up whole VPC networks to use the PSC
-
Open Google Cloud Console and in the Cloud DNS service navigate to the Create a DNS zone page.
-
Create a new private zone with the zone name <your-tenant-id>.cloud.axoflow.io, and select the networks you want to use the PSC in.
-
Add the following three A records, all pointing to the <IP-address-allocated-for-the-endpoint>:
<your-tenant-id>.cloud.axoflow.io
kcp.<your-tenant-id>.cloud.axoflow.io
telemetry.<your-tenant-id>.cloud.axoflow.io
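If you prefer to script this with the gcloud CLI instead of the console, the following sketch creates an equivalent private zone and one of the A records (the zone name axoflow-psc and the TTL are illustrative; repeat the record-sets command for the kcp. and telemetry. names):
gcloud dns managed-zones create axoflow-psc \
    --dns-name="<your-tenant-id>.cloud.axoflow.io." \
    --visibility=private \
    --networks=<your-vpc-network> \
    --description="Private zone for the Axoflow PSC endpoint"
gcloud dns record-sets create "<your-tenant-id>.cloud.axoflow.io." \
    --zone=axoflow-psc --type=A --ttl=300 \
    --rrdatas=<IP-address-allocated-for-the-endpoint>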
8 - Data management
This section describes how Axoflow processes your security data, and how to configure and monitor your data flows in Axoflow.
8.1 - Processing elements
Axoflow processes the data transported in your security data pipeline in the following stages:
-
Sources: Data enters the pipeline from a data source. A data source can be an external appliance or application, or a log collector agent managed by Axoflow.
-
Custom metadata on the source: You can configure Axoflow to automatically add custom metadata to the data received from a source.
-
Router: The AxoRouter data aggregator processes the data it receives from the sources:
- Connector: AxoRouter hosts receive data using source connectors. The different connectors are responsible for different protocols (like Syslog or OpenTelemetry). Some metadata labels are added to the data based on the connector that received it.
- Metadata: AxoRouter classifies and identifies the incoming messages and adds metadata, for example, the vendor and product of the identified source.
- Data extraction: AxoRouter extracts the relevant information from the content of the messages, and makes it available as structured data.
The router can perform other processing steps, as configured in the flows that apply to the specific router (see next step).
-
Flow: You can configure flows in the Axoflow Console that Axoflow uses to configure the AxoRouter instances to filter, route, and process the security data. Flows also allow you to automatically remove unneeded or redundant information from the messages, reducing data volume and SIEM and storage costs.
-
Destination: The router sends data to the specified destination in a format optimized for the specific destination.
8.2 - Flow management
Axoflow uses flows to manage the routing and processing of security data.
8.2.1 - Flow overview
Axoflow uses flows to manage the routing and processing of security data. A flow applies to one or more AxoRouter instances. The Flows page lists the configured flows, and also highlights if any alerts apply to a flow.

Each flow consists of the following main elements:
- A Router selector that specifies the AxoRouter instances the flow applies to. Multiple flows can apply to a single AxoRouter instance.
- Processing steps that filter and select the messages to process, set/unset message fields, and perform different data transformation and data reduction.
- A Destination where the AxoRouter instances of the flow deliver the data. Destinations can be external destinations (for example, a SIEM), or other AxoRouter instances.
Based on the flows, Axoflow Console automatically generates and deploys the configuration of the AxoRouter instances. Click ⋮ or the name of the flow to display the details of the flow.

Filter flows
To find or display only specific flows, you can use the filter bar.
-
Basic Search mode searches in the following fields of the flow: Name, Destination, Description.
Basic Search is case insensitive. Adding multiple keywords searches for matches in any of the previous fields. This is equivalent to the @ANY =* keyword1 AND @ANY =* keyword2 AQL query.
-
AQL Query Search mode allows you to search in specific fields of the flows.
It also makes more complex filtering possible, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
- To execute the search, click Search, or hit ESC then ENTER.
- Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
- You can use the AND and OR operators to combine expressions, and parentheses if needed.
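For example, a hypothetical AQL query that finds flows whose name contains prod and whose destination contains splunk or sentinel (the field and keyword values are illustrative):
name =* prod AND (destination =* splunk OR destination =* sentinel)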

Disable flow
You can disable a flow without deleting it if needed by clicking the toggle on the right of the flow name.
CAUTION:
Disabling a flow immediately stops log forwarding for the flow. Any data that’s not forwarded using another flow can be irrevocably lost.

8.2.2 - Create flow
To create a new flow, open the Axoflow Console, then complete the following steps:
-
Select Flows.
-
Select Create New Flow.
-
Enter a name for the flow, for example, my-test-flow.

-
In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
It also makes more complex filtering possible, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
- To execute the search, click Search, or hit ESC then ENTER.
- Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
- You can use the AND and OR operators to combine expressions, and parentheses if needed.
-
Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.

-
(Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
- Add a Classify, a Parse, and a Reduce step, in that order, to automatically remove redundant and empty fields from your data.
- To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet AND meta.product = fortigate query.
- Save the processing steps.

-
Select Create.
-
The new flow appears in the Flows list.

8.2.3 - Processing steps
Flows can include a list of processing steps to select and transform the messages. All log messages received by the routers of the flow are fed into the pipeline constructed from the processing steps, then the filtered and processed messages are forwarded to the specified destination.
CAUTION:
Processing steps are executed in the order they appear in the flow. Make sure to arrange the steps in the proper order to avoid problems.
To add processing steps to an existing flow, complete the following steps.
-
Open the Flows page and select the flow you want to modify.
-
Select Process > Add new Processing Step.
-
Select the type of the processing step to add.

The following types of processing steps are available:
- Classify: Automatically classify the data and assign vendor, product, and service labels. This step (or its equivalent in the Connector rule) is required for the Parse and Reduce steps to work.
- Parse: Automatically parse the data to extract structured information from the content of the message. The Classify step (or its equivalent in the Connector rule) is required for the Parse step to work.
- Select Messages: Filter the messages using a query. Only the matching messages will be processed in subsequent steps.
- Set Fields: Set the value of one or more fields on the message object.
- Unset Field: Unset the specified field of the message object.
- Reduce: Automatically remove redundant content from the messages. The Classify and Parse steps (or their equivalent in the Connector rule) are required for the Reduce step to work.
- Regex: Execute a regular expression on the message.
- FilterX: Execute an arbitrary FilterX snippet.
- Probe: A measurement point that provides metrics about the throughput of the flow.
-
Configure the processing step as needed for your environment.
-
(Optional) To apply the processing step only to specific messages from the flow, set the Condition field of the step. Conditions act as filters, but apply only to this step: messages that don't match the condition skip this step, but are still processed by the subsequent steps.
Conditions use AQL queries, making complex filtering possible, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
-
Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn't autocomplete labels and variables created by data parsing and processing steps.
-
You can use the AND and OR operators to combine expressions, and parentheses if needed.
-
(Optional) If needed, drag-and-drop the step to change its location in the flow.
-
Select Save.
Classify
Automatically classify the data and assign vendor, product, and service labels. This step (or its equivalent in the Connector rule) is a prerequisite for the Parse and Reduce steps, and must be executed before them. AxoRouter can classify data from the supported data sources. If your source is not listed, contact us.

FilterX
Execute an arbitrary FilterX snippet. The following example checks if the meta.host.labels.team field exists, and sets the meta.openobserve variable to a JSON value that contains stream as a key with the value of the meta.host.labels.team field.
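A minimal sketch of such a snippet, based on the FilterX syntax documented for AxoSyslog (the exact snippet in the example may differ):
if (isset(meta.host.labels.team)) {
  meta.openobserve = {"stream": meta.host.labels.team};
}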

Parse
Prerequisite: The Classify step (or its equivalent in the Connector rule) is required for the Parse step to work.
The Parse step automatically parses the data from the content of the message, and replaces the message content (the log.body field in the internal message schema) with the structured information. AxoRouter can parse data from the supported data sources. If your source is not listed, contact us. Alternatively, you can create your own parser using FilterX processing steps.
For example, if the message content is in the Common Event Format (CEF), AxoRouter creates a structured FilterX dictionary from its content, like this:
CEF:0|KasperskyLab|SecurityCenter|13.2.0.1511|KLPRCI_TaskState|Completed successfully|1|foo=foo bar=bar baz=test
Parsed and structured content:
{
"version":"0",
"device_vendor":"KasperskyLab",
"device_product":"SecurityCenter",
"device_version":"13.2.0.1511",
"device_event_class_id":"KLPRCI_TaskState",
"name":"Completed successfully",
"agent_severity":"1",
"foo":"foo",
"bar":"bar",
"baz":"test"
}

Probe
Set a measurement point that provides metrics about the throughput of the flow. Note that you can’t name your probes input or output, as these names are reserved.

For details, see Flow metrics.
Reduce
Prerequisite: The Classify and Parse steps (or their equivalent in the Connector rule) are required for the Reduce step to work.
The Reduce step automatically removes redundant content from the messages. For details on how data reduction works, see Classify and reduce security data.

Regex
Use a regular expression to process the message.
-
Input is an AxoSyslog template string (for example, ${MESSAGE}) that the regular expression will be matched against.
-
Pattern is a Perl Compatible Regular Expression (PCRE). If you want to match the pattern against the whole string, use the ^ and $ anchors at the beginning and end of the pattern.
You can use https://regex101.com/ with the ECMAScript flavor to validate your regex patterns. (When denoting named capture groups, the ?P<name> syntax is not supported; use ?<name> instead.)
Named or anonymous capture groups denote the parts of the pattern that will be extracted to the Field field. Named capture groups can be specified like (?<group_name>pattern), which will be available as field_name["group_name"]. Anonymous capture groups will be numbered, for example, the second group from (first.*) (bar|BAR) will be available as field_name["2"]. When specifying variables, note the following points:
-
If you want to include the results in the outgoing message or in its metadata for other processing steps, you must:
- reference an existing variable or field (this field will be updated with the matches), or
- for new variables, use names that start with the $ character, or
- declare them explicitly. For details, see the FilterX data model in the AxoSyslog documentation.
-
The field will be set to an empty dictionary ({}) when the pattern doesn't match.
For example, the following processing step parses an unclassified log message from a firewall into a structured format in the log.body field:
-
Input: ${RAWMSG}
-
Pattern:
^(?<priority><\d+>)(?<version>\d+)\s(?<timestamp>[\d\-T:.Z]+)\s(?<hostname>[\w\-.]+)\s(?<appname>[\w\-]+)\s(?<event_id>\d+|\-)\s(?<msgid>\-)\s(?<unused>\-)\s(?<rule_id>[\w\d]+)\s(?<network>\w+)\s(?<action>\w+)\s(?<result>\w+)\s(?<rule_seq>\d+)\s(?<direction>\w+)\s(?<length>\d+)\s(?<protocol>\w+)\s(?<src_ip>\d+\.\d+\.\d+\.\d+)\/(?<src_port>\d+)->(?<dest_ip>\d+\.\d+\.\d+\.\d+)\/(?<dest_port>\d+)\s(?<tcp_flags>\w+)\s(?<policy>[\w\-.]+)
-
Field: log.body
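For reference, a hypothetical log line that the pattern above matches (all values are illustrative):
<134>1 2024-05-14T10:01:02.123Z fw01 filterlog 1234 - - 42abc lan pass allow 5 in 60 tcp 10.0.0.1/443->192.168.1.10/51234 SA default-policy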

Select messages
Filter the messages using a query. Only the matching messages will be processed in subsequent steps.
The following example selects messages that have the resource.attributes["service.name"] label set to nginx.

You can also select messages using various other metadata about the connection and the host that sent the data, the connector that received the data, the classification results, and also custom labels, for example:
- a specific connector: meta.connector.name = axorouter-mysyslog-connector
- a type of connector: meta.connector.type = otlp
- a sender IP address: meta.connection.src_ip = 192.168.1.1
- a specific product, if classification is enabled in an earlier processing step or in the connector that received the message: meta.product = fortigate (see Vendors for the metadata of a particular product)
- a custom label you’ve added to the host: meta.host.labels.location = eu-west-1
In addition to exact matches, you can use the following operators:
- != : not equal (string match)
- =* : contains (substring match)
- !* : doesn’t contain
- =~ : matches regular expression
- !~ : doesn’t match regular expression
- ==~ : matches case-sensitive regular expression
- !=~ : doesn’t match case-sensitive regular expression
You can combine multiple expressions using the AND (+) operator. (To delete an expression, hit SHIFT+Backspace.)
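For example, a hypothetical combined expression that selects FortiGate messages arriving from a 10.x.x.x source address (the values are illustrative):
meta.product = fortigate + meta.connection.src_ip =~ ^10\.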
Note
To check the metadata of a particular message, you can use Log tapping. The metadata associated with the event is under the Event > meta section.

Set fields
Set a specific field of the message. You can use static values, and also dynamic values that were extracted from the message automatically by Axoflow or manually in a previous processing step. If the field doesn’t exist, it’s automatically created. The following example sets the resource.attributes.format field to parsed.

The type of the field can be string, number, or expression. Select expression to set the value of the field using a FilterX expression.
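For example, a hypothetical expression-type value that builds a value from a custom host label, assuming FilterX string concatenation (the label name is illustrative):
"team-" + meta.host.labels.team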
If you start typing the name of the field, the UI shows matching and similar field names:

Set destination options
If you select Set Destination Options in the Destination step of the flow, a special Set Fields processing step is automatically inserted into the flow, which allows you to set destination-specific fields, for example, the index for a Splunk destination, or the stream name for an OpenObserve destination. You can insert this processing step multiple times if you want to set the fields to different values using conditions.

When using Set Destination Options, note that if you leave a parameter empty, AxoRouter will send it as an empty field to the destination. If you don’t want to send that field at all, delete the field.
Unset field
Unset a specific field of the message.

8.2.4 - Flow metrics
The Metrics page of a flow shows the amount of data (in events per second or bytes per second) processed by the flow. By default, the flow provides metrics for the input and the output of the flow. To get more information about the data flow, add Probe steps to the flow. For example, you can add a probe after:
- Select Messages steps to check the amount of data that match the filter, or
- Reduce steps to check the ratio of data saved.

Note
By default, Axoflow stores metrics for 30 days, unless specified otherwise in your support contract.
Contact us if you want to increase the data retention time.
The top of the page is a funnel graph that shows the amount of data (in events/sec or bytes/sec) processed at each probe of the flow. To show the funnel graph between a probe and the output, click the INPUT field on the left of the funnel graph and select a probe. That way you can display a diagram that starts at a probe (for example, after the initial Select Messages step of the flow) and shows all subsequent probes. Note that OUTPUT on the right of the funnel graph shows the relative amount of data at the output compared to the input. This value refers to the entire flow; it’s not affected by selecting a probe for the funnel graph.

The bottom part of the page shows how the amount of data processed by a probe (by default, the input probe) changed over time. Click the name of a probe on the left to show the metrics of a different probe. Click on the timeline to show the input-probe-output states corresponding to the selected timestamp. Clicking the timeline updates the funnel graph as well. You can adjust the time range in the filter bar at the top.

You can adjust the data displayed using the filter bar:
-
Time period: Select the calendar_month icon to change the time period that’s displayed on the charts. You can use absolute (calendar) time, or relative time (for example, the last 2 days).
Axoflow stores all dates in Coordinated Universal Time (UTC), and automatically converts them to the timezone set in your browser/operating system.
- Data unit: Select the summarize icon to change the data unit displayed on the charts (events/second or bytes/second).
8.2.5 - Flow analytics
The Analytics page of a flow allows you to analyze the data throughput of the flow using Sankey and Sunburst diagrams. For details on using these analytics, see Analytics.

8.2.6 - Paths
Create a path
To create a new path, open the Axoflow Console, then complete the following steps:
-
Select Topology > Create New Item > Path.

-
Select your data source in the Source host field.

-
Select the target router or aggregator this source is sending its data to in the Target host field, for example, axorouter.
-
Select the Target connector. The connector determines how the destination receives the data (for example, using which protocol or port).
-
Select Create. The new path appears on the Topology page.

8.2.7 - Flow tapping
Flow tapping allows you to sample the data flowing in a Flow and see how it processes the logs. It’s useful to test the flow when you’re configuring or adjusting Processing Steps.
Note
Neither AxoRouter nor Axoflow Console records any log data. The Flow tapping and Log tapping features only send a sample of the data to your browser while the tapping is active.
Prerequisites
Flow tapping works only on flows that have been deployed to an AxoRouter instance. This means that if you change a processing step in a flow, you must first Save the changes (or select Save and Test).
If you want to experiment with a flow, we recommend that you clone the flow and configure it to use the /dev/null destination, then apply the finalized processing steps to the original flow.
Tap into a flow
To tap into a Flow, complete the following steps.
-
Open Axoflow Console and select Flows.
-
Select the flow you want to tap into, then select Process > Test.

-
Select the router where you want to tap into the flow, then click Start Log Tapping.

-
Axoflow Console displays a sample of the logs from the data flow in matched pairs for the input and the output of the flow.
The input point is after the source connector finished processing the incoming message (including parsing and classification performed by the connector).
The output shows the messages right before they are formatted according to the specific destination’s requirements.

-
You can open the details of the messages to compare fields, values, and so on. By default, the messages are linked: opening the details of a message in the output automatically opens the details of the same message in the input window, and vice versa (provided the message you clicked exists in both windows).

If you want to compare a message with other messages, click link to disable linking.
-
If the flow contains one or more Probe steps, you can select a probe as the tapping point. That way you can check how the different processing steps modify your messages. You can change the first and the second tapping points by clicking the corresponding icons.

-
Diff view. To display the line-by-line difference between the state of a message at two stages of processing, scroll down in the details of the message and select Show diff.

The internal JSON representation of the message and its metadata is displayed, compared to the previous probe of the flow. Obviously, at the input of the flow the entire message is new.

You can change the probe in the right-hand pane, and the left side automatically switches to the previous probe in the flow. For example, the following screenshot shows a message after parsing. You can see that the initial string representation of the message body has been replaced with a structured JSON object.

9 - Sources
The following chapters show you how to configure specific appliances, applications, and other sources to send their log data to Axoflow.
9.1 - Generic tips
The following section gives you a generic overview of how to configure a source to send its log data to Axoflow. If there is a specific section for your source, follow that instead of the generic procedure. For details, see the documentation of your source.
CAUTION:
Make sure to set data forwarding on your appliances/servers as described in this guide. Different settings like alternate message formats or ports might be valid, but can result in data loss or incorrect parsing.
Default connectors
By default, Axoflow Console has Connector rules that create the following connectors on AxoRouter deployments.
Parsing and classification are enabled for the default connectors. To create other connectors, or to modify the default ones, see Connector rules. (The default connector rules have “Factory default connector rule” in their description.)
Open ports
By default, AxoRouter accepts data on the following ports (unless you’ve modified the default connector rules):
- 514 UDP and TCP for RFC3164 (BSD-syslog) and RFC5424 (IETF-syslog) formatted traffic. AxoRouter automatically recognizes and handles both formats.
- 601 TCP for RFC5424 (IETF-syslog) and RFC3164 (BSD-syslog) formatted traffic. AxoRouter automatically recognizes and handles both formats.
- 6514 TCP for TLS-encrypted syslog traffic.
- 4317 TCP for OpenTelemetry log data.
To receive data on other ports or other protocols, configure other connector rules for the AxoRouter host.
For TLS-encrypted syslog connections, create a new connector rule or edit an existing one, and configure the keys and certificates needed to encrypt the connections. For details, see Syslog.
Note
Make sure to enable the ports you’re using on the firewall of your host.
Prerequisites
To configure a source to send data to Axoflow, make sure that:
Steps
-
Log in to your device. You need administrator privileges to perform the configuration.
-
If needed, enable syslog forwarding on the device.
-
Set AxoRouter as the syslog server. Typically, you can configure the following parameters:
-
Name or IP Address of the syslog server: Set the address of your AxoRouter.
-
Protocol: If possible, set TCP or TLS.
Note
If you’re sending data over TLS, make sure to configure a TLS-enabled connector rule in Axoflow.
-
Syslog Format: If possible, set RFC5424 (or equivalent), otherwise leave the default.
-
Port: Set a port appropriate for the protocol and syslog format you have configured.
By default, AxoRouter accepts data on the following ports (unless you’ve modified the default connector rules):
- 514 UDP and TCP for RFC3164 (BSD-syslog) and RFC5424 (IETF-syslog) formatted traffic. AxoRouter automatically recognizes and handles both formats.
- 601 TCP for RFC5424 (IETF-syslog) and RFC3164 (BSD-syslog) formatted traffic. AxoRouter automatically recognizes and handles both formats.
- 6514 TCP for TLS-encrypted syslog traffic.
- 4317 TCP for OpenTelemetry log data.
To receive data on other ports or other protocols, configure other connector rules for the AxoRouter host.
For TLS-encrypted syslog connections, create a new connector rule or edit an existing one, and configure the keys and certificates needed to encrypt the connections. For details, see Syslog.
Note
Make sure to enable the ports you’re using on the firewall of your host.
-
Add the source to Axoflow Console.
-
Open the Axoflow Console and select Topology.
-
Select Create New Item > Source.
- If the source is actively sending data to an AxoRouter instance, select Detected, then select your source.
- Otherwise, select the vendor and product corresponding to your source from the Predefined sources, then enter the parameters of the source, like IP address and FQDN.
Note
During
log tapping, you can add hosts that are actively sending data to an AxoRouter instance by clicking
Register source.
-
(Optional) Add custom labels as needed.
-
Select Create.
Note
If your syslog source is running syslog-ng, Splunk Connect for Syslog (SC4S), or AxoSyslog as its log forwarder agent, consider installing Axolet on the host and instrumenting the configuration of the log forwarder to receive detailed metrics about the host and the processed data. For details, see Manage and monitor the pipeline.
9.2 - Connector rules
Connectors define the methods and entry points where AxoRouter receives data from the sources.
In addition to providing a simple protocol-specific listener, Connectors implement critical processing steps (for example, classification and parsing) right at the ingestion point to enrich data before it gets processed by Flows.
On top of this, Connector rules are high-level policies that determine what kind of Connectors should be available on a set of AxoRouter instances based on dynamic host labels.
To see every connector rule configured in Axoflow Console, open the Connectors page from the main menu.

Connector rules have the following main elements:
-
the way they accept data (for example, via syslog, using the OpenTelemetry protocol), and other protocol-specific parameters (for example, port number)
-
the router selector, which determines the list of AxoRouter instances that will create a connector based on that rule.
You can use any labels and metadata of the AxoRouter hosts in the Router selectors, for example, the hostname of the AxoRouter, or any custom labels.
Selecting a connector rule shows its details, including the list of Matched hosts: the AxoRouter deployments that will have a connector based on that connector rule. If you click the name of a matched router, the Connectors page of the AxoRouter host opens, showing the connectors configured for that host.
Create connector rule
To create a new connector rule, complete the following steps.
-
Select Connectors > Create new rule. (Alternatively, you can select Add Connector > Create a connector rule on the Connectors page of an AxoRouter host.)

-
Select the type of connector you want to create. For example, OpenTelemetry. The following connector types are available:
-
Configure the connector rule.
-
Enter a name for the connector rule into the Rule Name field.

-
(Optional) Add labels to the connector rule. You will be able to use these labels in Flow Processing steps, for example, in the Query field of Select Messages steps.
-
Set the Router Selector for the connector rule. The selector determines which AxoRouter instances will have a connector based on this connector rule.

- If you leave the Router Selector field empty, the rule will match every AxoRouter instance.
- To select only a specific AxoRouter instance, set the name field to the name of the instance as the selector.
- If you set multiple fields in the selector, the connector rule will apply only to AxoRouter instances that match all elements of the selector. (There is an AND relationship between the fields.)
-
(Optional) Enter a Suffix for the connector rule. This suffix will be used in the name of the connector instances created on the AxoRouter hosts. For example, if the name of a matching AxoRouter instance is “my-axorouter”, and the suffix of the rule is “otlp-rule”, the connector created for the AxoRouter will be named “my-axorouter-otlp-rule”.
If the Suffix field is empty, the name of the connector rule is used instead.
-
(Optional) Enter a description for the rule.
-
If needed, enable the Classify and Parse preprocessing steps so AxoRouter automatically identifies and parses messages sent by supported data sources. If your source is not listed, contact us.
Enabling these options processes all data received by the connectors created based on this connector rule. If you want to apply classification and parsing more selectively, you can use the Classify and Parse processing steps in your Flows.
Note that the Parse processing step requires Classify to be enabled. Parsing automatically parses the data from the content of the message, and replaces the message content (the log.body field in the internal message schema) with the structured information.
-
(Optional) If needed, you can add other preprocessing steps to the connector rule. You can use the same processing steps as in flows. These can be useful if you use a dedicated connector rule to fix or annotate data coming from some specific sources.
-
Configure the protocol-specific settings. For details, see the specific pages:
-
Select Create.
Axoflow automatically creates connectors on the AxoRouter hosts that match the Router Selector.
Make sure to enable the ports you’ve configured in the connector on the firewall of the AxoRouter host, and on other firewalls between the AxoRouter host and your data sources.
Modify connector rule
To modify an existing connector rule, complete the following steps.
Note
- You cannot directly modify the connector of an AxoRouter host, only by modifying a connector rule.
- Modifying a connector rule affects every AxoRouter that matches the Router selector of the rule.
- To apply an existing connector rule to a new AxoRouter instance, you can:
- Add a label to the AxoRouter host so the Router selector of the connector rule matches the new host, and/or
- Change the Router selector of the connector rule to match the new host.
-
Find the connector rule you want to modify:
- Select Connectors from the main menu, then select the connector rule you want to modify.
- Alternatively, find the AxoRouter instance whose connector you want to modify on the Hosts or Topology page, then select Connectors. Find the connector you want to modify, then select ⋮ > Edit connector rule.
-
Modify the configuration of the connector rule as needed.
CAUTION:
The changes are applied immediately after you click Update. Double-check your changes to avoid losing data.
-
Select Update.
Add AxoRouter to existing connector rule
To add an AxoRouter to an existing connector rule, you have two options, depending on the Router Selector of the connector rule:
9.3 - OpenTelemetry
Receive logs, metrics, and traces from OpenTelemetry clients over the OpenTelemetry Protocol (OTLP/gRPC).
Prerequisites
If you want to enable TLS encryption for this connector to encrypt the communication with the sources, you’ll need to set appropriate keys and certificates.
CAUTION:
Copy the keys and certificates to AxoRouter before starting to configure the connector. Otherwise, you won’t be able to make configuration changes that require reloading the AxoRouter service, including starting log tapping or flow tapping.
Note the following points:
-
Keys and certificates must be in PEM format.
-
If the file contains a certificate chain, the file must begin with the certificate of the host, followed by the CA certificate that signed the certificate of the host, and any other signing CAs in order.
-
You must manually copy these files to their place on the AxoRouter host; currently you can’t distribute them from Axoflow Console. The files must be readable by the axorouter service.
-
The recommended path for certificates is under /etc/axorouter/user-config/ (for example, /etc/axorouter/user-config/tls-key.pem). If you need to use a different path, you have to append an option like -v /your/path:/your/path to the AXOROUTER_PODMAN_ARGS variable of /etc/axorouter/container.env.
-
When referring to the key or certificate while configuring the connector, use absolute paths (for example, /etc/axorouter/user-config/tls-key.pem).
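For example, a minimal sketch of the relevant line in /etc/axorouter/container.env, assuming the variable was previously empty and that you store the certificates under the illustrative /opt/axorouter-certs directory:
AXOROUTER_PODMAN_ARGS="-v /opt/axorouter-certs:/opt/axorouter-certs"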
Add new OpenTelemetry connector
To add a new connector to an AxoRouter host, complete the following steps:
-
Select Connectors > Create new rule. (Alternatively, you can select Add Connector > Create a connector rule on the Connectors page of an AxoRouter host.)

-
Select OpenTelemetry.
-
Configure the connector rule.
-
Enter a name for the connector rule into the Rule Name field.

-
(Optional) Add labels to the connector rule. You will be able to use these labels in Flow Processing steps, for example, in the Query field of Select Messages steps.
-
Set the Router Selector for the connector rule. The selector determines which AxoRouter instances will have a connector based on this connector rule.

- If you leave the Router Selector field empty, the rule will match every AxoRouter instance.
- To select only a specific AxoRouter instance, set the name field to the name of the instance as the selector.
- If you set multiple fields in the selector, the connector rule will apply only to AxoRouter instances that match all elements of the selector. (There is an AND relationship between the fields.)
-
(Optional) Enter a Suffix for the connector rule. This suffix will be used in the name of the connector instances created on the AxoRouter hosts. For example, if the name of a matching AxoRouter instance is “my-axorouter”, and the suffix of the rule is “otlp-rule”, the connector created for the AxoRouter will be named “my-axorouter-otlp-rule”.
If the Suffix field is empty, the name of the connector rule is used instead.
-
(Optional) Enter a description for the rule.
-
If needed, enable the Classify and Parse preprocessing steps so AxoRouter automatically identifies and parses messages sent by supported data sources. If your source is not listed, contact us.
Enabling these options processes all data received by the connectors created based on this connector rule. If you want to apply classification and parsing more selectively, you can use the Classify and Parse processing steps in your Flows.
Note that the Parse processing step requires Classify to be enabled. Parsing automatically parses the data from the content of the message, and replaces the message content (the log.body field in the internal message schema) with the structured information.
-
(Optional) If needed, you can add other preprocessing steps to the connector rule. You can use the same processing steps as in flows. These can be useful if you use a dedicated connector rule to fix or annotate data coming from some specific sources.
-
If needed, configure the port number where you want to receive data.

-
Configure TLS settings for the connector.
-
Select Server TLS.
-
Set the path to the key and certificates.
When using TLS, set the paths for the certificates and keys used for the TLS-encrypted communication with the clients. For details, see Prerequisites.
- Server certificate path: The certificate that AxoRouter shows to the clients.
- Server private key path: The private key of the server certificate.
- CA certificate path: The CA certificate that AxoRouter uses to verify the certificate of the clients if Verify client certificate is enabled.
-
To require the certificates of the clients to be valid, select Verify client certificate. When selected, the certificate of the client cannot be self-signed, and its common name must match the hostname or IP address of the client.
-
Select Create.
Axoflow automatically creates connectors on the AxoRouter hosts that match the Router Selector.
Make sure to enable the ports you’ve configured in the connector on the firewall of the AxoRouter host, and on other firewalls between the AxoRouter host and your data sources.
Labels
label | value
connector.type | otlp
connector.name | The Name of the connector
connector.port | The port number where the connector receives data
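For example, you can reference these labels in the Query field of a Select Messages processing step to select the data received by connectors created from this rule:
meta.connector.type = otlp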
9.4 - Syslog
The Syslog connector can receive all kinds of syslog messages. You can configure it to receive data on specific ports, and apply parsing, classification, and enrichment to the messages, in addition to standard syslog parsing.
Prerequisites
If you want to enable TLS encryption for this connector to encrypt the communication with the sources, you’ll need to set appropriate keys and certificates.
CAUTION:
Copy the keys and certificates to AxoRouter before starting to configure the connector. Otherwise, you won’t be able to make configuration changes that require reloading the AxoRouter service, including starting log tapping or flow tapping.
Note the following points:
-
Keys and certificates must be in PEM format.
-
If the file contains a certificate chain, the file must begin with the certificate of the host, followed by the CA certificate that signed the certificate of the host, and any other signing CAs in order.
-
You must manually copy these files to their place on the AxoRouter host; currently you can’t distribute them from Axoflow Console. The files must be readable by the axorouter service.
-
The recommended path for certificates is under /etc/axorouter/user-config/ (for example, /etc/axorouter/user-config/tls-key.pem). If you need to use a different path, you have to append an option like -v /your/path:/your/path to the AXOROUTER_PODMAN_ARGS variable of /etc/axorouter/container.env.
-
When referring to the key or certificate while configuring the connector, use absolute paths (for example, /etc/axorouter/user-config/tls-key.pem).
Add new syslog connector
To create a new syslog connector, complete the following steps:
-
Select Connectors > Create new rule. (Alternatively, you can select Add Connector > Create a connector rule on the Connectors page of an AxoRouter host.)

-
Select Syslog.
-
Select the template to use one of the standard syslog ports and networking protocols, for example, UDP 514 for the RFC3164 syslog protocol.
To configure a different port, or to specify the protocol elements manually, select Custom.

-
Configure the connector rule.
-
Enter a name for the connector rule into the Rule Name field.

-
(Optional) Add labels to the connector rule. You will be able to use these labels in Flow Processing steps, for example, in the Query field of Select Messages steps.
-
Set the Router Selector for the connector rule. The selector determines which AxoRouter instances will have a connector based on this connector rule.

- If you leave the Router Selector field empty, the rule will match every AxoRouter instance.
- To select only a specific AxoRouter instance, set the name field to the name of the instance as the selector.
- If you set multiple fields in the selector, the connector rule will apply only to AxoRouter instances that match all elements of the selector. (There is an AND relationship between the fields.)
-
(Optional) Enter a Suffix for the connector rule. This suffix will be used in the name of the connector instances created on the AxoRouter hosts. For example, if the name of a matching AxoRouter instance is “my-axorouter”, and the suffix of the rule is “otlp-rule”, the connector created for the AxoRouter will be named “my-axorouter-otlp-rule”.
If the Suffix field is empty, the name of the connector rule is used instead.
-
(Optional) Enter a description for the rule.
-
If needed, enable the Classify and Parse preprocessing steps so AxoRouter automatically identifies and parses messages sent by supported data sources. If your source is not listed, contact us.
Enabling these options processes all data received by the connectors created based on this connector rule. If you want to apply classification and parsing more selectively, you can use the Classify and Parse processing steps in your Flows.
Note that the Parse processing step requires Classify to be enabled. Parsing automatically parses the data from the content of the message, and replaces the message content (the log.body field in the internal message schema) with the structured information.
-
(Optional) If needed, you can add other preprocessing steps to the connector rule. You can use the same processing steps as in flows. These can be useful if you use a dedicated connector rule to fix or annotate data coming from some specific sources.
-
Select the protocol to use for receiving syslog data: TCP, UDP, or TLS.

When using TLS, set the paths for the certificates and keys used for the TLS-encrypted communication with the clients. For details, see Prerequisites.
- CA certificate path: The CA certificate that AxoRouter uses to authenticate the clients.
- Server certificate path: The certificate that AxoRouter shows to the clients.
- Server private key path: The private key of the server certificate.
-
(Optional) If explicitly needed for your use case, you can configure Framing manually. Otherwise, leave it on Auto. Enable framing (On) if the payload contains the length of the message as specified in RFC6587 3.4.1. Disable it (Off) for non-transparent framing (RFC6587 3.4.2).
-
Set the Port of the connector. The port number must be unique on the AxoRouter host. Axoflow Console will not provision a connector to an AxoRouter if it would cause a port collision, but other software on the given host may already be using the chosen port number (for example, an SSH server on TCP port 22). In this case, AxoRouter won’t be able to reload the configuration and it will indicate an error.
-
(Optional) If you’re expecting high-volume UDP traffic on your AxoRouter instances that will have this connector, enable UDP loadbalancing by entering the number of UDP sockets to use. The maximum recommended value is the number of cores available in the AxoRouter host.
UDP loadbalancing in AxoRouter is based on eBPF. Note that if you enable loadbalancing, messages of the same high-traffic source may be processed out of order. For details on how eBPF loadbalancing works, see the Scaling syslog to 1M EPS with eBPF blog post.
-
(Optional) If needed for your environment, set protocol-specific connector options as needed.
You can also modify the product and vendor labels of the connector. In that case, Axoflow treats the incoming messages as if they were received from and classified as the specified product. This is useful if you want to send data from a specific product to a dedicated port.
These labels and other parameters of the connector will be available under the meta.connector key as metadata for the messages received via the connector, and can be used in routing decisions and processing steps. You can check the metadata of the messages using log tapping.

-
Select Create.
Axoflow automatically creates connectors on the AxoRouter hosts that match the Router Selector.
Make sure to enable the ports you’ve configured in the connector on the firewall of the AxoRouter host, and on other firewalls between the AxoRouter host and your data sources.
Protocol-specific connector options
- Encoding: The character set of the messages, for example, UTF-8.
- Maximum connections: The maximum number of simultaneous connections the connector can receive.
- Socket buffer size: The size of the socket buffer (in bytes).
TCP options
- TCP Keepalive Time Interval: The interval (in seconds) between subsequent keepalive probes, regardless of the traffic exchanged in the connection.
- TCP Keepalive Probes: The number of unacknowledged probes to send before considering the connection dead.
- TCP Keepalive Time: The interval (in seconds) between the last data packet sent and the first keepalive probe.
TLS options
For TLS, you can use the TCP-specific options, and also the following:
- Require MTLS: If enabled, the peers must have a TLS certificate, otherwise AxoRouter will reject the connection.
- Verify client certificate: If enabled, AxoRouter verifies the certificate of the peer, and rejects connections with invalid certificates.
Labels
The AxoRouter syslog connector adds the following meta labels:
label | value
connector.type | syslog
connector.name | The Name of the connector
connector.port | The port number where the connector receives data
9.5 - Webhook
Webhook connectors of AxoRouter can be used to receive log events through HTTP(S) POST requests.
You can specify static and dynamic URLs to receive the data. AxoRouter automatically parses the JSON payload of the request, and adds it to the log.body field of the message as a JSON object. Other types of payload (including invalid JSON objects) are added to the log.body field as a string. Note that you can add further parsing to the body using processing steps in Flows, for example, using FilterX or Regex processing steps.
Prerequisites
To receive data via HTTPS, you’ll need a key and a certificate that the connector will show to the clients.
If you want to enable TLS encryption for this connector to encrypt the communication with the sources, you’ll need to set appropriate keys and certificates.
CAUTION:
Copy the keys and certificates to AxoRouter before starting to configure the connector. Otherwise, you won’t be able to make configuration changes that require reloading the AxoRouter service, including starting log tapping or flow tapping.
Note the following points:
-
Keys and certificates must be in PEM format.
-
If the file contains a certificate chain, the file must begin with the certificate of the host, followed by the CA certificate that signed the certificate of the host, and any other signing CAs in order.
-
You must manually copy these files to their place on the AxoRouter host; currently you can’t distribute them from Axoflow Console. The files must be readable by the axorouter service.
-
The recommended path for certificates is under /etc/axorouter/user-config/ (for example, /etc/axorouter/user-config/tls-key.pem). If you need to use a different path, you have to append an option like -v /your/path:/your/path to the AXOROUTER_PODMAN_ARGS variable of /etc/axorouter/container.env.
-
When referring to the key or certificate while configuring the connector, use absolute paths (for example, /etc/axorouter/user-config/tls-key.pem).
Add new webhook connector
To add a new connector to an AxoRouter host, complete the following steps:
-
Select Connectors > Create new rule. (Alternatively, you can select Add Connector > Create a connector rule on the Connectors page of an AxoRouter host.)

-
Select Webhook.
-
Configure the connector rule.
-
Enter a name for the connector rule into the Rule Name field.

-
(Optional) Add labels to the connector rule. You will be able to use these labels in Flow Processing steps, for example, in the Query field of Select Messages steps.
-
Set the Router Selector for the connector rule. The selector determines which AxoRouter instances will have a connector based on this connector rule.

- If you leave the Router Selector field empty, the rule will match every AxoRouter instance.
- To select only a specific AxoRouter instance, set the name field to the name of the instance as the selector.
- If you set multiple fields in the selector, the connector rule will apply only to AxoRouter instances that match all elements of the selector. (There is an AND relationship between the fields.)
-
(Optional) Enter a Suffix for the connector rule. This suffix will be used in the name of the connector instances created on the AxoRouter hosts. For example, if the name of a matching AxoRouter instance is “my-axorouter”, and the suffix of the rule is “otlp-rule”, the connector created for the AxoRouter will be named “my-axorouter-otlp-rule”.
If the Suffix field is empty, the name of the connector rule is used instead.
-
(Optional) Enter a description for the rule.
-
If needed, enable the Classify and Parse preprocessing steps so AxoRouter automatically identifies and parses messages sent by supported data sources. If your source is not listed, contact us.
Enabling these options processes all data received by the connectors created based on this connector rule. If you want to apply classification and parsing more selectively, you can use the Classify and Parse processing steps in your Flows.
Note that the Parse processing step requires Classify to be enabled. Parsing automatically parses the data from the content of the message, and replaces the message content (the log.body field in the internal message schema) with the structured information.
-
(Optional) If needed, you can add other preprocessing steps to the connector rule. You can use the same processing steps as in flows. These can be useful if you use a dedicated connector rule to fix or annotate data coming from some specific sources.
-
Select the protocol you want to use: HTTPS or HTTP.
-
Set the port number where the webhook will receive the POST requests, for example, 8080.
-
Set the endpoints where the webhook will receive data in the Paths field. You can use static paths, or regular expressions. In regular expressions you can use named capture groups to automatically set macro values in AxoRouter. For example, the /events/(?P<HOST>.*) path sets the hostname for the data received in the request based on the second part of the URL: a request to the /events/my-example-host URL sets the host field of that message to my-example-host.
By default, the /events and /events/(?P<HOST>.*) paths are active.
-
For HTTPS endpoints, set the path to the Key and the Certificate files. AxoRouter uses these to encrypt the TLS channel. You can use absolute paths (for example, /etc/axorouter/user-config/tls-key.pem). The key and the certificate must be in PEM format.
You must manually copy these files to their place; currently you can’t distribute them from Axoflow.
-
Select Create.
Axoflow automatically creates connectors on the AxoRouter hosts that match the Router Selector.
Make sure to enable the ports you’ve configured in the connector on the firewall of the AxoRouter host, and on other firewalls between the AxoRouter host and your data sources.
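For example, if the AxoRouter host uses firewalld, opening the default webhook port might look like the following sketch (adjust the port number to your configuration):
sudo firewall-cmd --permanent --add-port=8080/tcp
sudo firewall-cmd --reload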
-
Send a test request to the webhook.
-
Open the Overview tab of the AxoRouter host where you’ve created the webhook connector and check the IP address or FQDN of the host.
-
Open a terminal on your computer and send a POST request to the path you’ve configured in the webhook.
curl -X POST -H 'Content-Type: application/json' --data '{"host":"myhost", "body": "sample webhook message" }' <router-IP-address>:<webhook-port-number>/<webhook-path>/
For example, if you’ve used the default values of the webhook connector you can use:
curl -X POST -H 'Content-Type: application/json' --data '{"host":"myhost", "body": "sample webhook message" }' <router-IP-address>/events/
Expected output:
Note
You can also use Log tapping on the AxoRouter host to see the incoming messages.
The AxoRouter webhook connector adds the following fields to the meta variable:
| field | value |
| --- | --- |
| meta.connector.type | webhook |
| meta.connector.name | <name of the connector> |
| meta.connector.port | <port of the connector> |
| meta.connector.labels.product | Default value: webhook |
| meta.connector.labels.vendor | Default value: generic |
9.6 - Windows Event Collector (WEC)
The AxoRouter Windows Events connector can receive Windows Event Logs by running a Windows Event Collector (WEC) server. After enabling the Windows Events connector, you can configure your Microsoft Windows hosts to forward their event logs to AxoRouter using Windows Event Forwarding (WEF).
Windows Event Forwarding (WEF) reads any operational or administrative event logged on a Windows host and forwards the events you choose to a Windows Event Collector (WEC) server - in this case, AxoRouter.
Prerequisites
When using TLS authentication, you’ll need:
- A CA certificate (in PEM format) that AxoRouter uses to authenticate the clients.
- A certificate and the matching private key (in PEM format) that AxoRouter shows to the clients.
These files must be available on the AxoRouter host, and readable by the axorouter service for the connector to work.
Add new Windows Event Log connector
To add a new connector to an AxoRouter host, complete the following steps.
Note
Currently you can configure only one Windows Events connector per AxoRouter instance.
-
Select Connectors > Create new rule. (Alternatively, you can select Add Connector > Create a connector rule on the Connectors page of an AxoRouter host.)

-
Select Windows Events.
-
Configure the connector rule.
-
Enter a name for the connector rule into the Rule Name field.

-
(Optional) Add labels to the connector rule. You will be able to use these labels in Flow Processing steps, for example, in the Query field of Select Messages steps.
-
Set the Router Selector for the connector rule. The selector determines which AxoRouter instances will have a connector based on this connector rule.

- If you leave the Router Selector field empty, the rule will match every AxoRouter instance.
- To select only a specific AxoRouter instance, set the name field to the name of the instance.
- If you set multiple fields in the selector, the connector rule applies only to AxoRouter instances that match all elements of the selector. (There is an AND relationship between the fields.)
-
(Optional) Enter a Suffix for the connector rule. This suffix will be used in the name of the connector instances created on the AxoRouter hosts. For example, if the name of a matching AxoRouter instance is “my-axorouter”, and the suffix of the rule is “otlp-rule”, the connector created for the AxoRouter will be named “my-axorouter-otlp-rule”.
If the Suffix field is empty, the name of the connector rule is used instead.
-
(Optional) Enter a description for the rule.
-
If needed, enable the Classify and Parse preprocessing steps so AxoRouter automatically identifies and parses messages sent by supported data sources. If your source is not listed, contact us.
Enabling these options processes all data received by the connectors created based on this connector rule. If you want to apply classification and parsing more selectively, you can use the Classify and Parse processing steps in your Flows.
Note that the Parse processing step requires Classify to be enabled. Parsing extracts structured data from the content of the message, and replaces the message content (the log.body field in the internal message schema) with the structured information.
-
(Optional) If needed, you can add other preprocessing steps to the connector rule. You can use the same processing steps as in flows. These can be useful if you use a dedicated connector rule to fix or annotate data coming from some specific sources.
-
Configure the protocol-level settings of the connector.

-
Set the Hostname field. The clients will use this hostname to address the connector. Note that:
- The Common Name of the server’s certificate (set in the following steps) must contain this hostname, otherwise the clients will reject the connection.
- You’ll have to use this hostname when configuring the Subscription Manager address in the Group Policy Editor.
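For reference, the Subscription Manager address follows the standard WEF format and includes this hostname and the connector port; a sketch with placeholder values:
Server=HTTPS://<hostname>:5986/wsman/SubscriptionManager/WEC,Refresh=60,IssuerCA=<CA-certificate-thumbprint>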
-
(Optional) If for some reason you don’t want to run the connection on the default port (5986), adjust the Port field.
-
Set the paths for the certificates and keys used for the TLS-encrypted communication with the clients.
Use absolute paths (for example, /etc/axorouter/user-config/tls-key.pem). The key and the certificate must be in PEM format. Make sure that these files are available on the AxoRouter host; currently you can’t distribute them from Axoflow Console.
- CA certificate path: The CA certificate that AxoRouter uses to authenticate the clients. If you want to limit which clients are accepted, set the More options > Certificate subject filter field.
- Server certificate path: The certificate that AxoRouter shows to the clients.
- Server private key path: The private key of the server certificate.
-
Configure the subscriptions of the connector.

-
Select Add new Subscription.
-
(Optional) Set a name for the subscription. If you leave it empty, Axoflow Console automatically generates a name.
-
Enter the event filter query into the Query field. This query specifies which events are collected by the subscription. For details on the query syntax, see the Microsoft documentation.
A single query can retrieve events from a maximum of 256 different channels.
For example, the following query retrieves every event from the Security, System, Application, and Setup channels.
<Query Id="0">
  <Select Path="Application">*</Select>
  <Select Path="Security">*</Select>
  <Select Path="Setup">*</Select>
  <Select Path="System">*</Select>
</Query>
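If you need a more selective subscription, the query syntax also supports XPath filters. For example, the following sketch (using the standard Windows event query syntax) collects only successful logon events (Event ID 4624) from the Security channel:
<Query Id="0">
  <Select Path="Security">*[System[(EventID=4624)]]</Select>
</Query>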
-
(Optional) If needed, you can configure other low-level options in the More options section. For details, see Additional options.
-
Select Create.
Axoflow automatically creates connectors on the AxoRouter hosts that match the Router Selector.
Make sure to enable the ports you’ve configured in the connector on the firewall of the AxoRouter host, and on other firewalls between the AxoRouter host and your data sources.
-
Configure Windows Event Forwarding (WEF) on your clients to forward their events to the AxoRouter WEC connector.
When configuring the Subscription Manager address in the Group Policy Editor, use the hostname you’ve set in the connector.

Additional options
You can set the following options of the WEC connector under Subscriptions > More options.

-
Certificate subject filter: A simple string to filter the clients based on the Common Name of their certificate. You can use the * and ? wildcard characters.
-
UUID: A unique ID for the subscription. If empty, Axoflow Console automatically generates it.
-
Heartbeat interval: The number of seconds before the client sends a heartbeat message. The client sends heartbeat messages if it has no new events to send. Default value: 3600s
-
Connection retry interval: Time between reconnection attempts. Default value: 60s
-
Connection retry count: Number of times the client will attempt to reconnect if AxoRouter is unreachable. Default value: 10
-
Max time: The maximum number of seconds the client aggregates new events before sending them in a batch. Default value: 30s
-
Max elements: The maximum number of events that the client aggregates before sending them in a batch. By default it’s empty, meaning that only the Max time and Max envelope size options limit the aggregation. Default value: empty
-
Max envelope size: The maximum number of bytes in the SOAP envelope used to deliver the events. Default value: 512000 bytes
-
Locale: The language in which rendering information is expected, for example, en-US. Default value: Client choose
-
Data locale: The language in which numerical data is expected to be formatted, for example, en-US. Default value: Client choose
-
Read existing events: If enabled (Yes), the event source sends:
- all existing events that match the filter, and
- any events that subsequently occur for that event source.
If disabled (No), existing events will be ignored.
Default value: No
-
Ignore channel error: If this option is disabled, subscription queries that result in errors terminate the processing on the clients. Enable this option to ignore such errors. Default value: Yes
-
Content format: Determines whether to include rendering information (RenderedText) with the events or not (Raw). Default value: Raw
The AxoRouter Windows Events connector adds the following fields to the meta variable:
| field | value |
| --- | --- |
| meta.connector.type | windowsEvents |
| meta.connector.name | <name of the connector> |
| meta.connector.port | <port of the connector> |
9.7 - Vendors
Prerequisites
Steps
To onboard a source that is specifically supported by Axoflow, complete the following steps. Onboarding allows you to collect metrics about the host, and display the host on the Topology page.
-
Open the Axoflow Console.
-
Select Topology.
-
Select Create New Item > Source.

-
If the source is already sending logs to an AxoRouter instance that is registered in the Axoflow Console, select Detected, then select the source.
Otherwise, select the type of the source you want to onboard, and follow the on-screen instructions.

-
Connect the source to the destination or AxoRouter instance it’s sending logs to.
-
Select Topology > Create New Item > Path.

-
Select your data source in the Source host field.

-
Select the target router or aggregator this source is sending its data to in the Target host field, for example, axorouter.
-
Select the Target connector. The connector determines how the destination receives the data (for example, using which protocol or port).
-
Select Create. The new path appears on the Topology page.

-
Configure the source to send logs to an AxoRouter instance. Specific instructions regarding individual vendors are listed below, along with default metadata (labels) and specific metadata for Splunk.
Note
Unless instructed otherwise, configure your source to send the logs to the Syslog connector of AxoRouter, using the appropriate port. Use RFC5424 if the source supports it.
- 514 UDP and TCP for RFC3164 (BSD-syslog) and RFC5424 (IETF-syslog) formatted traffic. AxoRouter automatically recognizes and handles both formats.
- 601 TCP for RFC5424 (IETF-syslog) and RFC3164 (BSD-syslog) formatted traffic. AxoRouter automatically recognizes and handles both formats.
- 6514 TCP for TLS-encrypted syslog traffic.
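For example, on a Linux source that runs rsyslog, forwarding all logs to the AxoRouter Syslog connector over TCP port 601 in RFC5424 format might look like the following sketch (replace <axorouter-address> with the address of your AxoRouter instance):
*.* @@<axorouter-address>:601;RSYSLOG_SyslogProtocol23Format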
9.7.1 - A10 Networks
9.7.1.1 - vThunder
vThunder:
Delivers application load balancing, traffic management, and DDoS protection for enterprise networks.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | a10networks |
| product | vthunder |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| a10networks:vThunder:cef | a10networks:vThunder | netwaf |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: A10_LOAD_BALANCER.
9.7.2 - Amazon
9.7.2.1 - CloudWatch
CloudWatch:
Monitors AWS resources and applications by collecting metrics, logs, and setting alarms.
Axoflow can collect data from your Amazon CloudWatch. At a high level, the process looks like this:
- Deploy an Axoflow Cloud Connector that will collect the data from your CloudWatch. Axoflow Cloud Connector is a simple container that you can deploy into AWS, another cloud provider, or on-prem.
- The connector forwards the collected data to the OpenTelemetry connector of an AxoRouter instance. This AxoRouter can be deployed within AWS, another cloud provider, or on-prem.
- Configure a Flow on Axoflow Console that processes and routes the collected data to your destination (for example, Splunk or another SIEM).
Prerequisites
Steps
To collect data from AWS CloudWatch, complete the following steps.
-
Deploy an Axoflow Cloud Connector.
-
Access the Kubernetes node or virtual machine where you want to deploy Axoflow Cloud Connector.
-
Set the following environment variable to the IP address of the AxoRouter where you want to forward the data from CloudWatch. This IP address must be accessible from the connector. You can find the IP address of AxoRouter on the Hosts > AxoRouter > Overview page.
export AXOROUTER_ENDPOINT=<AxoRouter-IP-address>
-
(Optional) By default, the connector stores positional and other persistence-related data in the /etc/axoflow-otel-collector/storage directory. If you want to use a different directory, set the STORAGE_DIRECTORY environment variable.
-
Run the following command to generate a UUID for the connector. Axoflow Console will use this ID to identify the connector.
UUID_FULL=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid 2>/dev/null || python3 -c "import uuid; print(uuid.uuid4())")
export AXOCLOUDCONNECTOR_DEVICE_ID=$(echo "$UUID_FULL" | cut -d'-' -f1)
-
Set TLS encryption to secure the communication between Axoflow Cloud Connector and AxoRouter.
Configure the TLS-related settings of Axoflow Cloud Connector using the following environment variables.
| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| AXOROUTER_TLS_INSECURE | No | false | Disables TLS encryption if set to true |
| AXOROUTER_TLS_INCLUDE_SYSTEM_CA_CERTS_POOL | No | false | Set to true to use the system CA certificates |
| AXOROUTER_TLS_CA_FILE | No | - | Path to the CA certificate file used to validate the certificate of AxoRouter |
| AXOROUTER_TLS_CA_PEM | No | - | PEM-encoded CA certificate |
| AXOROUTER_TLS_INSECURE_SKIP_VERIFY | No | false | Set to true to disable TLS certificate verification of AxoRouter |
| AXOROUTER_TLS_CERT_FILE | No | - | Path to the certificate file of Axoflow Cloud Connector |
| AXOROUTER_TLS_CERT_PEM | No | - | PEM-encoded client certificate |
| AXOROUTER_TLS_KEY_FILE | No | - | Path to the client private key file of Axoflow Cloud Connector |
| AXOROUTER_TLS_KEY_PEM | No | - | PEM-encoded client private key |
| AXOROUTER_TLS_MIN_VERSION | No | 1.2 | Minimum TLS version to use |
| AXOROUTER_TLS_MAX_VERSION | No | - | Maximum TLS version to use |
Note
You’ll have to include the TLS-related environment variables you set in the docker command used to deploy Axoflow Cloud Connector.
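For example, to validate the AxoRouter certificate against a custom CA and present a client certificate, you might set variables like the following sketch (the file paths are hypothetical):
export AXOROUTER_TLS_CA_FILE=/etc/axoflow/ca.pem
export AXOROUTER_TLS_CERT_FILE=/etc/axoflow/client-cert.pem
export AXOROUTER_TLS_KEY_FILE=/etc/axoflow/client-key.pem
export AXOROUTER_TLS_MIN_VERSION=1.2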
-
Configure the authentication that the Axoflow Cloud Connector will use to access CloudWatch. Set the environment variables for the authentication method you want to use.
-
AWS Profile with a configuration file: Set the region and the AWS_PROFILE environment variables.
export AWS_PROFILE=""
export AWS_REGION=""
-
AWS Credentials: To use AWS access keys, set an access key and a matching secret.
export AWS_ACCESS_KEY_ID=""
export AWS_SECRET_ACCESS_KEY=""
export AWS_REGION=""
-
EC2 instance profile: No credential-related environment variables are needed; the connector uses the credentials of the EC2 instance profile.
-
Deploy the Axoflow Cloud Connector. The exact command depends on the authentication method and the TLS settings you want to configure.
-
AWS Profile with a configuration file: Set the region and the AWS_PROFILE. Also, pass the TLS-related settings you’ve set earlier.
docker run --rm \
-v "${STORAGE_DIRECTORY}":"${STORAGE_DIRECTORY}" \
-e AWS_PROFILE="${AWS_PROFILE}" \
-e AWS_REGION="${AWS_REGION}" \
-e AWS_SDK_LOAD_CONFIG=1 \
-e AXOROUTER_ENDPOINT="${AXOROUTER_ENDPOINT}" \
-e STORAGE_DIRECTORY="${STORAGE_DIRECTORY}" \
-e AXOCLOUDCONNECTOR_DEVICE_ID="${AXOCLOUDCONNECTOR_DEVICE_ID}" \
-e <TLS-related-environment-variable>="${<TLS-related-environment-variable>}" \
-v "${HOME}/.aws:/cloudconnectors/.aws:ro" \
ghcr.io/axoflow/axocloudconnectors:latest
-
AWS Credentials: To use AWS access keys, set an access key and a matching secret. Also, pass the TLS-related settings you’ve set earlier.
docker run --rm \
-v "${STORAGE_DIRECTORY}":"${STORAGE_DIRECTORY}" \
-e AWS_ACCESS_KEY_ID="${AWS_ACCESS_KEY_ID}" \
-e AWS_SECRET_ACCESS_KEY="${AWS_SECRET_ACCESS_KEY}" \
-e AWS_REGION="${AWS_REGION}" \
-e AXOROUTER_ENDPOINT="${AXOROUTER_ENDPOINT}" \
-e AXOCLOUDCONNECTOR_DEVICE_ID="${AXOCLOUDCONNECTOR_DEVICE_ID}" \
-e <TLS-related-environment-variable>="${<TLS-related-environment-variable>}" \
-e STORAGE_DIRECTORY="${STORAGE_DIRECTORY}" \
ghcr.io/axoflow/axocloudconnectors:latest
-
EC2 instance profile: Pass the TLS-related settings you’ve set earlier.
docker run --rm \
-v "${STORAGE_DIRECTORY}":"${STORAGE_DIRECTORY}" \
-e AWS_REGION="${AWS_REGION}" \
-e AXOROUTER_ENDPOINT="${AXOROUTER_ENDPOINT}" \
-e AXOCLOUDCONNECTOR_DEVICE_ID="${AXOCLOUDCONNECTOR_DEVICE_ID}" \
-e <TLS-related-environment-variable>="${<TLS-related-environment-variable>}" \
-e STORAGE_DIRECTORY="${STORAGE_DIRECTORY}" \
ghcr.io/axoflow/axocloudconnectors:latest
The Axoflow Cloud Connector starts forwarding logs to the AxoRouter instance.
-
Add the appliance to Axoflow Console.
- Open the Axoflow Console and select Topology.
- Select Create New Item > Source.
- Select AWS CloudWatch.
- Enter the IP address and the FQDN of the Axoflow Cloud Connector instance.
- Select Create.
-
Create a Flow to route the data from the AxoRouter instance to a destination. You can use the Labels of this source to select messages from this source.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | amazon |
| product | aws-cloudwatch |
| format | otlp |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| aws:cloudwatchlogs | aws-activity |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: AWS_CLOUDWATCH.
9.7.3 - Broadcom
9.7.3.1 - Edge Secure Web Gateway (Edge SWG)
Edge Secure Web Gateway (Edge SWG):
Secures web traffic through policy enforcement, SSL inspection, and real-time threat protection.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | broadcom |
| product | edge-swg |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| bluecoat:proxysg:access:syslog | netops |
| bluecoat:proxysg:access:kv | netproxy |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: BROADCOM_EDGE_SWG.
Earlier name/vendor
- Blue Coat Proxy
- Blue Coat ProxySG
- Symantec ProxySG
- Symantec Edge Secure Web Gateway
- Symantec Edge SWG
9.7.3.2 - NSX
NSX:
Provides network virtualization, micro-segmentation, and security for software-defined data centers.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Configure your NSX appliances, NSX Edges, and hypervisors to send their logs to the Syslog (autodetect and classify) connector of an AxoRouter instance. Use either:
- The TCP protocol (port 601 when using the default connector), or
- TLS-encrypted TCP protocol (port 6514 when using the default connector)
For details on configuring NSX, see Configure Remote Logging in the NSX Administration Guide.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | broadcom |
| product | nsx |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| vmware:nsxlog:dfwpktlogs | netfw |
| vmware:nsxlog:firewall-pktlog | netfw |
| vmware:nsxlog:nsx | infraops |
| vmware:nsxlog:nsxv | infraops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: VMWARE_NSX.
Earlier name/vendor
- VMware NSX
- NSX-T Data Center
9.7.4 - Check Point
9.7.4.1 - Anti-Bot
Anti-Bot:
Detects and blocks botnet communications and command-and-control traffic to prevent malware infections.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | anti-bot |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:endpoint | netops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_EDR.
9.7.4.2 - Anti-Malware
Anti-Malware:
Protects endpoints from viruses, ransomware, and other malware using signature and behavior analysis.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | anti-malware |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:endpoint | netops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_EDR.
9.7.4.3 - Anti-Phishing
Anti-Phishing:
Prevents phishing attacks by analyzing email content and links to block credential theft attempts.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | anti-phishing |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:email | email |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_EMAIL.
9.7.4.4 - Anti-Spam and Email Security
Anti-Spam and Email Security:
Blocks spam and malicious email content using reputation checks and email filtering techniques.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | antispam-emailsecurity |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:email | email |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_EMAIL.
9.7.4.5 - CPMI Client
CPMI Client:
Legacy Check Point management client used to interface with security policies and logs.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | cpmi-client |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cp_log | netops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_FIREWALL.
9.7.4.6 - cpmidu_update_tool
cpmidu_update_tool:
Utility used to update configuration and database files for Check Point Multi-Domain environments.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | cpmidu-update-tool |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:audit | netops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_FIREWALL.
9.7.4.7 - Database Tool
Database Tool:
Command-line tool to extract, query, or update Check Point configuration and policy databases.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | database-tool |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:audit | netops |
9.7.4.8 - Edge Secure Web Gateway (Edge SWG)
Edge Secure Web Gateway (Edge SWG):
Provides configuration profiles for secure mobile access and web filtering on iOS devices.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | ios-profiles |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:network | netops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_HARMONY.
9.7.4.9 - Endpoint Compliance
Endpoint Compliance:
Checks endpoint status and posture before granting network access, enforcing security policies.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | endpoint-compliance |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:endpoint | netops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_EDR.
9.7.4.10 - Endpoint Management
Endpoint Management:
Centralized platform for managing endpoint protection, updates, and policy enforcement.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | endpoint-management |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:endpoint | netops |
9.7.4.11 - Forensics
Forensics:
Analyzes security incidents on endpoints to uncover attack vectors and malicious activity.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | forensics |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:endpoint | netops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_EDR.
9.7.4.12 - GO Password Reset
GO Password Reset:
Facilitates secure password reset processes for users across integrated environments.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | go-password-reset |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:audit | netops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_AUDIT.
9.7.4.13 - HTTPS Inspection
HTTPS Inspection:
Decrypts and inspects HTTPS traffic to detect hidden threats within encrypted web sessions.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | https-inspection |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:firewall | netfw |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_FIREWALL.
9.7.4.14 - IPS
IPS:
Detects and blocks known and unknown exploits, malware, and vulnerabilities in network traffic.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | ips |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:ids | netids |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_EDR.
9.7.4.15 - MDS Query Tool
MDS Query Tool:
CLI tool for querying multi-domain configurations and policies in Check Point environments.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | mds-query-tool |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cp_log | netops |
9.7.4.16 - Media Encryption & Port Protection
Media Encryption & Port Protection:
Secures USB ports and encrypts removable media to protect sensitive data on endpoints.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | media-port |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:endpoint | netops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_EDR.
9.7.4.17 - Mobile Access
Mobile Access:
Enables secure remote access to corporate apps and data from mobile devices.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | mobile-access |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:network | netops |
9.7.4.18 - Next-Generation Firewall (NGFW)
Next-Generation Firewall (NGFW):
Next-generation firewall providing intrusion prevention, application control, and threat protection.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | firewall |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:firewall | netfw |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_FIREWALL.
9.7.4.19 - QoS
QoS:
Implements bandwidth control and traffic prioritization policies for optimized network usage.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | qos |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:firewall | netfw |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_FIREWALL.
9.7.4.20 - Quantum
Quantum:
Unified threat prevention platform delivering firewall, VPN, and intrusion prevention capabilities.
If you’d like to send data from this source to AxoRouter, contact our support team for details.
9.7.4.21 - Query Database
Query Database:
Accesses and queries internal policy or object databases in Check Point systems.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | query-database |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:audit | netops |
9.7.4.22 - SmartConsole
SmartConsole:
Graphical interface for managing Check Point security policies, logs, and monitoring.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | smartconsole |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:audit | netops |
9.7.4.23 - SmartUpdate
SmartUpdate:
Tool for updating and managing licenses, software, and hotfixes in Check Point environments.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | smartupdate |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:audit | netops |
9.7.4.24 - Threat Emulation and Anti-Exploit
Threat Emulation and Anti-Exploit:
Emulates files in a virtual environment to detect and block advanced persistent threats and exploits.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | threat-emulation |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:endpoint | netops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_EDR.
9.7.4.25 - URL Filtering
URL Filtering:
Controls and logs web access based on URL categories and custom site rules to enforce policy.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | url-filtering |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:firewall | netfw |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CHECKPOINT_FIREWALL.
9.7.4.26 - Web API
Web API:
Provides programmatic access to Check Point security management through RESTful API endpoints.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | checkpoint |
| product | web-api |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:
| sourcetype | source | index |
| --- | --- | --- |
| cp_log | checkpoint:audit | netops |
9.7.5 - Cisco
9.7.5.1 - Access Control System (ACS)
Access Control System (ACS):
Centralizes network access control with RADIUS and TACACS+ for authentication and authorization.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | acs |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:acs | netauth |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CISCO_ACS.
9.7.5.2 - Adaptive Security Appliance (ASA)
Adaptive Security Appliance (ASA):
Provides stateful firewall, VPN support, and advanced threat protection for secure network perimeters.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | asa |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:asa | netfw |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CISCO_ASA_FIREWALL.
9.7.5.3 - Application Control Engine (ACE)
Application Control Engine (ACE):
Provides application-aware load balancing, SSL offload, and traffic control for Cisco networks.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | ace |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:ace | netops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CISCO_ACE.
9.7.5.4 - Cisco IOS
Cisco IOS:
Network operating system for Cisco routers and switches, enabling routing, switching, and security.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | ios |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:ios | netops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CISCO_IOS.
9.7.5.5 - Digital Network Architecture (DNA)
Digital Network Architecture (DNA):
Provides software-defined networking, policy automation, and analytics for enterprise infrastructure.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | dna |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:dna | netops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CISCO_DNAC.
9.7.5.6 - Email Security Appliance (ESA)
Email Security Appliance (ESA):
Protects email systems from spam, phishing, malware, and data loss with advanced threat filtering.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | esa |
| format | text-plain or cef |
Note that the device can be configured to send plain syslog text or CEF-formatted output.
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, index, and source settings:
| sourcetype | index | source |
| --- | --- | --- |
| cisco:esa:http | email | esa:http |
| cisco:esa:textmail | email | esa:textmail |
| cisco:esa:amp | email | esa:amp |
| cisco:esa:antispam | email | esa:antispam |
| cisco:esa:system_logs | email | esa:system_logs |
| cisco:esa:system_logs | email | esa:euq_logs |
| cisco:esa:system_logs | email | esa:service_logs |
| cisco:esa:system_logs | email | esa:reportd_logs |
| cisco:esa:system_logs | email | esa:sntpd_logs |
| cisco:esa:system_logs | email | esa:smartlicense |
| cisco:esa:error_logs | email | esa:error_logs |
| cisco:esa:error_logs | email | esa:updater_logs |
| cisco:esa:content_scanner | email | esa:content_scanner |
| cisco:esa:authentication | email | esa:authentication |
| cisco:esa | email | program: <variable> |
| cisco:esa:cef | email | esa:consolidated |
Tested with: Splunk Add-on for Cisco ESA
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CISCO_EMAIL_SECURITY.
9.7.5.7 - Firepower
Firepower:
Provides next-gen firewall features including intrusion prevention, app control, and malware protection.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | firepower |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:firepower:syslog | netids |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CISCO_FIREPOWER_FIREWALL.
9.7.5.8 - Firepower Threat Defence (FTD)
Firepower Threat Defence (FTD):
Unifies firewall, VPN, and intrusion prevention into a single software for comprehensive threat defense.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | ftd |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:ftd | netfw |
9.7.5.9 - Firewall Services Module (FWSM)
Firewall Services Module (FWSM):
Delivers multi-context, high-performance firewall services integrated into Cisco Catalyst switches.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | fwsm |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:fwsm | netfw |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CISCO_FWSM.
9.7.5.10 - HyperFlex (HX, UCSH)
HyperFlex (HX, UCSH):
Infrastructure solution combining compute, storage, and networking in a single system.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | ucsh |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:ucsh:hx | infraops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CISCO_UCS.
9.7.5.11 - Identity Services Engine (ISE)
Identity Services Engine (ISE):
Manages network access control and enforces policies with user and device authentication capabilities.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
For details on configuring your Identity Services Engine to forward its logs to an AxoRouter instance, see Configure Remote Syslog Collection Locations in Cisco Identity Services Engine (ISE) Administrator Guide.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | ise |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:ise:syslog | netauth |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CISCO_ISE.
9.7.5.12 - Integrated Management Controller (IMC)
Integrated Management Controller (IMC):
Provides out-of-band server management for Cisco UCS, enabling hardware monitoring and configuration.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | cimc |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:cimc | infraops |
9.7.5.13 - IOS XR
IOS XR:
High-performance, modular network operating system for carrier-grade routing and scalability.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | xr |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:xr | netops |
9.7.5.14 - Meraki MX
Meraki MX:
Cloud-managed network appliance offering firewall, VPN, SD-WAN, and security in a single platform.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | meraki |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:meraki | netfw |
Tested with: TA-meraki
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CISCO_MERAKI.
9.7.5.15 - Private Internet eXchange (PIX)
Private Internet eXchange (PIX):
Legacy firewall appliance delivering stateful inspection and secure network access control.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | pix |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:pix | netfw |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CISCO_PIX_FIREWALL.
9.7.5.16 - TelePresence Video Communication Server (VCS)
TelePresence Video Communication Server (VCS):
Enables video conferencing control and call routing for Cisco TelePresence systems and endpoints.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | tvcs |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:tvcs | main |
9.7.5.17 - Unified Computing System Manager (UCSM)
Unified Computing System Manager (UCSM):
Centralized management platform for Cisco Unified Computing System (UCS) servers and resources.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | ucsm |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:ucs | infraops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CISCO_UCS.
9.7.5.18 - Unified Communications Manager (UCM)
Unified Communications Manager (UCM):
Delivers unified voice, video, messaging, and mobility services in enterprise IP telephony systems.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | ucm |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:ucm | netops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CISCO_UCM.
9.7.5.19 - Viptela
Viptela:
Software-defined WAN solution providing secure connectivity, centralized control, and traffic optimization.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cisco |
| product | viptela |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| cisco:viptela | netops |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CISCO_VIPTELA.
9.7.6 - Citrix
9.7.6.1 - Netscaler
Netscaler:
Offers application delivery, load balancing, and security features for optimized app performance.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | citrix |
| product | netscaler |
| format | text-plain |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| citrix:netscaler:appfw:cef | netfw |
| citrix:netscaler:syslog | netfw |
| citrix:netscaler:appfw | netfw |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CITRIX_NETSCALER_WEB_LOGS.
9.7.7 - Corelight
9.7.7.1 - Open Network Detection & Response (NDR)
Open Network Detection & Response (NDR):
Provides network detection and response by analyzing traffic for advanced threats and anomalous behavior.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | corelight |
| product | ndr-platform |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| corelight_alerts | main |
| corelight_conn | main |
| corelight_corelight | main |
| corelight_corelight_metrics_bro | main |
| corelight_corelight_metrics_iface | main |
| corelight_dhcp | main |
| corelight_dpd | main |
| corelight_etc_viz | main |
| corelight_evt_all | main |
| corelight_evt_http | main |
| corelight_evt_suri | main |
| corelight_files | main |
| corelight_ftp | main |
| corelight_http | main |
| corelight_http_red | main |
| corelight_idx | main |
| corelight_irc | main |
| corelight_kerberos | main |
| corelight_metrics_bro | main |
| corelight_metrics_iface | main |
| corelight_rdp | main |
| corelight_smb | main |
| corelight_smb_files | main |
| corelight_socks | main |
| corelight_ssh | main |
| corelight_ssh_red | main |
| corelight_ssl | main |
| corelight_st_base | main |
| corelight_suri | main |
| corelight_suricata_corelight | main |
| corelight_x509 | main |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CORELIGHT.
9.7.8 - CyberArk
9.7.8.1 - Privileged Threat Analytics (PTA)
Privileged Threat Analytics (PTA):
Analyzes privileged account behavior to detect threats and suspicious activity in real time.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cyberark |
| product | pta |
| format | cef |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| cyberark:pta:cef | main |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CYBERARK_PTA.
9.7.8.2 - Vault
Vault:
Stores and manages privileged credentials, session recordings, and access control policies securely.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | cyberark |
| product | vault |
| format | cef |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| cyberark:epv:cef | netauth |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: CYBERARK.
9.7.9 - F5 Networks
9.7.9.1 - BIG-IP
BIG-IP:
Provides load balancing, traffic management, and application security for optimized service delivery.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | f5 |
| product | bigip |
| format | text-plain \| JSON \| kv |

Note that the device can be configured to send plain syslog text, JSON, or key-value pairs.
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| f5:bigip:syslog | netops |
| f5:bigip:ltm:access_json | netops |
| f5:bigip:asm:syslog | netops |
| f5:bigip:apm:syslog | netops |
| f5:bigip:ltm:ssl:error | netops |
| f5:bigip:ltm:tcl:error | netops |
| f5:bigip:ltm:traffic | netops |
| f5:bigip:ltm:log:error | netops |
| f5:bigip:gtm:dns:request:irule | netops |
| f5:bigip:gtm:dns:response:irule | netops |
| f5:bigip:ltm:http:irule | netops |
| f5:bigip:ltm:failed:irule | netops |
| nix:syslog | netops |

Tested with: Splunk Add-on for F5 BIG-IP
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: F5_BIGIP_APM.
9.7.10 - FireEye
9.7.11 - Forcepoint
9.7.11.1 - Email Security
Email Security:
Protects email systems from spam, phishing, malware, and data exfiltration using advanced threat defense.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | forcepoint |
| product | email |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| forcepoint:email:cef | email |
| forcepoint:email:kv | email |
| forcepoint:email:leef | email |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: FORCEPOINT_EMAILSECURITY.
9.7.11.2 - Next-Generation Firewall (NGFW)
Next-Generation Firewall (NGFW):
Next-gen firewall with deep packet inspection, policy enforcement, and integrated intrusion prevention.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | forcepoint |
| product | firewall |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| websense:cg:cef | netproxy |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: FORCEPOINT_FIREWALL.
9.7.11.3 - WebProtect
WebProtect:
Provides web traffic filtering, malware protection, and data loss prevention for secure internet access.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | forcepoint |
| product | webprotect |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| websense:cg:cef | netproxy |
| websense:cg:kv | netproxy |
| websense:cg:leef | netproxy |
9.7.12 - Fortinet
9.7.12.1 - FortiGate firewalls
FortiGate firewalls:
Enterprise firewall platform offering threat protection, VPN, and traffic filtering for secure networking.
The following sections show you how to configure FortiGate Next-Generation Firewall (NGFW) devices to send their log data to Axoflow.
CAUTION:
Make sure to set data forwarding on your appliances/servers as described in this guide. Different settings like alternate message formats or ports might be valid, but can result in data loss or incorrect parsing.
Prerequisites
Steps
Note: The steps involving the FortiGate user interface are just for your convenience; for details, see the official FortiGate documentation.
- Log in to your FortiGate device. You need administrator privileges to perform the configuration.
- Register the address of your AxoRouter as an Address Object.
- Select Log & Report > Log Settings > Global Settings.
- Configure the following settings:
  - Event Logging: Click All.
  - Local traffic logging: Click All.
  - Syslog logging: Enable this option.
  - IP address/FQDN: Enter the address of your AxoRouter: %axorouter-ip%
- Click Apply. (A FortiOS CLI alternative to these device-side steps is sketched after this procedure.)
- Add the source to Axoflow Console.
  - Open the Axoflow Console and select Topology.
  - Select Create New Item > Source.
    - If the source is actively sending data to an AxoRouter instance, select Detected, then select your source.
    - Otherwise, select the vendor and product corresponding to your source from the Predefined sources, then enter the parameters of the source, like IP address and FQDN.
    Note: During log tapping, you can add hosts that are actively sending data to an AxoRouter instance by clicking Register source.
  - (Optional) Add custom labels as needed.
  - Select Create.
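If you prefer to configure the device from the FortiOS CLI instead of the web UI, the following minimal sketch shows syslog forwarding settings equivalent to the steps above. This is an assumption-based example, not part of the official procedure: verify the option names against your FortiOS version, and replace %axorouter-ip% with the address of your AxoRouter.

config log syslogd setting
    set status enable
    set server "%axorouter-ip%"
    set mode udp
    set port 514
end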
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | fortinet |
| product | fortigate |
| format | kv |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| fortigate_event | netops |
| fortigate_traffic | netfw |
| fortigate_utm | netfw |

Tested with: Fortinet FortiGate Add-On for Splunk technical add-on
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: FORTINET_FIREWALL.
9.7.12.2 - FortiMail
FortiMail:
Secures inbound and outbound email with spam filtering, malware protection, and advanced threat detection.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | fortinet |
| product | fortimail |
| format | kv |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| fml:log | email |

Tested with: FortiMail Add-on for Splunk technical add-on
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: FORTINET_FORTIMAIL.
9.7.12.3 - FortiWeb
FortiWeb:
Web application firewall protecting websites from attacks like XSS, SQL injection, and bot threats.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | fortinet |
| product | fortiweb |
| format | kv |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| fwb_log | netops |
| fwb_attack | netids |
| fwb_event | netops |
| fwb_traffic | netfw |

Tested with: Fortinet FortiWeb Add-on for Splunk technical add-on
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: FORTINET_FORTIWEB.
9.7.13 - Fortra
9.7.13.1 - Powertech SIEM Agent for IBM i
Powertech SIEM Agent for IBM i:
Monitors IBM i system activity and forwards security events for centralized analysis in SIEM platforms.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | fortra |
| product | powertech-siem-agent |
| format | cef |
| format | leef |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:

| sourcetype | source | index |
| --- | --- | --- |
| PowerTech:SIEMAgent:cef | PowerTech:SIEMAgent | netops |
| PowerTech:SIEMAgent:leef | PowerTech:SIEMAgent | netops |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: FORTRA_POWERTECH_SIEM_AGENT.
Earlier name/vendor
Powertech Interact
9.7.14 - General Unix/Linux host
9.7.14.1 - Generic Linux services
Generic Linux services:
A generic placeholder for program classifications.
These classifications include non-vendor specific services and applications commonly found on Linux/Unix hosts.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | nix |
| product | generic |
| service.name | dnsmasq, sshd |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| nix:syslog | netdns |
| nix:syslog | netops |

Tested with: Splunk Add-on for Infoblox
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: NIX_SYSTEM.
9.7.15 - Imperva
9.7.15.1 - Incapsula
Incapsula:
Cloud-based WAF, DDoS protection, and bot mitigation service for securing web applications and APIs.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | imperva |
| product | incapsula |
| format | cef |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:

| sourcetype | source | index |
| --- | --- | --- |
| cef | Imperva:Incapsula | netwaf |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: IMPERVA_CEF.
9.7.15.2 - SecureSphere
SecureSphere:
Provides on-prem web application, database, and file security with granular activity monitoring.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | imperva |
| product | securesphere |
| format | cef |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| imperva:waf:firewall:cef | netwaf |
| imperva:waf:security:cef | netwaf |
| imperva:waf | netwaf |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: IMPERVA_SECURESPHERE.
9.7.16 - Infoblox
9.7.16.1 - NIOS
NIOS:
Delivers secure DNS, DHCP, and IPAM (DDI) services with centralized network control and automation.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | infoblox |
| product | nios |
| format | cef |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:

| sourcetype | source | index |
| --- | --- | --- |
| infoblox:threatprotect | Infoblox:NIOS | netids |
| infoblox:dns | Infoblox:NIOS | netids |

Tested with: Splunk Add-on for Infoblox
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: INFOBLOX, INFOBLOX_DNS.
9.7.17 - Internet Systems Consortium (ISC)
9.7.17.1 - DHCPd
DHCPd:
Provides dynamic IP address assignment and network configuration for DHCP-enabled devices.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | isc |
| product | dhcpd |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:

| sourcetype | source | index |
| --- | --- | --- |
| isc:dhcpd | program:dhcpd | netipam |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: ISC_DHCP.
9.7.18 - Ivanti
9.7.18.1 - Connect Secure
Connect Secure:
Provides secure remote access to enterprise applications and resources through SSL VPN.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | ivanti |
| product | connect-secure |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:

| sourcetype | source | index |
| --- | --- | --- |
| pulse:connectsecure | | netfw |
| pulse:connectsecure:web | | netproxy |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: IVANTI_CONNECT_SECURE.
Earlier name/vendor
Pulse Connect Secure
9.7.19 - Juniper
9.7.19.1 - Junos OS
Junos OS:
Junos OS is the network operating system for Juniper physical and virtual networking and security products.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | juniper |
| product | junos |
| format | text-plain |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| juniper:junos:aamw:structured | netfw |
| juniper:junos:firewall | netfw |
| juniper:junos:firewall | netids |
| juniper:junos:firewall:structured | netfw |
| juniper:junos:firewall:structured | netids |
| juniper:junos:idp | netids |
| juniper:junos:idp:structured | netids |
| juniper:legacy | netops |
| juniper:junos:secintel:structured | netfw |
| juniper:junos:snmp | netops |
| juniper:structured | netops |

Tested with: Splunk Add-on for Juniper
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: JUNIPER_JUNOS.
9.7.20 - Kaspersky
9.7.20.1 - Endpoint Security
Endpoint Security:
Protects endpoints from malware, ransomware, and intrusions with antivirus, firewall, and threat detection.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | kaspersky |
| product | endpoint_security |
| format | text-plain \| cef \| leef |

Note that the device can be configured to send plain syslog text, LEEF, or CEF-formatted output.
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| kaspersky:cef | epav |
| kaspersky:es | epav |
| kaspersky:gnrl | epav |
| kaspersky:klau | epav |
| kaspersky:klbl | epav |
| kaspersky:klmo | epav |
| kaspersky:klna | epav |
| kaspersky:klpr | epav |
| kaspersky:klsr | epav |
| kaspersky:leef | epav |
| kaspersky:sysl | epav |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: KASPERSKY_ENDPOINT.
9.7.21 - MicroFocus
9.7.22 - Microsoft
9.7.22.1 - Azure Event Hubs
Azure Event Hubs:
Big data streaming platform to ingest and process events.
Axoflow can collect data from your Azure Event Hubs. At a high level, the process looks like this:
- Deploy an Axoflow Cloud Connector that will collect the data from your Event Hub. Axoflow Cloud Connector is a simple container that you can deploy into Azure, another cloud provider, or on-prem.
- The connector forwards the collected data to the OpenTelemetry connector of an AxoRouter instance. This AxoRouter can be deployed within Azure, another cloud provider, or on-prem.
- Configure a Flow on Axoflow Console that processes and routes the collected data to your destination (for example, Splunk or another SIEM).
Prerequisites
Steps
To collect data from Azure Event Hubs, complete the following steps.
- Deploy an Axoflow Cloud Connector into Azure.
- Access the Kubernetes node or virtual machine.
- Set the following environment variable to the IP address of the AxoRouter where you want to forward the data from Event Hubs. This IP address must be accessible from the connector. You can find the IP address of AxoRouter on the Hosts > AxoRouter > Overview page.
  export AXOROUTER_ENDPOINT=<AxoRouter-IP-address>
- (Optional) By default, the connector stores positional and other persistence-related data in the /etc/axoflow-otel-collector/storage directory. If you want to use a different directory, set the STORAGE_DIRECTORY environment variable.
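For example, a minimal sketch using an assumed alternative path (any writable directory that persists across restarts works):

  # Assumed example path, not an official default; adjust to your environment
  export STORAGE_DIRECTORY=/var/lib/axoflow-otel-collector/storage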
- Run the following command to generate a UUID for the connector. Axoflow Console will use this ID to identify the connector.
  UUID_FULL=$(uuidgen 2>/dev/null || cat /proc/sys/kernel/random/uuid 2>/dev/null || python3 -c "import uuid; print(uuid.uuid4())")
  export AXOCLOUDCONNECTOR_DEVICE_ID=$(echo "$UUID_FULL" | cut -d'-' -f1)
- Set TLS encryption to secure the communication between Axoflow Cloud Connector and AxoRouter. Configure the TLS-related settings of Axoflow Cloud Connector using the following environment variables.

| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| AXOROUTER_TLS_INSECURE | No | false | Disables TLS encryption if set to true |
| AXOROUTER_TLS_INCLUDE_SYSTEM_CA_CERTS_POOL | No | false | Set to true to use the system CA certificates |
| AXOROUTER_TLS_CA_FILE | No | - | Path to the CA certificate file used to validate the certificate of AxoRouter |
| AXOROUTER_TLS_CA_PEM | No | - | PEM-encoded CA certificate |
| AXOROUTER_TLS_INSECURE_SKIP_VERIFY | No | false | Set to true to disable TLS certificate verification of AxoRouter |
| AXOROUTER_TLS_CERT_FILE | No | - | Path to the certificate file of Axoflow Cloud Connector |
| AXOROUTER_TLS_CERT_PEM | No | - | PEM-encoded client certificate |
| AXOROUTER_TLS_KEY_FILE | No | - | Path to the client private key file of Axoflow Cloud Connector |
| AXOROUTER_TLS_KEY_PEM | No | - | PEM-encoded client private key |
| AXOROUTER_TLS_MIN_VERSION | No | 1.2 | Minimum TLS version to use |
| AXOROUTER_TLS_MAX_VERSION | No | - | Maximum TLS version to use |
Note: You’ll have to include the TLS-related environment variables you set in the docker command used to deploy Axoflow Cloud Connector.
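For example, a minimal sketch that validates the certificate of AxoRouter against a private CA and enforces at least TLS 1.2 (the CA file path is an assumed example):

  # Assumed location of your CA certificate; adjust to where your CA bundle is stored
  export AXOROUTER_TLS_CA_FILE=/etc/axoflow/ca.pem
  export AXOROUTER_TLS_MIN_VERSION=1.2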
- Set the AZURE_EVENTHUB_CONNECTION_STRING environment variable.
  export AZURE_EVENTHUB_CONNECTION_STRING="Endpoint=sb://<NamespaceName>.servicebus.windows.net/;SharedAccessKeyName=<KeyName>;SharedAccessKey=<KeyValue>;EntityPath=<EventHubName>"
- Deploy the Axoflow Cloud Connector by running the following command. Also pass the TLS-related settings you’ve set earlier.
  docker run --rm \
      -v "${STORAGE_DIRECTORY}":"${STORAGE_DIRECTORY}" \
      -e AZURE_EVENTHUB_CONNECTION_STRING="${AZURE_EVENTHUB_CONNECTION_STRING}" \
      -e AXOROUTER_ENDPOINT="${AXOROUTER_ENDPOINT}" \
      -e STORAGE_DIRECTORY="${STORAGE_DIRECTORY}" \
      -e <TLS-related-environment-variable>="${<TLS-related-environment-variable>}" \
      -e AXOCLOUDCONNECTOR_DEVICE_ID="${AXOCLOUDCONNECTOR_DEVICE_ID}" \
      ghcr.io/axoflow/axocloudconnectors:latest
The Axoflow Cloud Connector starts forwarding logs to the AxoRouter instance.
- Add the appliance to Axoflow Console.
  - Open the Axoflow Console and select Topology.
  - Select Create New Item > Source.
  - Select Azure Event Hubs.
  - Enter the IP address and the FQDN of the Axoflow Cloud Connector instance.
  - Select Create.
- Create a Flow to route the data from the AxoRouter instance to a destination. You can use the Labels of this source to select messages from this source.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | microsoft |
| product | azure-event-hubs |
| format | otlp |

Event Hubs Audit logs labels

| label | value |
| --- | --- |
| vendor | microsoft |
| product | azure-event-hubs-audit |
| format | otlp |

Event Hubs Provisioning logs labels

| label | value |
| --- | --- |
| vendor | microsoft |
| product | azure-event-hubs-provisioning |
| format | otlp |

Event Hubs Signin logs labels

| label | value |
| --- | --- |
| vendor | microsoft |
| product | azure-event-hubs-signin |
| format | otlp |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| mscs:azure:eventhub:log | azure-activity |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: AZURE_EVENTHUB.
9.7.22.2 - Cloud App Security (MCAS)
Cloud App Security (MCAS):
Monitors cloud app usage, detects anomalies, and enforces security policies across SaaS services.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | microsoft |
| product | cas |
| format | cef |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:

| sourcetype | source | index |
| --- | --- | --- |
| cef | microsoft:cas | main |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: MICROSOFT_DEFENDER_CLOUD_ALERTS.
9.7.22.3 - Windows hosts
Windows hosts:
Event logs from core services like security, system, DNS, and DHCP for operational and forensic analysis.
To collect event logs from Microsoft Windows hosts, Axoflow supports both agent-based and agentless methods.
Labels
Labels assigned to data received from Windows hosts depend on how AxoRouter receives the data. For details, see Windows host - agent based solution and Windows Event Collector (WEC).
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| windows:eventlog:snare | oswin |
| windows:eventlog:xml | oswin |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: WINEVTLOG, WINEVTLOG_XML, WINDOWS_DHCP, WINDOWS_DNS.
9.7.23 - MikroTik
9.7.23.1 - RouterOS
RouterOS:
Router operating system providing firewall, bandwidth management, routing, and hotspot functionality.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | mikrotik |
| product | routeros |
| format | text-plain |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| routeros | netfw |
| routeros | netops |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: MIKROTIK_ROUTER.
9.7.24 - NetFlow Logic
9.7.24.1 - NetFlow Optimizer
NetFlow Optimizer:
Aggregates and transforms flow data (NetFlow, IPFIX) into actionable security and performance insights.
The following sections show you how to configure NetFlow Optimizer to send its log data to Axoflow.
CAUTION:
Make sure to set data forwarding on your appliances/servers as described in this guide. Different settings like alternate message formats or ports might be valid, but can result in data loss or incorrect parsing.
Prerequisites
Steps
Note: The steps involving the NetFlow Optimizer user interface are just for your convenience; for details, see the official documentation.
- Log in to NetFlow Optimizer.
- Select Outputs, then click the plus sign to add an output to NetFlow Optimizer.
- Configure a Syslog (UDP) output:
  - Name: Enter a name for the output, for example, Axoflow.
  - Address: The IP address of the AxoRouter instance where you want to send the messages.
  - Port: Set this parameter to 514.
- Click Save.
- Add the source to Axoflow Console.
  - Open the Axoflow Console and select Topology.
  - Select Create New Item > Source.
    - If the source is actively sending data to an AxoRouter instance, select Detected, then select your source.
    - Otherwise, select the vendor and product corresponding to your source from the Predefined sources, then enter the parameters of the source, like IP address and FQDN.
    Note: During log tapping, you can add hosts that are actively sending data to an AxoRouter instance by clicking Register source.
  - (Optional) Add custom labels as needed.
  - Select Create.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | netflow |
| product | optimizer |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| flowintegrator | flowintegrator |
9.7.25 - Netgate
9.7.25.1 - pfSense
pfSense:
Open-source firewall and router platform with VPN, traffic shaping, and intrusion detection capabilities.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | netgate |
| product | pfsense |
| format | csv \| text-plain |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| pfsense:filterlog | netfw |
| pfsense:<program> | netops |

The pfsense:<program> variant is simply a generic Linux event that is generated by the underlying OS on the appliance.
Tested with: TA-pfsense
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: PFSENSE.
9.7.26 - Netmotion
9.7.26.1 - Netmotion
Netmotion:
Provides secure, optimized remote access with performance monitoring for mobile and distributed workforces.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | netmotion |
| product | netmotion |
| format | text-plain |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| netmotion:reporting | netops |
| netmotion:mobilityserver:nm_mobilityanalyticsappdata | netops |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: NETMOTION.
9.7.27 - NETSCOUT
9.7.27.1 - Arbor Edge Defense (AED)
Arbor Edge Defense (AED):
Edge-based DDoS protection and threat mitigation system to block attacks before they enter the network.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | netscout |
| product | arbor-edge |
| format | cef |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:

| sourcetype | source | index |
| --- | --- | --- |
| netscout:aed | netscout:aed | netids |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: ARBOR_EDGE_DEFENSE.
9.7.27.2 - Arbor Pravail (APS)
Arbor Pravail (APS):
Monitors and mitigates advanced persistent threats and malware with inline packet inspection.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | netscout |
| product | arbor-pravail |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:

| sourcetype | source | index |
| --- | --- | --- |
| netscout:aps | netscout:aps | netids |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: ARBOR_EDGE_DEFENSE.
Earlier name/vendor
- Arbor Networks Pravail (APS)
9.7.28 - OpenText
9.7.28.1 - ArcSight
ArcSight:
SIEM platform for collecting, correlating, and analyzing security event data across IT environments.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | opentext |
| product | arcsight |
| format | cef |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:

| sourcetype | source | index |
| --- | --- | --- |
| cef | ArcSight:ArcSight | main |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: ARCSIGHT_CEF.
Earlier name/vendor
MicroFocus ArcSight
9.7.28.2 - Self Service Password Reset (SSPR)
Self Service Password Reset (SSPR):
Allows users to securely reset their own passwords without IT assistance, reducing helpdesk load.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | opentext |
| product | sspr |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| sspr | netauth |
Earlier name/vendor
NetIQ Self Service Password Reset
9.7.29 - Palo Alto Networks
9.7.29.1 - Cortex XSOAR
Cortex XSOAR:
Security orchestration, automation, and response platform for threat detection and incident management.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | palo-alto-networks |
| product | cortex-xsoar |
| format | cef |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:

| sourcetype | source | index |
| --- | --- | --- |
| cef | tim:cef | infraops |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: PAN_XSOAR.
Earlier name/vendor
Threat Intelligence Management (TIM)
9.7.29.2 - Palo Alto firewalls
Palo Alto firewalls:
Firewall operating system delivering network security features including traffic control and threat prevention.
The following sections show you how to configure Palo Alto Networks Next-Generation Firewall devices to send their log data to Axoflow.
CAUTION:
Make sure to set data forwarding on your appliances/servers as described in this guide. Different settings like alternate message formats or ports might be valid, but can result in data loss or incorrect parsing.
Prerequisites
Steps
Note: The steps involving the Palo Alto Networks Next-Generation Firewall user interface are just for your convenience; for details, see the official PAN-OS® documentation.
- Log in to your firewall device. You need administrator privileges to perform the configuration.
- Configure a Syslog server profile.
  - Select Device > Server Profiles > Syslog.
  - Click Add and enter a Name for the profile, for example, axorouter.
  - Configure the following settings:
    - Syslog Server: Enter the IP address of your AxoRouter: %axorouter-ip%
    - Transport: Select TCP or TLS.
    - Port: Set the port to 601. (This is needed for the recommended IETF log format. If for some reason you need to use the BSD format, set the port to 514.)
    - Format: Select IETF.
    - Syslog logging: Enable this option.
  - Click OK.
- Configure syslog forwarding for Traffic, Threat, and WildFire Submission logs. For details, see Configure Log Forwarding in the official PAN-OS® documentation.
  - Select Objects > Log Forwarding.
  - Click Add.
  - Enter a Name for the profile, for example, axoflow.
  - For each log type, severity level, or WildFire verdict, select the Syslog server profile.
  - Click OK.
  - Assign the log forwarding profile to a security policy to trigger log generation and forwarding.
    - Select Policies > Security and select a policy rule.
    - Select Actions, then select the Log Forwarding profile you created (for example, axoflow).
    - For Traffic logs, select one or both of the Log at Session Start and Log At Session End options.
    - Click OK.
- Configure syslog forwarding for System, Config, HIP Match, and Correlation logs.
  - Select Device > Log Settings.
  - For System and Correlation logs, select each Severity level, select the Syslog server profile, and click OK.
  - For Config, HIP Match, and Correlation logs, edit the section, select the Syslog server profile, and click OK.
- Click Commit.
- Add the source to Axoflow Console.
  - Open the Axoflow Console and select Topology.
  - Select Create New Item > Source.
    - If the source is actively sending data to an AxoRouter instance, select Detected, then select your source.
    - Otherwise, select the vendor and product corresponding to your source from the Predefined sources, then enter the parameters of the source, like IP address and FQDN.
    Note: During log tapping, you can add hosts that are actively sending data to an AxoRouter instance by clicking Register source.
  - (Optional) Add custom labels as needed.
  - Select Create.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | palo-alto-networks |
| product | panos |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| pan:audit | netops |
| pan:globalprotect | netfw |
| pan:hipmatch | epintel |
| pan:traffic | netfw |
| pan:threat | netproxy |
| pan:system | netops |

Tested with: Palo Alto Networks Add-on for Splunk technical add-on
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: PAN_FIREWALL.
9.7.30 - Ping Identity
9.7.30.1 - PingAccess
PingAccess:
A centralized access security solution that provides secure access to applications and APIs down to the URL level.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | ping-identity |
| product | pingaccess |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:

| sourcetype | source | index |
| --- | --- | --- |
| source.engine | pingaccess | netauth |
Tested with: SecureAuth IdP Splunk App
9.7.31 - Powertech
9.7.32 - Progress
9.7.32.1 - Flowmon Anomaly Detection System (ADS)
Flowmon Anomaly Detection System (ADS):
Detects network anomalies and threats through flow-based behavior analysis and machine learning.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | progress |
| product | flowmon-ads |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| flowmon-ads | netids |
9.7.33 - Riverbed
9.7.33.1 - SteelConnect
SteelConnect:
Software-defined WAN solution for centralized network management and secure cloud connectivity.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | riverbed |
| product | steelconnect |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| riverbed:syslog | netops |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: STEELHEAD.
9.7.33.2 - SteelHead
SteelHead:
WAN optimization appliance that accelerates application performance and reduces bandwidth usage.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | riverbed |
| product | steelhead |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| riverbed:steelhead | netops |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: RIVERBED.
9.7.34 - RSA
9.7.34.1 - Authentication Manager
Authentication Manager:
Manages two-factor authentication using RSA SecurID tokens for secure access to enterprise resources.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | rsa |
| product | authentication-manager |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| rsa:securid:admin:syslog | netauth |
| rsa:securid:system:syslog | netauth |
| rsa:securid:runtime:syslog | netauth |
| rsa:securid:syslog | netauth |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: RSA_AUTH_MANAGER.
9.7.35 - rsyslog
Axoflow treats rsyslog sources as a generic syslog source. To send data from rsyslog to Axoflow, just configure rsyslog to send data to an AxoRouter instance using the syslog protocol, as shown in the sketch below.
Note that even if rsyslog is acting as a relay (receiving data from other clients and forwarding it to AxoRouter), it will be displayed as a data source on the Topology page.
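For example, a minimal rsyslog forwarding sketch (the address 192.0.2.10 is a placeholder for your AxoRouter, and 601/TCP is AxoRouter's default port for IETF-syslog; this is an illustration, not an official configuration):

# /etc/rsyslog.d/axorouter.conf -- forwards all messages to AxoRouter
# RSYSLOG_SyslogProtocol23Format emits RFC5424 (IETF-syslog) framing
*.* action(type="omfwd" target="192.0.2.10" port="601" protocol="tcp" template="RSYSLOG_SyslogProtocol23Format")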
Prerequisites
9.7.36 - SecureAuth
9.7.36.1 - Identity Platform
Identity Platform:
Delivers identity and access management with adaptive authentication and single sign-on capabilities.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | secureauth |
| product | idp |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| secureauth:idp | netops |

Tested with: SecureAuth IdP Splunk App
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: SECUREAUTH_SSO.
9.7.37 - Skyhigh Security
9.7.37.1 - Secure Web Gateway
Secure Web Gateway:
Inspects and filters web traffic to protect against malware, enforce policies, and prevent data loss.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | skyhigh |
| product | secure-web-gateway |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:

| sourcetype | source | index |
| --- | --- | --- |
| mcafee:wg:leef | mcafee:wg | netproxy |
| mcafee:wg:kv | mcafee:wg | netproxy |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: MCAFEE_WEBPROXY.
Earlier name/vendor
McAfee Secure Web Gateway
9.7.38 - SonicWall
9.7.38.1 - SonicWall
SonicWall:
Delivers firewall, VPN, and deep packet inspection to protect networks from cyber threats and intrusions.
The following sections show you how to configure SonicWall firewalls to send their log data to Axoflow.
CAUTION:
Make sure to set data forwarding on your appliances/servers as described in this guide. Different settings like alternate message formats or ports might be valid, but can result in data loss or incorrect parsing.
Prerequisites
Steps for SonicOS 7.x
Note: The steps involving the SonicWall user interface are just for your convenience; for details, see the official SonicWall documentation.
- Log in to your SonicWall device. You need administrator privileges to perform the configuration.
- Register the address of your AxoRouter as an Address Object.
  - Select MENU > OBJECT.
  - Select Match Objects > Addresses > Address objects.
  - Click Add Address.
  - Configure the following settings:
    - Name: Enter a name for the AxoRouter, for example, AxoRouter.
    - Zone Assignment: Select the correct zone.
    - Type: Select Host.
    - IP Address: Enter the IP address of your AxoRouter: %axorouter-ip%
  - Click Save.
- Set your AxoRouter as a syslog server.
  - Navigate to Device > Log > Syslog.
  - Select the Syslog Servers tab.
  - Click Add.
  - Configure the following options:
    - Name or IP Address: Select the Address Object of AxoRouter.
    - Server Type: Select Syslog Server.
    - Syslog Format: Select Enhanced.
If your Syslog server does not use default port 514, type the port number in the Port field.
By default, AxoRouter accepts data on the following ports (unless you’ve modified the default connector rules):
- 514 UDP and TCP for RFC3164 (BSD-syslog) and RFC5424 (IETF-syslog) formatted traffic. AxoRouter automatically recognizes and handles both formats.
- 601 TCP for RFC5424 (IETF-syslog) and RFC3164 (BSD-syslog) formatted traffic. AxoRouter automatically recognizes and handles both formats.
- 6514 TCP for TLS-encrypted syslog traffic.
- 4317 TCP for OpenTelemetry log data.
To receive data on other ports or other protocols, configure other connector rules for the AxoRouter host.
For TLS-encrypted syslog connections, create a new connector rule or edit an existing one, and configure the keys and certificates needed to encrypt the connections. For details, see Syslog.
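To quickly check that AxoRouter is accepting syslog traffic on one of these ports, you can send a hand-crafted test message from the command line, for example (192.0.2.10 is a placeholder for your AxoRouter address; this is an illustrative sketch, not an official procedure):

# Sends one BSD-syslog formatted test line to AxoRouter on 514/TCP
echo '<13>Oct 11 22:14:15 testhost axoflow-test: connectivity check' | nc -w 1 192.0.2.10 514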
Note: Make sure to enable the ports you’re using on the firewall of your host.
- Add the source to Axoflow Console.
  - Open the Axoflow Console and select Topology.
  - Select Create New Item > Source.
    - If the source is actively sending data to an AxoRouter instance, select Detected, then select your source.
    - Otherwise, select the vendor and product corresponding to your source from the Predefined sources, then enter the parameters of the source, like IP address and FQDN.
    Note: During log tapping, you can add hosts that are actively sending data to an AxoRouter instance by clicking Register source.
  - (Optional) Add custom labels as needed.
  - Select Create.
Steps for SonicOS 6.x
Note: The steps involving the SonicWall user interface are just for your convenience; for details, see the official SonicWall documentation.
- Log in to your SonicWall device. You need administrator privileges to perform the configuration.
- Register the address of your AxoRouter as an Address Object.
  - Select MANAGE > Policies > Objects > Address Objects.
  - Click Add.
  - Configure the following settings:
    - Name: Enter a name for the AxoRouter, for example, AxoRouter.
    - Zone Assignment: Select the correct zone.
    - Type: Select Host.
    - IP Address: Enter the IP address of your AxoRouter: %axorouter-ip%
  - Click Add.
- Set your AxoRouter as a syslog server.
  - Navigate to MANAGE > Log Settings > SYSLOG.
  - Click ADD.
  - Configure the following options:
    - Syslog ID: Enter an ID for the firewall. This ID will be used as the hostname in the log messages.
    - Name or IP Address: Select the Address Object of AxoRouter.
    - Server Type: Select Syslog Server.
    - Enable the Enhanced Syslog Fields Settings.
  - Click OK.
- Add the source to Axoflow Console.
  - Open the Axoflow Console and select Topology.
  - Select Create New Item > Source.
    - If the source is actively sending data to an AxoRouter instance, select Detected, then select your source.
    - Otherwise, select the vendor and product corresponding to your source from the Predefined sources, then enter the parameters of the source, like IP address and FQDN.
    Note: During log tapping, you can add hosts that are actively sending data to an AxoRouter instance by clicking Register source.
  - (Optional) Add custom labels as needed.
  - Select Create.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | dell |
| product | sonicwall |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| dell:sonicwall | netfw |

Tested with: Dell SonicWall Add-on for Splunk technical add-on
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: SONIC_FIREWALL.
9.7.39 - Superna
9.7.39.1 - Eyeglass
Eyeglass:
Manages and automates data protection, DR, and reporting for PowerScale environments.
The following sections show you how to configure Superna Eyeglass to send its log data to Axoflow.
CAUTION:
Make sure to set data forwarding on your appliances/servers as described in this guide. Different settings like alternate message formats or ports might be valid, but can result in data loss or incorrect parsing.
Prerequisites
Steps
Note: The steps involving the Superna Eyeglass user interface are just for your convenience; for details, see the official documentation.
- Log in to Ransomware Defender and open the Zero Trust menu.
- Click the plus sign to add a webhook target.
- Set the parameters of the webhook.
  - Name: Enter a name for the webhook, for example, Axoflow.
  - URL: Enter the URL of the webhook connector of the AxoRouter instance where you want to post messages.
  - Event Severity Filter: Select the severities of the events that you want to forward to the webhook.
  - Lifecycle filter: Select the lifecycle changes that trigger a post message to the webhook.
- Click Save, then the Test webhooks button. This sends a post message with a sample payload.
- Add the source to Axoflow Console.
  - Open the Axoflow Console and select Topology.
  - Select Create New Item > Source.
    - If the source is actively sending data to an AxoRouter instance, select Detected, then select your source.
    - Otherwise, select the vendor and product corresponding to your source from the Predefined sources, then enter the parameters of the source, like IP address and FQDN.
    Note: During log tapping, you can add hosts that are actively sending data to an AxoRouter instance by clicking Register source.
  - (Optional) Add custom labels as needed.
  - Select Create.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | superna |
| product | eyeglass |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| superna:eyeglass | main |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: SUPERNA_EYEGLASS.
9.7.40 - syslog-ng
By default, Axoflow treats syslog-ng sources as a generic syslog source.
- The easiest way to send data from syslog-ng to Axoflow is to configure it to send data to an AxoRouter instance using the syslog protocol.
- If you’re using syslog-ng Open Source Edition version 4.4 or newer, use the syslog-ng-otlp() driver to send data to AxoRouter using the OpenTelemetry Protocol, as shown in the sketch below.
Note that even if syslog-ng is acting as a relay (receiving data from other clients and forwarding it to AxoRouter), it will be displayed as a data source on the Topology page.
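For example, a minimal syslog-ng OSE 4.4+ sketch of such a destination (192.0.2.10 is a placeholder for your AxoRouter address, 4317 is AxoRouter's default OpenTelemetry port, and s_local stands in for an existing source in your configuration):

destination d_axorouter {
  # syslog-ng-otlp() sends log messages to AxoRouter over the OpenTelemetry Protocol
  syslog-ng-otlp(url("192.0.2.10:4317"));
};
log {
  source(s_local);
  destination(d_axorouter);
};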
Prerequisites
9.7.41 - Thales
9.7.41.1 - Vormetric Data Security Platform
Vormetric Data Security Platform:
Provides data encryption, key management, and access controls across cloud and on-premise environments.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | thales |
| product | vormetric |
| format | text-plain |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| thales:vormetric | netauth |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: VORMETRIC.
9.7.42 - Trellix
9.7.42.1 - Central Management System (CMS)
Central Management System (CMS):
Centralized policy and configuration management platform for Trellix security products.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | trellix |
| product | cms |
| format | cef |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:

| sourcetype | source | index |
| --- | --- | --- |
| trellix:cms | trellix:cms | netops |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: FIREEYE_CMS.
9.7.42.2 - Email Threat Prevention (ETP)
Email Threat Prevention (ETP):
Analyzes and filters email traffic to block phishing, malware, and targeted email-based threats.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | trellix |
| product | etp |
| format | cef |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| fe_etp | fireeye |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: FIREEYE_ETP.
9.7.42.3 - Endpoint Security (HX)
Endpoint Security (HX):
Detects and responds to advanced threats on endpoints using behavior-based analysis and threat intel.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | trellix |
| product | hx |
| format | text-json |
| format | cef |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:

| sourcetype | index |
| --- | --- |
| hx_json | fireeye |
| fe_json | fireeye |
| hx_cef_syslog | fireeye |

Tested with: FireEye Add-on for Splunk Enterprise
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: FIREEYE_HX.
Earlier name/vendor
FireEye Endpoint Security (HX)
9.7.42.4 - ePolicy Orchestrator (EPO)
ePolicy Orchestrator (EPO):
Centralized security management platform for deploying policies and monitoring protection across endpoints.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | trellix |
| product | epo |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:

| sourcetype | source | index |
| --- | --- | --- |
| mcafee:epo:syslog | trellix_endpoint_security | epav |

Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: MCAFEE_EPO.
Earlier name/vendor
McAfee ePolicy Orchestrator (EPO)
9.7.42.5 - MPS
MPS:
Appliance for detecting and blocking advanced threats through inline malware inspection.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | trellix |
| product | mps |
| format | cef |

Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, source, and index settings:

| sourcetype | source | index |
| --- | --- | --- |
| trellix:mps | trellix:mps | netops |
9.7.43 - Trend Micro
9.7.43.1 - Deep Security Agent
Deep Security Agent:
Provides anti-malware, intrusion prevention, and log inspection for cloud and on-prem servers.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | trend-micro |
| product | deep-security-agent |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| deepsecurity | epintel |
| deepsecurity-system_events | epintel |
| deepsecurity-intrusion_prevention | epintel |
| deepsecurity-firewall | epintel |
| deepsecurity-antimalware | epintel |
| deepsecurity-integrity_monitoring | epintel |
| deepsecurity-log_inspection | epintel |
| deepsecurity-web_reputation | epintel |
| deepsecurity-app_control | epintel |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: TRENDMICRO_DEEP_SECURITY.
9.7.44 - Ubiquiti
9.7.44.1 - Unifi
Unifi:
Manages network devices including routers, switches, and access points with centralized control.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | ubiquiti |
| product | unifi |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| ubnt | netops |
| ubnt:cef | netops |
| ubnt:dhcp | netops |
| ubnt:dnsmasq | netops |
| ubnt:edgeswitch | netops |
| ubnt:hostapd | netops |
| ubnt:link | netops |
| ubnt:mcad | netops |
| ubnt:sudo | netops |
| ubnt:wireless | netops |
| ubnt:fw | netfw |
| ubnt:fw:cef | netfw |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: UBIQUITI_SWITCH.
9.7.45 - Varonis
9.7.45.1 - DatAdvantage
DatAdvantage:
Monitors data access and permissions to detect insider threats and automate compliance reporting.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | varonis |
| product | datadvantage |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| varonis:ta | main |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: VARONIS.
9.7.46 - Vectra AI
Earlier name/vendor
Vectra Cognito
9.7.46.1 - X-Series
X-Series:
Detects and investigates cyberattacks across cloud, data center, and enterprise networks using AI.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | vectra |
| product | x-series |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| vectra:cognito:detect | main |
| vectra:cognito:accountdetect | main |
| vectra:cognito:accountscoring | main |
| vectra:cognito:audit | main |
| vectra:cognito:campaigns | main |
| vectra:cognito:health | main |
| vectra:cognito:hostscoring | main |
| vectra:cognito:accountlockdown | main |
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: VECTRA_DETECT.
9.7.47 - Zscaler appliances
9.7.47.1 - Zscaler Nanolog Streaming Service
Zscaler Nanolog Streaming Service:
Cloud-based secure internet gateway that inspects traffic for threats and enforces policies.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | zscaler |
| product | nss |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| zscalernss-alerts | netops |
| zscalernss-tunnel | netops |
| zscalernss-web | netproxy |
| zscalernss-web:leef | netproxy |
Tested with: Zscaler Technical Add-On for Splunk
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log type: ZSCALER_INTERNET_ACCESS.
9.7.47.2 - Zscaler Log Streaming Service
Zscaler Log Streaming Service:
Provides secure remote access to internal apps without exposing them to the public internet.
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | zscaler |
| product | lss |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| zscalerlss-zpa-app | netproxy |
| zscalerlss-zpa-audit | netproxy |
| zscalerlss-zpa-auth | netproxy |
| zscalerlss-zpa-bba | netproxy |
| zscalerlss-zpa-connector | netproxy |
Tested with: Zscaler Technical Add-On for Splunk
Sending data to Google SecOps
When sending the data collected from this source to a dynamic Google SecOps destination, Axoflow sets the following log types: ZSCALER_ZPA, ZSCALER_ZPA_AUDIT.
10 - Destinations
The following chapters show you how to configure Axoflow to send your data to specific destinations, like SIEMs or object storage solutions.
10.1 - Amazon
10.1.1 - Amazon S3
Amazon S3:
Scalable cloud storage service for storing and retrieving any amount of data via object storage.
To add an Amazon S3 destination to Axoflow, complete the following steps.
Prerequisites
-
An existing S3 bucket configured for programmatic access, and the related ACCESS_KEY and SECRET_KEY of a user that can access it. The user needs to have the following permissions (a sample policy is sketched after this list):
- kms:Decrypt
- kms:Encrypt
- kms:GenerateDataKey
- s3:ListBucket
- s3:ListBucketMultipartUploads
- s3:AbortMultipartUpload
- s3:ListMultipartUploadParts
- s3:PutObject
-
To configure Axoflow, you’ll need the bucket name, region (or URL), access key, and the secret key of the bucket.
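For reference, a minimal sketch of an IAM policy that grants these permissions. The bucket name, account ID, and key ID are placeholders you must replace, and the KMS statement may differ (or be unnecessary) depending on how the bucket is encrypted:
```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:ListBucketMultipartUploads"],
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:AbortMultipartUpload", "s3:ListMultipartUploadParts"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": ["kms:Decrypt", "kms:Encrypt", "kms:GenerateDataKey"],
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/1234abcd-12ab-34cd-56ef-1234567890ab"
    }
  ]
}
```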
Steps
-
Create a new destination.
- Open the Axoflow Console.
- Select Topology.
- Select + Create New Item > Destination.
-
Configure the destination.
-
Select Amazon S3.
-
Enter a name for the destination.

-
Enter the name of the bucket you want to use.
-
Enter the region code of the bucket into the Region field (for example, us-east-1), or select the Use custom endpoint URL option, and enter the URL of the endpoint into the URL field.
-
Enter the Access key and the Secret key for the account you want to use.
-
Enter the Object key (or key name), which uniquely identifies the object in an Amazon S3 bucket, for example: my-logs/${HOSTNAME}/. You can use AxoSyslog macros in this field.
- Select the Object key timestamp format you want to use, or select Use custom object key timestamp and enter a custom template. For details on the available date-related macros, see the AxoSyslog documentation.
- Set the maximal size of the S3 object. If an object reaches this size, Axoflow appends an index ("-1", "-2", …) to the end of the object key and starts a new object after rotation (see the naming example below).
- Select Create.
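For illustration, with the my-logs/${HOSTNAME}/ object key above and a day-based timestamp format, rotated objects could end up with names like the following. The hostname and dates are hypothetical, and the exact separators depend on the timestamp format you select:
```
my-logs/fw-01/2024-05-04
my-logs/fw-01/2024-05-04-1
my-logs/fw-01/2024-05-04-2
```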
-
Create a flow to connect the new destination to an AxoRouter instance.
-
Select Flows.
-
Select Create New Flow.
-
Enter a name for the flow, for example, my-test-flow.

-
In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
The selector also supports more complex filtering, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
- To execute the search, click Search, or hit ESC then ENTER.
- Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
- You can use the AND and OR operators to combine expressions, and parentheses if needed.
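For example, a selector that combines operators and parentheses might look like the following sketch; the router names and the site label are illustrative:
```
(name = axorouter-east OR name = axorouter-west) AND site =* prod
```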
-
Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.

-
(Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
- Add a Classify, a Parse, and a Reduce step, in that order, to automatically remove redundant and empty fields from your data.
- To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet AND meta.product = fortigate query.
- Save the processing steps.

-
Select Create.
-
The new flow appears in the Flows list.

10.2 - Apache Kafka
Apache Kafka:
Distributed event streaming platform used for building real-time data pipelines and stream processing.
If you’d like to send data from AxoRouter to this destination, contact our support team for details.
10.3 - ClickHouse
If you’d like to send data from AxoRouter to this destination, contact our support team for details.
10.4 - CrowdStrike
CrowdStrike:
Cloud-native platform for endpoint detection, threat hunting, and security analytics at scale.
If you’d like to send data from AxoRouter to this destination, contact our support team for details.
10.5 - Elastic
10.5.1 - Elastic Cloud and Elasticsearch
Elastic Cloud and Elasticsearch:
Distributed search and analytics engine for real-time log analysis, observability, and data indexing.
To add an Elasticsearch destination to Axoflow, complete the following steps.
Prerequisites
Steps
-
Create a new destination.
- Open the Axoflow Console.
- Select Topology.
- Select + Create New Item > Destination.
-
Configure the destination.
-
Select Elasticsearch.
-
Select which type of configuration you want to use:
- Simple: Send all data into a single index.
- Dynamic: Send data to an index based on the content or metadata of the incoming messages (or to a default index).
- Advanced: Allows you to specify a custom URL endpoint.
-
Enter a name for the destination.

-
Configure the endpoint of the destination.
- Advanced: Enter your Elasticsearch URL into the URL field, for example, http://my-elastic-server:9200/_bulk
- Simple and Dynamic:
- Select the HTTPS or HTTP protocol to use to access your destination.
- Enter the Hostname and Port of the destination.
-
Specify the Elasticsearch index to send the data to.
- Simple: Enter the expression that specifies the Elasticsearch index to use into the Index field, for example: test-${YEAR}${MONTH}${DAY}. All data will be sent into this index.
- Dynamic and Advanced: Enter the expression that specifies the default index. The data will be sent into this index if no other index is set during the processing of the message (for example, by the processing steps of the Flow).
You can use AxoSyslog macros in this field.
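For example, with the test-${YEAR}${MONTH}${DAY} expression above, a message dated May 4, 2024 would be routed to the following index:
```
test-20240504
```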
-
Enter the username and password for the account you want to use.
-
(Optional)
By default, Axoflow rejects connections to the destination server if the certificate of the server is invalid (for example, it’s expired, signed by an unknown CA, or its CN and the name of the server are mismatched: the Common Name or the subject_alt_name parameter of the peer’s certificate must contain the hostname or the IP address (as resolved from AxoRouter) of the peer).
Warning
When validating a certificate, the entire certificate chain must be valid, including the CA certificate. If any certificate of the chain is invalid, Axoflow rejects the connection.
If you want to accept invalid certificates (or no certificate) from the destination servers, disable the Verify server certificate option.
-
(Optional) Set other options as needed for your environments.
- Timeout: Number of seconds to wait for a log-forwarding request to complete. If the timeout is exceeded, AxoRouter attempts to reconnect the destination. The default (0) is unlimited. For more details, see the AxoSyslog documentation.
- Batch Lines: Number of lines sent to the destination in one batch. AxoRouter waits for this number of lines to accumulate and sends them off in a single batch. Increasing this number increases throughput as more messages are sent in a single batch, but also increases message latency. For more details, see the AxoSyslog documentation.
- Batch Timeout: Maximal time in milliseconds to wait for a batch to be filled before sending it. The default value (-1) means that the batch should be full before sending, which may result in a long wait if the incoming traffic is low. For more details, see the AxoSyslog documentation.
- Number of Workers: Used for scaling the destination in case of high message load. Specifies the number of worker threads AxoRouter uses for sending messages to the destination. The default is 1. If high message throughput is not a concern, leave it on 1. For maximum performance, increase it up to the number of CPU cores available on AxoRouter. For more details, see the AxoSyslog documentation.
-
Select Create.
-
Create a flow to connect the new destination to an AxoRouter instance.
-
Select Flows.
-
Select Create New Flow.
-
Enter a name for the flow, for example, my-test-flow.

-
In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
The selector also supports more complex filtering, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
- To execute the search, click Search, or hit ESC then ENTER.
- Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
- You can use the AND and OR operators to combine expressions, and parentheses if needed.
-
Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.

-
(Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
- Add a Classify, a Parse, and a Reduce step, in that order, to automatically remove redundant and empty fields from your data.
- To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet AND meta.product = fortigate query.
- Save the processing steps.

-
Select Create.
-
The new flow appears in the Flows list.

10.6 - Generic
10.6.1 - /dev/null
/dev/null:
Blackhole destination for testing or discarding events without storing them.
This is a null destination that discards (drops) all data it receives, but reports that it has successfully received the data. This is sometimes useful for testing and performance measurements, for example, to find out whether a real destination or another element in the upstream pipeline is the bottleneck.
CAUTION:
Data that’s sent only to the /dev/null destination is irrevocably lost.
Steps
-
Create a new destination.
- Open the Axoflow Console.
- Select Topology.
- Select + Create New Item > Destination.
-
Configure the destination.
-
Select /dev/null.
-
Enter a name for the destination.

-
(Optional): Add custom labels to the destination.
-
Select Create.
-
Create a flow to connect the new destination to an AxoRouter instance.
-
Select Flows.
-
Select Create New Flow.
-
Enter a name for the flow, for example, my-test-flow.

-
In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
The selector also supports more complex filtering, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
- To execute the search, click Search, or hit ESC then ENTER.
- Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
- You can use the AND and OR operators to combine expressions, and parentheses if needed.
-
Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.

-
(Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
- Add a Classify, a Parse, and a Reduce step, in that order, to automatically remove redundant and empty fields from your data.
- To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet AND meta.product = fortigate query.
- Save the processing steps.

-
Select Create.
-
The new flow appears in the Flows list.

10.6.2 - syslog
syslog:
Standard protocol for system logs. Our source automatically detects, parses, and classifies incoming syslog events without pre-configuration.
The syslog destination forwards your security data in an RFC-3164 or RFC-5424 compliant syslog format, using the UDP, TCP, or TLS-encrypted TCP protocols.
Prerequisites
If you want to enable TLS encryption for this connector to encrypt the communication with the destination, you’ll need to set appropriate keys and certificates.
CAUTION:
Copy the keys and certificates to AxoRouter before starting to configure the connector. Otherwise, you won’t be able to make configuration changes that require reloading the AxoRouter service, including starting log tapping or flow tapping.
Note the following points:
-
Keys and certificates must be in PEM format.
-
If the file contains a certificate chain, the file must begin with the certificate of the host, followed by the CA certificate that signed the certificate of the host, and any other signing CAs in order.
-
You must manually copy these files to their place on the AxoRouter host, currently you can’t distribute them from Axoflow Console. The files must be readable by the axorouter service.
-
The recommended path for certificates is under /etc/axorouter/user-config/ (for example, /etc/axorouter/user-config/tls-key.pem). If you need to use a different path, you have to append an option like -v /your/path:/your/path to the AXOROUTER_PODMAN_ARGS variable of /etc/axorouter/container.env, as shown in the sketch after this list.
-
When referring to the key or certificate while configuring the connector, use absolute paths (for example, /etc/axorouter/user-config/tls-key.pem).
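For illustration, the expected chain order inside a PEM bundle, and a hedged example of mounting a non-default certificate path. The /opt/certs path is hypothetical, and any existing arguments already present in your container.env must be preserved:
```
# tls-cert.pem: host certificate first, then the signing CAs, in order
-----BEGIN CERTIFICATE-----
... host certificate ...
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
... CA certificate that signed the host certificate ...
-----END CERTIFICATE-----

# /etc/axorouter/container.env: only needed for non-default paths
AXOROUTER_PODMAN_ARGS="-v /opt/certs:/opt/certs"
```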
Steps
-
Create a new destination.
- Open the Axoflow Console.
- Select Topology.
- Select + Create New Item > Destination.
-
Configure the destination.
-
Select Syslog.
-
Select the template to use one of the standard syslog ports and transport protocols—for example, UDP port 514, which is commonly used for the RFC3164 syslog protocol.
To configure a different port, or to specify the protocol elements manually, select Custom.

-
Enter a name for the destination.

-
(Optional): Add custom labels to the destination.
-
Select the protocol to use for sending syslog data: TCP, UDP, or TLS.

-
Select the syslog format to use: BSD (RFC3164) or Syslog (RFC5424).
-
(Optional) If explicitly needed for your use case, you can configure Framing manually when using the Syslog (RFC5424) format. Enable framing (On) if the payload should contain the length of the message as specified in RFC6587 3.4.1. Disable it (Off) for non-transparent framing as specified in RFC6587 3.4.2 (see the framing example after these steps).
-
If you’ve selected Protocol > TLS, set the TLS-related options.
When using TLS, set the paths for the certificates and keys used for the TLS-encrypted communication with the destination. For details, see Prerequisites.
- Client certificate path: The certificate that AxoRouter shows to the destination server.
- Client private key path: The private key of the client certificate.
- CA certificate path: The CA certificate that AxoRouter uses to verify the certificate of the destination if Verify peer certificate is enabled.
-
Set the Address and the Port of the destination. Usually:
- 514 UDP and TCP for RFC3164 (BSD-syslog) and RFC5424 (IETF-syslog) formatted traffic. AxoRouter automatically recognizes and handles both formats.
- 601 TCP for RFC5424 (IETF-syslog) and RFC3164 (BSD-syslog) formatted traffic. AxoRouter automatically recognizes and handles both formats.
- 6514 TCP for TLS-encrypted syslog traffic.
-
Select Create.
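For reference, the same RFC5424 message under the two framing modes. The message content is illustrative, and the 60 prefix in the first line is the byte count of the message that follows it:
```
# Framing On: octet counting (RFC6587 3.4.1), byte-count prefix
60 <13>1 2024-05-04T10:00:00Z host app - - - An example message
# Framing Off: non-transparent framing (RFC6587 3.4.2), newline-terminated
<13>1 2024-05-04T10:00:00Z host app - - - An example message
```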
-
Create a flow to connect the new destination to an AxoRouter instance.
-
Select Flows.
-
Select Create New Flow.
-
Enter a name for the flow, for example, my-test-flow.

-
In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
The selector also supports more complex filtering, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
- To execute the search, click Search, or hit ESC then ENTER.
- Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
- You can use the AND and OR operators to combine expressions, and parentheses if needed.
-
Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.

-
(Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
- Add a Classify, a Parse, and a Reduce step, in that order, to automatically remove redundant and empty fields from your data.
- To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet AND meta.product = fortigate query.
- Save the processing steps.

-
Select Create.
-
The new flow appears in the Flows list.

Protocol-specific destination options
If needed, select More options to set the following:
- TCP Keepalive Time Interval: The interval (number of seconds) between subsequent keepalive probes, regardless of the traffic exchanged in the connection.
- TCP Keepalive Probes: The number of unacknowledged probes to send before considering the connection dead.
- TCP Keepalive Time: The interval (in seconds) between the last data packet sent and the first keepalive probe.
10.7 - Google
10.7.1 - BigQuery
Axoflow will soon support sending data to Google BigQuery using a high-performance gRPC-based destination.
If you’d like to send data from AxoRouter to this destination, contact our support team for details.
10.7.2 - Google Cloud Pub/Sub
Google Cloud Pub/Sub:
Asynchronous messaging service for ingesting and distributing event data between services and applications.
To add a Google Cloud Pub/Sub destination to Axoflow, complete the following steps. Axoflow can send data to Google Cloud Pub/Sub using its gRPC interface.
Prerequisites
For details, see the Google Pub/Sub tutorial.
Steps
-
Create a new destination.
- Open the Axoflow Console.
- Select Topology.
- Select + Create New Item > Destination.
-
Configure the destination.
-
Select Pub/Sub.
-
Select which type of configuration you want to use:
- Simple: Send all data into a project and topic.
- Dynamic: Send data to a project and topic based on the content or metadata of the incoming messages (or to a default project/topic).
-
Enter a name for the destination.

-
Specify the project and topic to send the data to.
- Simple: Enter the ID of the GCP Project and the Topic. All data will be sent to this topic.
- Dynamic: Enter the expression that specifies the default Project and Topic. The data will be sent here unless it is set during the processing of the message (for example, by the processing steps of the Flow).
You can use AxoSyslog macros in this field.
-
Configure the authentication method to access the GCP project.
-
Automatic (ADC): Use the service account attached to the cloud resource (VM) that hosts AxoRouter.
-
Service Account File: Specify the path where a service account key file is located (for example, /etc/axorouter/user-config/). You must manually copy that file to its place, currently you can’t distribute it from Axoflow.
-
None: Disable authentication completely. Only available when the More options > Service Endpoint option is set.
CAUTION:
Do not disable authentication in production.
-
(Optional) Set other options as needed for your environments.
- Service Endpoint: Use a custom API endpoint. Leave it empty to use the default: https://pubsub.googleapis.com
- Timeout: Number of seconds to wait for a log-forwarding request to complete. If the timeout is exceeded, AxoRouter attempts to reconnect the destination. The default value is unlimited. For more details, see the AxoSyslog documentation.
- Batch Lines: Number of lines sent to the destination in one batch. AxoRouter waits for this number of lines to accumulate and sends them off in a single batch. Increasing this number increases throughput as more messages are sent in a single batch, but also increases message latency. For more details, see the AxoSyslog documentation.
- Batch Timeout: Maximal time in milliseconds to wait for a batch to be filled before sending it. The default value (-1) means that the batch should be full before sending, which may result in a long wait if the incoming traffic is low. For more details, see the AxoSyslog documentation.
- Number of Workers: Used for scaling the destination in case of high message load. Specifies the number of worker threads AxoRouter uses for sending messages to the destination. The default is 1. If high message throughput is not a concern, leave it on 1. For maximum performance, increase it up to the number of CPU cores available on AxoRouter. For more details, see the AxoSyslog documentation.
-
Select Create.
-
Create a flow to connect the new destination to an AxoRouter instance.
-
Select Flows.
-
Select Create New Flow.
-
Enter a name for the flow, for example, my-test-flow.

-
In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
The selector also supports more complex filtering, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
- To execute the search, click Search, or hit ESC then ENTER.
- Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
- You can use the AND and OR operators to combine expressions, and parentheses if needed.
-
Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.

-
(Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
- Add a Classify, a Parse, and a Reduce step, in that order, to automatically remove redundant and empty fields from your data.
- To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet AND meta.product = fortigate query.
- Save the processing steps.

-
Select Create.
-
The new flow appears in the Flows list.

Pub/Sub attributes
You can embed custom attributes as metadata in Pub/Sub messages in the processing steps of the Flow that’s sending data to the Pub/Sub destination.
To do that:
-
Add a FilterX processing step to the Flow.
-
Edit the Statements field of the processing step:
-
Add the meta.pubsub.attributes = json(); line to add an empty JSON object to the messages.
-
Set your custom attributes under the meta.pubsub.attributes key. For example, if you want to include the timestamp as a custom attribute as well, you can use: meta.pubsub.attributes = {"timestamp": $S_ISODATE};
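Putting the two statements together, the Statements field of the FilterX step might look like the following sketch. The forwarder attribute is an illustrative extra; only the timestamp line comes from the example above:
```
meta.pubsub.attributes = json();
meta.pubsub.attributes = {"timestamp": $S_ISODATE, "forwarder": "axorouter"};
```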
10.7.3 - Google Security Operations (SecOps)
Google Security Operations (SecOps):
Google’s cloud-native SIEM for ingesting, analyzing, and searching security telemetry.
To add a Google Security Operations (SecOps) destination to Axoflow, complete the following steps.
Prerequisites
Steps
-
Create a new destination.
- Open the Axoflow Console.
- Select Topology.
- Select + Create New Item > Destination.
-
Configure the destination.
-
Select SecOps.
-
Select which type of configuration you want to use:
- Simple: Send all data into a namespace with a single log type.
- Dynamic: Send data to a namespace with a log type based on the content or metadata of the incoming messages (or to a default namespace/log type).
- Advanced: Specify the full endpoint URL.
-
Enter a name for the destination.

-
Enter the URL endpoint for the destination.
- Simple and Dynamic: Enter the base URL of the regional service endpoint to use into the Service Endpoint field, for example, https://malachiteingestion-pa.googleapis.com for the US region.
- Advanced: Enter the full URL to use for ingesting logs into the Endpoint URL field, for example, https://malachiteingestion-pa.googleapis.com/v2/unstructuredlogentries:batchCreate
-
Specify the namespace and log type to send the data to.
- Simple and Advanced: Enter the Namespace where to send the data and the Log type to assign to the data. All data will be sent to this namespace with the specified log type.
- Dynamic: Enter the default Namespace and Log type. The data will be sent into this namespace with the specified log type unless it is set during the processing of the message (for example, by the processing steps of the Flow).
You can use AxoSyslog macros in this field.
-
Enter the unique ID of your Google SecOps instance into the Customer ID field.
-
Configure the authentication method to access the GCP project.
- Automatic (ADC): Use the service account attached to the cloud resource (VM) that hosts AxoRouter.
- Service Account File: Specify the path where a service account key file is located (for example, /etc/axorouter/user-config/serviceaccount.json). You must manually copy that file to its place, currently you can’t distribute it from Axoflow.
-
(Optional) Set other options as needed for your environments.
- Timeout: Number of seconds to wait for a log-forwarding request to complete. If the timeout is exceeded, AxoRouter attempts to reconnect the destination. The default value is unlimited. For more details, see the AxoSyslog documentation.
- Batch Lines: Number of lines sent to the destination in one batch. AxoRouter waits for this number of lines to accumulate and sends them off in a single batch. Increasing this number increases throughput as more messages are sent in a single batch, but also increases message latency. For more details, see the AxoSyslog documentation.
- Batch Timeout: Maximal time in milliseconds to wait for a batch to be filled before sending it. The default value (-1) means that the batch should be full before sending, which may result in a long wait if the incoming traffic is low. For more details, see the AxoSyslog documentation.
- Number of Workers: Used for scaling the destination in case of high message load. Specifies the number of worker threads AxoRouter uses for sending messages to the destination. The default is 1. If high message throughput is not a concern, leave it on 1. For maximum performance, increase it up to the number of CPU cores available on AxoRouter. For more details, see the AxoSyslog documentation.
-
Select Create.
-
Create a flow to connect the new destination to an AxoRouter instance.
-
Select Flows.
-
Select Create New Flow.
-
Enter a name for the flow, for example, my-test-flow.

-
In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
The selector also supports more complex filtering, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
- To execute the search, click Search, or hit ESC then ENTER.
- Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
- You can use the AND and OR operators to combine expressions, and parentheses if needed.
-
Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.

-
(Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
- Add a Classify, a Parse, and a Reduce step, in that order, to automatically remove redundant and empty fields from your data.
- To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet AND meta.product = fortigate query.
- Save the processing steps.

-
Select Create.
-
The new flow appears in the Flows list.

10.8 - Grafana
10.8.1 - Grafana Loki
Grafana Loki:
Scalable log aggregation system optimized for storing and querying logs by labels using Grafana.
If you’d like to send data from AxoRouter to this destination, contact our support team for details.
10.9 - Microsoft
10.9.1 - Azure Monitor
Sending data to Azure Monitor is practically identical to using the Sentinel destination. Follow the procedure described in Microsoft Sentinel.
10.9.2 - Microsoft Sentinel
Microsoft Sentinel:
Cloud-native SIEM and SOAR platform for collecting, analyzing, and responding to security threats.
To add a Microsoft Sentinel or Azure Monitor destination to Axoflow, complete the following steps. Axoflow Console can configure your AxoRouters to send data to the built-in syslog table of Azure Monitor.
Prerequisites
Steps
-
Create a new destination.
- Open the Axoflow Console.
- Select Topology.
- Select + Create New Item > Destination.
-
Configure the destination.
-
Select Sentinel or Azure Monitor.
-
Enter a name for the destination.

-
Configure the credentials needed for authentication.

- Tenant ID: Directory (tenant) ID of the environment where you’re sending the data. (Practically everything belongs to the tenant ID: the Entra ID application, the Log analytics workspace, Sentinel, the DCE and the DCR, and so on.)
- Application ID: Application (client) ID of the Microsoft Entra ID application.
- Application secret: The Client secret of the Microsoft Entra ID application.
- Scope: The scope for the authentication token. Usually you can leave it empty to use the default value (https://monitor.azure.com//.default).
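These credentials map to a standard OAuth2 client-credentials token request against Microsoft Entra ID, which AxoRouter performs on your behalf. For illustration only, with placeholder values:
```
curl "https://login.microsoftonline.com/<tenant-id>/oauth2/v2.0/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<application-id>" \
  -d "client_secret=<application-secret>" \
  --data-urlencode "scope=https://monitor.azure.com//.default"
```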
-
Specify the details of the table to send the data to.
-
(Optional) Set other options as needed for your environments.
- Timeout: Number of seconds to wait for a log-forwarding request to complete. If the timeout is exceeded, AxoRouter attempts to reconnect the destination. The default (0) is unlimited. For more details, see the AxoSyslog documentation.
- Batch Lines: Number of lines sent to the destination in one batch. AxoRouter waits for this number of lines to accumulate and sends them off in a single batch. Increasing this number increases throughput as more messages are sent in a single batch, but also increases message latency. For more details, see the AxoSyslog documentation.
- Batch Timeout: Maximal time in milliseconds to wait for a batch to be filled before sending it. The default value (-1) means that the batch should be full before sending, which may result in a long wait if the incoming traffic is low. For more details, see the AxoSyslog documentation.
- Number of Workers: Used for scaling the destination in case of high message load. Specifies the number of worker threads AxoRouter uses for sending messages to the destination. The default is 1. If high message throughput is not a concern, leave it on 1. For maximum performance, increase it up to the number of CPU cores available on AxoRouter. For more details, see the AxoSyslog documentation.
-
Select Create.
-
Create a flow to connect the new destination to an AxoRouter instance.
-
Select Flows.
-
Select Create New Flow.
-
Enter a name for the flow, for example, my-test-flow.

-
In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
The selector also supports more complex filtering, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
- To execute the search, click Search, or hit ESC then ENTER.
- Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
- You can use the AND and OR operators to combine expressions, and parentheses if needed.
-
Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.

-
(Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
- Add a Classify, a Parse, and a Reduce step, in that order, to automatically remove redundant and empty fields from your data.
- To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet AND meta.product = fortigate query.
- Save the processing steps.

-
Select Create.
-
The new flow appears in the Flows list.

10.10 - OpenObserve
OpenObserve:
High-performance log analytics platform supporting structured log ingestion, search, and visualization.
To add an OpenObserve destination to Axoflow, complete the following steps.
Prerequisites
Steps
-
Create a new destination.
- Open the Axoflow Console.
- Select Topology.
- Select + Create New Item > Destination.
-
Configure the destination.
-
Select OpenObserve.
-
Select which type of configuration you want to use:
- Simple: Send all data into a single stream of an organization.
- Dynamic: Send data to an organization and stream based on the content or metadata of the incoming messages (or to the default stream of the default organization).
- Advanced: Allows you to specify a custom URL endpoint.
-
Enter a name for the destination.

-
Configure the endpoint of the destination.
- Advanced: Enter your OpenObserve URL into the URL field, for example, https://example.com/api/my-org/my-stream/_json
- Simple and Dynamic:
- Select the HTTPS or HTTP protocol to use to access your destination.
- Enter the Hostname and Port of the destination.
-
Specify the OpenObserve stream to send the data to.
- Simple: Enter the name of the Organization and the Stream where you want to send the data. All data will be sent into this stream.
- Dynamic: Enter the expressions that specify the Default Organization and the Default Stream. The data will be sent into this stream if no other organization and stream is set during the processing of the message (for example, by the processing steps of the Flow).
You can use AxoSyslog macros in this field.
-
Enter the username and password for the account you want to use.
-
(Optional)
By default, Axoflow rejects connections to the destination server if the certificate of the server is invalid (for example, it’s expired, signed by an unknown CA, or its CN and the name of the server are mismatched: the Common Name or the subject_alt_name parameter of the peer’s certificate must contain the hostname or the IP address (as resolved from AxoRouter) of the peer).
Warning
When validating a certificate, the entire certificate chain must be valid, including the CA certificate. If any certificate of the chain is invalid, Axoflow rejects the connection.
If you want to accept invalid certificates (or no certificate) from the destination servers, disable the Verify server certificate option.
-
(Optional) Set other options as needed for your environments.
- Timeout: Number of seconds to wait for a log-forwarding request to complete. If the timeout is exceeded, AxoRouter attempts to reconnect the destination. The default (0) is unlimited. For more details, see the AxoSyslog documentation.
- Batch Lines: Number of lines sent to the destination in one batch. AxoRouter waits for this number of lines to accumulate and sends them off in a single batch. Increasing this number increases throughput as more messages are sent in a single batch, but also increases message latency. For more details, see the AxoSyslog documentation.
- Batch Timeout: Maximal time in milliseconds to wait for a batch to be filled before sending it. The default value (-1) means that the batch should be full before sending, which may result in a long wait if the incoming traffic is low. For more details, see the AxoSyslog documentation.
- Number of Workers: Used for scaling the destination in case of high message load. Specifies the number of worker threads AxoRouter uses for sending messages to the destination. The default is 1. If high message throughput is not a concern, leave it on 1. For maximum performance, increase it up to the number of CPU cores available on AxoRouter. For more details, see the AxoSyslog documentation.
-
Select Create.
-
Create a flow to connect the new destination to an AxoRouter instance.
-
Select Flows.
-
Select Create New Flow.
-
Enter a name for the flow, for example, my-test-flow.

-
In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
The selector also supports more complex filtering, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
- To execute the search, click Search, or hit ESC then ENTER.
- Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
- You can use the AND and OR operators to combine expressions, and parentheses if needed.
-
Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.

-
(Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
- Add a Classify, a Parse, and a Reduce step, in that order, to automatically remove redundant and empty fields from your data.
- To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet AND meta.product = fortigate query.
- Save the processing steps.

-
Select Create.
-
The new flow appears in the Flows list.

10.11 - OpenSearch
OpenSearch:
Open-source search and analytics suite for log ingestion, indexing, dashboards, and alerting.
Axoflow will soon support sending data to OpenSearch using its HTTP endpoint.
If you’d like to send data from AxoRouter to this destination, contact our support team for details.
10.12 - Splunk
Splunk:
Ingests, indexes, and visualizes data for monitoring, analysis, and alerting.
To add a Splunk destination (Splunk Cloud or Splunk Enterprise) to Axoflow, complete the following steps.
Prerequisites
-
Enable the HTTP Event Collector (HEC) on your Splunk deployment if needed. On Splunk Cloud Platform deployments, HEC is enabled by default.
-
Create a token for Axoflow to use in the destination. When creating the token, use the syslog source type.
For details, see Set up and use HTTP Event Collector in Splunk Web.
-
If you’re using AxoRouter, create the indexes where Axoflow sends the log data. Which indexes are needed depends on the sources you have, but create at least the following event indexes: axoflow, infraops, netops, netfw, osnix (for unclassified messages). Check your sources in the Sources section for a detailed list of the indexes their data is sent to.
-
If you’ve created any new indexes, make sure to add those indexes to the token’s Allowed Indexes.
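Before configuring Axoflow, you can verify that the token and the indexes work by sending a test event to HEC manually. A sketch with placeholder hostname and token (the -k flag skips certificate verification for self-signed deployments):
```
curl -k "https://splunk.example.com:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "Axoflow HEC test", "sourcetype": "syslog", "index": "netops"}'
```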
Steps
-
Create a new destination.
- Open the Axoflow Console.
- Select Topology.
- Select + Create New Item > Destination.
-
Configure the destination.
-
Select Splunk.
-
Select which type of configuration you want to use:
- Simple: Send all data into a single index, with fixed source and source type settings.
- Dynamic: Set index, source, and source type based on the content or metadata of the incoming messages.
- Advanced: Allows you to specify a custom URL endpoint.
-
Enter a name for the destination.

-
Configure the endpoint of the destination.
- Advanced: Enter your Splunk URL into the URL field, for example, https://<your-splunk-tenant-id>.splunkcloud.com:8088 for Splunk Cloud Platform free trials, or https://<your-splunk-tenant-id>.splunkcloud.com for Splunk Cloud Platform instances.
- Simple and Dynamic:
- Select the HTTPS or HTTP protocol to use to access your destination.
- Enter the Hostname and Port of the destination.
-
Specify the Splunk index to send the data to.
- Simple: Enter the expression that specifies the Splunk index to use into the Index field, for example: netops. All data will be sent into this index.
- Dynamic and Advanced:
- Enter the name of the Default Index. The data will be sent into this index if no other index is set during the processing of the message (based on automatic classification, or by the processing steps of the Flow). Make sure that the index exists in Splunk.
- Enter the Default Source and Default Source Type. These will be assigned to the messages that have no source or source type set during the processing of the message (based on automatic classification, or by the processing steps of the Flow).
-
Enter the token you’ve created into the Token field.
-
Disable the Verify server certificate option unless your deployment has a valid, non-self-signed certificate. Free Splunk Cloud accounts have self-signed certificates.
-
(Optional) Set other options as needed for your environments.
- Timeout: Number of seconds to wait for a log-forwarding request to complete. If the timeout is exceeded, AxoRouter attempts to reconnect the destination. The default (0) is unlimited. For more details, see the AxoSyslog documentation.
- Batch Lines: Number of lines sent to the destination in one batch. AxoRouter waits for this number of lines to accumulate and sends them off in a single batch. Increasing this number increases throughput as more messages are sent in a single batch, but also increases message latency. For more details, see the AxoSyslog documentation.
- Batch Timeout: Maximal time in milliseconds to wait for a batch to be filled before sending it. The default value (-1) means that the batch should be full before sending, which may result in a long wait if the incoming traffic is low. For more details, see the AxoSyslog documentation.
- Number of Workers: Used for scaling the destination in case of high message load. Specifies the number of worker threads AxoRouter uses for sending messages to the destination. The default is 1. If high message throughput is not a concern, leave it on 1. For maximum performance, increase it up to the number of CPU cores available on AxoRouter. For more details, see the AxoSyslog documentation.
-
Select Create.
-
Create a flow to connect the new destination to an AxoRouter instance.
-
Select Flows.
-
Select Create New Flow.
-
Enter a name for the flow, for example, my-test-flow.

-
In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
The selector also supports more complex filtering, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
- To execute the search, click Search, or hit ESC then ENTER.
- Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
- You can use the AND and OR operators to combine expressions, and parentheses if needed.
-
Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
By default, you can select only external destinations. If you want to send data to another AxoRouter, enable the Show all destinations option, and select the connector of the AxoRouter where you want to send the data.

-
(Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
- Add a Classify, a Parse, and a Reduce step, in that order, to automatically remove redundant and empty fields from your data.
- To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet AND meta.product = fortigate query.
- Save the processing steps.

-
Select Create.
-
The new flow appears in the Flows list.

11 - Metrics and analytics
This chapter shows how to access the different metrics and analytics that Axoflow collects about your security data pipeline.
11.1 - Analytics
Axoflow allows you to analyze your data at various points in your pipeline using Sankey, Sunburst, and Histogram diagrams.
- The Analytics page allows you to analyze the data throughput of your pipeline.
- You can analyze the throughput of a flow on the Flows > <flow-to-analyze> > Analytics page.
- You can analyze the throughput of a single host on the Hosts > <host-to-analyze> > Analytics page.

The analytics charts
You can select what is displayed and how using the top bar and the Filter labels bar. Use the top bar to select the dataset you want to work with, and the Filter labels bar to select the labels to visualize for the selected dataset.

-
Time period: Select the calendar_month icon to change the time period that’s displayed on the charts. You can use absolute (calendar) time, or relative time (for example, the last 2 days).
Axoflow stores all dates in Coordinated Universal Time (UTC), and automatically converts them to the timezone set in your browser/operating system.
-
Select insights to switch between Sankey, Sunburst, and Histogram diagrams.
-
You can display the data throughput based on:
- Output bytes
- Output events
- Input bytes
- Input events
-
Export as CSV: Select download to download the currently visible dataset as a CSV file and process it in an external tool.
-
Add and clear filters (filter_alt / filter_alt_off).
-
Basic Search mode searches in the values of the labels that are selected for display in the Filter labels bar, and filters the dataset to the matching data. For example, if the Host and App labels are displayed, and you enter a search keyword in Basic Search mode, only those data will be visualized where the search keyword appears in its hostname or the app name.
Basic Search is case insensitive. Adding multiple keywords searches for matches in any of the previous fields. This is equivalent to the @ANY =* keyword1 AND @ANY =* keyword2 AQL query.
-
AQL Query Search mode allows you to filter data by any available labels.
It also makes more complex filtering possible, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
- To execute the search, click Search, or hit ESC then ENTER.
- Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn’t autocomplete labels and variables created by data parsing and processing steps.
- You can use the AND and OR operators to combine expressions, and parentheses if needed.
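For example, an AQL query combining these operators might look like the following sketch; the label values are illustrative:
```
host =* fw AND (app = checkpoint OR app = fortigate)
```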
The Filter labels bar determines which labels are displayed. You can:
-
Reorder the labels to adjust the diagram. On Sunburst diagrams, the left-most label is on the inside of the diagram.
-
Add new labels to get more details about the data flow.
- Labels added to AxoRouter hosts get the axo_host_ prefix.
- Labels added to data sources get the host_ prefix. For example, if you add a rack label to an edge host, it’ll be added to the data received from the host as host_rack.
On other pages, like the Host Overview page, the labels are displayed without the prefixes.
-
Remove unneeded labels from the diagram.
Click a segment of the diagram to drill down into the data. That’s equivalent to selecting filter_alt and adding the label to the Analytics filters. To clear the filters, select filter_alt_off.
Hovering over a segment displays more details about it.
Sunburst diagrams
Sunburst diagrams (also known as ring charts or radial treemaps) visualize your data pipeline as a hierarchical dataset. It organizes the data according to the labels displayed in the Filter labels field into concentric rings, where each ring corresponds to a level in the hierarchy. The left-most label is on the inside of the diagram.

For example, sunburst diagrams are great for visualizing:
- top talkers (the data sources that are sending the most data), or
- which team is sending the most data to the destination, if you've added custom labels that show the owner of your data sources.
The following example groups the data sources that send data to a Splunk destination based on their custom host_label_team labels.

Sankey diagrams
The Sankey diagram of your data pipeline shows the flow of data between the elements of the pipeline, for example, from the source (host) to the destination. Sankey diagrams are especially suited to visualize the flow of data, and show how that flow is subdivided at each stage. That way, they help highlight bottlenecks, and show where and how much data is flowing.

The diagram consists of nodes (also called segments) that represent the different attributes or labels of the data flowing through the host. Nodes are shown as labeled columns, for example, the sender application (app), or a host. The thickness of the links between the nodes of the diagram shows the amount of data.
- Hover over a link to show the data throughput of that link between the edges of the diagram.
- Click on a link to show its details and tap into the data flow (see Tapping into the log flow below).
- Click on a node to drill down into the diagram. (To undo, use the Back button of your browser, or the clear filters icon filter_alt_off.)
The following example shows a custom label that indicates the owner of the source host, visualizing which team is sending the most data to the destination.

This example shows the issue label that marks messages that were invalid or malformed for some reason (for example, the header of a syslog message was missing).

Sankey diagrams are a great way to:
- Visualize flows: add the flow label to the Filter labels field.
- Find unclassified messages that weren't recognized by the Axoflow database: add the app label to the Filter labels field, and look for the axo_fallback link. You can tap into the log flow to check these messages. Feel free to send us a sample so we can add them to the classification database.
- Visualize custom labels and their relation to data flows.
Histogram
The Histogram chart shows time-series data about the metrics for the selected labels.

Note that:
- Hovering over a part of the chart shows you the actual metrics for that time.
- Exporting as CSV (download) downloads the currently visible dataset as time-series data.
- Adding too many labels to the Filter labels field can make the chart difficult to interpret.
You can also access histogram charts from the tapping dialog of the Sankey diagrams when you click on a link in the diagram: Histogram for full column creates a histogram for the entire Sankey column, while Histogram for this value shows the histogram for the selected link.
Tapping into the log flow
- On the Sankey diagram, click on a link to show the details of the link.

- Tap into the data traffic:
  - Select Tap with this label to tap into the log flow at either end of the link.
  - Select Tap both to tap into the data flowing through the link.
- Select the host where you want to tap into the logs.
- Select Start.
- When the logs you're interested in show up, click Stop Log Tap, then click a log message to see its details.

- If you don't know what the message means, select AI Analytics to ask our AI to interpret it.

11.2 - Host metrics
The Metrics & health page of a host shows the history of the various metrics Axoflow collects about the host.

Events of the hosts (for example, configuration reloads, or alerts affecting the host) are displayed over the metrics.

Interact with metrics
You can change which metrics are displayed and for which time period using the bar above the metrics.

- Metrics categories: Temporarily hide/show the metrics of that category, for example, System. The category for each metric is displayed under the name of the chart. Note that this change is just temporary: if you want to change the layout of the metrics, use the settings icon.
- calendar_month: Use the calendar icon to change the time period that's displayed on the charts. You can use absolute (calendar) time, or relative time (for example, the last 2 days). Axoflow stores all dates in Coordinated Universal Time (UTC), and automatically converts them to the timezone set in your browser/operating system.
Note
By default, Axoflow stores metrics for 30 days, unless specified otherwise in your support contract. Contact us if you want to increase the data retention time.
To quickly zoom in on a period, click and drag to select the period to display on any of the charts. Every chart will be updated for the selected period. (To return to the previous state, click the Back button of your browser.)
- Hide/show the alerts and other events (like configuration reloads) of the host. These events are overlaid on the charts by default.
- Shows the number of active alerts on the host for that period.
- The settings icon allows you to change the order of the metrics, or to hide metrics. These changes are persistent and stored in your profile.

Interact with a chart
In addition to the possibilities of the top bar, you can interact with the charts in the following ways:
- Hover over the info icon on a colored metric card to display the definition, details, and statistics of the metric.

- Click on a colored card to hide the related metric from the chart. For example, you can hide unneeded sources on the Log input charts.
- Click on an event (for example, an alert or configuration reload) to show its details.

- To quickly zoom in on a period, click and drag to select the period to display on any of the charts. Every chart will be updated for the selected period. (To return to the previous state, click the Back button of your browser.)
Metrics reference
For managed hosts, the following metrics are available:
- Connections: Number of active connections and the maximum permitted number of connections for each connector.
- CPU: Percentage of time a CPU core spent on average in a non-idle state within a window of 5 minutes.
- Disk: Effective storage space used and available on each device (an overlap may exist between identified devices).
- Dropped packets (total): This chart shows different metrics for packet loss:
  - Dropped UDP packets: Number of UDP packets dropped by the OS before processing per second, averaged within a time window of 5 minutes.
  - Dropped log events: Number of events dropped from event queues within a time window of 5 minutes.
- Log input (bytes): Incoming log messages processed by each connector, measured in bytes per second, averaged for a time window of 5 minutes.
- Log input (events): Number of incoming log messages processed by each connector per second, averaged for a time window of 5 minutes.
- Log output (bytes): Log messages sent to each log destination, measured in bytes per second, averaged for a time window of 5 minutes.
- Log output (events): Number of log messages sent to each log destination per second, averaged for a time window of 5 minutes.
- Log memory queue (bytes): Total bytes of data waiting in each memory queue.
- Log memory queue (events): Number of messages waiting in each memory queue by destination.
- Log disk queue (bytes): This chart shows the following metrics about disk queue usage:
  - Disk queue bytes: Total bytes of data waiting in each disk queue.
  - Disk queue memory cache bytes: Amount of memory used for caching disk-based queues.
- Log disk queue (events): Number of messages waiting in each disk queue by destination.
- Memory: Memory usage and capacity reported by the OS in bytes (including reclaimable caches and buffers).
- Network input (bytes): Incoming network traffic in bytes/second reported by the OS for each network interface, averaged within a time window of 5 minutes.
- Network input (packets): Number of incoming network packets per second reported by the OS for each network interface, averaged within a time window of 5 minutes.
- Network output (bytes): Outgoing network traffic in bytes/second reported by the OS for each network interface, averaged within a time window of 5 minutes.
- Network output (packets): Number of outgoing network packets per second reported by the OS for each network interface, averaged within a time window of 5 minutes.
- Event delay (seconds): Latency of outgoing messages.
11.3 - Flow analytics
11.4 - Flow metrics
12 - Activity log
Axoflow collects an activity log to help you track and audit the configuration changes and modifications done by your team. To access the activity log, open the Activity page from the main menu.

For each activity, the following information is shown:
- Changer: The user who performed the change.
- Operation: The type of the configuration change, one of: CREATE, UPDATE, or DELETE.
- Kind: The type of the configuration object that was modified: Flow, Host, or Provisioning.
- Name: The name of the object that was modified.
- Date: The date when the change occurred. Axoflow stores all dates in Coordinated Universal Time (UTC), and automatically converts them to the timezone set in your browser/operating system.
Open the details of an activity to see a diff of what was changed.

Filter activities
You can use the Filter Bar to search and filter for specific events, and to filter the time range of the activities. You can also access the Activity page from the local ⋮ menus on the Hosts, Flows, and Provisioning pages. In these cases, the activities will be automatically filtered to show the specific object.

- Basic Search mode searches in the following fields of the activity logs: Changer, Operation, Kind, Name. Basic Search is case insensitive. Adding multiple keywords searches for matches in any of the previous fields. This is equivalent to the @ANY =* keyword1 AND @ANY =* keyword2 AQL query.
- AQL Query Search mode allows you to search in specific fields of the activity logs. It also makes more complex filtering possible, using the Equals, Contains (partial match), and Match (regular expression match) operators. Note that:
  - To execute the search, click Search, or hit ESC then ENTER.
  - Axoflow Console autocompletes the built-in and custom labels and field names, as well as their most frequent values, but doesn't autocomplete labels and variables created by data parsing and processing steps.
  - You can use the AND and OR operators to combine expressions, and also parentheses if needed (see the example below).
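For example, a query like the following (a sketch based on the Basic Search example above; the exact field syntax may differ) finds update operations on flows:
Kind =* Flow AND Operation =* UPDATE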
13 - Deploy Axoflow Console
This chapter describes the installation and configuration of Axoflow Console in different scenarios.
Axoflow Console is available as:
- a SaaS service,
- an on-premises deployment,
- a Kubernetes deployment.
13.1 - Air-gapped environment
13.2 - Kubernetes
13.3 - On-premise
This guide shows you how to install Axoflow Console on a local, on-premises virtual machine using k3s and Helm. Installation of other components like the NGINX ingress controller and cert-manager is also included. Since this is a single-instance deployment, we don't recommend using it in production environments. For the additional steps and configurations needed in production environments, contact our support team.
Note
Axoflow Console is also available without installation as a SaaS service. You only have to install Axoflow Console on premises if you want to run and manage it locally.
At a high level, the deployment consists of the following steps:
- Preparing a virtual machine.
- Installing Kubernetes on the virtual machine.
- Configuring authentication: basic authentication with an admin user and password is configured by default. This guide also shows you how to configure other common authentication methods like LDAP, GitHub, and Google.
- Before deploying AxoRouter describes the steps you have to complete on a host before deploying AxoRouter on it. These steps are specific to on-premises Axoflow Console deployments, and are not needed when using the SaaS Axoflow Console.
Prerequisites
To install Axoflow Console, you’ll need the following:
System requirements
Supported operating systems: Ubuntu 24.04, Red Hat 9, and compatible distributions (tested with AlmaLinux 9)
The virtual machine (VM) must have at least:
- 4 vCPUs (x86_64-based)
- 8 GB RAM
- 100 GB disk space
This setup can handle about 100 AxoRouter instances and 1000 data source hosts.
For reference, a real-life scenario that handles 100 AxoRouter instances and 3000 data source hosts with 30-day metric retention would need:
- 16 vCPU (x86_64-based)
- 16 GB RAM
- 250 GB disk space
Note
Note that AxoRouter collects detailed, real-time metrics about the data flows, giving you observability over the health of the security data pipeline and its components. Your security data remains in your self-managed cloud or in your on-prem instance where your sources, destinations, and AxoRouters are running; only metrics are forwarded to Axoflow Console.
For details on sizing, contact our support team.
You’ll need to have access to a user with sudo
privileges.
Network access
The hosts must be able to access the following domains related to the Axoflow Console:
Set up Kubernetes
Complete the following steps to install k3s and some dependencies on the virtual machine.
- Run getenforce to check the status of SELinux.
- Check the /etc/selinux/config file and make sure that the settings match the results of the getenforce output (for example, SELINUX=disabled). K3s supports SELinux, but the installation can fail if there is a mismatch between the configured and actual settings.
- Install k3s by running the following command:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik --disable metrics-server" K3S_KUBECONFIG_MODE="644" sh -s -
- Check that the k3s service is running:
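For example, on a systemd-based host (an assumption; adjust the check to your init system):
# Verify that the k3s service is active
sudo systemctl status k3s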
- Install the NGINX ingress controller.
VERSION="v4.11.3"
sudo tee /var/lib/rancher/k3s/server/manifests/ingress-nginx.yaml > /dev/null <<EOF
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: ingress-nginx
  namespace: kube-system
spec:
  createNamespace: true
  chart: ingress-nginx
  repo: https://kubernetes.github.io/ingress-nginx
  version: $VERSION
  targetNamespace: ingress-nginx
  valuesContent: |-
    controller:
      extraArgs:
        enable-ssl-passthrough: true
      hostNetwork: true
      hostPort:
        enabled: true
      service:
        enabled: false
      admissionWebhooks:
        enabled: false
      updateStrategy:
        strategy:
          type: Recreate
EOF
- Install the cert-manager controller.
VERSION="1.16.2"
sudo tee /var/lib/rancher/k3s/server/manifests/cert-manager.yaml > /dev/null <<EOF
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: cert-manager
  namespace: kube-system
spec:
  createNamespace: true
  chart: cert-manager
  version: $VERSION
  repo: https://charts.jetstack.io
  targetNamespace: cert-manager
  valuesContent: |-
    crds:
      enabled: true
    enableCertificateOwnerRef: true
EOF
Install the Axoflow Console
Complete the following steps to deploy the Axoflow Console Helm chart.
- Set the following environment variables as needed for your deployment.
  - The domain name for your Axoflow Console deployment. This must point to the IP address of the VM you're installing Axoflow Console on.
    BASE_HOSTNAME=your-host.your-domain
    Note: To use Google authentication, Axoflow Console must be deployed on a publicly accessible domain name (the $BASE_HOSTNAME must end with a valid top-level domain, for example, .com or .io).
  - The version number of Axoflow Console you want to deploy.
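    For example (a sketch; substitute the actual version you received from Axoflow):
    VERSION="<axoflow-console-version>"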
  - The internal IP address of the virtual machine. This is needed so the authentication service can communicate with the identity provider without relying on DNS.
    VM_IP_ADDRESS=$(hostname -I | cut -d' ' -f1)
    Check that the above command returned the correct IP; it might not be accurate if the host has multiple IP addresses.
  - The URL of the Axoflow Helm chart. You'll receive this URL from our team. You can request it using the contact form.
    AXOFLOW_HELM_CHART_URL="<the-chart-URL-received-from-Axoflow>"
- Install Axoflow Console. You can either let the chart create a password, or you can set an email and password manually.
  - Create password automatically: The following configuration file enables automatic password creation. If this basic setup works, you can configure another authentication provider later.
sudo tee /var/lib/rancher/k3s/server/manifests/axoflow.yaml > /dev/null <<EOF
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: axoflow
  namespace: kube-system
spec:
  createNamespace: true
  targetNamespace: axoflow
  version: $VERSION
  chart: $AXOFLOW_HELM_CHART_URL
  failurePolicy: abort
  valuesContent: |-
    baseHostName: $BASE_HOSTNAME
    dex:
      localIP: $VM_IP_ADDRESS
EOF
To retrieve your autogenerated password, use the following command.
kubectl -n axoflow get secret axoflow-admin -o go-template='Email: {{ .data.email | base64decode }}{{ printf "\n" }}Password: {{ .data.password | base64decode }}{{ printf "\n" }}'
Record the email and password; you'll need them to log in to Axoflow Console.
  - Set password manually: If you'd like to set the email/password manually, create the secret storing the required information, then set a unique identifier for the administrator user using uuidgen. Note that:
    - On Ubuntu, you must first install the uuid-runtime package by running sudo apt install uuid-runtime.
    - To set the bcrypt hash of the administrator password, you can generate it with the htpasswd tool, which is part of the httpd-tools package on Red Hat (install it using sudo dnf install httpd-tools), and the apache2-utils package on Ubuntu (install it using sudo apt install apache2-utils).
ADMIN_USERNAME="<your admin username>"
ADMIN_EMAIL="<your admin email>"
ADMIN_PASSWORD="<your admin password>"
ADMIN_UUID=$(uuidgen)
ADMIN_PASSWORD_HASH=$(echo $ADMIN_PASSWORD | htpasswd -BinC 10 $ADMIN_USERNAME | cut -d: -f2)
sudo tee /var/lib/rancher/k3s/server/manifests/axoflow.yaml > /dev/null <<EOF
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: axoflow
  namespace: kube-system
spec:
  createNamespace: true
  targetNamespace: axoflow
  version: $VERSION
  chart: $AXOFLOW_HELM_CHART_URL
  failurePolicy: abort
  valuesContent: |-
    baseHostName: $BASE_HOSTNAME
    pomerium:
      policy:
        emails:
          - $ADMIN_EMAIL
    dex:
      localIP: $VM_IP_ADDRESS
    defaultCredential:
      create: false
EOF
kubectl -n axoflow create secret generic axoflow-admin --from-literal=username=$ADMIN_USERNAME --from-literal=password_hash=$ADMIN_PASSWORD_HASH --from-literal=email=$ADMIN_EMAIL --from-literal=uuid=$ADMIN_UUID
- Wait a few minutes until everything is installed. Check that every component has been installed and is running:
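A command like the following lists the pods in every namespace (standard kubectl; it matches the NAMESPACE column in the output below):
kubectl get pods --all-namespaces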
The output should be similar to:
NAMESPACE NAME READY STATUS RESTARTS AGE
axoflow axolet-dist-55ccbc4759-q7l7n 1/1 Running 0 2d3h
axoflow chalco-6cfc74b4fb-mwbbz 1/1 Running 0 2d3h
axoflow configure-kcp-1-x6js6 0/1 Completed 2 2d3h
axoflow configure-kcp-cert-manager-1-wvhhp 0/1 Completed 1 2d3h
axoflow configure-kcp-token-1-xggg7 0/1 Completed 1 2d3h
axoflow controller-manager-5ff8d5ffd-f6xtq 1/1 Running 0 2d3h
axoflow dex-5b9d7b88b-bv82d 1/1 Running 0 2d3h
axoflow kcp-0 1/1 Running 1 (2d3h ago) 2d3h
axoflow kcp-cert-manager-0 1/1 Running 0 2d3h
axoflow pomerium-56b8b9b6b9-vt4t8 1/1 Running 0 2d3h
axoflow prometheus-server-0 2/2 Running 0 2d3h
axoflow telemetryproxy-65f77dcf66-4xr6b 1/1 Running 0 2d3h
axoflow ui-6dc7bdb4c5-frn2p 2/2 Running 0 2d3h
cert-manager cert-manager-74c755695c-5fs6v 1/1 Running 5 (144m ago) 2d3h
cert-manager cert-manager-cainjector-dcc5966bc-9kn4s 1/1 Running 0 2d3h
cert-manager cert-manager-webhook-dfb76c7bd-z9kqr 1/1 Running 0 2d3h
ingress-nginx ingress-nginx-controller-bcc7fb5bd-n5htp 1/1 Running 0 2d3h
kube-system coredns-ccb96694c-fhk8f 1/1 Running 0 2d3h
kube-system helm-install-axoflow-b9crk 0/1 Completed 0 2d3h
kube-system helm-install-cert-manager-88lt7 0/1 Completed 0 2d3h
kube-system helm-install-ingress-nginx-qvkhf 0/1 Completed 0 2d3h
kube-system local-path-provisioner-5cf85fd84d-xdlnr 1/1 Running 0 2d3h
(For details on what the different services do, see the service reference.)
Note
When using AxoRouter with an on-premises Axoflow Console deployment, you have to prepare the hosts before deploying AxoRouter. These steps are specific to on-premises Axoflow Console deployments, and are not needed when using the SaaS Axoflow Console.
Login to Axoflow Console
- If the domain name of Axoflow Console cannot be resolved from your desktop, add it to the /etc/hosts file in the following format. Use an IP address of Axoflow Console that can be accessed from your desktop.
<AXOFLOW-CONSOLE-IP-ADDRESS> <your-host.your-domain> idp.<your-host.your-domain> authenticate.<your-host.your-domain>
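For example, with a hypothetical IP address and domain:
192.0.2.10 axoflow-console.example.com idp.axoflow-console.example.com authenticate.axoflow-console.example.com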
- Open the https://<your-host.your-domain> URL in your browser.
- The on-premise deployment of Axoflow Console uses a self-signed certificate. If your browser warns about the related risks, accept them.
- Use the email address and password you got or set in the installation step to log in to Axoflow Console.
Axoflow Console service reference
| Service Name | Namespace | Purpose | Function |
| --- | --- | --- | --- |
| KCP | axoflow | Backend API | Kubernetes-like service with a built-in database that manages all the settings handled by the system |
| Chalco | axoflow | Frontend API | Serves the UI API calls and implements business logic for the UI |
| Controller-Manager | axoflow | Backend Service | Reacts to state changes in business entities and manages business logic for the backend |
| Telemetry Proxy | axoflow | Backend API | Receives telemetry from the agents |
| UI | axoflow | Dashboard | The frontend for Axoflow Console |
| Prometheus | axoflow | Backend API/Service | Monitoring component that stores time-series information, provides a query API, and manages alert rules |
| Alertmanager | axoflow | Backend API/Service | Monitoring component that sends alerts based on alerting rules |
| Dex | axoflow | Identity Connector/Proxy | Identity connector/proxy that allows customers to use their own identity provider (Google, LDAP, and so on) |
| Pomerium | axoflow | HTTP Proxy | Allows policy-based access to the Frontend API (Chalco) |
| Axolet Dist | axoflow | Backend API | Static artifact store that contains the agent binaries |
| Cert Manager (kcp) | axoflow | Automated certificate management tool | Manages certificates for agents |
| Cert Manager | cert-manager | Automated certificate management tool | Manages certificates for Axoflow components (Backend API, HTTP Proxy) |
| NGINX Ingress Controller | ingress-nginx | HTTP Proxy | Routes HTTP traffic between the frontend and backend APIs |
13.3.1 - Authentication
These sections show you how to configure on-premises and Kubernetes deployments of Axoflow Console to use different authentication backends.
13.3.1.1 - GitHub
This section shows you how to use GitHub OAuth2 as an authentication backend for Axoflow Console. It is assumed that you already have a GitHub organization. Complete the following steps.
Prerequisites
Register a new OAuth GitHub application for your organization. (For testing, you can create it under your personal account.) Make sure to:
- Set the Homepage URL to the URL of your Axoflow Console deployment (https://<your-console-host.your-domain>), for example, https://axoflow-console.example.com.
- Set the Authorization callback URL to https://auth.<your-console-host.your-domain>/callback, for example, https://auth.axoflow-console.example.com/callback.

- Save the Client ID of the app; you'll need it to configure Axoflow Console.
- Generate a Client secret and save it; you'll need it to configure Axoflow Console.
Configuration
- Configure authentication by editing the spec.dex.config section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.
- (Optional) If you've used our earlier example, delete the spec.dex.config.staticPasswords section.
- Add the spec.dex.config.connectors section to the file, like this:
dex:
  enabled: true
  localIP: $VM_IP_ADDRESS
  config:
    create: true
    connectors:
      - type: github
        # Required field for connector id.
        id: github
        # Required field for connector name.
        name: GitHub
        config:
          # Credentials can be string literals or pulled from the environment.
          clientID: <ID-of-GitHub-application>
          clientSecret: <Secret-of-GitHub-application>
          redirectURI: <idp.your-host.your-domain/callback>
          orgs:
            - name: <your-GitHub-organization>
Note that if you have a valid domain name that points to the VM, you can omit the localIP: $VM_IP_ADDRESS line.
- Edit the following fields. For details on the configuration parameters, see the Dex GitHub connector documentation.
  - connectors.config.clientID: The ID of the GitHub OAuth application.
  - connectors.config.clientSecret: The client secret of the GitHub OAuth application.
  - connectors.config.redirectURI: The callback URL of the GitHub OAuth application: https://auth.<your-console-host.your-domain>/callback, for example, https://auth.axoflow-console.example.com/callback.
  - connectors.config.orgs: List of the GitHub organizations whose members can access Axoflow Console. Restrict access to your organization.
- Configure authorization in the spec.pomerium.policy section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.
  - List the primary email addresses of the users who have read and write access to Axoflow Console under the emails section.
  - List the primary email addresses of the users who have read-only access to Axoflow Console under the readOnly.emails section.
Note
Use the primary GitHub email addresses of your users, otherwise the authorization will fail.
policy:
  emails:
    - username@yourdomain.com
  domains: []
  groups: []
  claim/groups: []
  readOnly:
    emails:
      - readonly-username@yourdomain.com
    domains: []
    groups: []
    claim/groups: []
For details on authorization settings, see Authorization.
- Save the file.
- Restart the dex deployment after changing the connector:
kubectl rollout restart deployment/dex -n axoflow
Expected output:
deployment.apps/dex restarted
- Open the main page of your Axoflow Console deployment in your browser. You'll be redirected to the GitHub authentication page. After completing the GitHub authentication, you can access Axoflow Console.
Getting help
You can troubleshoot common errors by running kubectl logs -n axoflow <dex-container-name>.
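If you don't know the pod name, you can address the deployment directly (standard kubectl syntax):
kubectl logs -n axoflow deployment/dex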
If you run into problems setting up the authentication or authorization, contact our support team.
13.3.1.2 - Google
This section shows you how to use Google OpenID Connect as an authentication backend for Axoflow Console. It is assumed that you already have a Google organization and Google Cloud Console access. Complete the following steps.
Prerequisites
- To use Google authentication, Axoflow Console must be deployed on a publicly accessible domain name (the $BASE_HOSTNAME must end with a valid top-level domain, for example, .com or .io).
- Configure OpenID Connect and create an OpenID credential for a Web application.
  - Make sure to set the Authorized redirect URIs to https://auth.<your-console-host.your-domain>/callback, for example, https://auth.axoflow-console.example.com/callback.
  - Save the Client ID and the Client secret of the application; you'll need them to configure Axoflow Console.
For details on setting up OpenID Connect, see the official documentation.
Configuration
- Configure authentication by editing the spec.dex.config section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.
- (Optional) If you've used our earlier example, delete the spec.dex.config.staticPasswords section.
- Add the spec.dex.config.connectors section to the file, like this:
connectors:
  - type: google
    id: google
    name: Google
    config:
      # Connector config values starting with a "$" will read from the environment.
      clientID: <ID-of-Google-application>
      clientSecret: <Secret-of-Google-application>
      # Dex's issuer URL + "/callback"
      redirectURI: <idp.your-host.your-domain/callback>
      # Set the value of the `prompt` query parameter in the authorization request.
      # The default value is "consent" when not set.
      promptType: consent
      # Google supports whitelisting allowed domains when using G Suite
      # (Google Apps). The following field can be set to a list of domains
      # that can log in:
      hostedDomains:
        - <your-domain>
- Edit the following fields. For details on the configuration parameters, see the Dex Google connector documentation.
  - connectors.config.clientID: The ID of the Google application.
  - connectors.config.clientSecret: The client secret of the Google application.
  - connectors.config.redirectURI: The callback URL of the Google application: https://auth.<your-console-host.your-domain>/callback, for example, https://auth.axoflow-console.example.com/callback.
  - connectors.config.hostedDomains: The domain where Axoflow Console is deployed, for example, example.com. Your users must have email addresses for this domain at Google.
- Configure authorization in the spec.pomerium.policy section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.
  - List the email addresses of the users who have read and write access to Axoflow Console under the emails section.
  - List the email addresses of the users who have read-only access to Axoflow Console under the readOnly.emails section.
Note
The email addresses of your users must have the same domain as the one set under `connectors.config.hostedDomains`, otherwise the authorization will fail.
policy:
  emails:
    - username@yourdomain.com
  domains: []
  groups: []
  claim/groups: []
  readOnly:
    emails:
      - readonly-username@yourdomain.com
    domains: []
    groups: []
    claim/groups: []
For details on authorization settings, see Authorization.
- Save the file.
- Restart the dex deployment after changing the connector:
kubectl rollout restart deployment/dex -n axoflow
Expected output:
deployment.apps/dex restarted
- Open the main page of your Axoflow Console deployment in your browser. You'll be redirected to the Google authentication page. After completing the Google authentication, you can access Axoflow Console.
Getting help
You can troubleshoot common errors by running kubectl logs -n axoflow <dex-container-name>.
If you run into problems setting up the authentication or authorization, contact our support team.
13.3.1.3 - LDAP
This section shows you how to use LDAP as an authentication backend for Axoflow Console. In the examples we used the public demo service of FreeIPA as an LDAP server. It is assumed that you already have an LDAP server in place. Complete the following steps.
- Configure authentication by editing the spec.dex.config section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.
- (Optional) If you've used our earlier example, delete the spec.dex.config.staticPasswords section.
- Add the spec.dex.config.connectors section to the file, like this:
CAUTION:
This example shows a simple configuration suitable for testing. In production environments, make sure to:
- configure TLS encryption to access your LDAP server,
- retrieve the bind password from a vault or environment variable. Note that if the bind password contains the `$` character, you must set it in an environment variable and pass it like `bindPW: $LDAP_BINDPW`.
dex:
  enabled: true
  localIP: $VM_IP_ADDRESS
  config:
    create: true
    connectors:
      - type: ldap
        name: OpenLDAP
        id: ldap
        config:
          host: ipa.demo1.freeipa.org
          insecureNoSSL: true
          # This would normally be a read-only user.
          bindDN: uid=admin,cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org
          bindPW: Secret123
          usernamePrompt: Email Address
          userSearch:
            baseDN: dc=demo1,dc=freeipa,dc=org
            filter: "(objectClass=person)"
            username: mail
            idAttr: uid
            emailAttr: mail
            nameAttr: cn
          groupSearch:
            baseDN: dc=demo1,dc=freeipa,dc=org
            filter: "(objectClass=groupOfNames)"
            userMatchers:
              # A user is a member of a group when their DN matches
              # the value of a "member" attribute on the group entity.
              # "DN" (case sensitive) is a special attribute name. It indicates that
              # this value should be taken from the entity's DN, not an attribute on
              # the entity.
              - userAttr: DN
                groupAttr: member
            # The group name should be the "cn" value.
            nameAttr: cn
- Edit the following fields. For details on the configuration parameters, see the Dex LDAP connector documentation.
  - connectors.config.host: The hostname and optionally the port of the LDAP server in "host:port" format.
  - connectors.config.bindDN and connectors.config.bindPW: The DN and password for an application service account that the connector uses to search for users and groups.
  - connectors.config.userSearch.baseDN and connectors.config.groupSearch.baseDN: The base DNs for the user and group searches.
- Configure authorization in the spec.pomerium.policy section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.
  - List the names of the LDAP groups whose members have read and write access to Axoflow Console under claim/groups. (Group managers in the example.)
  - List the names of the LDAP groups whose members have read-only access to Axoflow Console under readOnly.claim/groups. (Group employee in the example.)
policy:
  emails: []
  domains: []
  groups: []
  claim/groups:
    - managers
  readOnly:
    emails: []
    domains: []
    groups: []
    claim/groups:
      - employee
For details on authorization settings, see Authorization.
- Save the file.
- Restart the dex deployment after changing the connector:
kubectl rollout restart deployment/dex -n axoflow
Expected output:
deployment.apps/dex restarted
Getting help
You can troubleshoot common errors by running kubectl logs -n axoflow <dex-container-name>.
If you run into problems setting up the authentication or authorization, contact our support team.
13.3.2 - Authorization
These sections show you how to configure the authorization of Axoflow Console with different authentication backends.
You can configure authorization in the spec.pomerium.policy section of the Axoflow Console manifest. In on-premise deployments, the manifest is in the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.
You can list individual email addresses and user groups that have read and write access (using the keys under spec.pomerium.policy) and read-only access (using the keys under spec.pomerium.policy.readOnly) to Axoflow Console. Which key to use depends on the authentication backend configured for Axoflow Console:
- emails: Email addresses, used with static passwords and GitHub authentication. With GitHub authentication, use the primary GitHub email addresses of your users, otherwise the authorization will fail.
- claim/groups: LDAP groups, used with LDAP authentication. For example:
policy:
  emails: []
  domains: []
  groups: []
  claim/groups:
    - managers
  readOnly:
    emails: []
    domains: []
    groups: []
    claim/groups:
      - employee
13.3.3 - Prepare AxoRouter hosts
When using AxoRouter with an on-premises Axoflow Console deployment, you have to complete the following steps on the hosts you want to deploy AxoRouter on. These steps are specific to on-premises Axoflow Console deployments, and are not needed when using the SaaS Axoflow Console.
- If the domain name of Axoflow Console cannot be resolved from the AxoRouter host, add it to the /etc/hosts file of the AxoRouter host in the following format. Use an IP address of Axoflow Console that can be accessed from the AxoRouter host.
<AXOFLOW-CONSOLE-IP-ADDRESS> <AXOFLOW-CONSOLE-BASE-URL> kcp.<AXOFLOW-CONSOLE-BASE-URL> telemetry.<AXOFLOW-CONSOLE-BASE-URL> idp.<AXOFLOW-CONSOLE-BASE-URL> authenticate.<AXOFLOW-CONSOLE-BASE-URL>
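For example, with a hypothetical IP address and base URL:
192.0.2.10 axoflow-console.example.com kcp.axoflow-console.example.com telemetry.axoflow-console.example.com idp.axoflow-console.example.com authenticate.axoflow-console.example.com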
- Check connectivity: running curl -I https://<your-host.your-domain> should return a valid HTTP/2 302 response.
- Now you can deploy AxoRouter on the host.
14 - Axoflow reference
This section describes the command-line tools and API reference of the Axoflow Platform.
14.1 - Manual pages
14.1.1 - axorouter-ctl
Name
axorouter-ctl — Utility to control and troubleshoot AxoRouter
Synopsis
axorouter-ctl [command] [options]
Description
The axorouter-ctl application is a utility that is needed mainly for troubleshooting. For example, it can be used for the tasks described in the following sections.
Control syslog container
Usage: axorouter-ctl control syslog <command> <options>
This command allows you to access and control the syslog container of AxoRouter. Essentially, it's a wrapper for the syslog-ng-ctl command of AxoSyslog. Usually, you don't have to use this command unless our support team instructs you to.
For example:
- Enable debug messages for troubleshooting: axorouter-ctl control syslog log-level verbose
- Display the current configuration: axorouter-ctl control syslog config --preprocessed
- Reload the configuration: axorouter-ctl control syslog reload
For details, see the AxoSyslog documentation.
Show logs
Usage: axorouter-ctl logs <target> <journalctl-options>
Displays the logs of the target container: syslog or wec. WEC logs are available only if the AxoRouter host has a Windows Event Collector (WEC) connector configured.
The axorouter-ctl logs command is a wrapper for the journalctl command, so you can add regular journalctl options to the command. For example, add -n 50 to show the 50 most recent lines: axorouter-ctl logs syslog -n 50
Prune images
Usage: axorouter-ctl prune
Removes all unused images from the local store, similarly to podman-image-prune.
Pull images
Usage: axorouter-ctl pull
Downloads the images needed for AxoRouter. This is usually not needed unless you have a custom environment, for example, proxy settings or pull credentials that aren't available when the AxoRouter container is run.
Shell access to AxoRouter
Usage: axorouter-ctl shell <target>
Opens a shell into the target container (syslog or wec). Usually needed only for troubleshooting purposes, when requested by our support team.
Statistics and metrics
Usage: axorouter-ctl stats syslog <legacy|prometheus>
Displays the statistics and metrics of the syslog container in Prometheus or legacy format, from the underlying AxoSyslog process. For details, see the AxoSyslog documentation.
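For example, to display the metrics in Prometheus format:
axorouter-ctl stats syslog prometheus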
Version
Usage: axorouter-ctl version
Shows the version numbers of the underlying AxoSyslog build, for example:
0.58.0
axosyslog 4 (4.11.0)
Config version: 4.2
Installer-Version: 4.11.0
15 - Legal notice
Copyright of Axoflow Inc. All rights reserved.
The Axoflow, AxoRouter, and AxoSyslog trademarks and logos are trademarks of Axoflow Inc. All other trademarks are property of their respective owners.
16 - Integrations
1Password
Manages and secures user credentials, secrets, and vaults for individuals and organizations.
A10 Networks
Delivers application load balancing, traffic management, and DDoS protection for enterprise networks.
Amazon
Tracks AWS account activity and API usage for auditing, governance, and compliance monitoring.
Monitors AWS resources and applications by collecting metrics, logs, and setting alarms.
Scalable cloud storage service for storing and retrieving any amount of data via object storage.
Centralizes and normalizes security data from AWS and third-party sources for analytics and compliance.
Axoflow
Manages and routes event flow through pipelines, including filtering, parsing, and transformations.
High-performance, configurable syslog service for collecting, processing, and forwarding log data.
Box, Inc.
Cloud-based content management and file sharing platform designed for secure collaboration and storage.
Broadcom
Secures web traffic through policy enforcement, SSL inspection, and real-time threat protection.
Provides network virtualization, micro-segmentation, and security for software-defined data centers.
Check Point Software
Detects and blocks botnet communications and command-and-control traffic to prevent malware infections.
Protects endpoints from viruses, ransomware, and other malware using signature and behavior analysis.
Prevents phishing attacks by analyzing email content and links to block credential theft attempts.
Blocks spam and malicious email content using reputation checks and email filtering techniques.
Unified threat prevention platform delivering firewall, VPN, and intrusion prevention capabilities.
Legacy Check Point management client used to interface with security policies and logs.
Utility used to update configuration and database files for Check Point Multi-Domain environments.
Command-line tool to extract, query, or update Check Point configuration and policy databases.
Checks endpoint status and posture before granting network access, enforcing security policies.
Centralized platform for managing endpoint protection, updates, and policy enforcement.
Next-generation firewall providing intrusion prevention, application control, and threat protection.
Analyzes security incidents on endpoints to uncover attack vectors and malicious activity.
Placeholder integration used for generic or unsupported Check Point log sources.
Facilitates secure password reset processes for users across integrated environments.
Decrypts and inspects HTTPS traffic to detect hidden threats within encrypted web sessions.
Provides configuration profiles for secure mobile access and web filtering on iOS devices.
Detects and blocks known and unknown exploits, malware, and vulnerabilities in network traffic.
CLI tool for querying multi-domain configurations and policies in Check Point environments.
Secures USB ports and encrypts removable media to protect sensitive data on endpoints.
Enables secure remote access to corporate apps and data from mobile devices.
Implements bandwidth control and traffic prioritization policies for optimized network usage.
Accesses and queries internal policy or object databases in Check Point systems.
Graphical interface for managing Check Point security policies, logs, and monitoring.
Tool for updating and managing licenses, software, and hotfixes in Check Point environments.
Enables export of Check Point logs via syslog to external monitoring and SIEM platforms.
Emulates files in a virtual environment to detect and block advanced persistent threats and exploits.
Controls and logs web access based on URL categories and custom site rules to enforce policy.
Provides programmatic access to Check Point security management through RESTful API endpoints.
Cisco
Provides application-aware load balancing, SSL offload, and traffic control for Cisco networks.
Centralizes network access control with RADIUS and TACACS+ for authentication and authorization.
Provides stateful firewall, VPN support, and advanced threat protection for secure network perimeters.
Delivers enterprise-grade Ethernet switching with high performance, security, and scalable management.
Provides out-of-band server management for Cisco UCS, enabling hardware monitoring and configuration.
Provides software-defined networking, policy automation, and analytics for enterprise infrastructure.
Delivers two-factor authentication and secure access controls to protect users and applications.
Protects email systems from spam, phishing, malware, and data loss with advanced threat filtering.
Provides next-gen firewall features including intrusion prevention, app control, and malware protection.
Unifies firewall, VPN, and intrusion prevention into a single software for comprehensive threat defense.
Delivers multi-context, high-performance firewall services integrated into Cisco Catalyst switches.
Network operating system for Cisco routers and switches, enabling routing, switching, and security.
Manages network access control and enforces policies with user and device authentication capabilities.
Cloud-managed network appliance offering firewall, VPN, SD-WAN, and security in a single platform.
Provides advanced threat protection and policy-based access control.
Legacy firewall appliance delivering stateful inspection and secure network access control.
Enables video conferencing control and call routing for Cisco TelePresence systems and endpoints.
Delivers unified voice, video, messaging, and mobility services in enterprise IP telephony systems.
Infrastructure solution combining compute, storage, and networking in a single system.
Centralized management platform for Cisco Unified Computing System (UCS) servers and resources.
Software-defined WAN solution providing secure connectivity, centralized control, and traffic optimization.
High-performance, modular network operating system for carrier-grade routing and scalability.
Citrix
Offers application delivery, load balancing, and security features for optimized app performance.
ClickHouse
Coda
All-in-one collaborative document platform combining text, spreadsheets, and integrations into a single workspace.
Confluent
Corelight
Provides network detection and response by analyzing traffic for advanced threats and anomalous behavior.
CrowdStrike
Cloud-native platform for endpoint detection, threat hunting, and security analytics at scale.
CyberArk
Analyzes privileged account behavior to detect threats and suspicious activity in real time.
Stores and manages privileged credentials, session recordings, and access control policies securely.
Databricks
Unified analytics platform for data engineering, machine learning, and collaborative data science workflows.
Datadog
Observability platform that provides monitoring for infrastructure, applications, logs, and security events.
DELL
Delivers firewall, VPN, and deep packet inspection to protect networks from cyber threats and intrusions.
Elastic
Distributed search and analytics engine for real-time log analysis, observability, and data indexing.
F5 Networks
Provides load balancing, traffic management, and application security for optimized service delivery.
Forcepoint
Protects email systems from spam, phishing, malware, and data exfiltration using advanced threat defense.
Next-gen firewall with deep packet inspection, policy enforcement, and integrated intrusion prevention.
Provides web traffic filtering, malware protection, and data loss prevention for secure internet access.
Fortinet
Enterprise firewall platform offering threat protection, VPN, and traffic filtering for secure networking.
Secures inbound and outbound email with spam filtering, malware protection, and advanced threat detection.
Web application firewall protecting websites from attacks like XSS, SQL injection, and bot threats.
Fortra
Monitors IBM i system activity and forwards security events for centralized analysis in SIEM platforms.
Generic
Blackhole destination for testing or discarding events without storing them.
Standard protocol for system logs. Our source automatically detects, parses, and classifies incoming syslog events without pre-configuration.
Receives data via HTTP POST requests for flexible event-driven integration.
Google
Captures administrative events and configuration changes across the Google Workspace environment.
Generates security and compliance alerts for suspicious user activity within Google Workspace services.
Asynchronous messaging service for ingesting and distributing event data between services and applications.
Grafana
Scalable log aggregation system optimized for storing and querying logs by labels using Grafana.
HAProxy
High-performance TCP/HTTP load balancer and reverse proxy for increasing reliability and scalability.
Hewlett Packard Enterprise
Enterprise-grade networking gear for wireless, wired, and security management across distributed networks.
Imperva
Cloud-based WAF, DDoS protection, and bot mitigation service for securing web applications and APIs.
Provides on-prem web application, database, and file security with granular activity monitoring.
Infoblox
Delivers secure DNS, DHCP, and IPAM (DDI) services with centralized network control and automation.
Internet Systems Consortium
Ivanti
Provides dynamic IP address assignment and network configuration for DHCP-enabled devices.
Juniper Networks
Junos OS is the network operating system for Juniper physical and virtual networking and security products.
Kafka
Distributed event streaming platform used for building real-time data pipelines and stream processing.
Kaspersky
Protects endpoints from malware, ransomware, and intrusions with antivirus, firewall, and threat detection.
Microsoft
Cloud-based object storage optimized for unstructured data like backups, logs, and media files.
Big data streaming platform to ingest and process events.
Collects and analyzes telemetry from applications and infrastructure to monitor performance and usage.
Monitors cloud app usage, detects anomalies, and enforces security policies across SaaS services.
Provides audit and event data for code repositories, development workflows, and security scans.
Generates user activity and audit logs from Exchange, SharePoint, Teams, and other Office 365 services.
Cloud-native SIEM and SOAR platform for collecting, analyzing, and responding to security threats.
Event logs from core services like security, system, DNS, and DHCP for operational and forensic analysis.
Collects Windows Event Log data via WEC for centralized log analysis and monitoring.
MikroTik
Router operating system providing firewall, bandwidth management, routing, and hotspot functionality.
NetFlow Logic
Aggregates and transforms flow data (NetFlow, IPFIX) into actionable security and performance insights.
Netgate
Open-source firewall and router platform with VPN, traffic shaping, and intrusion detection capabilities.
Netmotion
Provides secure, optimized remote access with performance monitoring for mobile and distributed workforces.
NETSCOUT
Edge-based DDoS protection and threat mitigation system to block attacks before they enter the network.
Monitors and mitigates advanced persistent threats and malware with inline packet inspection.
Generic UNIX/Linux host
A generic placeholder for program classifications
Okta
Identity-as-a-Service platform offering authentication and authorization for apps and APIs.
OpenObserve
High-performance log analytics platform supporting structured log ingestion, search, and visualization.
OpenSearch
Open-source search and analytics suite for log ingestion, indexing, dashboards, and alerting.
OpenTelemetry
Collects observability data like traces, metrics, and logs from applications using the OTLP protocol.
OpenText
SIEM platform for collecting, correlating, and analyzing security event data across IT environments.
Allows users to securely reset their own passwords without IT assistance, reducing helpdesk load.
Palo Alto Networks
Security orchestration, automation, and response platform for threat detection and incident management.
Firewall operating system delivering network security features including traffic control and threat prevention.
Ping Identity
A centralized access security solution which provides secure access to applications and APIs down to the URL level
Progress
Detects network anomalies and threats through flow-based behavior analysis and machine learning.
Riverbed
Software-defined WAN solution for centralized network management and secure cloud connectivity.
WAN optimization appliance that accelerates application performance and reduces bandwidth usage.
RSA
Manages two-factor authentication using RSA SecurID tokens for secure access to enterprise resources.
SecureAuth
Delivers identity and access management with adaptive authentication and single sign-on capabilities.
Skyhigh Security
Inspects and filters web traffic to protect against malware, enforce policies, and prevent data loss.
Slack
Collaboration platform that captures workspace messages, file uploads, and user activity for auditing.
Snowflake
Cloud data platform for storing, querying, and analyzing large-scale structured and semi-structured data.
Splunk
Ingests, indexes, and visualizes data for monitoring, analysis, and alerting.
Receive data from Splunk.
Agent that collects and forwards logs from remote systems to a central Splunk instance.
Sumo Logic
Cloud-native platform for log management, metrics analysis, and security analytics.
Superna
Manages and automates data protection, DR, and reporting for PowerScale environments.
Thales
Provides data encryption, key management, and access controls across cloud and on-premise environments.
Trellix
Centralized policy and configuration management platform for Trellix security products.
Manages endpoint security policies and threat responses across distributed environments.
Analyzes and filters email traffic to block phishing, malware, and targeted email-based threats.
Detects and responds to advanced threats on endpoints using behavior-based analysis and threat intel.
Appliance for detecting and blocking advanced threats through inline malware inspection.
Trend Micro
Provides anti-malware, intrusion prevention, and log inspection for cloud and on-prem servers.
Ubiquiti
Manages network devices including routers, switches, and access points with centralized control.
Varonis
Monitors data access and permissions to detect insider threats and automate compliance reporting.
Vectra AI
Detects and investigates cyberattacks across cloud, data center, and enterprise networks using AI.
VMware
Bare-metal hypervisor that enables server virtualization and resource allocation for virtual machines.
Centralized management platform for VMware vSphere environments and virtual infrastructure control.
Wiz
Cloud security platform offering visibility into misconfigurations, vulnerabilities, and compliance risks.
Zscaler
Cloud-based secure internet gateway that inspects traffic for threats and enforces policies.
Provides secure remote access to internal apps without exposing them to the public internet.