Axoflow Platform
Axoflow is a security data curation pipeline that empowers hybrid enterprises to reduce complexity and costs by 50% or more by automatically curating carrier-grade data in the pipeline.
- The platform automatically identifies the security data you feed it and automatically fixes it (for example, missing or duplicated timestamps), parses it, then transforms it into the schema of your choice.
- Part of the identification process is enriching log data with relevant metadata we call labels. Labels add missing context to logs at the edge, like geolocation. This context can power your SOC team and lets you make routing decisions based on it.
- Your CISO will be happy to hear that the automatic curation process also includes a reduction step that can save your enterprise up to 50% on SIEM data ingestion costs, while preserving data integrity.
- Monitoring and alerting complement our platform, making it easy to identify, fix, and get ahead of data transport issues that are especially prevalent with syslog data. This way you don’t need to worry about data breaches or alerts not firing for your SOC analysts.
- The platform comes with our vendor-agnostic agents for network devices, cloud sources, Kubernetes, and other sources, and we can also instrument your existing collection infrastructure. This eliminates SIEM lock-in so you are free to use your own data wherever your enterprise needs it.
And all of the above works out-of-the box, automatically, at carrier-grade scale.
Use cases
- Reducing data cleaning and normalization efforts
- Decoupling parsing from SIEM
- Breaking vendor lock-in
- Using or migrating to another SIEM
- Migrating to a cloud-based SIEM
- Routing, data tiering
- Data reduction
- Diagnosing and avoiding data loss
- Windows Event Collection (WEC)
Would you like to try Axoflow? Request a zero-commitment demo or a sandbox environment to experience the power of our platform.
Ready to get started? Go to our Getting started section for the first steps.
1 - Introduction
2 - Axoflow architecture
Axoflow provides an end-to-end pipeline that automates the collection, management, and loading of your security data in a vendor-agnostic way. The following figure highlights the Axoflow data flow:
Axoflow architecture
The architecture of Axoflow consists of two main elements: the Axoflow Console and the Data Plane.
- The Axoflow Console is primarily concerned with highlighting the metadata of each event. This includes the source from which it originated, the size in bytes (and event count over time), its destination, and any other element which describes the data.
- The Data Plane includes collector agents and processing engines (like AxoRouter) that collect, classify, filter, transform, and deliver telemetry data to its proper destinations (SIEMs, storage), and provide metrics to the Axoflow Console. The components of the Data Plane can be managed from the Axoflow Console, or can be independent.
Pipeline components
A telemetry pipeline consists of the following high-level components:
-
Data Sources: Data sources are the endpoints of the pipeline that generate the logs and other telemetry data you want to collect. For example, firewalls and other appliances, Kubernetes clusters, application servers, and so on can all be data sources. Data sources send their data either directly to a destination, or to a router.
Axoflow provides several log collecting agents and solutions to collect data in different environments, including connectors for cloud services, Kubernetes clusters, Linux servers, and Windows servers.
-
Routers: Routers (also called relays or aggregators) collect the data from a set of data sources and transport it to the destinations.
AxoRouter can collect, curate, and enrich the data: it automatically identifies your log sources and fixes common errors in the incoming data. It also converts the data into a format that best suits the destination to optimize ingestion speed and data quality.
-
Destinations: Destinations are your SIEM and storage solutions where the telemetry pipeline delivers your security data.
Your telemetry pipeline can consist of managed and unmanaged components. You can deploy and configure managed components from the Axoflow Console. Axoflow provides several managed components that help you collect or fetch data from your various data sources, or act as routers.
Axoflow Console
Axoflow Console is the data visualization and management UI of Axoflow. Available both as a SaaS and an on-premises solution, it collects and visualizes the metrics received from the pipeline components to provide insight into the details of your telemetry pipeline and the data it processes.
AxoRouter
AxoRouter is a router (aggregator) and data curation engine: it collects all kinds of telemetry and security data and has all the low-level functions you would expect of log-forwarding agents and routers. AxoRouter can also curate and enrich the collected data: it automatically identifies your log sources and fixes common errors in the incoming data: for example, it corrects missing hostnames, invalid timestamps, formatting errors, and so on.
AxoRouter also has a range of zero-maintenance connectors for various networking and security products (for example, switches, firewalls, and web gateways), so it can classify the incoming data (by recognizing the product that is sending it), and apply various data curation and enrichment steps to reduce noise and improve data quality.
Before sending your data to its destination, AxoRouter automatically converts the data into a format that best suits the destination to optimize ingestion speed and data quality. For example, when sending data to Splunk, setting the proper sourcetype and index is essential.
In addition to curating and formatting your data, AxoRouter also collects detailed metrics about the processed data. These real-time metrics give you insight into the status of the telemetry pipeline and its components, providing end-to-end observability into your data flows.
Axolet
Axolet is a monitoring and management agent that integrates with the local log collector (like AxoSyslog, Splunk Connect for Syslog, or syslog-ng) that runs on the data source and provides detailed metrics about the host and its data traffic to the Axoflow Console.
3 - Deployment scenarios
Thanks to its flexible deployment modes, you can quickly insert an Axoflow processing node (called AxoRouter) transparently into your telemetry pipeline and gain instant benefits:
- Data reduction
- Improved SIEM accuracy
- Configuration UI for routing data
- Automatic data classification
- Metrics about log ingestion, processing, and data drops
- Analytics about the transported data
- Health check, highlighting anomalies
After the first step, you can further integrate your pipeline by deploying Axoflow agents to collect and manage your security data, or by onboarding your existing log collector agents (for example, syslog-ng). Let’s see what these scenarios look like in detail.
Transparent router deployment
In transparent mode, you deploy an AxoRouter in front of your SIEM and configure the syslog server to send the logs to AxoRouter instead of the SIEM. The transparent deployment method is a quick, minimally invasive way to get instant benefits and value from Axoflow. AxoRouter:
- automatically identifies the sending device or application,
- applies device-specific, zero-maintenance parsers to the incoming data, and
- enriches the incoming messages with metadata.
You can use the Axoflow Console as a SaaS solution or deploy it on premises. We also support hybrid and air-gapped environments.
Axoflow as SaaS:
Axoflow on premises:
Router and edge deployment
Axoflow provides agents to collect data from all kinds of sources: Kubernetes clusters, cloud sources, security appliances, as well as regular Windows or Linux-based servers. (If you’d prefer to keep using your existing syslog infrastructure instead of the Axoflow agents, see Onboard an existing syslog infrastructure).
Using the Axoflow collector agents gives you:
- Reliable transport: Between its components, Axoflow transports security data using the reliable OpenTelemetry protocol (OTLP) for high performance, and to avoid losing messages.
- Managed components: You can configure, manage, monitor, and troubleshoot all these components from the Axoflow Console. For example, you can sample the data flowing through each component.
- Metrics: Detailed metrics from every collector provide unprecedented insight into the status of your telemetry pipeline.
Onboard an existing syslog infrastructure
If your organization already has a syslog architecture in place, Axoflow provides ways to reuse it. This allows you to integrate your existing infrastructure with Axoflow, and optionally – in a later phase – replace your log collectors with the agents provided by Axoflow.
Read-only mode
In this scenario, you install Axolet on the data source. Axolet is a monitoring (and management) agent that integrates with the local log collector, like AxoSyslog, Splunk Connect for Syslog, or syslog-ng, and sends detailed metrics about the host and its data traffic to the Axoflow Console, so you can monitor the host from the Axoflow Console.
Unmanaged AxoRouter deployments
In this mode, you install AxoRouter on the data source to replace its local collector agent, and manage it manually. That way you get the functional benefits of using AxoRouter (our aggregator and data curation engine) to collect and classify your data but can manage its configuration as you see fit. This gives you all the benefits of the read-only mode (since AxoRouter includes Axolet as well), and in addition, it provides:
- Advanced and more detailed metrics about the log ingestion, processing, data drops, delays
- More detailed analytics about the transported data
- Access to the FilterX processing engine
- Ability to receive OpenTelemetry data
- Acts as a Windows Event Collector server, allowing you to collect Windows events
- Optimized output for the specific SIEMs
- Data reduction
Managed AxoRouter deployments
This deployment mode is similar to the previous one, but instead of writing configurations manually, you use the centralized management UI of Axoflow Console to manage your AxoRouter instances. This provides the tightest integration and the most benefits, in addition to those of the unmanaged use case.
4 - Try for free
To see Axoflow in action, you have a number of options:
- Watch some of the product videos on our blog.
- Request a demo sandbox environment. This pre-deployed environment contains a demo pipeline complete with destinations, routers, and sources that generate traffic, and you can check the metrics, use the analytics, modify the flows, and so on to get a feel of using a real Axoflow deployment.
- Request a free evaluation version. That’s an empty cloud deployment you can use for testing and PoC: you can onboard your own pipeline elements, add sources and destinations, and see how Axoflow performs with your data!
5 - Getting started
This guide shows you how to get started with Axoflow. You’re going to install AxoRouter, and configure or create a source to send data to AxoRouter. You’ll also configure AxoRouter to forward the received data to your destination SIEM or storage provider. The resulting topology will look something like this:
Why use Axoflow
Using the Axoflow security data pipeline automatically corrects and augments the security data you collect, resulting in high-quality, curated, SIEM-optimized data. It also removes redundant data to reduce storage and SIEM costs. In addition, it automates pipeline configuration and provides metrics and alerts for your telemetry data flows.
Prerequisites
You’ll need:
-
An Axoflow subscription, or access to a free evaluation version.
-
A data source. This can be any host that you can configure to send syslog or OpenTelemetry data to the AxoRouter instance that you’ll install. If you don’t want to change the configuration of an existing device, you can use a virtual machine or a Docker container on your local computer.
-
A host that you’ll install AxoRouter on. This can be a separate Linux host, or a virtual machine running on your local computer.
AxoRouter should work on most Red Hat- and Debian-compatible Linux distributions. For production environments, we recommend using Red Hat 9.
-
Access to a supported SIEM or storage provider, like Splunk or Amazon S3. For a quick test of Axoflow, you can use a free Splunk or OpenObserve account as well.
Network access
The hosts must be able to access the following domains related to the Axoflow Console:
<your-tenant-id>.cloud.axoflow.io
: HTTPS traffic on TCP port 443, needed to download the binaries for Axoflow software (like Axolet and AxoRouter).
kcp.<your-tenant-id>.cloud.axoflow.io
: HTTPS (mutual TLS) traffic on TCP port 443 for management traffic.
telemetry.<your-tenant-id>.cloud.axoflow.io
: HTTPS (mutual TLS) traffic on TCP port 443, where Axolet sends the metrics of the host.
us-docker.pkg.dev
: HTTPS traffic on TCP port 443, for pulling container images (AxoRouter only).
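You can verify that the required domains are reachable before installing. This is a minimal sketch using nc: the -z flag only tests that a TCP connection to port 443 succeeds (the kcp and telemetry endpoints additionally require mutual TLS, which this check doesn’t exercise):
for d in <your-tenant-id>.cloud.axoflow.io kcp.<your-tenant-id>.cloud.axoflow.io telemetry.<your-tenant-id>.cloud.axoflow.io us-docker.pkg.dev; do
  nc -zv -w 5 "$d" 443   # -z: connect without sending data, -v: verbose, -w 5: 5-second timeout
done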
Log in to the Axoflow Console
Verify that you have access to the Axoflow Console.
- Open https://<your-tenant-id>.axoflow.io/ in your browser.
- Log in using Google Authentication.
Add a destination
Add the destination where you’re sending your data. For a quick test, you can use a free Splunk or OpenObserve account.
Add a Splunk Cloud destination. For other destinations, see Destinations.
Prerequisites
-
Enable the HTTP Event Collector (HEC) on your Splunk deployment if needed. On Splunk Cloud Platform deployments, HEC is enabled by default.
-
Create a token for Axoflow to use in the destination. When creating the token, use the syslog source type.
For details, see Set up and use HTTP Event Collector in Splunk Web.
-
If you’re using AxoRouter, create the indexes where Axoflow sends the log data. Which indexes are needed depends on the sources you have, but create at least the following event indexes: axoflow, infraops, netops, netfw, and osnix (for unclassified messages). Check your sources in the Data sources section for a detailed list of the indexes their data is sent to.
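Before configuring the destination in Axoflow, you can sanity-check the HEC token with a quick curl request (a hypothetical smoke test; substitute your own URL and token; the -k flag skips certificate verification, which is only acceptable for trial deployments with self-signed certificates):
curl -k "https://<your-splunk-tenant-id>.splunkcloud.com:8088/services/collector/event" \
  -H "Authorization: Splunk <your-hec-token>" \
  -d '{"event": "Axoflow HEC smoke test", "sourcetype": "syslog", "index": "axoflow"}'
# A healthy endpoint responds with: {"text":"Success","code":0}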
Steps
-
Create a new destination.
- Open the Axoflow Console.
- Select Topology.
- Select + > Destination.
-
Configure the destination.
-
Select Splunk.
-
Enter a name for the destination.
-
Enter your Splunk URL into the URL field, for example, https://<your-splunk-tenant-id>.splunkcloud.com:8088 for Splunk Cloud Platform free trials, or https://<your-splunk-tenant-id>.splunkcloud.com for Splunk Cloud Platform instances.
-
Enter the token you’ve created into the Token field.
-
Disable the Verify server certificate option unless your deployment has a valid, non-self-signed certificate. Free Splunk Cloud accounts have self-signed certificates.
-
Select Create.
Deploy an AxoRouter instance
Deploy an AxoRouter instance that will route, curate, and enrich your log data.
Deploy AxoRouter on Linux. For other platforms, see AxoRouter.
-
Select Provisioning > Select type and platform.
-
Select the type (AxoRouter) and platform (Linux). The one-liner installation command is displayed.
If needed, set the Advanced options (for example, proxy settings) to modify the installation parameters. Usually, you don’t have to use advanced options unless the Axoflow support team instructs you to do so.
-
Open a terminal on the host where you want to install AxoRouter.
-
Run the one-liner, then follow the on-screen instructions.
Note
Running the provisioning command with sudo would mask environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don’t add the sudo command to the provisioning command.
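To see why, compare how a variable set in the calling shell survives each invocation (the proxy address is a hypothetical example; with default sudoers settings, sudo resets the environment):
http_proxy=http://proxy.example.com:3128 sh -c 'echo "proxy: $http_proxy"'        # prints the proxy address
http_proxy=http://proxy.example.com:3128 sudo sh -c 'echo "proxy: $http_proxy"'   # prints an empty value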
Example output:
Do you want to install AxoRouter now? [Y]
y
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4142 100 4142 0 0 19723 0 --:--:-- --:--:-- --:--:-- 19818
Verifying packages...
Preparing packages...
axorouter-0.40.0-1.aarch64
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 31.6M 100 31.6M 0 0 2092k 0 0:00:15 0:00:15 --:--:-- 2009k
Verifying packages...
Preparing packages...
axolet-0.40.0-1.aarch64
Created symlink /etc/systemd/system/multi-user.target.wants/axolet.service → /usr/lib/systemd/system/axolet.service.
Now continue with onboarding the host on the Axoflow web UI.
-
Register the host.
-
Reload the Provisioning page. There should be a registration request for the new AxoRouter deployment. Select ✓.
-
Select Register to register the host. You can add a description and labels (in label:value format) to the host.
-
Select the Topology page. The new AxoRouter instance is displayed.
Add a source appliance
Configure a host to send data to AxoRouter.
Configure a generic syslog host. For appliances that are specifically supported by Axoflow, see Data sources.
-
Log in to your device. You need administrator privileges to perform the configuration.
-
If needed, enable syslog forwarding on the device.
-
Set AxoRouter as the syslog server. Typically, you can configure the following parameters:
-
Name or IP Address of the syslog server: Set the address of your AxoRouter.
-
Protocol: If possible, set TCP or TLS.
-
Syslog Format: If possible, set RFC5424 (or equivalent), otherwise leave the default.
-
Port: Set a port appropriate for the protocol and syslog format you have configured.
By default, AxoRouter accepts data on the following ports:
- 514 TCP and UDP for RFC3164 (BSD-syslog) formatted traffic.
- 601 TCP for RFC5424 (IETF-syslog) formatted traffic.
- 6514 TCP for TLS-encrypted traffic.
- 4317 TCP for OpenTelemetry log data.
Make sure to enable the ports you’re using on the firewall of your host.
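To verify that AxoRouter accepts connections on the port you configured, you can send a single hand-crafted RFC3164-style test message (a sketch; <axorouter-address> is a placeholder for your AxoRouter host):
echo "<134>Oct  7 14:56:47 test-host myapp: connectivity test" | nc -v <axorouter-address> 514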
-
Add the appliance to the Axoflow Console. For details, see Appliances.
Note
If your syslog source is running syslog-ng, Splunk Connect for Syslog (SC4S), or AxoSyslog as its log forwarder agent, consider installing Axolet on the host and instrumenting the configuration of the log forwarder to receive detailed metrics about the host and the processed data. For details, see Manage and monitor the pipeline.
Add a path
Create a path between the source appliance and the AxoRouter instance.
-
Select Topology > + Path.
-
Select your data source in the Source host field.
-
Select the target router or aggregator this source is sending its data to in the Target host field, for example, axorouter.
-
Select the Target connector. The connector determines how the destination receives the data (for example, using which protocol or port).
-
Select Create. The new path appears on the Topology page.
Create a flow
Create a flow to route the traffic from your AxoRouter instance to your destination.
-
Select Flows.
-
Select Create New Flow.
-
Enter a name for the flow, for example, my-test-flow.
-
In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
-
Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
-
(Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
- Add a Reduce step to automatically remove redundant and empty fields from your data.
- To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
- Save the processing steps.
-
Select Create.
-
The new flow appears in the Flows list.
Check the metrics on the Topology page
Open the Topology page and verify that your AxoRouter instance is connected both to the appliance and the destination. If it isn’t, select + > Path to connect them.
If you have traffic flowing from the source to your AxoRouter instance, the Topology page shows the amount of data flowing on the path. Click the AxoRouter instance, then select Analytics to visualize the data flow.
Tap into the log flow
Log tapping in Axoflow samples the log flow. You can use labels to filter for specific messages (like ones with parse errors) and tap only those messages. To avoid overwhelming you with events, Axoflow automatically samples the output: if many messages match the selected filter, only a subset is shown (about 1 message per second). Using log tapping, you can quickly troubleshoot both parsing/curation errors and destination ingest (API) errors, and check:
- What was in the original message?
- What is sent in the final payload to the destination?
To tap into the log flow:
-
Click your AxoRouter instance on the Topology page, then select ⋮ > Tap log flow.
-
Tap into the log flow.
- To see the input data, select Input log flow > Start.
- To see the output data, select Output log flow > Start.
You can use labels to filter the messages and sample only the matching ones.
-
When the logs you’re interested in show up, click Stop Log Tap, then click a log message to see its details.
-
If you don’t know what the message means, select AI Analytics to ask our AI to interpret it.
Troubleshooting
In case you run into problems, or you’re not getting any data in Splunk, check the logs of your AxoRouter instance:
-
Select Topology, then select your AxoRouter instance.
-
Select ⋮ > Tap agent logs > Start. Axoflow displays the log messages of AxoRouter. Check the logs for error messages. Some common errors include:
Redirected event for unconfigured/disabled/deleted index=netops with source="source::axo" host="host::axosyslog-almalinux" sourcetype="sourcetype::fortigate_event" into the LastChanceIndex. So far received events from 1 missing index(es).
: The Splunk index where AxoRouter is trying to send data doesn’t exist. Check which index is missing in the error message and create it in Splunk. (For a list of recommended indices, see the Splunk destination prerequisites.)
http: error sending HTTP request; url='https://prd-p-sp2id.splunkcloud.com:8088/services/collector/event/1.0?index=&source=&sourcetype=', error='SSL peer certificate or SSH remote key was not OK', worker_index='0', driver='splunk--flow-axorouter4-almalinux#0', location='/usr/share/syslog-ng/include/scl/splunk/splunk.conf:104:3'
: Your Splunk deployment uses an invalid or self-signed certificate, and the Verify server certificate option is enabled in the Splunk destination of Axoflow. Either fix the certificate in Splunk, or select Topology > <your-splunk-destination>, disable Verify server certificate, and then select Update.
6 - Concepts
This section describes the main concepts of Axoflow.
6.1 - Automatic data processing
When you forward data to your SIEM, poor-quality data and malformed logs that lack critical fields like timestamps or hostnames need to be fixed. The usual solutions fix the problem in the SIEM and involve complex regular expressions, which are difficult to create and maintain. SIEM users often rely on vendors to manage these rules, but support is limited, especially for less popular devices. Axoflow offers a unique solution by automatically processing, curating, and classifying data before it’s sent to the SIEM, ensuring accurate, structured, and optimized data, reducing ingestion costs and improving SIEM performance.
The problem
The main issue related to classification is that many devices send malformed messages: missing timestamp, missing hostname, invalid message format, and so on. Such errors can cause different kinds of problems:
- Log messages are often routed to different destinations based on the sender hostname. Missing or invalid hostnames mean that the message is not attributed to the right host, and often doesn’t arrive at its intended destination.
- Incorrect timestamp or timezone hampers investigations during an incident, resulting in potentially critical data failing to show up (or extraneous data appearing) in queries for a particular period.
- Invalid data can lead to memory leaks or resource overload in the processing software (and to unusable monitoring dashboards) when a sequence number or other rapidly varying field is mistakenly parsed as the hostname, program name, or other low cardinality field.
Overall, these errors decrease the quality of the security data you’re sending to your SIEM tools, which increases false positives, requires secondary data processing to clean, and increases query time – all of which ends up costing firms a lot more.
For instance, this is a log message from a SonicWall firewall appliance:
<133> id=firewall sn=C0EFE33057B0 time="2024-10-07 14:56:47 UTC" fw=172.18.88.37 pri=6 c=1024 m=537 msg="Connection Closed" f=2 n=316039228 src=192.0.0.159:61254:X1: dst=10.0.0.7:53:X3:SIMILDC01 proto=udp/dns sent=59 rcvd=134 vpnpolicy="ELG Main"
A well-formed syslog message should look like this:
<priority>timestamp hostname application: message body
As you can see, the SonicWall format is completely invalid after the initial <priority> field. Instead of the timestamp, hostname, and application name comes the free-form part of the message (in this case, a whitespace-separated key=value list). Unless you extract the hostname and timestamp from the content of this malformed message, you won’t be able to reconstruct the course of events during a security incident.
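For comparison, the same event in a well-formed frame would look something like this (a hand-written illustration, not actual SonicWall output):
<133>Oct  7 14:56:47 fw-dmz-01 sonicwall: id=firewall sn=C0EFE33057B0 msg="Connection Closed" src=192.0.0.159:61254 dst=10.0.0.7:53 proto=udp/dns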
Axoflow provides data processing, curation, and classification intelligence that’s built into the data pipeline, so it processes and fixes the data before it’s sent to the SIEM.
Our solution
Our data engine and database solution automatically processes the incoming data: AxoRouter recognizes and classifies the incoming data, applies device-specific fixes for the errors, then enriches and optimizes the formatting for the specific destination (SIEM). This approach has several benefits:
- Cost reduction: All data is processed before it’s sent to the SIEM. That way, we can automatically reduce the amount of data sent to the SIEM (for example, by removing empty and redundant fields), cutting your data ingestion costs.
- Structured data: Axoflow recognizes the format of the incoming data payload (for example, JSON, CSV, LEEF, free text), and automatically parses the payload into a structured map. This allows us to have detailed, content-based metrics and alerts, and also makes it easy for you to add custom transformations if needed.
- SIEM-independent: The structured data representation allows us to support multiple SIEMs (and other destinations) and optimize the data for every destination.
- Performance: Compared to the commonly used regular expressions, it’s more robust and has better performance, allowing you to process more data with fewer resources.
- Maintained by Axoflow: We maintain the database; you don’t have to work with it. This includes updates for new product versions and adding new devices. We proactively monitor and check the new releases of main security devices for logging-related changes and update our database. (If something’s not working as expected, you can easily submit log samples and we’ll fix it ASAP.) Currently, we have over 80 application adapters in our database.
The automatic classification and curation also adds labels and metadata that can be used to make decisions and route your data. Messages with errors are also tagged with error-specific tags. For example, you can easily route all firewall and security logs to your Splunk deployment, and exclude logs (like debug logs) that have no security relevance and shouldn’t be sent to the SIEM.
Axoflow Console allows you to quickly drill down to find log flows with issues, and to tap into the log flow and see samples of the specific messages that are processed, along with the related parsing information, like the following tags that describe the errors of invalid messages:
message.utf8_sanitized
: The message is not valid UTF-8.
syslog.missing_timestamp
: The message has no timestamp.
syslog.invalid_hostname
: The hostname field doesn’t seem to be valid, for example, it contains invalid characters.
syslog.missing_pri
: The priority (PRI) field is missing from the message.
syslog.unexpected_framing
: An octet count was found in front of the message, suggesting invalid framing.
syslog.rfc3164_missing_header
: The date and the host are missing from the message – practically that’s the entire header of RFC3164-formatted messages.
syslog.rfc5424_unquoted_sdata_value
: The message contains an incorrectly quoted RFC5424 SDATA field.
message.parse_error
: Some other parsing error occurred.
6.2 - Host attribution and inventory
Axoflow’s built-in inventory solution enriches your security data with critical metadata (like the origin host) so you can pinpoint the exact source of every data entry, enabling precise, label-based routing and more informed security decisions.
Enterprises and organizations collect security data (like syslog and other event data) from various data sources, including network devices, security devices, servers, and so on. When looking at a particular log entry during a security incident, it’s not always trivial to determine what generated it. Was it an appliance or an application? Which team is the owner of the application or device? If it was a network device like a switch or a Wi-Fi access point (of which even medium-sized organizations have dozens), can you tell which one it was, and where is it located?
Cloud-native environments like Kubernetes have addressed this issue using resource metadata. You can attach labels to every element of the infrastructure: containers, pods, nodes, and so on, to include region, role, owner, or other custom metadata. When collecting log data from the applications running in containers, the log collector agent can retrieve these labels and attach them to the log data as metadata. This helps immensely to associate routing decisions and security conclusions with the source systems.
In non-cloud environments, like traditionally operated physical or virtual machine clusters, the logs of applications, appliances, and other data sources lack such labels. To identify the log source, you’re stuck with the sender’s IP address, the sender’s hostname, and in some cases the path and filename of the log file, plus whatever information is available in the log message.
Why isn’t the IP address or hostname enough?
- Source IP addresses are poor identifiers, as they aren’t strongly tied to the host, especially when they’re dynamically allocated using DHCP or when a group of senders are behind a NAT and have the same address.
- Some organizations encode metadata into the hostname or DNS record. However, compared to the volume of log messages, DNS resolving is slow. Also, the DNS record is available at the source, but might not be available at the log aggregation device or the log server. As a result, this data is often lost.
- Many data sources and devices omit important information from their messages. For example, the log messages of Cisco routers by default omit the hostname. (They can be configured to send their hostname, but unfortunately, often they aren’t.) You can resolve their IP address to obtain the hostname of the device, but that leads back to the problem in the previous point.
- In addition, all the above issues are rooted in human configuration practices, which tend to be the main cause of anomalies in the system.
Correlating data to data source
Some SIEMs (like IBM QRadar) rely on the sender IP address to identify the data source. Others, like Splunk, delegate the task of providing metadata to the collector agent, but log sources often don’t supply metadata, and even if they do, the information is based on IP address or DNS.
Certain devices, like SonicWall firewalls, include a unique device ID in the content of their log messages. Having access to this ID makes attribution straightforward, but logging solutions rarely extract such information. AxoRouter does.
Axoflow builds an inventory to match the messages in your data flow to the available data sources, based on data including:
-
IP address, host name, and DNS records of the source (if available),
-
serial number, device ID, and other unique identifiers extracted from the messages, and
-
metadata based on automatic classification of the log messages, like product and vendor name.
Axoflow (or more precisely, AxoRouter) classifies the processed log data to identify common vendors and products, and automatically applies labels to attach this information.
You can also integrate the telemetry pipeline with external asset management systems to further enrich your security data with custom labels and additional contextual information about your data sources.
6.3 - Policy based routing
Axoflow’s host attribution gives you unprecedented possibilities to route your data. You can tell exactly what kind of data you want to send where, not only based on technical parameters like sender IP or application name, but using a much more fine-grained inventory that includes information about the devices, their roles, and their location. For example:
- The vendor and the product (for example, a SonicWall firewall)
- The physical location of the device (geographical location, rack number)
- Owner of the device (organization/department/team)
Axoflow automatically adds metadata labels based on information extracted from the data payload, but you can also add static custom labels to hosts manually, or dynamically using flows.
Paired with the information from the content of the messages, all this metadata allows you to formulate high-level, declarative policies and alerts using the metadata labels, for example:
- Route the logs of devices to the owner’s Splunk index.
- Route every firewall and security log to a specific index.
- Let the different teams manage and observe their own devices.
- Granular access control based on relevant log content (for example, identify a proxy log-stream based on the upstream address), instead of just device access.
- Ensure that specific logs won’t cross geographic borders, violating GDPR or other compliance requirements.
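As an illustration, a Select Messages step using the flow query syntax from the Getting started guide could express such a policy. The first query routes only FortiGate firewall traffic; the region label in the second is a hypothetical custom label you could attach to hosts:
meta.vendor = fortinet + meta.product = fortigate
meta.vendor = fortinet + meta.product = fortigate + meta.region = eu-west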
6.4 - Reliable transport
Between its components, Axoflow transports data using the reliable OpenTelemetry protocol (OTLP) for high performance, and to avoid losing messages. Under the hood, OpenTelemetry uses gRPC, and has superior performance compared to other log transport protocols, consumes less bandwidth (because it uses protocol buffers), and also provides features like:
- on-the-wire compression
- authentication
- application layer acknowledgement
- batched data sending
- multi-worker scalability on the client and the server.
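As an illustration, receiving OTLP logs in an AxoSyslog-based component looks roughly like this (a minimal sketch of the opentelemetry() source driver; in Axoflow, AxoRouter ships with this listener preconfigured on port 4317):
source s_otlp {
  opentelemetry(
    port(4317)  # gRPC listener for OTLP traffic
  );
};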
7 - Manage and monitor the pipeline
7.1 - Provision pipeline elements
The following sections describe how to register a logging host into the Axoflow Console.
Hosts with supported collectors
If the host is running one of the following log collector agents, you can install Axolet on the host to receive detailed metrics about the host, the agent, and the data flow the agent processes.
-
Install Axolet on the host. For details, see Axolet.
-
Configure the log collector agent of the host to integrate with Axolet. For details, see the following pages:
7.1.1 - AxoRouter
7.1.1.1 - Install AxoRouter on Kubernetes
To install AxoRouter on a Kubernetes cluster, complete the following steps. For other platforms, see AxoRouter.
Prerequisites
Kubernetes version 1.29 or newer
Minimal resource requirements
- CPU: at least 100m
- Memory: 256MB
- Storage: 8Gi
Network access
The hosts must be able to access the following domains related to the Axoflow Console:
<your-tenant-id>.cloud.axoflow.io
: HTTPS traffic on TCP port 443, needed to download the binaries for Axoflow software (like Axolet and AxoRouter).
kcp.<your-tenant-id>.cloud.axoflow.io
: HTTPS (mutual TLS) traffic on TCP port 443 for management traffic.
telemetry.<your-tenant-id>.cloud.axoflow.io
: HTTPS (mutual TLS) traffic on TCP port 443, where Axolet sends the metrics of the host.
us-docker.pkg.dev
: HTTPS traffic on TCP port 443, for pulling container images (AxoRouter only).
Install AxoRouter
-
Open the Axoflow Console.
-
Select Provisioning.
-
Select the Host type > AxoRouter > Kubernetes. The one-liner installation command is displayed.
-
Open a terminal and set your Kubernetes context to the cluster where you want to install AxoRouter.
-
Run the one-liner, and follow the on-screen instructions.
Current kubernetes context: minikube
Server Version: v1.28.3
Installing to new namespace: axorouter
Do you want to install AxoRouter now? [Y]
-
Register the host.
-
Reload the Provisioning page. There should be a registration request for the new AxoRouter deployment. Select ✓.
-
Select Register to register the host. You can add a description and labels (in label:value format) to the host.
-
Select the Topology page. The new AxoRouter instance is displayed.
Create a flow
- If you haven’t already done so, create a new destination.
-
Create a flow to connect the new AxoRouter to the destination.
-
Select Flows.
-
Select Create New Flow.
-
Enter a name for the flow, for example, my-test-flow.
-
In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
-
Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
-
(Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
- Add a Reduce step to automatically remove redundant and empty fields from your data.
- To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
- Save the processing steps.
-
Select Create.
-
The new flow appears in the Flows list.
Send logs to AxoRouter
By default, AxoRouter accepts data on the following ports:
- 514 TCP and UDP for RFC3164 (BSD-syslog) formatted traffic.
- 601 TCP for RFC5424 (IETF-syslog) formatted traffic.
- 6514 TCP for TLS-encrypted traffic.
- 4317 TCP for OpenTelemetry log data.
Make sure to enable the ports you’re using on the firewall of your host.
7.1.1.1.1 - Advanced installation options
When installing AxoRouter, you can set a number of advanced options if needed for your environment. Setting the advanced options in the Axoflow Console automatically updates the one-liner command that you can copy and run.
Alternatively, before running the one-liner you can use one of the following methods:
Proxy settings
Use the http_proxy=, https_proxy=, and no_proxy= parameters to configure HTTP proxy settings for the installer. To configure the Axolet service to use the proxy settings, enable the AXOLET_AVOID_PROXY parameter as well. Lowercase variable names are preferred because they work universally.
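For example, a proxied installation could look like this (the proxy address is hypothetical; paste the actual one-liner copied from the Axoflow Console in place of the placeholder):
http_proxy=http://proxy.example.com:3128 \
https_proxy=http://proxy.example.com:3128 \
no_proxy=localhost,127.0.0.1 \
<one-liner copied from the Axoflow Console>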
Installation options
You can pass the following parameters to the installation script as environment variables, or as URL parameters.
Note
Running the provisioning command with sudo would mask environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don’t add the sudo command to the provisioning command.
7.1.1.2 - Install AxoRouter on Linux
AxoRouter is a key building block of Axoflow that collects, aggregates, transforms, and routes all kinds of telemetry and security data automatically. AxoRouter for Linux includes a Podman container running AxoSyslog, Axolet, and other components.
To install AxoRouter on a Linux host, complete the following steps. For other platforms, see AxoRouter.
What the install script does
When you deploy AxoRouter, you run a command that installs the required software packages (which deploy systemd services), configures them, and sets up the connection with Axoflow.
The installer script installs the axorouter packages, then executes the configure-axorouter command with the right parameters. (If you’ve installed the packages in advance, the installer script only executes the configure-axorouter command.)
The configure-axorouter command is designed to be run as root (sudo), but you can configure AxoRouter to run as a non-root user. The configure-axorouter command is executed with a configuration snippet on its standard input, which contains a token required for registering into the management platform.
The script performs the following main steps:
- Generates a unique identifier (GUID).
- Initiates a cryptographic handshake process to Axoflow.
- Creates the initial configuration file for AxoRouter under /etc/axorouter/.
- Installs a statically linked executable to /usr/local/bin/axorouter.
- Creates the systemd service unit file /etc/systemd/system/axorouter.service, then enables and starts that service.
- The service waits for an approval on Axoflow. Once you approve the host registration request, Axoflow issues a client certificate to AxoRouter.
- AxoRouter starts to send telemetry data to Axoflow, and keeps sending it as long as the agent is registered and the certificate is valid.
Prerequisites
Minimal resource requirements
- CPU: at least 100m
- Memory: 256MB
- Storage: 8Gi
Network access
The hosts must be able to access the following domains related to the Axoflow Console:
<your-tenant-id>.cloud.axoflow.io
: HTTPS traffic on TCP port 443, needed to download the binaries for Axoflow software (like Axolet and AxoRouter).
kcp.<your-tenant-id>.cloud.axoflow.io
: HTTPS (mutual TLS) traffic on TCP port 443 for management traffic.
telemetry.<your-tenant-id>.cloud.axoflow.io
: HTTPS (mutual TLS) traffic on TCP port 443, where Axolet sends the metrics of the host.
us-docker.pkg.dev
: HTTPS traffic on TCP port 443, for pulling container images (AxoRouter only).
Install AxoRouter
-
Select Provisioning > Select type and platform.
-
Select the type (AxoRouter) and platform (Linux). The one-liner installation command is displayed.
If needed, set the Advanced options (for example, proxy settings) to modify the installation parameters. Usually, you don’t have to use advanced options unless the Axoflow support team instructs you to do so.
-
Open a terminal on the host where you want to install AxoRouter.
-
Run the one-liner, then follow the on-screen instructions.
Note
Running the provisioning command with sudo would mask environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don’t add the sudo command to the provisioning command.
Example output:
Do you want to install AxoRouter now? [Y]
y
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 4142 100 4142 0 0 19723 0 --:--:-- --:--:-- --:--:-- 19818
Verifying packages...
Preparing packages...
axorouter-0.40.0-1.aarch64
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 31.6M 100 31.6M 0 0 2092k 0 0:00:15 0:00:15 --:--:-- 2009k
Verifying packages...
Preparing packages...
axolet-0.40.0-1.aarch64
Created symlink /etc/systemd/system/multi-user.target.wants/axolet.service → /usr/lib/systemd/system/axolet.service.
Now continue with onboarding the host on the Axoflow web UI.
-
Register the host.
-
Reload the Provisioning page. There should be a registration request for the new AxoRouter deployment. Select ✓.
-
Select Register to register the host. You can add a description and labels (in label:value format) to the host.
-
Select the Topology page. The new AxoRouter instance is displayed.
Create a flow
- If you haven’t already done so, create a new destination.
-
Create a flow to connect the new AxoRouter to the destination.
-
Select Flows.
-
Select Create New Flow.
-
Enter a name for the flow, for example, my-test-flow.
-
In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
-
Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
-
(Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
- Add a Reduce step to automatically remove redundant and empty fields from your data.
- To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
- Save the processing steps.
-
Select Create.
-
The new flow appears in the Flows list.
Send logs to AxoRouter
Configure your hosts to send data to AxoRouter.
-
For appliances that are specifically supported by Axoflow, see Data sources.
-
For other appliances and generic Linux devices, see Generic tips.
-
For a quick test without an actual source, you can also do the following (requires nc to be installed on the AxoRouter host):
-
Open the Axoflow Console, select Topology, then select the AxoRouter instance you’ve deployed.
-
Select ⋮ > Tap log flow > Input log flow. Select Start.
-
Open a terminal on your AxoRouter host.
-
Run the following command to send 120 test messages (2 per second) in a loop to AxoRouter:
for i in `seq 1 120`; do echo "<165> fortigate date=$(date -u +%Y-%m-%d) time=$(date -u +"%H:%M:%S%Z") devname=us-east-1-dc1-a-dmz-fw devid=FGT60D4614044725 logid=0100040704 type=event subtype=system level=notice vd=root logdesc=\"System performance statistics\" action=\"perf-stats\" cpu=2 mem=35 totalsession=61 disk=2 bandwidth=158/138 setuprate=2 disklograte=0 fazlograte=0 msg=\"Performance statistics: average CPU: 2, memory: 35, concurrent sessions: 61, setup-rate: 2\""; sleep 0.5; done | nc -v 127.0.0.1 514
Alternatively, you can send logs in an endless loop:
while true; do echo "<165> fortigate date=$(date -u +%Y-%m-%d) time=$(date -u +"%H:%M:%S%Z") devname=us-east-1-dc1-a-dmz-fw devid=FGT60D4614044725 logid=0100040704 type=event subtype=system level=notice vd=root logdesc=\"System performance statistics\" action=\"perf-stats\" cpu=2 mem=35 totalsession=61 disk=2 bandwidth=158/138 setuprate=2 disklograte=0 fazlograte=0 msg=\"Performance statistics: average CPU: 2, memory: 35, concurrent sessions: 61, setup-rate: 2\""; sleep 1; done | nc -v 127.0.0.1 514
Manage AxoRouter
This section describes how to start, stop and check the status of the AxoRouter service on Linux.
Start AxoRouter
To start AxoRouter, execute the following command:
systemctl start axorouter
If the service starts successfully, no output will be displayed.
The following message indicates that AxoRouter cannot start (see Check AxoRouter status):
Job for axorouter.service failed because the control process exited with error code. See `systemctl status axorouter.service` and `journalctl -xe` for details.
Stop AxoRouter
To stop AxoRouter
-
Execute the following command.
systemctl stop axorouter
-
Check the status of the AxoRouter service (see Check AxoRouter status).
Restart AxoRouter
To restart AxoRouter, execute the following command.
systemctl restart axorouter
Reload the configuration without restarting AxoRouter
To reload the configuration file without restarting AxoRouter, execute the following command.
systemctl reload axorouter
Check the status of the AxoRouter service
To check the status of the AxoRouter service
-
Execute the following command.
systemctl --no-pager status axorouter
-
Check the Active: field, which shows the status of the AxoRouter service.
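For a healthy service, the output looks something like this (a representative sample; paths, timestamps, and hostnames will differ):
● axorouter.service - AxoRouter
     Loaded: loaded (/etc/systemd/system/axorouter.service; enabled)
     Active: active (running) since Mon 2024-10-07 14:56:47 UTC; 2h 3min ago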
7.1.1.2.1 - Advanced installation options
When installing AxoRouter, you can set a number of advanced options if needed for your environment. Setting the advanced options in the Axoflow Console automatically updates the one-liner command that you can copy and run.
Alternatively, before running the one-liner you can use one of the following methods:
Proxy settings
Use the http_proxy=, https_proxy=, and no_proxy= parameters to configure HTTP proxy settings for the installer. To configure the Axolet service to use the proxy settings, enable the AXOLET_AVOID_PROXY parameter as well. Lowercase variable names are preferred because they work universally.
Installation options
You can pass the following parameters to the installation script as environment variables, or as URL parameters.
Note
Running the provisioning command with sudo would mask environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don’t add the sudo command to the provisioning command.
7.1.1.2.2 - Run AxoRouter as non-root
To run AxoRouter as a non-root user, set the AXO_USER and AXO_GROUP environment variables to the user’s username and group name on the host where you want to deploy AxoRouter. For details, see Advanced installation options.
Operators must have access to the following commands:
-
/usr/bin/systemctl * axolet.service
: Controls the axolet.service systemd unit. Usually * is start, stop, restart, enable, or status. Used by the operators for troubleshooting.
-
/usr/local/bin/configure-axolet
: Creates the initial axolet configuration and enables/starts the axolet service. Executed by the bootstrap script.
-
Command to install and upgrade the axolet package. Executed by the bootstrap script if the packages aren’t already installed.
- On RPM-based Linux distributions: /usr/bin/rpm -Uv axo*.rpm
- On DEB-based Linux distributions: /usr/bin/dpkg -i axo*.deb
-
/usr/bin/systemctl * axorouter.service
: Controls the axorouter.service systemd unit. Usually * is start, stop, restart, enable, or status. Used by the operators for troubleshooting.
-
/usr/local/bin/configure-axorouter
: Creates the initial axorouter configuration and enables/starts the axorouter service. Executed by the bootstrap script.
-
Command to install and upgrade the axorouter and the axolet package. Executed by the bootstrap script if the packages aren’t already installed.
- On RPM-based Linux distributions: /usr/bin/rpm -Uv axo*.rpm
- On DEB-based Linux distributions: /usr/bin/dpkg -i axo*.deb
You can permit the syslogng user to run these commands by running one of the following:
sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for rpm installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/rpm -Uv axo*.rpm
A
sudo tee /etc/sudoers.d/configure-axorouter <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axorouter
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axorouter.service
# for rpm installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/rpm -Uv axo*.rpm
A
sudo tee /etc/sudoers.d/configure-axorouter <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axorouter
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axorouter.service
# for deb installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/dpkg -i axo*.deb
A
7.1.2 - AxoSyslog
Onboarding allows you to collect metrics about the host, display the host on the Topology page, and tap into the log flow.
Onboarding requires you to modify the host and the configuration of the logging agent running on the host.
- Level 1: Install Axolet on the host. Axolet collects metrics from the host and sends them to the Axoflow Console, so you can check host-level metrics on the Metrics & Health page of the host, and the host is displayed on the Topology page.
- Level 2: Instrument the configuration of the logging agent to provide detailed metrics about the traffic flow. This allows you to display data about the host on the Analytics page.
- Level 3: Instrument the configuration of the logging agent to allow you to access the logs of the logging agent and to tap into the log flow from the Axoflow Console. The exact steps depend on the configuration of your logging agent. Contact us so our professional services can help you with the integration.
To onboard an existing AxoSyslog instance into Axoflow, complete the following steps.
-
Install Axolet on the host, then approve its registration on the Provisioning page of the Axoflow Console.
-
The AxoSyslog host is now visible on the Topology page of the Axoflow Console as a source.
-
If the AxoRouter instance or the destination this host sends its data to is already added to the Axoflow Console, add a path to connect the host to the AxoRouter or the destination.
-
Select Topology > + Path.
-
Select your data source in the Source host field.
-
Select the target router or aggregator this source is sending its data to in the Target host field, for example, axorouter.
-
Select the Target connector. The connector determines how the destination receives the data (for example, using which protocol or port).
-
Select Create. The new path appears on the Topology page.
-
Access the AxoSyslog host and edit the configuration of AxoSyslog. Set the statistics-related global options like this (if the options block already exists, add these lines to the bottom of the block):
options {
  stats(
    level(2)
    freq(0) # Inhibit statistics output to stdout
  );
};
-
(Optional) To get detailed metrics and analytics about the traffic that flows through the host, instrument your AxoSyslog configuration as follows:
Note
You can use Axolet with an un-instrumented AxoSyslog configuration file, but that limits the available metrics to host statistics (for example, disk, memory, and queue information). You won’t have access to data about the actual traffic flowing through the host. To collect traffic-related metrics, instrument the configuration with metrics-probe() stanzas. The example below shows how to instrument the configuration to highlight common macros such as $HOST and $PROTOCOL. If you want to customize the collected metrics or need help with the instrumentation, contact us.
-
Download the following configuration snippet to the AxoSyslog host, for example, as /etc/syslog-ng/conf.d/axoflow-instrumentation.conf.
-
Include it at the top of your configuration file:
@version: current
@include "axoflow-instrumentation.conf"
-
Edit every destination statement to include a parser { metrics-output(destination(<custom-ID-for-the-destination>)); }; line, for example:
destination d_file {
  channel {
    parser { metrics-output(destination(my-file-destination)); };
    destination { file("/dev/null" persist-name("d_s3")); };
  };
};
-
Reload the configuration of AxoSyslog.
systemctl reload syslog-ng
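To confirm that the instrumentation is active, you can check whether the metrics-probe() metrics appear (the same check the SC4S guide uses; this assumes the snippet uses the default metric names):
syslog-ng-ctl stats prometheus | grep classified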
7.1.3 - Splunk Connect for Syslog (SC4S)
Onboarding allows you to collect metrics about the host, display the host on the Topology page, and tap into the log flow.
Onboarding requires you to modify the host and the configuration of the logging agent running on the host.
- Level 1: Install Axolet on the host. Axolet collects metrics from the host and sends them to the Axoflow Console, so you can check host-level metrics on the Metrics & Health page of the host, and the host is displayed on the Topology page.
- Level 2: Instrument the configuration of the logging agent to provide detailed metrics about the traffic flow. This allows you to display data about the host on the Analytics page.
- Level 3: Instrument the configuration of the logging agent to allow you to access the logs of the logging agent and to tap into the log flow from the Axoflow Console. The exact steps depend on the configuration of your logging agent. Contact us so our professional services can help you with the integration.
To generate metrics for the Axoflow platform from an existing Splunk Connect for Syslog (SC4S) instance, you need to configure SC4S to generate these metrics. Complete the following steps.
-
If you haven’t already done so, install Axolet on the host, then approve its registration on the Provisioning page of the Axoflow Console.
-
Download the following code snippet as axoflow-instrumentation.conf.
-
If you are running SC4S under podman or docker, copy the file into the /opt/sc4s/local/config/destinations directory. For other deployment methods the location might differ; check the SC4S documentation for details.
-
Check if the metrics are appearing; for example, run the following command on the SC4S host:
syslog-ng-ctl stats prometheus | grep classified
7.1.4 - syslog-ng
Onboarding allows you to collect metrics about the host, display the host on the Topology page, and to tap into the log flow.
Onboarding requires you to modify the host and the configuration of the logging agent running on the host.
- Level 1: Install Axolet on the host. Axolet collects metrics from the host and sends them to the Axoflow Console: you can check host-level metrics on the Metrics & Health page of the host, and the host is displayed on the Topology page.
- Level 2: Instrument the configuration of the logging agent to provide detailed metrics about the traffic flow. This allows you to display data about the host on the Analytics page.
- Level 3: Instrument the configuration of the logging agent to allow you to access the logs of the logging agent and to tap into the log flow from the Axoflow Console. The exact steps depend on the configuration of your logging agent. Contact us so our professional services can help you with the integration.
To onboard an existing syslog-ng instance into Axoflow, complete the following steps.
-
Install Axolet on the host, then approve its registration on the Provisioning page of the Axoflow Console.
-
The syslog-ng host is now visible on the Topology page of the Axoflow Console as a source.
-
If the AxoRouter instance or the destination this host sends data to is already added to the Axoflow Console, add a path to connect the host to that AxoRouter or destination.
-
Select Topology > + Path.
-
Select your data source in the Source host field.
-
Select the target router or aggregator this source is sending its data to in the Target host field, for example, axorouter.
-
Select the Target connector. The connector determines how the destination receives the data (for example, using which protocol or port).
-
Select Create. The new path appears on the Topology page.
-
Access the syslog-ng host and edit the configuration of syslog-ng. Set the statistics-related global options like this (if the options block already exists, add these lines to the bottom of the block):
options {
    stats-level(2);
    stats-freq(0);   # Inhibit statistics output to stdout
};
-
(Optional) To get detailed metrics and analytics about the traffic that flows through the host, instrument your syslog-ng configuration as follows:
Note
You can use Axolet with an un-instrumented syslog-ng configuration file, but that limits the available metrics to host statistics (for example, disk, memory, and queue information). You won't have access to data about the actual traffic flowing through the host. To collect traffic-related metrics, instrument the configuration with metrics-probe() stanzas. The example below shows how to instrument the configuration to highlight common macros such as $HOST and $PROTOCOL. If you want to customize the collected metrics or need help with the instrumentation, contact us.
-
Download the following configuration snippet to the syslog-ng host, for example, as /etc/syslog-ng/conf.d/axoflow-instrumentation.conf.
-
Include it at the top of your configuration file:
@version: current
@include "axoflow-instrumentation.conf"
-
Edit every destination statement to include a parser { metrics-output(destination(<custom-ID-for-the-destination>)); }; line, for example:
destination d_file {
    channel {
        parser { metrics-output(destination(my-file-destination)); };
        destination { file("/dev/null" persist-name("d_s3")); };
    };
};
-
Reload the configuration of syslog-ng.
systemctl reload syslog-ng
7.1.5 - Axolet
Axolet is the agent software for Axoflow. Its primary purpose is to discover the log collector services that are running on the host (for example, AxoSyslog or SC4S), and report their operating statistics (counters) to Axoflow.
7.1.5.1 - Install Axolet
Axolet is the agent software for Axoflow. Its primary purpose is to discover the log collector services that are running on the host (for example, AxoSyslog or SC4S), and report their operating statistics (counters) to Axoflow.
It is simple to install Axolet on individual hosts, or collectively via orchestration tools such as Chef or Puppet.
What the install script does
When you deploy Axolet, you run a command that installs the required software packages, configures them and sets up the connection with Axoflow.
The installer script installs the axolet packages, then executes the configure-axolet command with the right parameters. (If you've installed the packages in advance, the installer script only executes the configure-axolet command.)
The configure-axolet command is designed to be run as root (sudo), but you can configure axolet to run as a non-root user. The configure-axolet command is executed with a configuration snippet on its standard input which contains a token required for registering into the management platform.
The script performs the following main steps:
- Generates a unique identifier (GUID).
- Initiates a cryptographic handshake process to Axoflow.
- Creates the initial configuration file for Axolet under /etc/axolet/.
- Installs a statically linked executable to /usr/local/bin/axolet.
- Creates the systemd service unit file /etc/systemd/system/axolet.service, then enables and starts that service.
- The service waits for approval on Axoflow. Once you approve the host registration request, Axoflow issues a client certificate to Axolet.
- Axolet starts to send telemetry data to Axoflow, and keeps sending it as long as the agent is registered and the certificate is valid.
Prerequisites
Axolet should work on most Red Hat- and Debian-compatible Linux distributions. For production environments, we recommend using Red Hat Enterprise Linux 9.
Network access
The hosts must be able to access the following domains related to the Axoflow Console:
- <your-tenant-id>.cloud.axoflow.io: HTTPS traffic on TCP port 443, needed to download the binaries for Axoflow software (like Axolet and AxoRouter).
- kcp.<your-tenant-id>.cloud.axoflow.io: HTTPS (mutual TLS) traffic on TCP port 443 for management traffic.
- telemetry.<your-tenant-id>.cloud.axoflow.io: HTTPS (mutual TLS) traffic on TCP port 443, where Axolet sends the metrics of the host.
- us-docker.pkg.dev: HTTPS traffic on TCP port 443, for pulling container images (AxoRouter only).
Install Axolet
To install Axolet on a host and onboard it to Axoflow, complete the following steps. If you need to reinstall Axolet for some reason, see Reinstall Axolet.
-
Open the Axoflow Console at https://<your-tenant-id>.cloud.axoflow.io/.
-
Select Provisioning > Select type and platform.
-
Select Edge > Linux.
The curl command can be run manually or inserted into a template in any common software deployment package. When run, it downloads a script that sets up the Axolet process to run automatically at boot time via systemd. For advanced installation options, see Advanced installation options.
-
Copy the deployment one-liner and run it on the host you are onboarding into Axoflow.
Note
Running the provisioning command with sudo would mask the environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don't add the sudo command to the provisioning command.
Example output:
Do you want to install Axolet now? [Y]
y
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 31.6M 100 31.6M 0 0 2127k 0 0:00:15 0:00:15 --:--:-- 2075k
Verifying packages...
Preparing packages...
axolet-0.40.0-1.aarch64
Created symlink /etc/systemd/system/multi-user.target.wants/axolet.service → /usr/lib/systemd/system/axolet.service.
Now continue with onboarding the host on the Axoflow web UI.
-
On the Axoflow Console, reload the Provisioning page. A registration request for the new host should be displayed. Accept it.
-
Axolet starts sending metrics from the host. Check the Topology page to see the new host.
-
Continue to onboard the host as described for your specific log collector agent.
Manage Axolet
This section describes how to start, stop and check the status of the Axolet service on Linux.
Start Axolet
To start Axolet, execute the following command. For example:
systemctl start axolet
If the service starts successfully, no output will be displayed.
The following message indicates that Axolet cannot start (see Check Axolet status):
Job for axolet.service failed because the control process exited with error code. See `systemctl status axolet.service` and `journalctl -xe` for details.
Stop Axolet
To stop Axolet
-
Execute the following command.
systemctl stop axolet
-
Check the status of the Axolet service (see Check Axolet status).
Restart Axolet
To restart Axolet, execute the following command.
systemctl restart axolet
Reload the configuration without restarting Axolet
To reload the configuration file without restarting Axolet, execute the following command.
systemctl reload axolet
Check the status of the Axolet service
To check the status of the Axolet service
-
Execute the following command.
systemctl --no-pager status axolet
-
Check the Active: field, which shows the status of the Axolet service. Typical values are active (running) when the service is healthy, inactive (dead) when it is stopped, and failed when it could not start.
Upgrade Axolet
To upgrade Axolet, re-run the installation script.
Run axolet as non-root
You can run Axolet as a non-root user, but operators must have access to the following commands:
-
/usr/bin/systemctl * axolet.service: Controls the axolet.service systemd unit. Usually * is start, stop, restart, enable, and status. Used by the operators for troubleshooting.
-
/usr/local/bin/configure-axolet: Creates the initial axolet configuration and enables/starts the axolet service. Executed by the bootstrap script.
-
Command to install and upgrade the axolet package. Executed by the bootstrap script if the packages aren't already installed.
- On RPM-based Linux distributions: /usr/bin/rpm -Uv axo*.rpm
- On DEB-based Linux distributions: /usr/bin/dpkg -i axo*.deb
You can permit the syslogng user to run these commands by running one of the following:
sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for rpm installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/rpm -Uv axo*.rpm
A
sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for deb installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/dpkg -i axo*.deb
A
7.1.5.2 - Advanced installation options
When installing Axolet, you can set a number of advanced options if needed for your environment. Setting the advanced options in the Axoflow Console automatically updates the one-liner command that you can copy and run.
Alternatively, before running the one-liner you can use one of the following methods:
Proxy settings
Use the http_proxy=, https_proxy=, and no_proxy= parameters to configure HTTP proxy settings for the installer. By default, the Axolet service also uses these proxy settings; to restrict the proxy settings to the installer only, enable the AXOLET_AVOID_PROXY parameter. Lowercase variable names are preferred because they work universally.
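For example, to run the installer through a proxy (a sketch; proxy.example.com:3128 is a placeholder for your proxy address):
export https_proxy=http://proxy.example.com:3128 no_proxy=localhost,127.0.0.1
curl ... # Run the command copied from the Provisioning page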
Installation options
You can pass the following parameters to the installation script as environment variables, or as URL parameters.
Note
Running the provisioning command with sudo would mask the environment variables of the calling shell. Either start the whole procedure from a root shell, or let the install script call sudo when it needs to. In other words: don't add the sudo command to the provisioning command.
AXOLET_AVOID_PROXY
- Default value: AXOLET_AVOID_PROXY=false
- Available values: true, false
- Description: If set to true, the value of the *_proxy variables will only be used for downloading the installer, but not for the axolet service itself. If set to false, the Axolet service will use the variables from the installer.
AXOLET_CAPS
- Default value: AXOLET_CAPS=CAP_SYS_PTRACE
- Available values: Whitespace-separated list of capability names with the CAP_ prefix.
- Description: Ambient Linux capabilities the axolet service will use.
AXOLET_CONFIG_OVERWRITE
- Default value: AXOLET_CONFIG_OVERWRITE=false
- Available values: true, false
- Description: If set to true, the configuration process will overwrite the existing configuration (/etc/axolet/config.json). This means that the agent will get a new GUID and will require approval on the Axoflow Console.
AXOLET_GROUP
- Default value: AXOLET_GROUP=root
- Available values: Any group name available on the host.
- Description: Name of the group Axolet will be running as. It should be either root or the group syslog-ng is running as.
AXOLET_INSTALL_PACKAGE
- Default value: AXOLET_INSTALL_PACKAGE=auto
- Available values: auto, rpm, deb, tar, none
- Description: The method to use for installing the axolet binaries, the configure-axolet script, and the systemd service file. When none is selected, the configure-axolet script expects a pre-installed package; when set to auto, it tries to find the proper method for the current OS.
AXOLET_START
- Default value: AXOLET_START=true
- Available values: true, false
- Description: Start the axolet agent at the end of the installation. Use false when preparing golden images: in this case, axolet will generate a new GUID on the first boot after cloning the image.
If you are preparing a host for cloning with Axolet already installed, set the following environment variable in your (root) shell session before running the one-liner command. For example:
export AXOLET_START=false
curl ... # Run the command copied from the Provisioning page
This way Axolet will only start and initialize after the first reboot.
AXOLET_USER
- Default value: AXOLET_USER=root
- Available values: Any username available on the host.
- Description: Name of the user Axolet will be running as. It should be either root or the user syslog-ng is running as. See also Run axolet as non-root.
7.1.5.3 - Run axolet as non-root
If the log collector agent (AxoSyslog or syslog-ng) is running as a non-root user, you may want to configure the Axolet agent to run as the same user.
To do that, set the AXOLET_USER and AXOLET_GROUP environment variables to the user's username and group name. For details, see Advanced installation options.
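For example, to run Axolet as the syslogng user, set the variables before running the one-liner (a sketch; substitute the user and group your syslog-ng runs as):
export AXOLET_USER=syslogng AXOLET_GROUP=syslogng
curl ... # Run the command copied from the Provisioning page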
Operators will need to have access to the following commands:
-
/usr/bin/systemctl * axolet.service: Controls the axolet.service systemd unit. Usually * is start, stop, restart, enable, and status. Used by the operators for troubleshooting.
-
/usr/local/bin/configure-axolet: Creates the initial axolet configuration and enables/starts the axolet service. Executed by the bootstrap script.
-
Command to install and upgrade the axolet package. Executed by the bootstrap script if the packages aren't already installed.
- On RPM-based Linux distributions: /usr/bin/rpm -Uv axo*.rpm
- On DEB-based Linux distributions: /usr/bin/dpkg -i axo*.deb
For example, you can permit the syslogng user to run these commands by running one of the following:
sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for rpm installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/rpm -Uv axo*.rpm
A
sudo tee /etc/sudoers.d/configure-axoflow <<A
syslogng ALL=(ALL) NOPASSWD: /usr/local/bin/configure-axolet
syslogng ALL=(ALL) NOPASSWD: /bin/systemctl * axolet.service
# for deb installation:
syslogng ALL=(ALL) NOPASSWD: /usr/bin/dpkg -i axo*.deb
A
7.1.5.4 - Reinstall Axolet
Reinstall Axolet to a (cloned) machine
To re-install Axolet on a (cloned) machine that has an earlier installation running, complete the following steps.
-
Log in to the host as root and execute the following commands:
systemctl stop axolet
rm -r /etc/axolet/
-
Follow the regular installation steps described in Axolet.
Recover Axolet after the root CA certificate was rotated
-
Log in to the host as root and execute the following commands:
export AXOLET_KEEP_GUID=y AXOLET_CONFIG_OVERWRITE=y PATH=$PATH:/usr/local/bin
-
Run the one-liner from the Axoflow Console Provisioning page in the same shell.
-
The installation may result in an error message, but Axolet should eventually recover. You can check by running:
journalctl -b -u axolet -n 20 -f
7.1.6 - Data sources
7.1.7 - Destinations
7.2 - Topology
Based on the collected metrics, Axoflow visualizes the topology of your security data pipeline. The topology allows you to get a semi-real-time view of how your edge-to-edge data flows, and drill down to the details and metrics of your pipeline elements to quickly find data transport issues.
Select the name of a source host or a router to show the details and metrics of that pipeline element.
If a host has active alerts, it’s indicated on the topology as well.
Traffic volume
Select bps or eps in the top bar to show the volume of the data flow on the topology paths in bytes per second or events per second.
Queues
Select queue in the top bar to show the status of the memory and disk queues of the hosts. Select the status indicators of a host to display the details of the queues.
The following details about the queue help you diagnose which connections of the host are having throughput or connectivity issues:
- capacity: The total capacity of the buffer in bytes.
- usage: How much of the total capacity is currently in use.
- driver: The type of the AxoSyslog driver used for the connection the queue belongs to.
- id: The identifier of the connection.
Disk-based queues also have the following information:
- path: The location of the disk-buffer file on the host.
- reliable: Indicates whether the disk-buffer is set to be reliable or not.
- worker: The ID of the AxoSyslog worker thread the queue belongs to.
Both memory-based and disk-based queues have additional details that depend on the destination.
7.3 - Hosts
Axoflow collects and shows you a wealth of information about the hosts of your security data pipeline.
7.3.1 - Find a host
To find a specific host, you have the following options:
-
Open the Topology page, then click the name of the host you’re interested in.
-
Open the Hosts page and find the host you’re interested in.
Use the search bar to quickly find the host by name or by label.
7.3.2 - Host information
The Hosts page contains a quick overview of every data source and data processor node. The exact information depends on the type of the host: hosts managed by Axoflow provide more information than external hosts.
The following information is displayed:
- Hostname or IP address
- Metadata labels. These include labels added automatically during the Axoflow curation process (like product name and vendor labels), as well as any custom labels you’ve assigned to the host.
For hosts that have the Axoflow agent (Axolet) installed:
- The version of the agent
- The name and version of the log collector running on the host (for example, AxoSyslog or Splunk Connect for Syslog)
- Operating system, version, and architecture
- Resource information: CPU and memory usage, disk buffer usage. Click on a resource to open its resource history on the Metrics & health page of the host.
- Traffic information: volume of the incoming and outgoing data traffic on the host.
For more details about the host, select the hostname, or click ⋮.
7.3.3 - Custom labels and metadata
To add custom labels to a host, complete the following steps. Note that these are static labels. To add labels dynamically based on the contents of the processed data, use the processing steps in data flows.
-
Find the host on the Hosts or the Topology page, and click on its hostname. The overview of the host is displayed.
-
Select Edit.
You can add custom labels in <label-name>:<value> format (for example, the group or department a source device belongs to), or a generic description about the host. You can use the labels to quickly find the host on the Hosts page, and also for filtering when configuring Flows.
When using labels in filters, processing steps, or search bars, note that:
- Labels added to AxoRouter hosts get the axo_host_ prefix.
- Labels added to data sources get the host_ prefix. For example, if you add a rack label to an edge host, it'll be added to the data received from the host as host_rack.
On other pages, like the Host Overview page, the labels are displayed without the prefixes.
-
Select Save.
7.3.4 - Services
The Services page of a host shows information about the data collector or router services running on the host.
This page is only available for managed pipeline elements.
The following information is displayed:
- Name: Name of the service
- Version: Version number of the service
- Type: Type of the service (which data collector or processor it’s running)
- Supervisor: The type of the supervisor process. For example, Splunk Connect for Syslog (sc4s) runs a syslog-ng process under the hood.
Icons and colors indicate the status of the service: running, stopped, or not registered.
Service configuration
To check the configuration of the service, select Configuration. This shows the list of related environment variables and configuration files. Select a file or environment variable to display its value.
To display other details of the service (for example, the location of the configuration file or the binary), select Details.
Manage service
To reload a registered service, select Reload.
7.4 - Log tapping
Log tapping in Axoflow samples the log flow. You can use labels to filter for specific messages (like ones with parse errors) and tap only those messages. To avoid overwhelming you with events, Axoflow automatically samples the output: if many messages match the selected filter, only a subset is shown (about 1 message per second). Using log tapping, you can quickly troubleshoot both parsing/curation errors and destination ingest (API) errors, and check:
- What was in the original message?
- What is sent in the final payload to the destination?
To see log tapping in action, check this blog post. To tap into your log flow, or to display the logs of the log collector agent, complete the following steps.
-
Open the Topology page, then select the host where you want to tap the logs, for example, an AxoRouter deployment.
-
Select ⋮ > Tap log flow.
-
Tap into the log flow.
- To see the input data, select Input log flow > Start.
- To see the output data, select Output log flow > Start.
- To see the logs of the log collector agent running on the host, select Agent logs.
You can use labels to filter the messages and sample only the matching ones.
-
When the logs you’re interested in show up, click Stop Log Tap, then click a log message to see its details.
-
If you don’t know what the message means, select AI Analytics to ask our AI to interpret it.
Filter the messages
You can add labels to the Filter By Label field to sample only messages matching the filter. If you specify multiple labels, only messages that match all filters will be sampled. For example, the following filter selects messages from a specific source IP, sent to a specific destination IP.
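A hypothetical illustration of such a filter (the label names below are illustrative only, not the exact keys used by your deployment):
src_ip: 10.1.0.5
dst_ip: 192.0.2.10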
7.5 - Alerts
Axoflow raises alerts for a number of events in your security data pipeline, for example:
- a destination becomes unavailable or isn’t accepting messages,
- a host is dropping packets or messages,
- a host becomes unavailable,
- the disk queues of a host are filling up,
- abandoned (orphaned) disk buffer files are on the host,
- the configuration of a managed host failed to synchronize or reload,
- traffic of a flow has unexpectedly dropped.
Alerts are indicated at a number of places:
-
On the Alerting page. Select an alert to display its details.
-
On the Topology page.
-
On the Overview page of the host.
-
On the Metrics & Health page of the host, where the alerts are shown as an overlay to the metrics.
7.6 - Private connections
7.6.1 - Google Private Service Connect
If you want your hosts in Google Cloud to access the Axoflow Console without leaving the Google network, we recommend that you use Google Cloud Private Service Connect (PSC) to secure the connection from your VPC to Axoflow.
Prerequisites
Contact Axoflow and provide the list of projects so we can set up an endpoint for your PSC. You will receive information from us that you’ll need to properly configure your connection.
You will also need to allocate a dedicated IP address for the connection in a subnet that’s accessible for the hosts.
Steps
After you have received the details of your target endpoint from Axoflow, complete the following steps to configure Google Cloud Private Service Connect from your VPC to Axoflow Console.
-
Open Google Cloud Console and navigate to Private Service Connect > Connected endpoints.
-
Select the project you want to connect to Axoflow.
-
Navigate to Connect endpoint and complete the following steps.
- Select Target > Published service.
- Set Target service to the service name you’ve received from Axoflow. The service name should be similar to:
projects/axoflow-shared/regions/<region-code>/serviceAttachments/<your-tenant-ID>
- Set Endpoint name to the name you prefer, or the one recommended by Axoflow. The recommended service name is similar to:
psc-axoflow-<your-tenant-ID>
- Select your VPC in the Network field.
- Set Subnet where the endpoint should appear. Since subnets are regional resources, select a subnet in the region you received from Axoflow.
- Select Create IP address and allocate an address for the endpoint. Save the address; you'll need it later to verify that the connection is working.
- Select Enable global access.
- There is no need to enable the directory API even if it’s offered by Google.
- Select Add endpoint.
-
Test the connection.
-
Log in (using SSH) to a machine where you want to use the PSC.
-
Test the connection. Run the following command using the IP address you’ve allocated for the endpoint.
curl -vk https://<IP-address-allocated-for-the-endpoint>
If the connection is established, you’ll receive an HTTP 404 response.
-
If the connection is established, configure DNS resolution on the hosts. Complete the following steps.
Setting up selected machines to use the PSC
-
Add the following entry to the /etc/hosts file of the machine.
<IP-address-allocated-for-the-endpoint> <your-tenant-id>.cloud.axoflow.io kcp.<your-tenant-id>.cloud.axoflow.io telemetry.<your-tenant-id>.cloud.axoflow.io
-
Run the following command to test DNS resolution:
curl -v https://<your-tenant-id>.cloud.axoflow.io
It should load an HTML page from the IP address of the endpoint.
-
If the host is running axolet, restart it by running:
sudo systemctl restart axolet.service
Check the axolet logs to verify that there are no errors:
sudo journalctl -fu axolet
-
Deploy the changes of the /etc/hosts file to all your VMs.
Setting up whole VPC networks to use the PSC
-
Open Google Cloud Console and in the Cloud DNS service navigate to the Create a DNS zone page.
-
Create a new private zone with the zone name <your-tenant-id>.cloud.axoflow.io, and select the networks you want to use the PSC in.
-
Add the following three A records, all of which target the <IP-address-allocated-for-the-endpoint>:
<your-tenant-id>.cloud.axoflow.io
kcp.<your-tenant-id>.cloud.axoflow.io
telemetry.<your-tenant-id>.cloud.axoflow.io
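For example, using the gcloud CLI (a sketch; the zone name and TTL are illustrative):
gcloud dns record-sets create <your-tenant-id>.cloud.axoflow.io. --zone=<your-private-zone> --type=A --ttl=300 --rrdatas=<IP-address-allocated-for-the-endpoint>
gcloud dns record-sets create kcp.<your-tenant-id>.cloud.axoflow.io. --zone=<your-private-zone> --type=A --ttl=300 --rrdatas=<IP-address-allocated-for-the-endpoint>
gcloud dns record-sets create telemetry.<your-tenant-id>.cloud.axoflow.io. --zone=<your-private-zone> --type=A --ttl=300 --rrdatas=<IP-address-allocated-for-the-endpoint>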
8 - Data management
This section describes how Axoflow processes your security data, and how to configure and monitor your data flows in Axoflow.
8.1 - Processing elements
Axoflow processes the data transported in your security data pipeline in the following stages:
-
Sources: Data enters the pipeline from a data source. A data source can be an external appliance or application, or a log collector agent managed by Axoflow.
-
Custom metadata on the source: You can configure Axoflow to automatically add custom metadata to the data received from a source.
-
Router: The AxoRouter data aggregator automatically processes the data it receives from the sources.
- Metadata: AxoRouter classifies and identifies the incoming messages and adds metadata, for example, the vendor and product of the identified source.
- Data extraction: AxoRouter extracts the relevant information from the content of the messages, and makes it available as structured data.
The router can perform other processing steps, as configured in the flows that apply to the specific router (see the next step).
-
Flow: You can configure flows in the Axoflow Console that Axoflow uses to configure the AxoRouter instances to filter, route, and process the security data. Flows also allow you to automatically remove unneeded or redundant information from the messages, reducing data volume and SIEM and storage costs.
-
Destination: The router sends data to the specified destination in a format optimized for the specific destination.
8.2 - Flow management
Axoflow uses flows to manage the routing and processing of security data. A flow applies to one or more AxoRouter instances. The Flows page lists the configured flows.
Each flow consists of the following main elements:
- A Router selector that specifies the AxoRouter instances the flow applies to. Multiple flows can apply to a single AxoRouter instance.
- Processing steps that filter and select the messages to process, set/unset message fields, and perform various data transformation and data reduction steps.
- One or more Destinations where the AxoRouter instances of the flow deliver the data. Destinations can be external destinations (for example, a SIEM), or other AxoRouter instances.
Click ⋮ or the name of the flow to display the details of the flow.
8.2.1 - Create flow
To create a new flow, open the Axoflow Console, then complete the following steps:
-
Select Flows.
-
Select Create New Flow.
-
Enter a name for the flow, for example, my-test-flow.
-
In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
-
Select the Destination where you want to send your data. If you don’t have any destination configured, see Destinations.
-
(Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
- Add a Reduce step to automatically remove redundant and empty fields from your data.
- To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
- Save the processing steps.
-
Select Create.
-
The new flow appears in the Flows list.
8.2.2 - Processing steps
Flows can include a list of processing steps to select and transform the messages. All log messages received by the routers of the flow are fed into the pipeline constructed from the processing steps, then the filtered and processed messages are forwarded to the specified destination.
CAUTION:
Processing steps are executed in the order they appear in the flow. Make sure to arrange the steps in the proper order to avoid problems.
To add processing steps to an existing flow, complete the following steps.
-
Open the Flows page and select the flow you want to modify.
-
Select Process > Add new Processing Step.
-
Select the type of the processing step to add.
The following types of processing steps are available:
- Select Messages: Filter the messages using a query. Only the matching messages will be processed in subsequent steps.
- Set Field: Set the value of a field on the message object.
- Unset Field: Unset the specified field of the message object.
- Reduce: Automatically remove redundant content from the messages.
- FilterX: Execute an arbitrary FilterX snippet.
- Probe: A measurement point that provides metrics about the throughput of the flow.
-
Configure the processing step as needed for your environment.
-
(Optional) To apply the processing step only to specific messages from the flow, set the Condition field of the step. Conditions act as filters, but apply only to this step. Messages that don’t match the condition will be processed by the subsequent steps.
-
(Optional) If needed, drag-and-drop the step to change its location in the flow.
-
Select Save.
FilterX
Execute an arbitrary FilterX snippet. The following example checks if the meta.host.labels.team field exists, and sets the meta.openobserve variable to a JSON value that contains stream as a key with the value of the meta.host.labels.team field.
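A minimal sketch of what such a snippet might look like (assuming FilterX's isset() function and dict-literal syntax; the exact syntax may differ in your version):
if (isset(meta.host.labels.team)) {
    meta.openobserve = {"stream": meta.host.labels.team};
};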
Probe
Set a measurement point that provides metrics about the throughput of the flow.
For details, see Flow metrics.
Reduce
Automatically remove redundant content from the messages.
Select messages
Filter the messages using a query. Only the matching messages will be processed in subsequent steps. The following example selects messages that have the resource.attributes["service.name"] label set to nginx.
Set field
Set a specific field of the message. You can use static values, and also dynamic values that were extracted from the message automatically by Axoflow or manually in a previous processing step. If the field doesn't exist, it's automatically created. The following example sets the resource.attributes.format field to parsed.
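A sketch of the step's settings for this example (illustrative field name and value):
Field: resource.attributes.format
Value: parsed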
Unset field
Unset a specific field of the message.
8.2.3 - Flow metrics
The Metrics page of a flow shows the amount of data (in events per second or bytes per second) processed by the flow. By default, the flow provides metrics for the input and the output of the flow. To get more information about the data flow, add Probe steps to the flow. For example, you can add a probe after:
- Select Messages steps to check the amount of data that match the filter, or
- Reduce steps to check the ratio of data saved.
8.2.4 - Paths
Create a path
Open the Axoflow Console.
-
Select Topology > + Path.
-
Select your data source in the Source host field.
-
Select the target router or aggregator this source is sending its data to in the Target host field, for example, axorouter.
-
Select the Target connector. The connector determines how the destination receives the data (for example, using which protocol or port).
-
Select Create. The new path appears on the Topology page.
9 - Data sources
The following chapters show you how to configure specific appliances, applications, and other sources to send their log data to Axoflow.
9.1 - Generic tips
The following section gives you a generic overview of how to configure an appliance to send its log data to Axoflow. If there is a specific section for your appliance, follow that instead of the generic procedure. For details, see the documentation of your appliance.
CAUTION:
Make sure to set up data forwarding as described in this guide. Different settings (like another message format or port) might be valid, but can result in data loss or incorrect parsing.
Prerequisites
Steps
-
Log in to your device. You need administrator privileges to perform the configuration.
-
If needed, enable syslog forwarding on the device.
-
Set AxoRouter as the syslog server. Typically, you can configure the following parameters:
-
Name or IP Address of the syslog server: Set the address of your AxoRouter.
-
Protocol: If possible, set TCP or TLS.
-
Syslog Format: If possible, set RFC5424 (or equivalent), otherwise leave the default.
-
Port: Set a port appropriate for the protocol and syslog format you have configured.
By default, AxoRouter accepts data on the following ports:
- 514 TCP and UDP for RFC3164 (BSD-syslog) formatted traffic.
- 601 TCP for RFC5424 (IETF-syslog) formatted traffic.
- 6514 TCP for TLS-encrypted traffic.
- 4317 TCP for OpenTelemetry log data.
Make sure to enable the ports you're using on the firewall of your host, as shown in the example after these steps.
-
Add the appliance to the Axoflow Console. For details, see Appliances.
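For example, on a firewalld-based AxoRouter host you could open the default ports like this (a sketch, assuming you use all four default ports; adjust the list to your setup):
sudo firewall-cmd --permanent --add-port=514/tcp --add-port=514/udp --add-port=601/tcp --add-port=6514/tcp --add-port=4317/tcp
sudo firewall-cmd --reload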
Note
If your syslog source is running syslog-ng, Splunk Connect for Syslog (SC4S), or AxoSyslog as its log forwarder agent, consider installing Axolet on the host and instrumenting the configuration of the log forwarder to receive detailed metrics about the host and the processed data. For details, see Manage and monitor the pipeline.
9.2 - Appliances
To onboard an appliance that is specifically supported by Axoflow, complete the following steps. Onboarding allows you to collect metrics about the host, and display the host on the Topology page.
-
Open the Axoflow Console.
-
Select Topology.
-
Select + > Source.
-
If the appliance is already sending logs to an AxoRouter instance that is registered in the Axoflow Console, select Detected, then select the appliance.
Otherwise, select the type of the appliance you want to onboard, and follow the on-screen instructions.
-
Connect the appliance to the destination or AxoRouter instance it’s sending logs to.
- Select Topology > + > Path.
- Set the appliance as the Source host.
- Set the destination or the AxoRouter instance as Target host.
-
Configure the appliance to send logs to an AxoRouter instance. Specific instructions regarding individual vendors are listed below, along with default metadata (labels) and specific metadata for Splunk.
9.2.1 - Cisco
9.2.1.1 - Adaptive Security Appliance (ASA)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | asa |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| cisco:asa | netfw |
9.2.1.2 - Application Control Engine (ACE)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | ace |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| cisco:ace | netops |
9.2.1.3 - Cisco IOS
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | ios |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| cisco:ios | netops |
9.2.1.4 - Digital Network Architecture (DNA)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | dna |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| cisco:dna | netops |
9.2.1.5 - Email Security Appliance (ESA)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | esa |
| format | text-plain \| cef |
Note that the device can be configured to send plain syslog text or CEF-formatted output.
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, index, and source settings:
| sourcetype | index | source |
| ---------- | ----- | ------ |
| cisco:esa:http | email | esa:http |
| cisco:esa:textmail | email | esa:textmail |
| cisco:esa:amp | email | esa:amp |
| cisco:esa:antispam | email | esa:antispam |
| cisco:esa:system_logs | email | esa:system_logs |
| cisco:esa:system_logs | email | esa:euq_logs |
| cisco:esa:system_logs | email | esa:service_logs |
| cisco:esa:system_logs | email | esa:reportd_logs |
| cisco:esa:system_logs | email | esa:sntpd_logs |
| cisco:esa:system_logs | email | esa:smartlicense |
| cisco:esa:error_logs | email | esa:error_logs |
| cisco:esa:error_logs | email | esa:updater_logs |
| cisco:esa:content_scanner | email | esa:content_scanner |
| cisco:esa:authentication | email | esa:authentication |
| cisco:esa | email | program: <variable> |
| cisco:esa:cef | email | esa:consolidated |
Tested with: Splunk Add-on for Cisco ESA
9.2.1.6 - Firepower
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | firepower |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| cisco:firepower:syslog | netids |
9.2.1.7 - Firepower Threat Defence (FTD)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | ftd |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| cisco:ftd | netfw |
9.2.1.8 - Firewall Services Module (FWSM)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | fwsm |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| cisco:fwsm | netfw |
9.2.1.9 - HyperFlex (HX, UCSH)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | ucsh |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| cisco:ucsh:hx | infraops |
9.2.1.10 - Integrated Management Controller (IMC)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | cimc |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| cisco:cimc | infraops |
9.2.1.11 - IOS XR
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | xr |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| cisco:xr | netops |
9.2.1.12 - Meraki MX
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | meraki |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| cisco:meraki | netfw |
Tested with: TA-meraki
9.2.1.13 - Private Internet eXchange (PIX)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | pix |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| cisco:pix | netfw |
9.2.1.14 - TelePresence Video Communication Server (VCS)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | tvcs |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| cisco:tvcs | main |
9.2.1.15 - Unified Computing System Manager (UCSM)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | ucsm |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| cisco:ucs | infraops |
9.2.1.16 - Unified Communications Manager (UCM)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | ucm |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| cisco:ucm | netops |
9.2.1.17 - Viptela
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | cisco |
| product | viptela |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| cisco:viptela | netops |
9.2.2 - F5 Networks
9.2.2.1 - BIG-IP
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | f5 |
| product | bigip |
| format | text-plain \| JSON \| kv |
Note that the device can be configured to send plain syslog text, JSON, or key-value pairs.
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| f5:bigip:syslog | netops |
| f5:bigip:ltm:access_json | netops |
| f5:bigip:asm:syslog | netops |
| f5:bigip:apm:syslog | netops |
| f5:bigip:ltm:ssl:error | netops |
| f5:bigip:ltm:tcl:error | netops |
| f5:bigip:ltm:traffic | netops |
| f5:bigip:ltm:log:error | netops |
| f5:bigip:gtm:dns:request:irule | netops |
| f5:bigip:gtm:dns:response:irule | netops |
| f5:bigip:ltm:http:irule | netops |
| f5:bigip:ltm:failed:irule | netops |
| nix:syslog | netops |
Tested with: Splunk Add-on for F5 BIG-IP
9.2.3 - Fortinet
9.2.3.1 - FortiGate firewalls
The following sections show you how to configure FortiGate Next-Generation Firewall (NGFW) to send their log data to Axoflow.
CAUTION:
Make sure to set up data forwarding as described in this guide. Different settings (like another message format or port) might be valid, but can result in data loss or incorrect parsing.
Prerequisites
Steps
Note: The steps involving the FortiGate user interface are provided for your convenience; for details, see the official FortiGate documentation.
-
Log in to your FortiGate device. You need administrator privileges to perform the configuration.
-
Register the address of your AxoRouter as an Address Object.
-
Select Log & Report > Log Settings > Global Settings.
-
Configure the following settings:
- Event Logging: Click All.
- Local traffic logging: Click All.
- Syslog logging: Enable this option.
- IP address/FQDN: Enter the address of your AxoRouter: %axorouter-ip%
-
Click Apply.
-
Add the appliance to the Axoflow Console. For details, see Appliances.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | fortinet |
| product | fortigate |
| format | kv |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| fortigate_event | netops |
| fortigate_traffic | netfw |
| fortigate_utm | netfw |
Tested with: Fortinet FortiGate Add-On for Splunk technical add-on
9.2.3.2 - FortiMail
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | fortinet |
| product | fortimail |
| format | kv |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| fml:log | email |
Tested with: FortiMail Add-on for Splunk technical add-on
9.2.3.3 - FortiWeb
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | fortinet |
| product | fortiweb |
| format | kv |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| fwb_log | netops |
| fwb_attack | netids |
| fwb_event | netops |
| fwb_traffic | netfw |
Tested with: Fortinet FortiWeb Add-on for Splunk technical add-on
9.2.4 - Infoblox
9.2.4.1 - NIOS
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | infoblox |
| product | nios |
| format | cef |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype, index, and source settings:
| sourcetype | index | source |
| ---------- | ----- | ------ |
| infoblox:threatprotect | netids | Infoblox:NIOS |
| infoblox:dns | netids | Infoblox:NIOS |
Tested with: Splunk Add-on for Infoblox
9.2.5 - Juniper
9.2.5.1 - Junos OS
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | juniper |
| product | junos |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| juniper:junos:aamw:structured | netfw |
| juniper:junos:firewall | netfw |
| juniper:junos:firewall | netids |
| juniper:junos:firewall:structured | netfw |
| juniper:junos:firewall:structured | netids |
| juniper:junos:idp | netids |
| juniper:junos:idp:structured | netids |
| juniper:legacy | netops |
| juniper:junos:secintel:structured | netfw |
| juniper:junos:snmp | netops |
| juniper:structured | netops |
Tested with: Splunk Add-on for Juniper
9.2.6 - Kaspersky
9.2.6.1 - Endpoint Security
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | kaspersky |
| product | endpoint_security |
| format | text-plain \| cef \| leef |
Note that the device can be configured to send plain syslog text, LEEF, or CEF-formatted output.
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| kaspersky:cef | epav |
| kaspersky:es | epav |
| kaspersky:gnrl | epav |
| kaspersky:klau | epav |
| kaspersky:klbl | epav |
| kaspersky:klmo | epav |
| kaspersky:klna | epav |
| kaspersky:klpr | epav |
| kaspersky:klsr | epav |
| kaspersky:leef | epav |
| kaspersky:sysl | epav |
9.2.7 - Microsoft
9.2.7.1 - Azure Event Hubs
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | microsoft |
| product | azure-event-hubs |
| format | otlp |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| mscs:azure:eventhub:log | azure-activity |
9.2.7.2 - Windows Event Log
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | microsoft |
| product | windows |
| format | eventxml \| snare |
Note: Windows event logs can arrive in several formats and via several different methods.
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| windows:eventlog:snare | oswin |
| windows:eventlog:xml | oswin |
9.2.8 - MikroTik
9.2.8.1 - RouterOS
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | mikrotik |
| product | routeros |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| routeros | netfw |
| routeros | netops |
9.2.9 - Netgate
9.2.9.1 - pfSense
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | netgate |
| product | pfsense |
| format | csv \| text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| pfsense:filterlog | netfw |
| pfsense:<program> | netops |
The pfsense:<program> variant is simply a generic Linux event generated by the underlying OS on the appliance.
Tested with: TA-pfsense
9.2.10 - Netmotion
9.2.10.1 - Netmotion
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| ----- | ----- |
| vendor | netmotion |
| product | netmotion |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| ---------- | ----- |
| netmotion:reporting | netops |
| netmotion:mobilityserver:nm_mobilityanalyticsappdata | netops |
9.2.11 - Palo Alto Networks
9.2.11.1 - Palo Alto firewalls
The following sections show you how to configure Palo Alto Networks Next-Generation Firewall devices to send their log data to Axoflow.
CAUTION:
Make sure to set up data forwarding as described in this guide. Different settings (like another message format or port) might be valid, but can result in data loss or incorrect parsing.
Prerequisites
Steps
Note: The steps involving the Palo Alto Networks Next-Generation Firewall user interface are provided for your convenience; for details, see the official PAN-OS® documentation.
-
Log in to your firewall device. You need administrator privileges to perform the configuration.
-
Configure a Syslog server profile.
-
Select Device > Server Profiles > Syslog.
-
Click Add and enter a Name for the profile, for example, axorouter.
-
Configure the following settings:
- Syslog Server: Enter the IP address of your AxoRouter: %axorouter-ip%
- Transport: Select TCP or TLS.
- Format: Select IETF.
- Syslog logging: Enable this option.
-
Click OK.
-
Configure syslog forwarding for Traffic, Threat, and WildFire Submission logs. For details, see Configure Log Forwarding in the official PAN-OS® documentation.
- Select Objects > Log Forwarding.
- Click Add.
- Enter a Name for the profile, for example, axoflow.
- For each log type, severity level, or WildFire verdict, select the Syslog server profile.
- Click OK.
- Assign the log forwarding profile to a security policy to trigger log generation and forwarding.
- Select Policies > Security and select a policy rule.
- Select Actions, then select the Log Forwarding profile you created (for example,
axoflow
).
- For Traffic logs, select one or both of the Log at Session Start and Log At Session End options.
- Click OK.
-
Configure syslog forwarding for System, Config, HIP Match, and Correlation logs.
- Select Device > Log Settings.
- For System and Correlation logs, select each Severity level, select the Syslog server profile, and click OK.
- For Config, HIP Match, and Correlation logs, edit the section, select the Syslog server profile, and click OK.
-
Click Commit.
-
Add the appliance to the Axoflow Console. For details, see Appliances.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | pan |
| product | paloalto |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| pan:audit | netops |
| pan:globalprotect | netfw |
| pan:hipmatch | epintel |
| pan:traffic | netfw |
| pan:threat | netproxy |
| pan:system | netops |
Tested with: Palo Alto Networks Add-on for Splunk technical add-on
9.2.12 - Riverbed
9.2.12.1 - SteelConnect
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | riverbed |
| product | steelconnect |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| riverbed:syslog | netops |
9.2.12.2 - SteelHead
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | riverbed |
| product | steelhead |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| riverbed:steelhead | netops |
9.2.13 - rsyslog
Axoflow treats rsyslog sources as generic syslog sources. To send data from rsyslog to Axoflow, configure rsyslog to send data to an AxoRouter instance using the syslog protocol.
Note that even if rsyslog acts as a relay (receiving data from other clients and forwarding it to AxoRouter), it is displayed as a data source on the Topology page.
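Before pointing rsyslog at AxoRouter, it can help to confirm that the router accepts syslog traffic on the expected port. The following is a minimal sketch, not an Axoflow tool: the hostname is a placeholder, and it assumes the default 514/TCP listener for RFC3164 traffic.

```python
import socket
from datetime import datetime

# Placeholder: substitute the address of your AxoRouter instance.
AXOROUTER = ("axorouter.example.com", 514)

# Minimal RFC3164 frame: <PRI>TIMESTAMP HOSTNAME TAG: message
pri = 14  # facility user (1) * 8 + severity info (6)
timestamp = datetime.now().strftime("%b %d %H:%M:%S")
line = f"<{pri}>{timestamp} testhost rsyslog-test: hello from the onboarding check\n"

with socket.create_connection(AXOROUTER, timeout=5) as conn:
    conn.sendall(line.encode("utf-8"))
print("Test message sent; it should show up as a data source on the Topology page.")
```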
Prerequisites
9.2.14 - SecureAuth
9.2.14.1 - Identity Platform
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | secureauth |
| product | idp |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| secureauth:idp | netops |
Tested with: SecureAuth IdP Splunk App
9.2.15 - SonicWall
9.2.15.1 - SonicWall
The following sections show you how to configure SonicWall firewalls to send their log data to Axoflow.
CAUTION:
Make sure to set up data forwarding as described in this guide. Other settings, such as a different message format or port, might be valid, but can result in data loss or incorrect parsing.
Prerequisites
Steps for SonicOS 7.x
Note: The steps involving the SonicWall user interface are provided for your convenience; for details, see the official SonicWall documentation.
1. Log in to your SonicWall device. You need administrator privileges to perform the configuration.
2. Register the address of your AxoRouter as an Address Object.
    1. Select MENU > OBJECT.
    2. Select Match Objects > Addresses > Address objects.
    3. Click Add Address.
    4. Configure the following settings:
        - Name: Enter a name for the AxoRouter, for example, AxoRouter.
        - Zone Assignment: Select the correct zone.
        - Type: Select Host.
        - IP Address: Enter the IP address of your AxoRouter: %axorouter-ip%
    5. Click Save.
3. Set your AxoRouter as a syslog server.
    1. Navigate to Device > Log > Syslog.
    2. Select the Syslog Servers tab.
    3. Click Add.
    4. Configure the following options:
        - Name or IP Address: Select the Address Object of the AxoRouter.
        - Server Type: Select Syslog Server.
        - Syslog Format: Select Enhanced.
        - If your syslog server does not use the default port 514, type the port number into the Port field. By default, AxoRouter accepts data on the following ports (see the connectivity check after these steps):
            - 514 TCP and UDP for RFC3164 (BSD-syslog) formatted traffic.
            - 601 TCP for RFC5424 (IETF-syslog) formatted traffic.
            - 6514 TCP for TLS-encrypted traffic.
            - 4317 TCP for OpenTelemetry log data.
          Make sure to enable the ports you're using on the firewall of your host.
4. Add the appliance to the Axoflow Console. For details, see Appliances.
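To verify that the path between the appliance and the AxoRouter actually allows the default TCP listeners listed above, a quick reachability check can help. This is an illustrative sketch, not part of Axoflow; the IP address is a placeholder for your %axorouter-ip%.

```python
import socket

AXOROUTER_IP = "192.0.2.10"  # placeholder for your %axorouter-ip%

# Default AxoRouter TCP listeners, as listed in the steps above.
TCP_PORTS = {
    514: "RFC3164 (BSD-syslog)",
    601: "RFC5424 (IETF-syslog)",
    6514: "TLS-encrypted syslog",
    4317: "OpenTelemetry log data",
}

for port, usage in TCP_PORTS.items():
    try:
        with socket.create_connection((AXOROUTER_IP, port), timeout=3):
            print(f"{port}/tcp ({usage}): reachable")
    except OSError as err:
        print(f"{port}/tcp ({usage}): not reachable ({err})")
```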
Steps for SonicOS 6.x
Note: The steps involving the SonicWall user interface are provided for your convenience; for details, see the official SonicWall documentation.
1. Log in to your SonicWall device. You need administrator privileges to perform the configuration.
2. Register the address of your AxoRouter as an Address Object.
    1. Select MANAGE > Policies > Objects > Address Objects.
    2. Click Add.
    3. Configure the following settings:
        - Name: Enter a name for the AxoRouter, for example, AxoRouter.
        - Zone Assignment: Select the correct zone.
        - Type: Select Host.
        - IP Address: Enter the IP address of your AxoRouter: %axorouter-ip%
    4. Click Add.
3. Set your AxoRouter as a syslog server.
    1. Navigate to MANAGE > Log Settings > SYSLOG.
    2. Click ADD.
    3. Configure the following options:
        - Syslog ID: Enter an ID for the firewall. This ID is used as the hostname in the log messages.
        - Name or IP Address: Select the Address Object of the AxoRouter.
        - Server Type: Select Syslog Server.
        - Enable the Enhanced Syslog Fields Settings.
    4. Click OK.
4. Add the appliance to the Axoflow Console. For details, see Appliances.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | dell |
| product | sonicwall |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| dell:sonicwall | netfw |
Tested with: Dell SonicWall Add-on for Splunk technical add-on
9.2.16 - syslog-ng
By default, Axoflow treats syslog-ng sources as generic syslog sources.
- The easiest way to send data from syslog-ng to Axoflow is to configure it to send data to an AxoRouter instance using the syslog protocol.
- If you're using syslog-ng Open Source Edition version 4.4 or newer, you can use the syslog-ng-otlp() driver to send data to AxoRouter using the OpenTelemetry Protocol.
Note that even if syslog-ng acts as a relay (receiving data from other clients and forwarding it to AxoRouter), it is displayed as a data source on the Topology page.
Prerequisites
9.2.17 - Thales
9.2.17.1 - Vormetric Data Security Platform
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | thales |
| product | vormetric |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| thales:vormetric | netauth |
9.2.18 - Trellix
9.2.18.1 - Endpoint Security (HX)
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | trellix |
| product | hx |
| format | text-plain |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| hx_json | fireeye |
| fe_json | fireeye |
| hx_cef_sys | fireeye |
Tested with: FireEye Add-on for Splunk Enterprise
9.2.19 - Zscaler appliances
9.2.19.1 - Zscaler Nanolog Streaming Service
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | zscaler |
| product | nss |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| zscalernss-alerts | netops |
| zscalernss-tunnel | netops |
| zscalernss-web | netproxy |
| zscalernss-web:leef | netproxy |
Tested with: Zscaler Technical Add-On for Splunk
9.2.19.2 - Zscaler Log Streaming Service
To onboard such an appliance to Axoflow, complete the generic appliance onboarding steps.
Labels
Axoflow automatically adds the following labels to data collected from this source:
| label | value |
| --- | --- |
| vendor | zscaler |
| product | lss |
Sending data to Splunk
When sending the data collected from this source to Splunk, Axoflow uses the following sourcetype and index settings:
| sourcetype | index |
| --- | --- |
| zscalerlss-zpa-app | netproxy |
| zscalerlss-zpa-audit | netproxy |
| zscalerlss-zpa-auth | netproxy |
| zscalerlss-zpa-bba | netproxy |
| zscalerlss-zpa-connector | netproxy |
Tested with: Zscaler Technical Add-On for Splunk
10 - Destinations
The following chapters show you how to configure Axoflow to send your data to specific destinations, like SIEMs or object storage solutions.
10.1 - Amazon
10.1.1 - Amazon S3
To add an Amazon S3 destination to Axoflow, complete the following steps.
Prerequisites
- An existing S3 bucket configured for programmatic access, and the related ACCESS_KEY and SECRET_KEY of a user that can access it. The user needs the following permissions:
    - kms:Decrypt
    - kms:Encrypt
    - kms:GenerateDataKey
    - s3:ListBucket
    - s3:ListBucketMultipartUploads
    - s3:AbortMultipartUpload
    - s3:ListMultipartUploadParts
    - s3:PutObject
- To configure Axoflow, you'll need the bucket name, the region (or URL), and the access key and secret key of the bucket.
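To confirm that the credentials carry the required permissions before configuring the destination, you can exercise them with a short boto3 script. This is a minimal sketch: the bucket name, region, and credentials are placeholders, and only the S3 permissions (not the KMS ones) are exercised.

```python
import boto3  # pip install boto3
from botocore.exceptions import ClientError

# Placeholders: substitute your bucket, region, and programmatic credentials.
s3 = boto3.client(
    "s3",
    region_name="us-east-1",
    aws_access_key_id="AKIAEXAMPLEKEY",
    aws_secret_access_key="EXAMPLE-SECRET-KEY",
)

BUCKET = "my-log-bucket"
try:
    s3.put_object(Bucket=BUCKET, Key="axoflow-access-check", Body=b"ok")  # s3:PutObject
    s3.list_objects_v2(Bucket=BUCKET, MaxKeys=1)                          # s3:ListBucket
    print("PutObject and ListBucket succeeded; the credentials look usable.")
except ClientError as err:
    print("Permission check failed:", err.response["Error"]["Code"])
```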
Steps
1. Create a new destination.
    1. Open the Axoflow Console.
    2. Select Topology.
    3. Select + > Destination.
2. Configure the destination.
    1. Select Amazon S3.
    2. Enter a name for the destination.
    3. Enter the name of the bucket you want to use.
    4. Enter the region code of the bucket into the Region field (for example, us-east-1), or select the Use custom endpoint URL option and enter the URL of the endpoint into the URL field.
    5. Enter the Access key and the Secret key for the account you want to use.
    6. Enter the Object key (or key name), which uniquely identifies the object in an Amazon S3 bucket, for example: my-logs/${HOSTNAME}/. You can use AxoSyslog macros in this field.
    7. Select the Object key timestamp format you want to use, or select Use custom object key timestamp and enter a custom template. For details on the available date-related macros, see the AxoSyslog documentation.
    8. Set the maximal size of the S3 object. If an object reaches this size, Axoflow appends an index ("-1", "-2", …) to the end of the object key and starts a new object after rotation.
    9. Select Create.
3. Create a flow to connect the new destination to an AxoRouter instance.
    1. Select Flows.
    2. Select Create New Flow.
    3. Enter a name for the flow, for example, my-test-flow.
    4. In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
    5. Select the Destination where you want to send your data. If you don't have any destination configured, see Destinations.
    6. (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
        - Add a Reduce step to automatically remove redundant and empty fields from your data.
        - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
        - Save the processing steps.
    7. Select Create. The new flow appears in the Flows list.
10.1.2 - Amazon Security Lake
Coming soon
10.2 - Apache Kafka
Coming soon
10.3 - Clickhouse
Coming soon
10.4 - Confluent
Coming soon
10.5 - Crowdstrike
Coming soon
10.6 - Databricks
Coming soon
10.7 - Datadog
Coming soon
10.8 - Elastic
10.8.1 - Elastic Cloud
To add an Elasticsearch destination to Axoflow, complete the following steps.
Prerequisites
Steps
1. Create a new destination.
    1. Open the Axoflow Console.
    2. Select Topology.
    3. Select + > Destination.
2. Configure the destination.
    1. Select Elasticsearch.
    2. Enter a name for the destination.
    3. Enter your Elasticsearch URL into the URL field, for example, http://my-elastic-server:9200/_bulk
    4. Enter the type of the Elasticsearch index where you want to send your data into the Type field.
    5. Enter the expression that specifies the Elasticsearch index to use into the Index field, for example: test-${YEAR}${MONTH}${DAY}. You can use AxoSyslog macros in this field.
    6. Enter the name and password of the account you want to use.
    7. (Optional) By default, Axoflow rejects connections to the destination server if the certificate of the server is invalid (for example, it's expired, signed by an unknown CA, or its CN doesn't match the name of the server). If you want to accept invalid certificates (or no certificate) from the destination servers, disable the Verify server certificate option.
       Warning: When validating a certificate, the entire certificate chain must be valid, including the CA certificate. If any certificate of the chain is invalid, Axoflow rejects the connection.
    8. Select Create.
3. Create a flow to connect the new destination to an AxoRouter instance.
    1. Select Flows.
    2. Select Create New Flow.
    3. Enter a name for the flow, for example, my-test-flow.
    4. In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
    5. Select the Destination where you want to send your data. If you don't have any destination configured, see Destinations.
    6. (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
        - Add a Reduce step to automatically remove redundant and empty fields from your data.
        - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
        - Save the processing steps.
    7. Select Create. The new flow appears in the Flows list.
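If the destination doesn't receive data, it can help to verify the URL and credentials outside Axoflow by posting a single document to the bulk endpoint. A minimal sketch, assuming the example URL and index expression from the steps above; the account is a placeholder.

```python
import json
import requests  # pip install requests

URL = "http://my-elastic-server:9200/_bulk"   # the URL from the destination form
AUTH = ("elastic", "my-password")             # placeholder account

# The bulk API takes newline-delimited JSON: an action line, then a document line.
body = "\n".join([
    json.dumps({"index": {"_index": "test-20250101"}}),
    json.dumps({"message": "axoflow connectivity check"}),
]) + "\n"  # the bulk body must end with a newline

resp = requests.post(URL, data=body, auth=AUTH,
                     headers={"Content-Type": "application/x-ndjson"}, timeout=10)
resp.raise_for_status()
print("bulk errors:", resp.json().get("errors"))  # False means the document was indexed
```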
10.8.2 - Elasticsearch
Coming soon
10.9 - Grafana
10.9.1 - Grafana Loki
Coming soon
10.10 - Microsoft
10.10.1 - Azure Blob Storage
Coming soon
10.10.2 - Azure Monitor
Coming soon
10.11 - OpenObserve
To add an OpenObserve destination to Axoflow, complete the following steps.
Prerequisites
Steps
1. Create a new destination.
    1. Open the Axoflow Console.
    2. Select Topology.
    3. Select + > Destination.
2. Configure the destination.
    1. Select OpenObserve.
    2. Enter a name for the destination.
    3. Enter your OpenObserve URL into the URL field, for example, https://api.openobserve.ai/api/
    4. Enter the name of the OpenObserve stream where you want to send your data, for example, your-example-stream.
    5. Enter the name and password of the account you want to use.
    6. (Optional) By default, Axoflow rejects connections to the destination server if the certificate of the server is invalid (for example, it's expired, signed by an unknown CA, or its CN doesn't match the name of the server). If you want to accept invalid certificates (or no certificate) from the destination servers, disable the Verify server certificate option.
       Warning: When validating a certificate, the entire certificate chain must be valid, including the CA certificate. If any certificate of the chain is invalid, Axoflow rejects the connection.
    7. Select Create. The new destination appears on the Topology page.
3. Create a flow to connect the new destination to an AxoRouter instance.
    1. Select Flows.
    2. Select Create New Flow.
    3. Enter a name for the flow, for example, my-test-flow.
    4. In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
    5. Select the Destination where you want to send your data. If you don't have any destination configured, see Destinations.
    6. (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
        - Add a Reduce step to automatically remove redundant and empty fields from your data.
        - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
        - Save the processing steps.
    7. Select Create. The new flow appears in the Flows list.
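As with the other HTTP-based destinations, you can verify the OpenObserve URL and account outside Axoflow by posting a test record to the stream's JSON ingestion endpoint. A minimal sketch; the organization segment, stream name, and credentials are placeholders from your own account.

```python
import requests  # pip install requests

BASE = "https://api.openobserve.ai/api/"  # the URL from the destination form
ORG = "my-org"                            # your OpenObserve organization
STREAM = "your-example-stream"
AUTH = ("user@example.com", "my-password")

records = [{"message": "axoflow connectivity check", "source": "onboarding-test"}]
resp = requests.post(f"{BASE}{ORG}/{STREAM}/_json", json=records, auth=AUTH, timeout=10)
resp.raise_for_status()
print(resp.json())  # reports how many records the stream accepted
```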
10.12 - Snowflake
Coming soon
10.13 - Splunk
10.13.1 - Splunk Cloud
To add a Splunk destination to Axoflow, complete the following steps.
Prerequisites
- Enable the HTTP Event Collector (HEC) on your Splunk deployment if needed. On Splunk Cloud Platform deployments, HEC is enabled by default.
- Create a token for Axoflow to use in the destination. When creating the token, use the syslog source type. For details, see Set up and use HTTP Event Collector in Splunk Web.
- If you're using AxoRouter, create the indexes where Axoflow sends the log data. Which indexes you need depends on your sources, but create at least the following event indexes: axoflow, infraops, netops, netfw, and osnix (for unclassified messages). Check your sources in the Data sources section for a detailed list of the indexes their data is sent to.
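Before creating the destination, you can confirm the token works by sending a test event straight to HEC. A minimal sketch; the tenant URL and token are placeholders, and certificate verification is disabled only because free trial deployments use self-signed certificates.

```python
import requests  # pip install requests

HEC_URL = "https://example.splunkcloud.com:8088/services/collector/event"  # placeholder
TOKEN = "00000000-0000-0000-0000-000000000000"  # the token created for Axoflow

event = {
    "event": "axoflow HEC connectivity check",
    "sourcetype": "syslog",
    "index": "axoflow",
}
resp = requests.post(
    HEC_URL,
    json=event,
    headers={"Authorization": f"Splunk {TOKEN}"},
    timeout=10,
    verify=False,  # only for self-signed certificates (for example, free trials)
)
resp.raise_for_status()
print(resp.json())  # expect {"text": "Success", "code": 0}
```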
Steps
1. Create a new destination.
    1. Open the Axoflow Console.
    2. Select Topology.
    3. Select + > Destination.
2. Configure the destination.
    1. Select Splunk.
    2. Enter a name for the destination.
    3. Enter your Splunk URL into the URL field, for example, https://<your-splunk-tenant-id>.splunkcloud.com:8088 for Splunk Cloud Platform free trials, or https://<your-splunk-tenant-id>.splunkcloud.com for Splunk Cloud Platform instances.
    4. Enter the token you've created into the Token field.
    5. Disable the Verify server certificate option unless your deployment has a valid, non-self-signed certificate. Free Splunk Cloud accounts have self-signed certificates.
    6. Select Create.
Next step
Create a flow to connect the new destination to an AxoRouter instance.
1. Select Flows.
2. Select Create New Flow.
3. Enter a name for the flow, for example, my-test-flow.
4. In the Router Selector field, enter an expression that matches the router(s) you want to apply the flow to. To select a specific router, use a name selector, for example, name = my-axorouter-hostname.
5. Select the Destination where you want to send your data. If you don't have any destination configured, see Destinations.
6. (Optional) To process the data transferred in the flow, select Add New Processing Step. For details, see Processing steps. For example:
    - Add a Reduce step to automatically remove redundant and empty fields from your data.
    - To select which messages are processed by the flow, add a Select Messages step, and enter a filter into the Query field. For example, to select only the messages received from Fortinet FortiGate firewalls, use the meta.vendor = fortinet + meta.product = fortigate query.
    - Save the processing steps.
7. Select Create. The new flow appears in the Flows list.
10.13.2 - Splunk Enterprise
Coming soon
11 - Metrics and analytics
This chapter shows how to access the different metrics and analytics that Axoflow collects about your security data pipeline.
11.1 - Analytics
The Analytics page allows you to analyze the data throughput of your pipeline using Sankey and Sunburst diagrams. You can analyze the throughput of a single host on the Hosts > <host-to-analyze> > Analytics page.
The analytics charts
You can select what is displayed and how using the top bar and the Filter labels bar.
- Time period: Select the calendar icon to change the time period that's displayed on the charts. You can use absolute (calendar) time or relative time (for example, the last 2 days).
- Select the insights icon to switch between Sankey and Sunburst diagrams.
- You can display the data throughput based on:
    - Output bytes
    - Output events
    - Input bytes
    - Input events
- Use the filter icons to add and clear filters.
In the Filter labels bar, you can:
- Reorder the labels to adjust the diagram. On Sunburst diagrams, the left-most label is on the inside of the diagram.
- Add new labels to get more details about the data flow.
    - Labels added to AxoRouter hosts get the axo_host_ prefix.
    - Labels added to data sources get the host_ prefix. For example, if you add a rack label to an edge host, it is added to the data received from the host as host_rack.
  On other pages, like the Host Overview page, the labels are displayed without the prefixes.
- Remove unneeded labels from the diagram.
Click a segment of the diagram to drill down into the data; this is equivalent to selecting the add-filter icon and adding the label to the Analytics filters. To clear the filters, select the clear-filters icon.
Hovering over a segment displays more details about it.
Sunburst diagrams
Sunburst diagrams (also known as ring charts or radial treemaps) visualize your data pipeline as a hierarchical dataset. They organize the data according to the labels displayed in the Filter labels field into concentric rings, where each ring corresponds to a level in the hierarchy. The left-most label is on the inside of the diagram.
For example, sunburst diagrams are great for visualizing:
- top talkers (the data sources that are sending the most data), or
- which team is sending the most data to the destination, if you've added custom labels identifying the owner to your data sources.
The following example groups the data sources that send data into a Splunk destination based on their custom host_label_team labels.
Sankey diagrams
The Sankey diagram of your data pipeline shows the flow of data between the elements of the pipeline, for example, from the source (host) to the destination. Sankey diagrams are especially suited to visualize the flow of data, and show how that flow is subdivided at each stage. That way, they help highlight bottlenecks, and show where and how much data is flowing.
The diagram consists of nodes (also called segments) that represent the different attributes or labels of the data flowing through the host. Nodes are shown as labeled columns, for example, the sender application (app), or a host. The thickness of the links between the nodes of the diagram shows the amount of data.
- Hover over a link to show the data throughput of this link between the edges of the diagram.
- Click on a link to show the details of the link: the labels that the link connects, and their data throughput. You can also tap into the log flow.
- Click on a node to drill down into the diagram. (To undo, use the Back button of your browser or the clear-filters icon.)
The following example uses a custom label that identifies the owner of the source host, visualizing which team is sending the most data to the destination.
Sankey diagrams are a great way to:
- Visualize flows: add the flow label to the Filter labels field.
- Find unclassified messages that weren't recognized by the Axoflow database: add the app label to the Filter labels field, and look for the axo_fallback link. You can tap into the log flow to check these messages. Feel free to send us a sample so we can add them to the classification database.
- Visualize custom labels and their relation to data flows.
Tapping into the log flow
1. On the Sankey diagram, click a link to show its details.
2. Tap into the data traffic:
    - Select Tap with this label to tap into the log flow at either end of the link.
    - Select Tap both to tap into the data flowing through the link.
3. Select the host where you want to tap into the logs.
4. Select Start.
5. When the logs you're interested in show up, click Stop Log Tap, then click a log message to see its details.
6. If you don't know what a message means, select AI Analytics to ask our AI to interpret it.
11.2 - Host metrics
The Metrics & health page of a host shows the history of the various metrics Axoflow collects about the host.
Events of the host (for example, configuration reloads or alerts affecting the host) are displayed over the metrics.
Interact with metrics
You can change which metrics are displayed, and for which time period, using the bar above the metrics.
- Metrics categories: Temporarily hide or show the metrics of a category, for example, System. The category of each metric is displayed under the name of the chart. Note that this change is only temporary: to change the layout of the metrics permanently, use the settings icon.
- Calendar icon: Change the time period that's displayed on the charts. You can use absolute (calendar) time or relative time (for example, the last 2 days). To quickly zoom in on a period, click and drag to select the period to display on any of the charts. Every chart is updated for the selected period. (To return to the previous state, click the Back button of your browser.)
- Show/hide events: Hide or show the alerts and other events (like configuration reloads) of the host. These events are overlaid on the charts by default.
- Alert counter: Shows the number of active alerts on the host for that period.
- Settings icon: Change the order of the metrics, or hide metrics. These changes are persistent and stored in your profile.
Interact with a chart
In addition to the options of the top bar, you can interact with the charts in the following ways:
- Hover over the info icon on a colored metric card to display the definition, details, and statistics of the metric.
- Click a colored card to hide the related metric from the chart. For example, you can hide unneeded sources on the Log input charts.
- Click an event (for example, an alert or configuration reload) to show its details.
- To quickly zoom in on a period, click and drag to select the period to display on any of the charts. Every chart is updated for the selected period. (To return to the previous state, click the Back button of your browser.)
Metrics reference
For managed hosts, the following metrics are available:
- Connections: Number of active connections and their configured maximum for each log source.
- CPU: Percentage of time a CPU core spent on average in a non-idle state, averaged within a 5-minute window.
- Disk: Effective storage space used and available on each device (identified devices may overlap).
- Dropped packets (total): This chart shows different metrics for packet loss:
    - Dropped UDP packets: Count of UDP packets dropped by the OS before processing, per second, averaged within a 5-minute window.
    - Dropped log events: Count of events dropped from event queues within a 5-minute window.
- Log input (bytes): Incoming log messages processed by each log source, in bytes per second, averaged within a 5-minute window.
- Log input (events): Count of incoming log messages processed by each log source, per second, averaged within a 5-minute window.
- Log output (bytes): Log messages sent to each log destination, in bytes per second, averaged within a 5-minute window.
- Log output (events): Count of log messages sent to each log destination, per second, averaged within a 5-minute window.
- Log memory queue (bytes): Total bytes of data waiting in each memory queue.
- Log memory queue (events): Count of messages waiting in each memory queue, by destination.
- Log disk queue (bytes): This chart shows the following metrics about disk queue usage:
    - Disk queue bytes: Total bytes of data waiting in each disk queue.
    - Disk queue memory cache bytes: Amount of memory used for caching disk-based queues.
- Log disk queue (events): Count of messages waiting in each disk queue, by destination.
- Memory: Memory usage and capacity reported by the OS, in bytes (including reclaimable caches and buffers).
- Network input (bytes): Incoming network traffic in bytes per second reported by the OS for each network interface, averaged within a 5-minute window.
- Network input (packets): Count of incoming network packets per second reported by the OS for each network interface, averaged within a 5-minute window.
- Network output (bytes): Outgoing network traffic in bytes per second reported by the OS for each network interface, averaged within a 5-minute window.
- Network output (packets): Count of outgoing network packets per second reported by the OS for each network interface, averaged within a 5-minute window.
- Event delay (seconds): Latency of outgoing messages.
12 - Legal notice
Copyright Axoflow Inc. All rights reserved.
The Axoflow, AxoRouter, and AxoSyslog trademarks and logos are trademarks of Axoflow Inc. All other trademarks are property of their respective owners.