Let Go of the Loop: Why Real Telemetry Automation Leaves Manual Oversight Behind

"With [AI], humans stay in the driver's seat... our human-in-the-loop design gives operators full visibility and control over their data."

We’ve heard this pitch before. On paper, it sounds responsible. After all, security teams need to supervise things. Infrastructure teams need control. No one wants a black box to process their critical telemetry. But let’s not confuse supervision with ownership. If teams still need to review, validate, and maintain transformation logic for known log types, they aren’t automating—they’re just shifting who does the manual work. Whether the logic comes from a senior engineer or a polite AI assistant, it’s still bespoke. It still has to be versioned. It still breaks.

That’s not leverage. That’s treadmill automation. And when it comes to transforming, processing, and reducing security data, reviewing AI-suggested changes requires specialized knowledge and expertise, something many organizations lack in the first place.

Every Pipeline Becomes Technical Debt

With most telemetry pipelines, you spend much of your time managing the pipeline instead of the data: you have to designate ports for specific devices, attach parsers, manually configure endpoints, and so on. And the pipelines themselves—whether declaratively built, AI-suggested, or hand-coded—age like milk.

  • A new version of FortiGate introduces a subtle timestamp change? Update the parser.
  • A firewall logs unexpected fields during a failover? Another exception.
  • The security team adopts ECS but the data lake expects OCSF? Patch it, test it.
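To see how little it takes to break a pipeline like this, here is a minimal Python sketch. The function name and timestamp format are made up for illustration, not taken from any vendor:

```python
from datetime import datetime

# Hypothetical example: a hand-rolled parser pinned to one exact
# timestamp layout. The format string below is illustrative only.
def parse_event_time(raw: str) -> datetime:
    # Only accepts "2024-05-01 12:30:45"-style timestamps.
    return datetime.strptime(raw, "%Y-%m-%d %H:%M:%S")

parse_event_time("2024-05-01 12:30:45")  # parses fine today

# If an upgrade starts emitting ISO 8601 ("2024-05-01T12:30:45Z"),
# the same call raises ValueError, and someone has to update the parser.
```

One subtle format change upstream, and this code stops working until a human (or a human reviewing an AI) patches it.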

This kind of configuration sprawl turns pipelines into a secondary product:

  • It has its own lifecycle.
  • It needs QA, regression testing, rollback plans.
  • The AI may help draft the config, but you still own the mess.

And the more AI-suggested transformations you deploy, the more brittle your environment becomes. You're not simplifying. You're just speedrunning a trainwreck.

Why Are We Still Rebuilding Parsers for Well-Known Logs?

Palo Alto logs are not new. Neither are Windows Event logs, M365 audit records, or Okta system logs. They’ve been around for years. Their structure is documented, their values are clear. So why are vendors still asking customers to rebuild the parsing logic from scratch every time?

Some try to make this less painful by handing you a chatbot. Now instead of writing regex by hand, you describe your intent and the AI suggests a pipeline in yet another configuration language. You've upgraded from manual plumbing to reviewing machine-generated plumbing. But at the end of the day, your team still owns the pipeline. They still babysit it. And when something changes upstream, they’re still the ones triaging the impact.
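As a hedged illustration of how low-value this work is, here is what re-deriving a positional field mapping looks like in Python. The field list and CSV layout below are simplified stand-ins, not any vendor’s documented format:

```python
import csv
import io

# Illustrative only: a made-up positional CSV layout standing in for a
# well-known vendor format. Real firewall logs have dozens of documented
# positional fields, which is exactly why every customer re-deriving
# this mapping by hand (or via chatbot) is duplicated effort.
FIELDS = ["receive_time", "serial", "type", "src_ip", "dst_ip", "action"]

def parse_line(line: str) -> dict:
    row = next(csv.reader(io.StringIO(line)))
    return dict(zip(FIELDS, row))

event = parse_line("2024/05/01 12:30:45,0001,TRAFFIC,10.0.0.5,8.8.8.8,allow")
```

Every team that writes this themselves is reimplementing a mapping the vendor already published.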

The Axoflow Model: Curated Intelligence, Not Pipeline Prompts

At Axoflow, we didn’t set out to make pipelines more tolerable. We set out to make them automatic, so they just work.

We maintain a curated catalog of data source integrations—rich with parser logic, schema mappings, enrichment steps, and destination-aware transformations—so you don’t have to.

When security data hits our platform, it’s automatically:

  • Identified and classified by source and type
  • Parsed using vetted, source-specific logic
  • Normalized into your chosen schema (OCSF, ECS, etc.)
  • Enriched with context (e.g., GeoIP, asset info)
  • Routed according to your policies
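The flow above can be sketched roughly like this in Python. Everything here (function names, the schema nesting, the lookup table) is a hypothetical stand-in, not our actual implementation:

```python
# A hedged sketch of the stages above, using toy logic throughout.

def classify(raw: str) -> str:
    # Identify source and type from the message shape (toy heuristic).
    return "firewall" if "TRAFFIC" in raw else "unknown"

def parse(raw: str) -> dict:
    # Vetted, source-specific parsing (here: a fixed CSV layout).
    ts, _, src_ip, action = raw.split(",")
    return {"time": ts, "src_ip": src_ip, "action": action}

def normalize(event: dict) -> dict:
    # Map vendor fields onto the chosen schema (OCSF-style nesting).
    return {"time": event["time"],
            "src_endpoint": {"ip": event["src_ip"]},
            "action": event["action"]}

def enrich(event: dict) -> dict:
    # Add context, e.g. asset info; a static table stands in for a lookup.
    assets = {"10.0.0.5": "web-01"}
    event["src_endpoint"]["hostname"] = assets.get(event["src_endpoint"]["ip"])
    return event

def route(event: dict) -> str:
    # Pick a destination according to policy.
    return "splunk" if event["action"] == "deny" else "data-lake"

evt = enrich(normalize(parse("2024-05-01T12:30:45Z,TRAFFIC,10.0.0.5,allow")))
```

The point is who owns each of those functions: in our model, the catalog does, not your team.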

No loop. No approval process. No tweaking an AI-generated RegEx at 2 AM. You get the outcome, not the scaffolding.

Transparency Without Responsibility

Let’s make one thing clear: we are not hiding the process. You can configure high-level, intent-based policies that stay stable over time, like “collect all my firewall logs into Splunk.”
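A policy at that level of intent might look something like this (a hypothetical shape, not our real configuration syntax):

```python
# Hypothetical intent-level policy, expressed as plain data.
policy = {
    "select": {"source_type": "firewall"},  # what to collect
    "destination": "splunk",                # where it should land
    # No ports, parsers, or field mappings: the "how" is the
    # platform's responsibility, so this intent stays stable over time.
}
```

Because nothing in it names a parser, a port, or a field, an upstream format change never invalidates it.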

Axoflow provides detailed observability into every stage of the data flow:

  • what was parsed,
  • what was dropped,
  • what schema was applied, and
  • how it evolved.

You see it all. But seeing isn’t the same as owning.

With traditional tools, “visibility” means “now you can figure out why the pipeline broke.” With Axoflow, visibility means you’re aware, but you’re not a bottleneck. We own the underlying mechanics:

  • If a field changes in Palo Alto logs, we fix the parser.
  • If Sentinel updates its schema, we realign the output.

That’s how your pipeline should be: truly automatic.

Declarative ≠ Automatic

A final note on Cribl’s “declarative pipeline” pitch: declaring that a thing should happen is only helpful if the system knows how to make it happen. Declarative syntax without curated logic is just wishful configuration. Saying “normalize these logs to OCSF” doesn’t help if the tool doesn’t actually know how OCSF expects those logs to be structured. Without expert-maintained logic, the AI will guess, or worse, hallucinate.

And you’ll still need to check its work. Declarative is supposed to remove responsibility—not rename it.
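To make the point concrete, here is a hedged sketch of that gap. The nested `src_endpoint`/`dst_endpoint` shape reflects our understanding of OCSF’s Network Activity class; treat the exact names as illustrative:

```python
# Hedged illustration: mapping a firewall event into an OCSF-style
# structure requires knowing the target class's actual field names.
raw = {"srcip": "10.0.0.5", "dstip": "8.8.8.8", "act": "allow"}

# A plausible-looking guess an LLM might emit, with flat, wrong names:
guessed = {"source_ip": raw["srcip"], "dest_ip": raw["dstip"]}

# What the target schema expects: nested endpoint objects.
mapped = {
    "src_endpoint": {"ip": raw["srcip"]},
    "dst_endpoint": {"ip": raw["dstip"]},
    "action": raw["act"],
}
# Both are valid JSON; only one matches the schema. Without
# expert-maintained mappings, "declare OCSF" just relocates the guesswork.
```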

Closing Thoughts

Other telemetry pipelines want you to move tasks currently done in the SIEM, like parsing and classification, into the pipeline. That’s a good idea, but they only provide the scaffolding and still require you to redo these tasks manually, which defeats the purpose. Axoflow automates these tasks, so they just work.

Maintaining and optimizing parsers, classification logic, and field mappings for your SIEM shouldn’t be your responsibility. That’s what we built Axoflow to do: to take responsibility and ownership of the pipeline instead of your teams. No Copilot approvals. We are the humans in the loop. Just clean, complete, detection-ready data—automated end-to-end. Let go of the loop.

Follow Our Progress!

We are excited to be realizing our vision above with a full Axoflow product suite.


