Logs have been my passion for over two decades now. As a 3rd-year university student, I started an Open Source project to fix the “syslogd” problem. syslogd was the standard solution at that time to collect, deliver and aggregate system and device logs. The new project was named “syslog, the next generation”, or syslog-ng for short. Well, 25 years on, we are now launching Axoflow, bringing to market a long-overdue capability for the burgeoning Observability space – one that we anticipate will have a similar impact!
But first, some history. Since its creation, syslog-ng has been deployed to millions of servers and other devices, and has been fully developed into a commercial product while staying true to its open-source roots. It is embedded into both enterprise and consumer products, and has been expanded to use cases far beyond its origin in pure system logging. As a result, syslog-ng has emerged as the cornerstone of logging infrastructure in various Fortune 100 companies for over two decades.
During this time, as syslog-ng quickly emerged as a key tool in the enterprise security logging space, we have witnessed three entire hype cycles:
- Open source tools and home-grown scripts to analyze logs (for example, logcheck, swatch, artificial ignorance) (1995-2005)
- The emergence and promise of the enterprise SIEM (Security Information and Event Management), the likes of LogLogic, QRadar, ArcSight, and the most recent leader Splunk (2005-2015)
- The cloud wave with a focus on User Behavior Analytics, the likes of SumoLogic, Exabeam, and Elastic (2015-present)
Logs have always been the primary data source for enterprise security teams using SIEM and other incident detection mechanisms. However, the exclusive role of the SIEM in collecting, extracting, and processing this data has recently come under pressure due to the exponential rise in data volumes and more complex use cases. In addition, non-security use cases utilizing the same data, such as Observability/IT Operations, have challenged the monolithic nature of current SIEM technology and its lock on the log data.
We believe that the landscape is shifting – again
While the SIEM started with the promise of being the “be-all/end-all” for enterprise security teams in one monolithic system, the challenges noted above have forced its breakup into smaller, interoperating components, each having its own specific focus:
- Log management and observability products: collecting and aggregating logs: syslog-ng, rsyslog, logstash, fluentd, beats, Cribl, OTEL, etc. instead of SIEM-specific agents like Splunk UF, ArcSight SmartConnectors or WinCollect
- Security Data Lakes: Snowflake, AWS Security Lake, Elastic, etc. instead of built-in data storage functionality offered by a SIEM
- Incident detection: both rule and analytics based (existing SIEM functionality; point products focused on a specific niche; homegrown applications)
- Response: (existing SIEM functionality; dedicated SOAR products)
- Case Management: (existing SIEM functionality; dedicated ticketing systems, e.g. ServiceNow)
The original “SOC in a box” goal of the SIEM continues to be the holy grail of Security Operations, and SIEM functionality as a whole will continue to be a key driver in detection and response. But instead of coupling collection, storage and analytics into a single monolithic product, future SIEMs could simply request the data needed from a security data lake (perhaps 3rd party) that also serves other security and observability purposes. This decoupling of functionality also makes it easier and quicker to support newer container/ephemeral architectures, hosted either in the Cloud or on-prem. Lastly, in addition to addressing scale concerns, decomposition vastly decreases vendor dependence (“vendor lock-in”), allowing the use (and substitution) of best-of-breed solutions for each focus area.
Industry Need: Management of the Observability Supply Chain
Though monolithic SIEMs present many challenges, current SIEMs do have one theoretical advantage – being integrated, they know how to store, retrieve, and present collected data so that each of the constituent parts function optimally. In a distributed component/vendor architecture, data passed between components must be recognizable to each of the other components, and furthermore be presented in a way that each receiving component can make the best use of the data. The key ingredient in making this “Observability Supply Chain” successful is to keep the APIs open and the data formats standardized. Emerging standards such as OTEL or OCSF are early drivers for this industry interoperability, and will eventually lead to the deprecation of proprietary APIs (e.g. Splunk S2S). However, these emerging standards will take time to become pervasive, and will undoubtedly not comprise a complete solution – highlighting the critical need for an “Observability Supply Chain Management” facility that properly translates and curates data being transported between these components.
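To make the translation step concrete, here is a minimal sketch of the kind of work such a supply chain management layer performs: parsing a raw BSD-syslog (RFC 3164) line and re-emitting it as a normalized, vendor-neutral JSON event that any downstream component can consume. The field names are illustrative assumptions, loosely in the spirit of OCSF-style schemas rather than an actual OCSF mapping.

```python
import json
import re

# Hypothetical normalizer: raw RFC 3164 syslog in, flat schema-friendly JSON out.
SYSLOG_RE = re.compile(
    r"^<(?P<pri>\d{1,3})>"                         # PRI = facility * 8 + severity
    r"(?P<timestamp>\w{3}\s+\d{1,2}\s[\d:]{8})\s"  # e.g. "Oct 11 22:14:15"
    r"(?P<host>\S+)\s"
    r"(?P<tag>[\w\-/.]+)(?:\[(?P<pid>\d+)\])?:\s?"
    r"(?P<message>.*)$"
)

def normalize(line: str) -> dict:
    """Translate one raw syslog line into a normalized event dict."""
    m = SYSLOG_RE.match(line)
    if not m:
        # Unparseable input is preserved rather than dropped, so downstream
        # consumers can still index or alert on it.
        return {"raw": line, "parse_error": True}
    pri = int(m.group("pri"))
    return {
        "severity_id": pri % 8,    # RFC 3164: severity is the low 3 bits of PRI
        "facility_id": pri // 8,
        "time": m.group("timestamp"),
        "hostname": m.group("host"),
        "app_name": m.group("tag"),
        "pid": int(m.group("pid")) if m.group("pid") else None,
        "message": m.group("message"),
        "raw": line,               # original line retained for fidelity/audit
    }

event = normalize("<34>Oct 11 22:14:15 mymachine su[230]: 'su root' failed on /dev/pts/8")
print(json.dumps(event, indent=2))
```

A production pipeline would of course do far more (timestamp canonicalization, enrichment, schema versioning), but the core idea is the same: each hop re-presents the data in the form its consumer can best use.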
Consolidation of Observability and Logging
The Observability space, while originally focused on IT operations/DevOps, is increasingly expanding to include security-oriented use cases: first in the transition of pure DevOps into DevSecOps, and then by addressing the global, security-oriented use cases that log management tools have traditionally tackled. Observability, with a decoupled architecture at its core and a focus on ephemeral workloads, stands to benefit even more from this new revolution in Observability Supply Chain Management: traditionally separate infrastructures dedicated to Observability and Log Management can now be merged into one.
Having created both syslog-ng and the Logging Operator for Kubernetes, we, the founders of Axoflow, have broad experience in both the traditional log management space and the “logs/metrics/traces” world of Observability. Axoflow is leveraging this foundation to create products that address the critical industry need for effective management of the Observability Supply Chain of the future!