
One of the recent trends I’ve been watching is the convergence of GRC (Governance, Risk, and Compliance) initiatives with cybersecurity. Several articles have cited the importance of this convergence, and on the surface it seems obvious. But I wondered what is really driving it and how it impacts enterprises. I reached out to Alan Demers, who has spent his career working on data quality and GRC at some of the largest corporations in the world, for his thoughts on what is going on:

The problem is complex, but basically the data is being created too far away from where it’s processed. If you think about blockchain, for example, every change is imprinted and auditable; the concept applies here as well. We need a fundamental understanding of what happens at each step in the data flow, from creation to consumption, all the way through to products and services. There are too many gaps in our current processes.

Alan Demers

Senior Advisor, McKinsey & Company

Compliance Should Happen as Close to the Source as Possible 

The gaps that Alan mentioned are usually addressed with a lot of manual labor, with inadequate systems, or not at all. Some examples:

Data obfuscation for HIPAA and PII data is usually done at the destination system (in the database) by a team that cleans up this information on arrival. Each destination typically requires its own team working on data quality and GRC issues.

This data is also still vulnerable on the wire: encryption and vigilance over the transport layer are critical but often ignored. A sketch of how masking can move closer to the source follows below.
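To make the first example concrete, here is a minimal sketch of masking PII in the pipeline, before the data ever reaches a destination database. It is illustrative only: the regex patterns and field names are assumptions for this example, not a prescription for any particular product.

```python
import re

# Simple illustrative patterns; real deployments need far more robust detection.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def mask_pii(record: dict) -> dict:
    """Redact PII in the message field before the record is forwarded anywhere."""
    msg = record.get("message", "")
    msg = SSN_RE.sub("***-**-****", msg)
    msg = EMAIL_RE.sub("<redacted-email>", msg)
    return {**record, "message": msg}

# The masked record is what travels over the wire and lands in every destination,
# so no per-destination cleanup team is needed.
print(mask_pii({"host": "app-01", "message": "patient jane@example.com SSN 123-45-6789"}))
```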

Bridging the Gap Between Monitoring and Security

For example, SOC 2 requires compliant organizations to monitor their processes and report any downtime. The OMB memorandum M-21-31 specifically addresses the log management practices agencies should adopt to better secure their log data (the data feeding the SIEM). This area of concern is central to Axoflow’s expertise and product lines, and the memorandum points to specific gaps in IT environments where adequate monitoring must be deployed to know that you have captured everything you need. It’s one thing to know your SIEM was 100% operational, but do you know, and can you confirm under audit, that all of the relevant data bound for the SIEM was actually sent and received? How many stories do we hear after the fact about servers or firewalls that had not been sending data for months before anyone noticed the data was missing from the SIEM?
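That last question can be answered continuously if the pipeline tracks activity per source. Below is a minimal sketch, assuming you maintain a last-seen timestamp for every source that is expected to feed the SIEM; the host names and threshold are hypothetical, not an Axoflow API.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical inventory: every source that *should* be sending to the SIEM,
# mapped to the timestamp of the last event received from it.
last_seen = {
    "fw-edge-01": datetime(2024, 6, 1, 12, 0, tzinfo=timezone.utc),
    "web-prod-07": datetime(2024, 3, 2, 8, 30, tzinfo=timezone.utc),
}

def silent_sources(last_seen: dict, max_silence: timedelta = timedelta(hours=24)) -> list:
    """Return sources that have not sent any data within the allowed window."""
    now = datetime.now(timezone.utc)
    return [host for host, ts in last_seen.items() if now - ts > max_silence]

# Alert (or fail an audit check) on anything that has gone quiet.
for host in silent_sources(last_seen):
    print(f"ALERT: no data received from {host} within the allowed window")
```

In practice, building that inventory of what should be reporting is the hard part, and it is exactly the kind of gap the memorandum highlights.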

These real-world situations are forcing the convergence of GRC and cybersecurity. Firms that bring these silos together are in fact safer, according to Hyperproof’s 2024 IT Risk and Compliance Benchmark report. Centralizing the management of GRC and security means integrating risk and compliance systems with security systems, and those teams will pressure their vendors to provide this type of integration out of the box.

The Problem with “Send Me Everything”

For the past two decades, SIEM and observability vendors have all been saying, “send me everything, formatting and classification problems included; we will do that work at the destination.” But by the time the data arrives, you have already paid for it, whether you needed it or not. This is a major point of contention: the question of why a given piece of data should go to the security SIEM at all seems to have been lost. It’s much easier to send everything there and never worry about it again, but data volumes have made that model untenable. The pocketbooks for security are deep, but they are not infinite, and at data growth rates of 25% per year, you are roughly doubling your SIEM bill every three years.
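The doubling claim is simple compound growth; a quick back-of-the-envelope check (plain Python, no product specifics):

```python
import math

annual_growth = 0.25  # 25% data volume growth per year

# Years for volume (and a volume-priced SIEM bill) to double:
doubling_time = math.log(2) / math.log(1 + annual_growth)
print(f"Doubling time: {doubling_time:.1f} years")               # ~3.1 years

# Equivalent check: total growth after three years
print(f"Growth after 3 years: {(1 + annual_growth) ** 3:.2f}x")  # ~1.95x
```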

The Role of the Observability Pipeline

We believe the observability (or data) pipeline is the ideal place to ensure that security monitoring meets GRC requirements while performing its main functions of data management and cost control. The pipeline is also the best place to let customers decide which data to send, answering both where and why, before they pay for it. The disruption here is that after 25 years of fire and forget, we can now apply modern programming technologies like AI and declarative programming to automate these normally difficult tasks. Enterprises will no longer need to create and maintain parsing and normalization rules for off-the-shelf products as part of their security monitoring regime: automatic message classification and curation, long the ideal, are now within reach, and Axoflow is leading the charge. Moreover, when the data is curated in the pipeline, data obfuscation for credit card, HIPAA, or PII compliance can be applied as close to the source as possible. Policy-based routing can enforce GDPR requirements, and the entire observability pipeline layer can be monitored, audited, and verified against compliance requirements.
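As a rough illustration of what policy-based routing at the pipeline layer could look like, here is a hypothetical sketch; the field names, regions, and destination labels are assumptions for this example, not Axoflow configuration.

```python
# Hypothetical destination labels; in practice these map to SIEM or archive endpoints.
EU_SIEM, US_SIEM, LOW_COST_ARCHIVE = "eu-siem", "us-siem", "archive"

def route(event: dict) -> str:
    """Choose a destination from event metadata instead of sending everything to the SIEM."""
    if event.get("contains_personal_data") and event.get("region") == "eu":
        return EU_SIEM            # keep EU personal data in-region (GDPR)
    if event.get("security_relevant"):
        return US_SIEM            # only security-relevant data incurs SIEM cost
    return LOW_COST_ARCHIVE       # everything else goes to cheaper storage

print(route({"region": "eu", "contains_personal_data": True, "security_relevant": True}))
```

The point is that the decision about where data goes, and whether it incurs SIEM cost at all, is made and auditable before the data leaves the pipeline.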

The Future: Automated and User-Friendly Pipeline Management

By adding metrics and automation, pipeline management becomes user-friendly and more accessible. Firms can concentrate their skilled workers on business processes and spend less on the highly technical work of loading and managing massive amounts of machine data and determining what that data is and whether it’s valuable. For GRC teams, the real benefit is that they not only get a highly performant compliance tool to protect their data, but one that is also ready to monitor and report on their compliance requirements.

As Alan notes:

Controlling the data as close as possible to the source that created it, in real time, is paramount not only to compliance with data security LRRs (Laws, Rules, and Regulations) but also to understanding the effectiveness and efficiency of the business processes providing the products and services for our customers.

Alan Demers

Senior Advisor, McKinsey & Company

If you want to see Axoflow’s telemetry pipeline management in action, request a free demo here.

Please visit us at Splunk .conf24 in Las Vegas at Booth 209, June 11–14.

Authors

Alan Demers, Senior Advisor at McKinsey & Company 

Alan is a C-level executive leader with extensive experience in the financial services, retail and commercial banking, and capital markets industries. He has expertise in operational and compliance risk, regulatory relations and interaction management, financial crimes, data quality management, operations management, capital markets, securitizations, and employee development.

Neil Boyd, VP of Sales at Axoflow

Neil’s sales experience spans decades and has focused on delivering large-scale data and network projects. He has extensive knowledge of enterprise logging and telemetry projects and is dedicated to solving machine data issues at the world’s largest enterprises.
