# Multi-cloud and multi-datacenter scenario

Organizations running workloads across multiple cloud providers (AWS, Azure, GCP) and private datacenters face a common challenge: each environment produces security and observability data in different formats, at different volumes, and in different locations, making consistent collection, normalization, and routing difficult.

Axoflow is designed for exactly this scenario. You deploy Axoflow agents and Axoflow Cloud Connectors close to your data sources in each cloud and datacenter. The agents forward data to one or more AxoRouter nodes, which you can place per region to minimize egress costs, or centralize for simpler operations.

![Multi-cloud and multi-datacenter deployment](/docs/axoflow/deployment-scenarios/multi-cloud/multi-datacenter-deployment.png)

This approach gives you:

  * Single management plane: You configure and monitor all Axoflow agents and AxoRouter nodes from the AxoConsole, regardless of which cloud or datacenter they run in.
  * Local aggregation: AxoRouters automatically [filter, enrich, and reduce](../../docs/axoflow/concepts/classify-reduce-security-data/index.md) data before forwarding it, so you only pay cross-cloud or cross-datacenter egress for the data that matters.
  * [Reliable transport](../../docs/axoflow/concepts/reliable-transport/index.md): Axoflow uses the OpenTelemetry protocol (OTLP) between components (Axoflow agent to AxoRouter and Axoflow Cloud Connector to AxoRouter), so no messages are lost when crossing network boundaries between environments.
  * [Unified metrics](../../docs/axoflow/metrics/index.md): Every agent and router emits detailed per-component metrics, so you have end-to-end visibility into your data pipeline across all environments.
  * Flexible routing: You can route data to environment-specific destinations (for example, a regional SIEM or local archive) or aggregate it centrally — or both — using [policy-based routing](../../docs/axoflow/concepts/policy-based-routing/index.md).
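To make the routing idea above concrete, here is a minimal, purely illustrative sketch of a policy that sends every event to a regional destination and additionally forwards security-relevant events to a central SIEM. The `Event` type, the destination names, and the `route` function are hypothetical, chosen for the example; they are not Axoflow's actual configuration or API.

```python
# Hypothetical sketch of policy-based routing; Axoflow's real
# configuration language and component names differ.
from dataclasses import dataclass

@dataclass
class Event:
    region: str    # where the event was collected, e.g. "eu-west-1"
    severity: str  # e.g. "info", "warning", "critical"
    message: str

def route(event: Event) -> list[str]:
    """Apply a simple policy: every event goes to its regional
    archive; critical events are also sent to the central SIEM."""
    destinations = [f"archive-{event.region}"]
    if event.severity == "critical":
        destinations.append("central-siem")
    return destinations

print(route(Event("eu-west-1", "critical", "login failure")))
# ['archive-eu-west-1', 'central-siem']
```

In a real deployment the equivalent decision is expressed declaratively in AxoConsole rather than in code, and the router evaluates it per event, so data can fan out to environment-local and central destinations at the same time.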