On-premise

This guide shows you how to install Axoflow Console on a local, on-premises virtual machine using k3s and Helm. It also covers installing supporting components like the NGINX ingress controller and cert-manager. Since this is a single-instance deployment, we don’t recommend using it in production environments. For the additional steps and configuration needed in production environments, contact us.

At a high level, the deployment consists of the following steps:

  1. Preparing a virtual machine.
  2. Installing Kubernetes on the virtual machine.
  3. Configuring authentication. This guide shows you how to configure LDAP authentication. Other supported authentication methods include Google and GitHub.
  4. Preparing the AxoRouter hosts. Before deploying AxoRouter describes the steps you must complete on a host before deploying AxoRouter on it. These steps are specific to on-premises Axoflow Console deployments, and aren’t needed when using the SaaS Axoflow Console.

Prerequisites

To install Axoflow Console, you’ll need the following:

System requirements

Supported operating system: Ubuntu 24.04, Red Hat 9 and compatible (tested with AlmaLinux 9)

The virtual machine (VM) must have at least:

  • 4 vCPUs
  • 4 GB RAM
  • 100 GB disk space

This setup can handle about 100 AxoRouter instances and 1000 data source hosts.

For reference, a real-life scenario that handles 100 AxoRouter instances and 3000 data source hosts with 30-day metric retention would need:

  • 16 vCPU
  • 16 GB RAM
  • 250 GB disk space

For details on sizing, contact us.

You’ll need access to a user with sudo privileges.

Network access

The hosts must be able to access the following domains related to the Axoflow Console:

  • When using Axoflow Console SaaS:

    • <your-tenant-id>.cloud.axoflow.io: HTTPS traffic on TCP port 443, needed to download the binaries for Axoflow software (like Axolet and AxoRouter).
    • kcp.<your-tenant-id>.cloud.axoflow.io: HTTPS (mutual TLS) traffic on TCP port 443 for management traffic.
    • telemetry.<your-tenant-id>.cloud.axoflow.io: HTTPS (mutual TLS) traffic on TCP port 443, where Axolet sends the metrics of the host.
    • us-docker.pkg.dev: HTTPS traffic on TCP port 443, for pulling container images (AxoRouter only).
  • When using an on-premise Axoflow Console:

    • The following domains must point to the IP address of the Axoflow Console so that you can access Axoflow from your desktop and from the AxoRouter hosts:

      • your-host.your-domain: The main domain of your Axoflow Console deployment.
      • authenticate.your-host.your-domain: A subdomain used for authentication.
      • idp.your-host.your-domain: A subdomain for the identity provider.
    • The Axoflow Console host must have the following ports open:

      • Port 80 (HTTP)
      • Port 443 (HTTPS)
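Before continuing, you can verify that the required domains resolve from the relevant hosts. The following sketch uses a hypothetical `check_resolves` helper (not part of Axoflow) built on the standard `getent` tool; replace the example names with your own domains:

```shell
# Hypothetical helper: succeeds if the given name resolves to an IP
# address via the system resolver (DNS or /etc/hosts).
check_resolves() {
  getent hosts "$1" > /dev/null
}

# Example: verify the on-premise Console domains (replace with your own).
for name in your-host.your-domain \
            authenticate.your-host.your-domain \
            idp.your-host.your-domain; do
  if check_resolves "$name"; then
    echo "OK: $name resolves"
  else
    echo "MISSING: $name does not resolve" >&2
  fi
done
```

Run the same check on the AxoRouter hosts as well, since they must also reach these domains.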

Set up Kubernetes

Complete the following steps to install k3s and some dependencies on the virtual machine.

  1. Run getenforce to check the status of SELinux.

  2. Check the /etc/selinux/config file and make sure that its settings match the getenforce output (for example, SELINUX=disabled). k3s supports SELinux, but the installation can fail if the configured and actual settings don’t match.

  3. Install k3s.

    1. Run the following command:

      curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik --disable metrics-server" K3S_KUBECONFIG_MODE="644" sh -s -
      
    2. Check that the k3s service is running:

      systemctl status k3s
      
  4. Install the NGINX ingress controller.

    VERSION="v4.11.3"
    sudo tee /var/lib/rancher/k3s/server/manifests/ingress-nginx.yaml > /dev/null <<EOF
    ---
    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      name: ingress-nginx
      namespace: kube-system
    spec:
      createNamespace: true
      chart: ingress-nginx
      repo: https://kubernetes.github.io/ingress-nginx
      version: $VERSION
      targetNamespace: ingress-nginx
      valuesContent: |-
        controller:
          extraArgs:
            enable-ssl-passthrough: true
          hostNetwork: true
          hostPort:
            enabled: true
          service:
            enabled: false
          admissionWebhooks:
            enabled: false
          updateStrategy:
            strategy:
              type: Recreate
    EOF
    
  5. Install the cert-manager controller.

    VERSION="1.16.2"
    sudo tee /var/lib/rancher/k3s/server/manifests/cert-manager.yaml > /dev/null <<EOF
    ---
    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      name: cert-manager
      namespace: kube-system
    spec:
      createNamespace: true
      chart: cert-manager
      version: $VERSION
      repo: https://charts.jetstack.io
      targetNamespace: cert-manager 
      valuesContent: |-
        crds:
          enabled: true
        enableCertificateOwnerRef: true
    EOF
    
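The SELinux consistency check from the first two steps can also be scripted. The `selinux_match` helper below is only an illustration (not an Axoflow tool); it compares the runtime mode reported by getenforce with the SELINUX= value from /etc/selinux/config:

```shell
# Hypothetical helper: succeeds when the runtime SELinux mode matches
# the configured one. The comparison is case-insensitive, because
# getenforce prints "Enforcing" while the config file uses "enforcing".
selinux_match() {
  runtime=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  configured=$(printf '%s' "$2" | tr '[:upper:]' '[:lower:]')
  [ -n "$runtime" ] && [ "$runtime" = "$configured" ]
}

# Example usage on a host with SELinux tooling installed:
# selinux_match "$(getenforce)" \
#   "$(awk -F= '/^SELINUX=/{print $2}' /etc/selinux/config)" \
#   && echo "SELinux settings are consistent" \
#   || echo "Mismatch: fix /etc/selinux/config before installing k3s"
```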

Install the Axoflow Console

Complete the following steps to deploy the Axoflow Console Helm chart.

  1. Set the following environment variables as needed for your deployment.

    • The domain name for your Axoflow Console deployment. This must point to the IP address of the VM you’re installing Axoflow Console on.

      BASE_HOSTNAME=your-host.your-domain
      
    • A default email address for the Axoflow Console administrator.

      ADMIN_EMAIL="your-email@your-domain"
      
    • The username of the Axoflow Console administrator.

      ADMIN_USERNAME="admin"
      
    • The password of the Axoflow Console administrator.

      ADMIN_PASSWORD="your-admin-password"
      
    • A unique identifier for the administrator user. For example, you can generate one using the uuidgen command. (Note that on Ubuntu, you must first install the uuid-runtime package by running sudo apt install uuid-runtime.)

      ADMIN_UUID=$(uuidgen)
      
    • The bcrypt hash of the administrator password. For example, you can generate it with the htpasswd tool, which is part of the httpd-tools package on Red Hat (install it using sudo dnf install httpd-tools), and the apache2-utils package on Ubuntu (install it using sudo apt install apache2-utils).

      ADMIN_PASSWORD_HASH=$(echo $ADMIN_PASSWORD | htpasswd -BinC 10 $ADMIN_USERNAME | cut -d: -f2)
      
    • The version number of Axoflow Console you want to deploy.

      VERSION="0.50.1"
      
    • The internal IP address of the virtual machine. This is needed so the authentication service can communicate with the identity provider without relying on DNS.

      VM_IP_ADDRESS=$(hostname -I | cut -d' ' -f1)
      

      Check that the above command returned the correct IP address; it might not be accurate if the host has multiple IP addresses.

    • The external IP address of the virtual machine. This is needed so that external components (like AxoRouter and the Axolet instances running on other hosts) can access the Axoflow Console.

      EXTERNAL_IP="<external.ip.of.vm>"
      
    • The URL of the Axoflow Helm chart. You’ll receive this URL from our team. You can request it using the contact form.

      AXOFLOW_HELM_CHART_URL="<the-chart-URL-received-from-Axoflow>"
      
  2. Install Axoflow Console. The following configuration file enables the user set in ADMIN_USERNAME to log in with the static password set in ADMIN_PASSWORD. Once this basic setup is working, you can configure another authentication provider.

    (For details on what the different services enabled in the YAML file do, see the service reference.)

    sudo tee /var/lib/rancher/k3s/server/manifests/axoflow.yaml > /dev/null <<EOF
    ---
    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      name: axoflow
      namespace: kube-system
    spec:
      createNamespace: true
      targetNamespace: axoflow
      version: $VERSION
      chart: $AXOFLOW_HELM_CHART_URL
      failurePolicy: abort
      valuesContent: |-
        baseHostName: $BASE_HOSTNAME
        issuer: self-sign-issuer
        issuerKind: Issuer
        imageTagDefault: $VERSION
        selfSignedRoot:
          create: true
        chalco:
          enabled: true
        axoletDist:
          enabled: true
        controllerManager:
          enabled: true
        kcp:
          enabled: true
        kcpCertManager:
          enabled: true
        telemetryproxy:
          enabled: true
        prometheus:
          enabled: true
        pomerium:
          enabled: true
          authenticateServiceUrl: https://authenticate.$BASE_HOSTNAME
          authenticateServiceIngress: authenticate
          certificates:
            - authenticate
          certificateAuthority:
            secretName: dex-certificates
          policy:
            emails:
              - $ADMIN_EMAIL
            domains: []
            groups: []
            claim/groups: []
            readOnly:
              emails: []
              domains: []
              groups: []
              claim/groups: []
        ui:
          enabled: true
        dex:
          enabled: true
          localIP: $VM_IP_ADDRESS
          config:
            create: true
            # connectors:
            #   ...
            staticPasswords:
              - email: $ADMIN_EMAIL
                hash: $ADMIN_PASSWORD_HASH
                username: $ADMIN_USERNAME
                userID: $ADMIN_UUID    
    EOF
    
  3. Wait a few minutes until everything is installed. Check that every component has been installed and is running:

    sudo k3s kubectl get pods -A
    

    The output should be similar to:

    NAMESPACE       NAME                                       READY   STATUS      RESTARTS       AGE
    axoflow         axolet-dist-55ccbc4759-q7l7n               1/1     Running     0              2d3h
    axoflow         chalco-6cfc74b4fb-mwbbz                    1/1     Running     0              2d3h
    axoflow         configure-kcp-1-x6js6                      0/1     Completed   2              2d3h
    axoflow         configure-kcp-cert-manager-1-wvhhp         0/1     Completed   1              2d3h
    axoflow         configure-kcp-token-1-xggg7                0/1     Completed   1              2d3h
    axoflow         controller-manager-5ff8d5ffd-f6xtq         1/1     Running     0              2d3h
    axoflow         dex-5b9d7b88b-bv82d                        1/1     Running     0              2d3h
    axoflow         kcp-0                                      1/1     Running     1 (2d3h ago)   2d3h
    axoflow         kcp-cert-manager-0                         1/1     Running     0              2d3h
    axoflow         pomerium-56b8b9b6b9-vt4t8                  1/1     Running     0              2d3h
    axoflow         prometheus-server-0                        2/2     Running     0              2d3h
    axoflow         telemetryproxy-65f77dcf66-4xr6b            1/1     Running     0              2d3h
    axoflow         ui-6dc7bdb4c5-frn2p                        2/2     Running     0              2d3h
    cert-manager    cert-manager-74c755695c-5fs6v              1/1     Running     5 (144m ago)   2d3h
    cert-manager    cert-manager-cainjector-dcc5966bc-9kn4s    1/1     Running     0              2d3h
    cert-manager    cert-manager-webhook-dfb76c7bd-z9kqr       1/1     Running     0              2d3h
    ingress-nginx   ingress-nginx-controller-bcc7fb5bd-n5htp   1/1     Running     0              2d3h
    kube-system     coredns-ccb96694c-fhk8f                    1/1     Running     0              2d3h
    kube-system     helm-install-axoflow-b9crk                 0/1     Completed   0              2d3h
    kube-system     helm-install-cert-manager-88lt7            0/1     Completed   0              2d3h
    kube-system     helm-install-ingress-nginx-qvkhf           0/1     Completed   0              2d3h
    kube-system     local-path-provisioner-5cf85fd84d-xdlnr    1/1     Running     0              2d3h
    

    (For details on what the different services do, see the service reference.)
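If you prefer a scripted check over reading the pod list by eye, a small filter like the following sketch can count the pods that aren’t healthy yet. The `count_not_ready` helper is illustrative only, not an Axoflow tool:

```shell
# Hypothetical helper: reads "kubectl get pods -A"-style output on
# stdin and prints the number of pods whose STATUS (field 4 in the -A
# output) is neither Running nor Completed.
count_not_ready() {
  awk 'NR > 1 && $4 != "Running" && $4 != "Completed" { n++ } END { print n + 0 }'
}

# Example usage against the live cluster:
# sudo k3s kubectl get pods -A | count_not_ready
# A result of 0 means every component is Running or Completed.
```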

Login to Axoflow Console

  1. If the domain name of Axoflow Console cannot be resolved from your desktop, add it to the /etc/hosts file in the following format. Use an IP address of the Axoflow Console that can be reached from your desktop.

    <AXOFLOW-CONSOLE-IP-ADDRESS> <your-host.your-domain> idp.<your-host.your-domain> authenticate.<your-host.your-domain>
    
  2. Open the https://<your-host.your-domain> URL in your browser.

  3. The on-premise deployment of Axoflow Console uses a self-signed certificate. If your browser warns you about the related risks, accept them.

  4. Use the email address and password you’ve configured to log in to Axoflow Console.
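To avoid typos in the /etc/hosts entry from step 1, you can generate it from the IP address and base domain. The `hosts_entry` helper below is a hypothetical convenience, and 192.0.2.10 is a placeholder documentation address:

```shell
# Hypothetical helper: prints an /etc/hosts line covering the Console
# domain plus the idp. and authenticate. subdomains it requires.
hosts_entry() {
  ip="$1"
  host="$2"
  printf '%s %s idp.%s authenticate.%s\n' "$ip" "$host" "$host" "$host"
}

# Example (replace the placeholders with your own values):
hosts_entry 192.0.2.10 your-host.your-domain
# prints: 192.0.2.10 your-host.your-domain idp.your-host.your-domain authenticate.your-host.your-domain
```

Append the resulting line to /etc/hosts on your desktop, for example with `hosts_entry <ip> <domain> | sudo tee -a /etc/hosts`.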

Axoflow Console service reference

| Service Name | Namespace | Purpose | Function |
| --- | --- | --- | --- |
| KCP | axoflow | Backend API | Kubernetes-like service with a built-in database; manages all the settings handled by the system |
| Chalco | axoflow | Frontend API | Serves the UI API calls and implements the business logic for the UI |
| Controller-Manager | axoflow | Backend Service | Reacts to state changes in the business entities and manages the business logic for the backend |
| Telemetry Proxy | axoflow | Backend API | Receives the telemetry sent by the agents |
| UI | axoflow | Dashboard | The frontend for Axoflow Console |
| Prometheus | axoflow | Backend API/Service | Monitoring component that stores time-series data, provides a query API, and manages alerting rules |
| Alertmanager | axoflow | Backend API/Service | Monitoring component that sends alerts based on the alerting rules |
| Dex | axoflow | Identity Connector/Proxy | Lets you use your own identity provider (Google, LDAP, and so on) |
| Pomerium | axoflow | HTTP Proxy | Allows policy-based access to the Frontend API (Chalco) |
| NGINX Ingress Controller | ingress-nginx | HTTP Proxy | Routes the HTTP traffic between the Frontend and Backend APIs |
| Axolet Dist | axoflow | Backend API | Static artifact store that contains the agent binaries |
| Cert Manager | cert-manager | Certificate management | Manages certificates for the Axoflow components (Backend API, HTTP Proxy) |
| Cert Manager (kcp) | axoflow | Certificate management | Manages certificates for the agents |
| External DNS | kube-system | DNS management | Manages DNS records for the Axoflow components |