On-premise

This guide shows you how to install Axoflow Console on a local, on-premises virtual machine using k3s and Helm. Installing other components, like the NGINX ingress controller and cert-manager, is also covered. Since this is a single-instance deployment, we don’t recommend using it in production environments. For additional steps and configurations needed in production environments, contact us.

At a high level, the deployment consists of the following steps:

  1. Preparing a virtual machine.
  2. Installing Kubernetes on the virtual machine.
  3. Configuring authentication. This guide shows you how to configure LDAP authentication. Other supported authentication methods include Google and GitHub.
  4. Preparing the AxoRouter hosts. The Prepare AxoRouter hosts section describes the steps you have to complete on a host before deploying AxoRouter on it. These steps are specific to on-premises Axoflow Console deployments, and are not needed when using the SaaS Axoflow Console.

Prerequisites

To install Axoflow Console, you’ll need the following:

System requirements

Supported operating systems: Ubuntu 24.04, Red Hat Enterprise Linux 9 and compatible (tested with AlmaLinux 9)

The virtual machine (VM) must have at least:

  • 4 vCPUs
  • 4 GB RAM
  • 100 GB disk space

This setup can handle about 100 AxoRouter instances and 1000 data source hosts.

For reference, a real-life scenario that handles 100 AxoRouter instances and 3000 data source hosts with 30-day metric retention would need:

  • 16 vCPU
  • 16 GB RAM
  • 250 GB disk space

For details on sizing, contact us.

You’ll need access to a user with sudo privileges.

Network access

The hosts must be able to access the following domains related to the Axoflow Console:

  • When using Axoflow Console SaaS:

    • <your-tenant-id>.cloud.axoflow.io: HTTPS traffic on TCP port 443, needed to download the binaries for Axoflow software (like Axolet and AxoRouter).
    • kcp.<your-tenant-id>.cloud.axoflow.io: HTTPS (mutual TLS) traffic on TCP port 443 for management traffic.
    • telemetry.<your-tenant-id>.cloud.axoflow.io: HTTPS (mutual TLS) traffic on TCP port 443, where Axolet sends the metrics of the host.
    • us-docker.pkg.dev: HTTPS traffic on TCP port 443, for pulling container images (AxoRouter only).
  • When using an on-premise Axoflow Console:

    • The following domains must point to the Axoflow Console IP address so you can access Axoflow Console from your desktop and from the AxoRouter hosts:

      • your-host.your-domain: The main domain of your Axoflow Console deployment.
      • authenticate.your-host.your-domain: A subdomain used for authentication.
      • idp.your-host.your-domain: A subdomain for the identity provider.
    • The Axoflow Console host must have the following ports open (see the example firewall commands after this list):

      • Port 80 (HTTP)
      • Port 443 (HTTPS)
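
      For example, on a Red Hat-compatible host that uses firewalld, you can open these ports with the following commands. This is only a sketch: adjust it if your host runs a different firewall, such as ufw on Ubuntu.

        sudo firewall-cmd --permanent --add-service=http
        sudo firewall-cmd --permanent --add-service=https
        sudo firewall-cmd --reload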

Set up Kubernetes

Complete the following steps to install k3s and some dependencies on the virtual machine.

  1. Run getenforce to check the status of SELinux.

  2. Check the /etc/selinux/config file and make sure that the settings match the getenforce output (for example, SELINUX=disabled). k3s supports SELinux, but the installation can fail if there is a mismatch between the configured and actual settings.
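
    For example, a quick way to compare the runtime mode and the configured mode:

      getenforce
      sudo grep '^SELINUX=' /etc/selinux/config

    The getenforce output (for example, Disabled) must match the SELINUX= value in the file.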

  3. Install k3s.

    1. Run the following command:

      curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik --disable metrics-server" K3S_KUBECONFIG_MODE="644" sh -s -
      
    2. Check that the k3s service is running:

      systemctl status k3s
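
      You can also check that the node has registered and is Ready:

        sudo k3s kubectl get nodes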
      
  4. Install the NGINX ingress controller.

    VERSION="v4.11.3"
    sudo tee /var/lib/rancher/k3s/server/manifests/ingress-nginx.yaml > /dev/null <<EOF
    ---
    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      name: ingress-nginx
      namespace: kube-system
    spec:
      createNamespace: true
      chart: ingress-nginx
      repo: https://kubernetes.github.io/ingress-nginx
      version: $VERSION
      targetNamespace: ingress-nginx
      valuesContent: |-
        controller:
          extraArgs:
            enable-ssl-passthrough: true
          hostNetwork: true
          hostPort:
            enabled: true
          service:
            enabled: false
          admissionWebhooks:
            enabled: false
          updateStrategy:
            strategy:
              type: Recreate
    EOF
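
    k3s automatically applies manifests placed in the /var/lib/rancher/k3s/server/manifests folder. After a minute or so, you can check that the ingress controller is running (the pod name will differ):

      sudo k3s kubectl get pods -n ingress-nginx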
    
  5. Install the cert-manager controller.

    VERSION="1.16.2"
    sudo tee /var/lib/rancher/k3s/server/manifests/cert-manager.yaml > /dev/null <<EOF
    ---
    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      name: cert-manager
      namespace: kube-system
    spec:
      createNamespace: true
      chart: cert-manager
      version: $VERSION
      repo: https://charts.jetstack.io
      targetNamespace: cert-manager 
      valuesContent: |-
        crds:
          enabled: true
        enableCertificateOwnerRef: true
    EOF
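
    Similarly, you can check that the cert-manager pods are running:

      sudo k3s kubectl get pods -n cert-manager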
    

Install the Axoflow Console

Complete the following steps to deploy the Axoflow Console Helm chart.

  1. Set the following environment variables as needed for your deployment.

    • The domain name for your Axoflow Console deployment. This must point to the IP address of the VM you’re installing Axoflow Console on.

      BASE_HOSTNAME=your-host.your-domain
      
    • A default email address for the Axoflow Console administrator.

      ADMIN_EMAIL="your-email@your-domain"
      
    • The username of the Axoflow Console administrator.

      ADMIN_USERNAME="admin"
      
    • The password of the Axoflow Console administrator.

      ADMIN_PASSWORD="your-admin-password"
      
    • A unique identifier of the administrator user. For example, you can generate one using the uuidgen command. (Note that on Ubuntu, you must first install the uuid-runtime package by running sudo apt install uuid-runtime.)

      ADMIN_UUID=$(uuidgen)
      
    • The bcrypt hash of the administrator password. For example, you can generate it with the htpasswd tool, which is part of the httpd-tools package on Red Hat (install it using sudo dnf install httpd-tools), and the apache2-utils package on Ubuntu (install it using sudo apt install apache2-utils).

      ADMIN_PASSWORD_HASH=$(echo $ADMIN_PASSWORD | htpasswd -BinC 10 $ADMIN_USERNAME | cut -d: -f2)
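
      As a sanity check, you can verify that the variable holds a bcrypt hash; it should start with $2y$ or a similar bcrypt prefix:

        echo "$ADMIN_PASSWORD_HASH"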
      
    • The version number of Axoflow Console you want to deploy.

      VERSION="0.50.1"
      
    • The internal IP address of the virtual machine. This is needed so the authentication service can communicate with the identity provider without relying on DNS.

      VM_IP_ADDRESS=$(hostname -I | cut -d' ' -f1)
      

      Check that the above command returned the correct IP; it might not be accurate if the host has multiple IP addresses.
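
      If it didn’t, list the addresses of the host and set the variable manually, for example:

        ip -brief -4 addr show
        VM_IP_ADDRESS="<the-correct-internal-ip>"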

    • The external IP address of the virtual machine. This is needed so external components (like AxoRouter and Axolet instances running on other hosts) can access the Axoflow Console.

      EXTERNAL_IP="<external.ip.of.vm>"
      
    • The URL of the Axoflow Helm chart. You’ll receive this URL from our team. You can request it using the contact form.

      AXOFLOW_HELM_CHART_URL="<the-chart-URL-received-from-Axoflow>"
      
  2. Install Axoflow Console. The following configuration file enables the user set in ADMIN_USERNAME to log in with the static password set in ADMIN_PASSWORD. Once this basic setup is working, you can configure another authentication provider.

    (For details on what the different services enabled in the YAML file do, see the service reference.)

    sudo tee /var/lib/rancher/k3s/server/manifests/axoflow.yaml > /dev/null <<EOF
    ---
    apiVersion: helm.cattle.io/v1
    kind: HelmChart
    metadata:
      name: axoflow
      namespace: kube-system
    spec:
      createNamespace: true
      targetNamespace: axoflow
      version: $VERSION
      chart: $AXOFLOW_HELM_CHART_URL
      failurePolicy: abort
      valuesContent: |-
        baseHostName: $BASE_HOSTNAME
        issuer: self-sign-issuer
        issuerKind: Issuer
        imageTagDefault: $VERSION
        selfSignedRoot:
          create: true
        chalco:
          enabled: true
        axoletDist:
          enabled: true
        controllerManager:
          enabled: true
        kcp:
          enabled: true
        kcpCertManager:
          enabled: true
        telemetryproxy:
          enabled: true
        prometheus:
          enabled: true
        pomerium:
          enabled: true
          authenticateServiceUrl: https://authenticate.$BASE_HOSTNAME
          authenticateServiceIngress: authenticate
          certificates:
            - authenticate
          certificateAuthority:
            secretName: dex-certificates
          policy:
            emails:
              - $ADMIN_EMAIL
            domains: []
            groups: []
            claim/groups: []
            readOnly:
              emails: []
              domains: []
              groups: []
              claim/groups: []
        ui:
          enabled: true
        dex:
          enabled: true
          localIP: $VM_IP_ADDRESS
          config:
            create: true
            # connectors:
            #   ...
            staticPasswords:
              - email: $ADMIN_EMAIL
                hash: $ADMIN_PASSWORD_HASH
                username: $ADMIN_USERNAME
                userID: $ADMIN_UUID    
    EOF
    
  3. Wait a few minutes until everything is installed. Check that every component has been installed and is running:

    sudo k3s kubectl get pods -A
    

    The output should be similar to:

    NAMESPACE       NAME                                       READY   STATUS      RESTARTS       AGE
    axoflow         axolet-dist-55ccbc4759-q7l7n               1/1     Running     0              2d3h
    axoflow         chalco-6cfc74b4fb-mwbbz                    1/1     Running     0              2d3h
    axoflow         configure-kcp-1-x6js6                      0/1     Completed   2              2d3h
    axoflow         configure-kcp-cert-manager-1-wvhhp         0/1     Completed   1              2d3h
    axoflow         configure-kcp-token-1-xggg7                0/1     Completed   1              2d3h
    axoflow         controller-manager-5ff8d5ffd-f6xtq         1/1     Running     0              2d3h
    axoflow         dex-5b9d7b88b-bv82d                        1/1     Running     0              2d3h
    axoflow         kcp-0                                      1/1     Running     1 (2d3h ago)   2d3h
    axoflow         kcp-cert-manager-0                         1/1     Running     0              2d3h
    axoflow         pomerium-56b8b9b6b9-vt4t8                  1/1     Running     0              2d3h
    axoflow         prometheus-server-0                        2/2     Running     0              2d3h
    axoflow         telemetryproxy-65f77dcf66-4xr6b            1/1     Running     0              2d3h
    axoflow         ui-6dc7bdb4c5-frn2p                        2/2     Running     0              2d3h
    cert-manager    cert-manager-74c755695c-5fs6v              1/1     Running     5 (144m ago)   2d3h
    cert-manager    cert-manager-cainjector-dcc5966bc-9kn4s    1/1     Running     0              2d3h
    cert-manager    cert-manager-webhook-dfb76c7bd-z9kqr       1/1     Running     0              2d3h
    ingress-nginx   ingress-nginx-controller-bcc7fb5bd-n5htp   1/1     Running     0              2d3h
    kube-system     coredns-ccb96694c-fhk8f                    1/1     Running     0              2d3h
    kube-system     helm-install-axoflow-b9crk                 0/1     Completed   0              2d3h
    kube-system     helm-install-cert-manager-88lt7            0/1     Completed   0              2d3h
    kube-system     helm-install-ingress-nginx-qvkhf           0/1     Completed   0              2d3h
    kube-system     local-path-provisioner-5cf85fd84d-xdlnr    1/1     Running     0              2d3h
    

    (For details on what the different services do, see the service reference.)

Log in to Axoflow Console

  1. If the domain name of Axoflow Console cannot be resolved from your desktop, add it to the /etc/hosts file in the following format. Use an IP address of Axoflow Console that can be accessed from your desktop.

    <AXOFLOW-CONSOLE-IP-ADDRESS> <your-host.your-domain> idp.<your-host.your-domain> authenticate.<your-host.your-domain>
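
    You can verify that the names resolve to the right address, for example:

      getent hosts <your-host.your-domain>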
    
  2. Open the https://<your-host.your-domain> URL in your browser.

  3. The on-premise deployment of Axoflow Console uses a self-signed certificate. If your browser warns you about the related risks, accept them.

  4. Use the email address and password you’ve configured to log in to Axoflow Console.

Axoflow Console service reference

The following list shows each service, its namespace, and its purpose:

  • KCP (axoflow): Backend API. A Kubernetes-like service with a built-in database that manages all the settings the system handles.
  • Chalco (axoflow): Frontend API. Serves the UI API calls and implements the business logic for the UI.
  • Controller-Manager (axoflow): Backend service. Reacts to state changes in the business entities and manages the business logic for the backend.
  • Telemetry Proxy (axoflow): Backend API. Receives the telemetry data sent by the agents.
  • UI (axoflow): Dashboard. The frontend for Axoflow Console.
  • Prometheus (axoflow): Backend API/service. Monitoring component that stores time series data, provides a query API, and manages alerting rules.
  • Alertmanager (axoflow): Backend API/service. Monitoring component that sends alerts based on the alerting rules.
  • Dex (axoflow): Identity connector/proxy. Allows customers to use their own identity provider (Google, LDAP, and so on).
  • Pomerium (axoflow): HTTP proxy. Provides policy-based access to the Frontend API (Chalco).
  • NGINX Ingress Controller (ingress-nginx): HTTP proxy. Routes the HTTP traffic between the frontend and backend APIs.
  • Axolet Dist (axoflow): Backend API. Static artifact store that contains the agent binaries.
  • Cert Manager (cert-manager): Automated certificate management tool. Manages certificates for the Axoflow components (backend APIs, HTTP proxy).
  • Cert Manager (kcp) (axoflow): Automated certificate management tool. Manages certificates for the agents.
  • External DNS (kube-system): Automated DNS management tool. Manages DNS records for the Axoflow components.

1 - Authentication

These sections show you how to configure on-premises and Kubernetes deployments of Axoflow Console to use different authentication backends.

1.1 - GitHub

This section shows you how to use GitHub OAuth2 as an authentication backend for Axoflow Console. It is assumed that you already have a GitHub organization. Complete the following steps.

Prerequisites

Register a new OAuth GitHub application for your organization. (For testing, you can create it under your personal account.) Make sure to:

  • Set the Homepage URL to the URL of your Axoflow Console deployment: https://<your-console-host.your-domain>, for example, https://axoflow-console.example.com.

  • Set the Authorization callback URL to: https://idp.<your-console-host.your-domain>/callback, for example, https://idp.axoflow-console.example.com/callback.

    (Screenshot: OAuth GitHub application registration.)

  • Save the Client ID of the app; you’ll need it to configure Axoflow Console.

  • Generate a Client secret and save it; you’ll need it to configure Axoflow Console.

Configuration

  1. Configure authentication by editing the spec.dex.config section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.

    1. (Optional) If you’ve used our earlier example, delete the spec.dex.config.staticPasswords section.

    2. Add the spec.dex.config.connectors section to the file, like this:

      dex:
        enabled: true
        localIP: $VM_IP_ADDRESS
        config:
          create: true
          connectors:
          - type: github
            # Required field for connector id.
            id: github
            # Required field for connector name.
            name: GitHub
            config:
              # Credentials can be string literals or pulled from the environment.
              clientID: <ID-of-GitHub-application>
              clientSecret: <Secret-of-GitHub-application>
              redirectURI: <idp.your-host.your-domain/callback>
              orgs:
                - name: <your-GitHub-organization>
      

      Note that if you have a valid domain name that points to the VM, you can omit the localIP: $VM_IP_ADDRESS line.

    3. Edit the following fields. For details on the configuration parameters, see the Dex GitHub connector documentation.

      • connectors.config.clientID: The ID of the GitHub OAuth application.
      • connectors.config.clientSecret: The client secret of the GitHub OAuth application.
      • connectors.config.redirectURI: The callback URL of the GitHub OAuth application: https://idp.<your-console-host.your-domain>/callback, for example, https://idp.axoflow-console.example.com/callback.
      • connectors.config.orgs: List of the GitHub organizations whose members can access Axoflow Console. Restrict access to your organization.
  2. Configure authorization in the spec.pomerium.policy section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.

    • List the primary email addresses of the users who have read and write access to Axoflow Console under the emails section.
    • List the primary email addresses of the users who have read-only access to Axoflow Console under the readOnly.emails section.
      policy:
        emails:
          - username@yourdomain.com
        domains: []
        groups: []
        claim/groups: []
        readOnly:
          emails:
            - readonly-username@yourdomain.com
          domains: []
          groups: []
          claim/groups: []
    

    For details on authorization settings, see Authorization.

  3. Save the file.

  4. Restart the dex deployment after changing the connector:

    kubectl rollout restart deployment/dex -n axoflow
    

    Expected output:

    deployment.apps/dex restarted
    
  5. Open the main page of your Axoflow Console deployment in your browser. You’ll be redirected to the GitHub authentication page.

    After completing the GitHub authentication you can access Axoflow Console.

Getting help

You can troubleshoot common errors by running kubectl logs -n axoflow <dex-pod-name>.

If you run into problems setting up the authentication or authorization, contact us.

1.2 - Google

This section shows you how to use Google OpenID Connect as an authentication backend for Axoflow Console. It is assumed that you already have a Google organization and Google Cloud Console access. Complete the following steps.

Prerequisites

  • To use Google authentication, Axoflow Console must be deployed on a publicly accessible domain name (the $BASE_HOSTNAME must end with a valid top-level domain, for example, .com or .io).

  • Configure OpenID Connect and create an OpenID credential for a Web application.

    • Make sure to set the Authorized redirect URIs to: https://idp.<your-console-host.your-domain>/callback, for example, https://idp.axoflow-console.example.com/callback.
    • Save the Client ID of the app and the Client secret of the application, you’ll need them to configure Axoflow Console.

    For details on setting up OpenID Connect, see the official documentation.

Configuration

  1. Configure authentication by editing the spec.dex.config section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.

    1. (Optional) If you’ve used our earlier example, delete the spec.dex.config.staticPasswords section.

    2. Add the spec.dex.config.connectors section to the file, like this:

      connectors:
      - type: google
        id: google
        name: Google
        config:
          # Connector config values starting with a "$" will read from the environment.
          clientID: <ID-of-Google-application>
          clientSecret: <Secret-of-Google-application>
      
          # Dex's issuer URL + "/callback"
          redirectURI: <idp.your-host.your-domain/callback>
      
          # Set the value of `prompt` query parameter in the authorization request
          # The default value is "consent" when not set.
          promptType: consent
      
          # Google supports whitelisting allowed domains when using G Suite
          # (Google Apps). The following field can be set to a list of domains
          # that can log in:
          #
          hostedDomains:
          - <your-domain>
      
    3. Edit the following fields. For details on the configuration parameters, see the Dex Google connector documentation.

      • connectors.config.clientID: The ID of the Google application.
      • connectors.config.clientSecret: The client secret of the Google application.
      • connectors.config.redirectURI: The callback URL of the Google application: https://idp.<your-console-host.your-domain>/callback, for example, https://idp.axoflow-console.example.com/callback.
      • connectors.config.hostedDomains: The list of domains whose users can log in, for example, example.com. Your users must have Google email addresses in one of these domains.
  2. Configure authorization in the spec.pomerium.policy section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.

    • List the email addresses of the users who have read and write access to Axoflow Console under the emails section.
    • List the email addresses of the users who have read-only access to Axoflow Console under the readOnly.emails section.
      policy:
        emails:
          - username@yourdomain.com
        domains: []
        groups: []
        claim/groups: []
        readOnly:
          emails:
            - readonly-username@yourdomain.com
          domains: []
          groups: []
          claim/groups: []
    

    For details on authorization settings, see Authorization.

  3. Save the file.

  4. Restart the dex deployment after changing the connector:

    kubectl rollout restart deployment/dex -n axoflow
    

    Expected output:

    deployment.apps/dex restarted
    
  5. Open the main page of your Axoflow Console deployment in your browser. You’ll be redirected to the Google authentication page.

    After completing the Google authentication you can access Axoflow Console.

Getting help

You can troubleshoot common errors by running kubectl logs -n axoflow <dex-pod-name>.

If you run into problems setting up the authentication or authorization, contact us.

1.3 - LDAP

This section shows you how to use LDAP as an authentication backend for Axoflow Console. In the examples we used the public demo service of FreeIPA as an LDAP server. It is assumed that you already have an LDAP server in place. Complete the following steps.

  1. Configure authentication by editing the spec.dex.config section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.

    1. (Optional) If you’ve used our earlier example, delete the spec.dex.config.staticPasswords section.

    2. Add the spec.dex.config.connectors section to the file, like this:

      CAUTION:

      This example shows a simple configuration suitable for testing. In production environments, make sure to:

          - configure TLS encryption to access your LDAP server
          - retrieve the bind password from a vault or environment variable. Note that if the bind password contains the `$` character, you must set it in an environment variable and pass it like `bindPW: $LDAP_BINDPW`.
      
      dex:
        enabled: true
        localIP: $VM_IP_ADDRESS
        config:
          create: true
          connectors:
          - type: ldap
            name: OpenLDAP
            id: ldap
            config:
              host: ipa.demo1.freeipa.org
              insecureNoSSL: true
              # This would normally be a read-only user.
              bindDN: uid=admin,cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org
              bindPW: Secret123
              usernamePrompt: Email Address
              userSearch:
                baseDN: dc=demo1,dc=freeipa,dc=org
                filter: "(objectClass=person)"
                username: mail
                idAttr: uid
                emailAttr: mail
                nameAttr: cn
              groupSearch:
                baseDN: dc=demo1,dc=freeipa,dc=org
                filter: "(objectClass=groupOfNames)"
                userMatchers:
                  # A user is a member of a group when their DN matches
                  # the value of a "member" attribute on the group entity.
                - userAttr: DN
                  groupAttr: member
                # The group name should be the "cn" value.
                nameAttr: cn
      
    3. Edit the following fields. For details on the configuration parameters, see the Dex LDAP connector documentation.

      • connectors.config.host: The hostname and optionally the port of the LDAP server in “host:port” format.
      • connectors.config.bindDN and connectors.config.bindPW: The DN and password for an application service account that the connector uses to search for users and groups.
      • connectors.config.userSearch.baseDN and connectors.config.groupSearch.baseDN: The base DNs for the user and group searches.
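
      Before restarting Dex, you can verify the bind credentials and the user filter directly against the LDAP server, for example with ldapsearch (assuming the openldap-clients package is installed; the values match the demo configuration above):

        ldapsearch -H ldap://ipa.demo1.freeipa.org \
          -D "uid=admin,cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org" \
          -w Secret123 \
          -b "dc=demo1,dc=freeipa,dc=org" \
          "(objectClass=person)" mail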
  2. Configure authorization in the spec.pomerium.policy section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.

    • List the names of the LDAP groups whose members have read and write access to Axoflow Console under claim/groups (the managers group in the example).
    • List the names of the LDAP groups whose members have read-only access to Axoflow Console under readOnly.claim/groups (the employee group in the example).
      policy:
        emails: []
        domains: []
        groups: []
        claim/groups:
          - managers
        readOnly:
          emails: []
          domains: []
          groups: []
          claim/groups:
            - employee
    

    For details on authorization settings, see Authorization.

  3. Save the file.

  4. Restart the dex deployment after changing the connector:

    kubectl rollout restart deployment/dex -n axoflow
    

    Expected output:

    deployment.apps/dex restarted
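
    To confirm that Dex reloaded with the new configuration, you can query its OpenID Connect discovery endpoint (assuming Dex is served on the idp. subdomain, as in this guide; the -k flag accepts the self-signed certificate):

      curl -k https://idp.<your-host.your-domain>/.well-known/openid-configuration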
    

Getting help

You can troubleshoot common errors by running kubectl logs -n axoflow <dex-pod-name>.

If you run into problems setting up the authentication or authorization, contact us.

2 - Authorization

These sections show you how to configure the authorization of Axoflow Console with different authentication backends.

You can configure authorization in the spec.pomerium.policy section of the Axoflow Console manifest. In on-premise deployments, the manifest is in the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.

You can list individual email addresses and user groups to have read and write (using the keys under spec.pomerium.policy) and read-only (using the keys under spec.pomerium.policy.readOnly) access to Axoflow Console. Which key to use depends on the authentication backend configured for Axoflow Console:

  • emails: Email addresses used with static passwords, Google, and GitHub authentication.

    With GitHub authentication, use the primary GitHub email addresses of your users, otherwise the authorization will fail.

  • claim/groups: LDAP groups used with LDAP authentication. For example:

      policy:
        emails: []
        domains: []
        groups: []
        claim/groups:
          - managers
        readOnly:
          emails: []
          domains: []
          groups: []
          claim/groups:
            - employee
    

3 - Prepare AxoRouter hosts

When using AxoRouter with an on-premises Axoflow Console deployment, you have to complete the following steps on the hosts you want to deploy AxoRouter on. These steps are specific to on-premises Axoflow Console deployments, and are not needed when using the SaaS Axoflow Console.

  1. If the domain name of Axoflow Console cannot be resolved from the AxoRouter host, add it to the /etc/hosts file of the AxoRouter host in the following format. Use an IP address of Axoflow Console that can be accessed from the AxoRouter host.

    <AXOFLOW-CONSOLE-IP-ADDRESS> <AXOFLOW-CONSOLE-BASE-URL> kcp.<AXOFLOW-CONSOLE-BASE-URL> telemetry.<AXOFLOW-CONSOLE-BASE-URL> idp.<AXOFLOW-CONSOLE-BASE-URL> authenticate.<AXOFLOW-CONSOLE-BASE-URL>
    
  2. Import Axoflow Console certificates to AxoRouter hosts.

    1. On the Axoflow Console host: Run the following command to extract certificates. The AxoRouter host will need these certificates to download the installation binaries and access management traffic.

      k3s kubectl get secret -n axoflow kcp-ca -o=jsonpath='{.data.ca\.crt}'|base64 -d > $BASE_HOSTNAME-kcp-ca.crt
      
      k3s kubectl get secret -n axoflow pomerium-certificates -o=jsonpath='{.data.ca\.crt}'|base64 -d > $BASE_HOSTNAME-pomerium-ca.crt
      

      This will create two files in the local folder. Copy them to the AxoRouter hosts.
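
      For example, you can copy them with scp (assuming SSH access to the AxoRouter host; the user and host names are placeholders):

        scp "$BASE_HOSTNAME-kcp-ca.crt" "$BASE_HOSTNAME-pomerium-ca.crt" <user>@<axorouter-host>:/tmp/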

    2. On the AxoRouter hosts: Copy the certificate files extracted from the Axoflow Console host.

      • On Red Hat: Copy the files into the /etc/pki/ca-trust/source/anchors/ folder, then run sudo update-ca-trust extract. (If needed, install the ca-certificates package.)
      • On Ubuntu: Copy the files into the /usr/local/share/ca-certificates/ folder, then run sudo update-ca-certificates.
  3. Verify that the AxoRouter host can access the Axoflow Console over TLS:

    curl https://kcp.<your-host.your-domain>

    If the certificates were imported successfully, the command completes without certificate errors.

  4. Now you can deploy AxoRouter on the host.