This chapter describes the installation and configuration of Axoflow Console in different scenarios.
Axoflow Console is available as:
- a SaaS service,
- on-premises deployment,
- Kubernetes deployment.
This guide shows you how to install Axoflow Console on a local, on-premises virtual machine using k3s and Helm. Installation of other components like the NGINX ingress controller and cert-manager is also included. Since this is a single-instance deployment, we don’t recommend using it in production environments. For additional steps and configurations needed in production environments, contact us.
At a high level, the deployment consists of the following steps:
To install Axoflow Console, you’ll need the following:
The URL for the Axoflow Console Helm chart. You’ll receive this URL from our team. You can request it using the contact form.
CAUTION:
Don’t start the Install Axoflow Console process until you’ve received the chart URL.
A host that meets the system requirements.
Network access set up for the host.
Supported operating system: Ubuntu 24.04, Red Hat 9 and compatible (tested with AlmaLinux 9)
The virtual machine (VM) must have at least:
This setup can handle about 100 AxoRouter instances and 1000 data source hosts.
For reference, a real-life scenario that handles 100 AxoRouter instances and 3000 data source hosts with 30-day metric retention would need:
For details on sizing, contact us.
You’ll need to have access to a user with sudo
privileges.
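To quickly review the host against the requirements above, you can check the operating system, CPU count, memory, and free disk space from a shell; a minimal check:

# Show the OS release, CPU count, memory, and root filesystem usage
grep PRETTY_NAME /etc/os-release
nproc
free -h
df -h /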
The hosts must be able to access the following domains related to the Axoflow Console:
When using Axoflow Console SaaS:
- <your-tenant-id>.cloud.axoflow.io: HTTPS traffic on TCP port 443, needed to download the binaries for Axoflow software (like Axolet and AxoRouter).
- kcp.<your-tenant-id>.cloud.axoflow.io: HTTPS (mutual TLS) traffic on TCP port 443 for management traffic.
- telemetry.<your-tenant-id>.cloud.axoflow.io: HTTPS (mutual TLS) traffic on TCP port 443, where Axolet sends the metrics of the host.
- us-docker.pkg.dev: HTTPS traffic on TCP port 443, for pulling container images (AxoRouter only).

When using an on-premise Axoflow Console:

The following domains must point to the IP address of Axoflow Console so that you can access it from your desktop and from the AxoRouter hosts:

- your-host.your-domain: The main domain of your Axoflow Console deployment.
- authenticate.your-host.your-domain: A subdomain used for authentication.
- idp.your-host.your-domain: A subdomain for the identity provider.

The Axoflow Console host must have the following open ports:
Complete the following steps to install k3s and some dependencies on the virtual machine.
Run getenforce
to check the status of SELinux.
Check the /etc/selinux/config
file and make sure that the settings match the results of the getenforce
output (for example, SELINUX=disabled
). K3S supports SELinux, but the installation can fail if there is a mismatch between the configured and actual settings.
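For example, the following commands show the runtime mode and the configured mode so you can confirm that they are consistent before installing:

getenforce
grep '^SELINUX=' /etc/selinux/config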
Install k3s.
Run the following command:
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="--disable traefik --disable metrics-server" K3S_KUBECONFIG_MODE="644" sh -s -
Check that the k3s service is running:
systemctl status k3s
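You can also confirm that the single-node cluster is up and the node is in Ready state:

sudo k3s kubectl get nodes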
Install the NGINX ingress controller.
VERSION="v4.11.3"
sudo tee /var/lib/rancher/k3s/server/manifests/ingress-nginx.yaml > /dev/null <<EOF
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: ingress-nginx
  namespace: kube-system
spec:
  createNamespace: true
  chart: ingress-nginx
  repo: https://kubernetes.github.io/ingress-nginx
  version: $VERSION
  targetNamespace: ingress-nginx
  valuesContent: |-
    controller:
      extraArgs:
        enable-ssl-passthrough: true
      hostNetwork: true
      hostPort:
        enabled: true
      service:
        enabled: false
      admissionWebhooks:
        enabled: false
      updateStrategy:
        strategy:
          type: Recreate
EOF
Install the cert-manager controller.
VERSION="1.16.2"
sudo tee /var/lib/rancher/k3s/server/manifests/cert-manager.yaml > /dev/null <<EOF
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: cert-manager
  namespace: kube-system
spec:
  createNamespace: true
  chart: cert-manager
  version: $VERSION
  repo: https://charts.jetstack.io
  targetNamespace: cert-manager
  valuesContent: |-
    crds:
      enabled: true
    enableCertificateOwnerRef: true
EOF
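k3s applies the manifests placed in /var/lib/rancher/k3s/server/manifests/ automatically. After a minute or two, you can check that both controllers are running, for example:

sudo k3s kubectl get pods -n ingress-nginx
sudo k3s kubectl get pods -n cert-manager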
Complete the following steps to deploy the Axoflow Console Helm chart.
Set the following environment variables as needed for your deployment.
The domain name for your Axoflow Console deployment. This must point to the IP address of the VM you’re installing Axoflow Console on.
BASE_HOSTNAME=your-host.your-domain
To use Google authentication, Axoflow Console must be deployed on a publicly accessible domain name (the `$BASE_HOSTNAME` must end with a valid top-level domain, for example, `.com` or `.io`.)
A default email address for the Axoflow Console administrator.
ADMIN_EMAIL="your-email@your-domain"
The username of the Axoflow Console administrator.
ADMIN_USERNAME="admin"
The password of the Axoflow Console administrator.
ADMIN_PASSWORD="your-admin-password"
A unique identifier of the administrator user. For example, you can generate one using the uuidgen tool. (Note that on Ubuntu, you must first install the uuid-runtime package by running sudo apt install uuid-runtime.)
ADMIN_UUID=$(uuidgen)
The bcrypt hash of the administrator password. For example, you can generate it with the htpasswd
tool, which is part of the httpd-tools
package on Red Hat (install it using sudo dnf install httpd-tools
), and the apache2-utils
package on Ubuntu (install it using sudo apt install apache2-utils
).
ADMIN_PASSWORD_HASH=$(echo $ADMIN_PASSWORD | htpasswd -BinC 10 $ADMIN_USERNAME | cut -d: -f2)
The version number of Axoflow Console you want to deploy.
VERSION="0.50.1"
The internal IP address of the virtual machine. This is needed so the authentication service can communicate with the identity provider without relying on DNS.
VM_IP_ADDRESS=$(hostname -I | cut -d' ' -f1)
Check that the above command returned the correct IP; it might not be accurate if the host has multiple IP addresses.
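If the host has multiple addresses, you can list them and set the variable manually; for example (the placeholder is the address you pick from the output):

# List all IPv4 addresses of the host
ip -4 -brief addr show
VM_IP_ADDRESS="<the-correct-internal-IP>"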
The external IP address of the virtual machine. This is needed so that external components (like AxoRouter and Axolet instances running on other hosts) can access the Axoflow Console.
EXTERNAL_IP="<external.ip.of.vm>"
The URL of the Axoflow Helm chart. You’ll receive this URL from our team. You can request it using the contact form.
AXOFLOW_HELM_CHART_URL="<the-chart-URL-received-from-Axoflow>"
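Before installing, you can quickly confirm that every variable is set. A minimal sanity check using bash indirect expansion:

# Print each variable; anything shown as <UNSET> still needs a value
for v in BASE_HOSTNAME ADMIN_EMAIL ADMIN_USERNAME ADMIN_PASSWORD_HASH ADMIN_UUID VERSION VM_IP_ADDRESS EXTERNAL_IP AXOFLOW_HELM_CHART_URL; do
  printf '%s=%s\n' "$v" "${!v:-<UNSET>}"
done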
Install Axoflow Console. The following configuration file allows the user set in ADMIN_USERNAME to log in with the static password set in ADMIN_PASSWORD. Once this basic setup is working, you can configure another authentication provider.
(For details on what the different services enabled in the YAML file do, see the service reference.)
sudo tee /var/lib/rancher/k3s/server/manifests/axoflow.yaml > /dev/null <<EOF
---
apiVersion: helm.cattle.io/v1
kind: HelmChart
metadata:
  name: axoflow
  namespace: kube-system
spec:
  createNamespace: true
  targetNamespace: axoflow
  version: $VERSION
  chart: $AXOFLOW_HELM_CHART_URL
  failurePolicy: abort
  valuesContent: |-
    baseHostName: $BASE_HOSTNAME
    issuer: self-sign-issuer
    issuerKind: Issuer
    imageTagDefault: $VERSION
    selfSignedRoot:
      create: true
    chalco:
      enabled: true
    axoletDist:
      enabled: true
    controllerManager:
      enabled: true
    kcp:
      enabled: true
    kcpCertManager:
      enabled: true
    telemetryproxy:
      enabled: true
    prometheus:
      enabled: true
    pomerium:
      enabled: true
      authenticateServiceUrl: https://authenticate.$BASE_HOSTNAME
      authenticateServiceIngress: authenticate
      certificates:
        - authenticate
      certificateAuthority:
        secretName: dex-certificates
      policy:
        emails:
          - $ADMIN_EMAIL
        domains: []
        groups: []
        claim/groups: []
        readOnly:
          emails: []
          domains: []
          groups: []
          claim/groups: []
    ui:
      enabled: true
    dex:
      enabled: true
      localIP: $VM_IP_ADDRESS
      config:
        create: true
        # connectors:
        #   ...
        staticPasswords:
          - email: $ADMIN_EMAIL
            hash: $ADMIN_PASSWORD_HASH
            username: $ADMIN_USERNAME
            userID: $ADMIN_UUID
EOF
Wait a few minutes until everything is installed. Check that every component has been installed and is running:
sudo k3s kubectl get pods -A
The output should be similar to:
NAMESPACE NAME READY STATUS RESTARTS AGE
axoflow axolet-dist-55ccbc4759-q7l7n 1/1 Running 0 2d3h
axoflow chalco-6cfc74b4fb-mwbbz 1/1 Running 0 2d3h
axoflow configure-kcp-1-x6js6 0/1 Completed 2 2d3h
axoflow configure-kcp-cert-manager-1-wvhhp 0/1 Completed 1 2d3h
axoflow configure-kcp-token-1-xggg7 0/1 Completed 1 2d3h
axoflow controller-manager-5ff8d5ffd-f6xtq 1/1 Running 0 2d3h
axoflow dex-5b9d7b88b-bv82d 1/1 Running 0 2d3h
axoflow kcp-0 1/1 Running 1 (2d3h ago) 2d3h
axoflow kcp-cert-manager-0 1/1 Running 0 2d3h
axoflow pomerium-56b8b9b6b9-vt4t8 1/1 Running 0 2d3h
axoflow prometheus-server-0 2/2 Running 0 2d3h
axoflow telemetryproxy-65f77dcf66-4xr6b 1/1 Running 0 2d3h
axoflow ui-6dc7bdb4c5-frn2p 2/2 Running 0 2d3h
cert-manager cert-manager-74c755695c-5fs6v 1/1 Running 5 (144m ago) 2d3h
cert-manager cert-manager-cainjector-dcc5966bc-9kn4s 1/1 Running 0 2d3h
cert-manager cert-manager-webhook-dfb76c7bd-z9kqr 1/1 Running 0 2d3h
ingress-nginx ingress-nginx-controller-bcc7fb5bd-n5htp 1/1 Running 0 2d3h
kube-system coredns-ccb96694c-fhk8f 1/1 Running 0 2d3h
kube-system helm-install-axoflow-b9crk 0/1 Completed 0 2d3h
kube-system helm-install-cert-manager-88lt7 0/1 Completed 0 2d3h
kube-system helm-install-ingress-nginx-qvkhf 0/1 Completed 0 2d3h
kube-system local-path-provisioner-5cf85fd84d-xdlnr 1/1 Running 0 2d3h
(For details on what the different services do, see the service reference.)
If the domain name of Axoflow Console cannot be resolved from your desktop, add it to the /etc/hosts
file in the following format. Use an IP address of Axoflow Console that can be accessed from your desktop.
<AXOFLOW-CONSOLE-IP-ADDRESS> <your-host.your-domain> idp.<your-host.your-domain> authenticate.<your-host.your-domain>
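For example, with a hypothetical Console reachable at 192.0.2.10 under the domain axoflow-console.example.com, the entry would look like this:

192.0.2.10 axoflow-console.example.com idp.axoflow-console.example.com authenticate.axoflow-console.example.com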
Open the https://<your-host.your-domain>
URL in your browser.
The on-premise deployment of Axoflow Console shows a self-signed certificate. If your browser complains about the related risks, accept it.
Use the email address and password you’ve configured to log in to Axoflow Console.
| Service Name | Namespace | Purpose | Function |
|---|---|---|---|
| KCP | axoflow | Backend API | A Kubernetes-like API service with a built-in database; it manages all the settings handled by the system |
| Chalco | axoflow | Frontend API | Serves the UI API calls and implements the business logic for the UI |
| Controller-Manager | axoflow | Backend Service | Reacts to state changes in the business entities and manages the business logic of the backend |
| Telemetry Proxy | axoflow | Backend API | Receives telemetry from the agents |
| UI | axoflow | Dashboard | The frontend for Axoflow Console |
| Prometheus | axoflow | Backend API/Service | Monitoring component that stores time-series data, provides a query API, and manages alerting rules |
| Alertmanager | axoflow | Backend API/Service | Monitoring component that sends alerts based on the alerting rules |
| Dex | axoflow | Identity Connector/Proxy | Lets you use your own identity provider (Google, LDAP, and so on) |
| Pomerium | axoflow | HTTP Proxy | Provides policy-based access to the Frontend API (Chalco) |
| NGINX Ingress Controller | ingress-nginx | HTTP Proxy | Routes HTTP traffic between the Frontend and Backend APIs |
| Axolet Dist | axoflow | Backend API | Static artifact store that contains the agent binaries |
| Cert Manager | cert-manager | Automated certificate management tool | Manages certificates for Axoflow components (Backend API, HTTP Proxy) |
| Cert Manager (kcp) | axoflow | Automated certificate management tool | Manages certificates for agents |
| External DNS | kube-system | Automated DNS management tool | Manages DNS records for Axoflow components |
These sections show you how to configure on-premises and Kubernetes deployments of Axoflow Console to use different authentication backends.
This section shows you how to use GitHub OAuth2 as an authentication backend for Axoflow Console. It is assumed that you already have a GitHub organization. Complete the following steps.
Register a new OAuth GitHub application for your organization. (For testing, you can create it under your personal account.) Make sure to:
Set the Homepage URL to the URL of your Axoflow Console deployment: https://<your-console-host.your-domain>, for example, https://axoflow-console.example.com.
Set the Authorization callback URL to: https://idp.<your-console-host.your-domain>/callback, for example, https://idp.axoflow-console.example.com/callback.
Save the Client ID of the app; you’ll need it to configure Axoflow Console.
Generate a Client secret and save it; you’ll need it to configure Axoflow Console.
Configure authentication by editing the spec.dex.config
section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml
file.
(Optional) If you’ve used our earlier example, delete the spec.dex.config.staticPasswords
section.
Add the spec.dex.config.connectors
section to the file, like this:
dex:
  enabled: true
  localIP: $VM_IP_ADDRESS
  config:
    create: true
    connectors:
      - type: github
        # Required field for connector id.
        id: github
        # Required field for connector name.
        name: GitHub
        config:
          # Credentials can be string literals or pulled from the environment.
          clientID: <ID-of-GitHub-application>
          clientSecret: <Secret-of-GitHub-application>
          redirectURI: <idp.your-host.your-domain/callback>
          orgs:
            - name: <your-GitHub-organization>
Note that if you have a valid domain name that points to the VM, you can omit the localIP: $VM_IP_ADDRESS
line.
Edit the following fields. For details on the configuration parameters, see the Dex GitHub connector documentation.
- connectors.config.clientID: The ID of the GitHub OAuth application.
- connectors.config.clientSecret: The client secret of the GitHub OAuth application.
- connectors.config.redirectURI: The callback URL of the GitHub OAuth application: https://idp.<your-console-host.your-domain>/callback, for example, https://idp.axoflow-console.example.com/callback.
- connectors.config.orgs: The list of GitHub organizations whose members can access Axoflow Console. Restrict access to your organization.

Configure authorization in the spec.pomerium.policy section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.

- Add the email addresses of users who need full access to the emails section.
- Add the email addresses of users who need read-only access to the readOnly.emails section.

Use the primary GitHub email addresses of your users, otherwise the authorization will fail.
policy:
  emails:
    - username@yourdomain.com
  domains: []
  groups: []
  claim/groups: []
  readOnly:
    emails:
      - readonly-username@yourdomain.com
    domains: []
    groups: []
    claim/groups: []
For details on authorization settings, see Authorization.
Save the file.
Restart the dex deployment after changing the connector:
kubectl rollout restart deployment/dex -n axoflow
Expected output:
deployment.apps/dex restarted
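To wait until the restarted pod is ready before testing the login, you can run:

kubectl rollout status deployment/dex -n axoflow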
Open the main page of your Axoflow Console deployment in your browser. You’ll be redirected to the GitHub authentication page.
After completing the GitHub authentication you can access Axoflow Console.
You can troubleshoot common errors by running kubectl logs -n axoflow <dex-container-name>
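If you don’t know the exact pod name, you can also read the logs through the deployment, for example:

kubectl logs -n axoflow deployment/dex --tail=50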
If you run into problems setting up the authentication or authorization, contact us.
This section shows you how to use Google OpenID Connect as an authentication backend for Axoflow Console. It is assumed that you already have a Google organization and Google Cloud Console access. Complete the following steps.
To use Google authentication, Axoflow Console must be deployed on a publicly accessible domain name (the $BASE_HOSTNAME
must end with a valid top-level domain, for example, .com
or .io
.)
Configure OpenID Connect and create an OpenID credential for a Web application.
Set the authorized redirect URI to: https://idp.<your-console-host.your-domain>/callback, for example, https://idp.axoflow-console.example.com/callback.

For details on setting up OpenID Connect, see the official documentation.
Configure authentication by editing the spec.dex.config
section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml
file.
(Optional) If you’ve used our earlier example, delete the spec.dex.config.staticPasswords
section.
Add the spec.dex.config.connectors
section to the file, like this:
connectors:
  - type: google
    id: google
    name: Google
    config:
      # Connector config values starting with a "$" will read from the environment.
      clientID: <ID-of-Google-application>
      clientSecret: <Secret-of-Google-application>
      # Dex's issuer URL + "/callback"
      redirectURI: <idp.your-host.your-domain/callback>
      # Set the value of `prompt` query parameter in the authorization request
      # The default value is "consent" when not set.
      promptType: consent
      # Google supports whitelisting allowed domains when using G Suite
      # (Google Apps). The following field can be set to a list of domains
      # that can log in:
      #
      hostedDomains:
        - <your-domain>
Edit the following fields. For details on the configuration parameters, see the Dex Google connector documentation.
- connectors.config.clientID: The ID of the Google application.
- connectors.config.clientSecret: The client secret of the Google application.
- connectors.config.redirectURI: The callback URL of the Google application: https://idp.<your-console-host.your-domain>/callback, for example, https://idp.axoflow-console.example.com/callback.
- connectors.config.hostedDomains: The domain where Axoflow Console is deployed, for example, example.com. Your users must have email addresses for this domain at Google.

Configure authorization in the spec.pomerium.policy section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.

- Add the email addresses of users who need full access to the emails section.
- Add the email addresses of users who need read-only access to the readOnly.emails section.

The email addresses of your users must have the same domain as the one set under connectors.config.hostedDomains, otherwise the authorization will fail.
policy:
  emails:
    - username@yourdomain.com
  domains: []
  groups: []
  claim/groups: []
  readOnly:
    emails:
      - readonly-username@yourdomain.com
    domains: []
    groups: []
    claim/groups: []
For details on authorization settings, see Authorization.
Save the file.
Restart the dex deployment after changing the connector:
kubectl rollout restart deployment/dex -n axoflow
Expected output:
deployment.apps/dex restarted
Open the main page of your Axoflow Console deployment in your browser. You’ll be redirected to the Google authentication page.
After completing the Google authentication you can access Axoflow Console.
You can troubleshoot common errors by running kubectl logs -n axoflow <dex-container-name>
If you run into problems setting up the authentication or authorization, contact us.
This section shows you how to use LDAP as an authentication backend for Axoflow Console. In the examples we used the public demo service of FreeIPA as an LDAP server. It is assumed that you already have an LDAP server in place. Complete the following steps.
Configure authentication by editing the spec.dex.config
section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml
file.
(Optional) If you’ve used our earlier example, delete the spec.dex.config.staticPasswords
section.
Add the spec.dex.config.connectors
section to the file, like this:
CAUTION:
This example shows a simple configuration suitable for testing. In production environments, make sure to:
- configure TLS encryption to access your LDAP server
- retrieve the bind password from a vault or environment variable. Note that if the bind password contains the `$` character, you must set it in an environment variable and pass it like `bindPW: $LDAP_BINDPW`.
dex:
  enabled: true
  localIP: $VM_IP_ADDRESS
  config:
    create: true
    connectors:
      - type: ldap
        name: OpenLDAP
        id: ldap
        config:
          host: ipa.demo1.freeipa.org
          insecureNoSSL: true
          # This would normally be a read-only user.
          bindDN: uid=admin,cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org
          bindPW: Secret123
          usernamePrompt: Email Address
          userSearch:
            baseDN: dc=demo1,dc=freeipa,dc=org
            filter: "(objectClass=person)"
            username: mail
            # "DN" (case sensitive) is a special attribute name. It indicates that
            # this value should be taken from the entity's DN not an attribute on
            # the entity.
            idAttr: uid
            emailAttr: mail
            nameAttr: cn
          groupSearch:
            baseDN: dc=demo1,dc=freeipa,dc=org
            filter: "(objectClass=groupOfNames)"
            userMatchers:
              # A user is a member of a group when their DN matches
              # the value of a "member" attribute on the group entity.
              - userAttr: DN
                groupAttr: member
            # The group name should be the "cn" value.
            nameAttr: cn
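Before wiring these settings into Dex, you can verify the bind credentials and search filters directly against the LDAP server. A minimal check with ldapsearch (assuming the OpenLDAP client tools are installed: openldap-clients on Red Hat, ldap-utils on Ubuntu), using the demo values from the example above:

# Bind with the service account and list person entries with their mail and cn attributes
ldapsearch -x -H ldap://ipa.demo1.freeipa.org \
  -D "uid=admin,cn=users,cn=accounts,dc=demo1,dc=freeipa,dc=org" -w Secret123 \
  -b "dc=demo1,dc=freeipa,dc=org" "(objectClass=person)" mail cn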
Edit the following fields. For details on the configuration parameters, see the Dex LDAP connector documentation.
- connectors.config.host: The hostname and optionally the port of the LDAP server in “host:port” format.
- connectors.config.bindDN and connectors.config.bindPW: The DN and password for an application service account that the connector uses to search for users and groups.
- connectors.config.userSearch.baseDN and connectors.config.groupSearch.baseDN: The base DN for the user and group search.

Configure authorization in the spec.pomerium.policy section of the /var/lib/rancher/k3s/server/manifests/axoflow.yaml file.

- Add the LDAP groups whose members need full access to the claim/groups section (the managers group in the example).
- Add the LDAP groups whose members need read-only access to the readOnly.claim/groups section (the employee group in the example).

policy:
  emails: []
  domains: []
  groups: []
  claim/groups:
    - managers
  readOnly:
    emails: []
    domains: []
    groups: []
    claim/groups:
      - employee
For details on authorization settings, see Authorization.
Save the file.
Restart the dex deployment after changing the connector:
kubectl rollout restart deployment/dex -n axoflow
Expected output:
deployment.apps/dex restarted
You can troubleshoot common errors by running kubectl logs -n axoflow <dex-container-name>
If you run into problems setting up the authentication or authorization, contact us.
These sections show you how to configure the authorization of Axoflow Console with different authentication backends.
You can configure authorization in the spec.pomerium.policy
section of the Axoflow Console manifest. In on-premise deployments, the manifest is in the /var/lib/rancher/k3s/server/manifests/axoflow.yaml
file.
You can list individual email addresses and user groups to have read and write (using the keys under spec.pomerium.policy
) and read-only (using the keys under spec.pomerium.policy.readOnly
) access to Axoflow Console. Which key to use depends on the authentication backend configured for Axoflow Console:
- emails: Email addresses used with static passwords and GitHub authentication. With GitHub authentication, use the primary GitHub email addresses of your users, otherwise the authorization will fail.
- claim/groups: LDAP groups used with LDAP authentication.

For example:
policy:
  emails: []
  domains: []
  groups: []
  claim/groups:
    - managers
  readOnly:
    emails: []
    domains: []
    groups: []
    claim/groups:
      - employee
When using AxoRouter with an on-premises Axoflow Console deployment, you have to complete the following steps on the hosts you want to deploy AxoRouter on. These steps are specific to on-premises Axoflow Console deployments, and are not needed when using the SaaS Axoflow Console.
If the domain name of Axoflow Console cannot be resolved from the AxoRouter host, add it to the /etc/hosts
file of the AxoRouter host in the following format. Use an IP address of Axoflow Console that can be accessed from the AxoRouter host.
<AXOFLOW-CONSOLE-IP-ADDRESS> <AXOFLOW-CONSOLE-BASE-URL> kcp.<AXOFLOW-CONSOLE-BASE-URL> telemetry.<AXOFLOW-CONSOLE-BASE-URL> idp.<AXOFLOW-CONSOLE-BASE-URL> authenticate.<AXOFLOW-CONSOLE-BASE-URL>
Import Axoflow Console certificates to AxoRouter hosts.
On the Axoflow Console host: Run the following command to extract certificates. The AxoRouter host will need these certificates to download the installation binaries and access management traffic.
k3s kubectl get secret -n axoflow kcp-ca -o=jsonpath='{.data.ca\.crt}'|base64 -d > $BASE_HOSTNAME-kcp-ca.crt
k3s kubectl get secret -n axoflow pomerium-certificates -o=jsonpath='{.data.ca\.crt}'|base64 -d > $BASE_HOSTNAME-pomerium-ca.crt
This will create two files in the local folder. Copy them to the AxoRouter hosts.
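For example, assuming a hypothetical AxoRouter host named axorouter1.example.com that you can reach over SSH as the admin user, you could copy the files like this:

# Copy both extracted CA certificates to the AxoRouter host
scp $BASE_HOSTNAME-kcp-ca.crt $BASE_HOSTNAME-pomerium-ca.crt admin@axorouter1.example.com:/tmp/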
On the AxoRouter hosts: Copy the certificate files extracted from the Axoflow Console host.
- On Red Hat-compatible hosts: copy the files to the /etc/pki/ca-trust/source/anchors/ folder, then run sudo update-ca-trust extract. (If needed, install the ca-certificates package.)
- On Ubuntu hosts: copy the files to the /usr/local/share/ca-certificates/ folder, then run sudo update-ca-certificates.

You can check that the certificates are trusted, for example:

curl https://kcp.<your-host.your-domain>