
How to access Apache APISIX Prometheus metrics on Kubernetes

Observability (monitoring functionality) has always played an essential role in system maintenance. A sound monitoring system can help engineers quickly understand the status of services running in production environments and locate problems or give early warning of anomalies when they occur.

Prometheus is a leading open-source project focused on metrics and alerting that has changed the way the world does monitoring and observability. For more information, see Prometheus's official website.

Before you begin#

In an APISIX Ingress environment, make sure that the public-api and prometheus plugins are enabled and that the prometheus pluginAttrs are configured, as shown in the following installation example:

helm repo add apisix https://charts.apiseven.com
helm repo update
helm install apisix apisix/apisix -f values.yaml --create-namespace -n ingress-apisix
values.yaml
gateway:
  type: NodePort

ingress-controller:
  enabled: true
  config:
    apisix:
      serviceNamespace: ingress-apisix

pluginAttrs:
  prometheus:
    enable_export_server: false

plugins:
- api-breaker
- authz-keycloak
- basic-auth
- batch-requests
- consumer-restriction
- cors
- echo
- fault-injection
- file-logger
- grpc-transcode
- hmac-auth
- http-logger
- ip-restriction
- ua-restriction
- jwt-auth
- kafka-logger
- key-auth
- limit-conn
- limit-count
- limit-req
- node-status
- openid-connect
- authz-casbin
- proxy-cache
- proxy-mirror
- proxy-rewrite
- redirect
- referer-restriction
- request-id
- request-validation
- response-rewrite
- serverless-post-function
- serverless-pre-function
- sls-logger
- syslog
- tcp-logger
- udp-logger
- uri-blocker
- wolf-rbac
- zipkin
- traffic-split
- gzip
- real-ip
- ext-plugin-pre-req
- ext-plugin-post-req
- prometheus # enable prometheus
- public-api # enable public-api
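
After the installation completes, you can check that the gateway and the ingress controller pods are running before moving on. This is a quick sanity check assuming the release name apisix and the namespace ingress-apisix used above:

# List the pods created by the Helm release
kubectl get pods -n ingress-apisix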

Access Apache APISIX Prometheus Metrics#

Before starting, please make sure that Apache APISIX (version >= 2.13) and the APISIX Ingress controller are installed and working correctly. APISIX uses the prometheus plugin to expose metrics and integrate with Prometheus, and since version 2.13 it uses the public-api plugin to protect the metrics endpoint. For more information, see the public-api plugin's official document.
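
If you are unsure which APISIX version your gateway is running, you can check it from inside the gateway pod. This is a quick sketch assuming the deployment name apisix, the container name apisix, and the namespace ingress-apisix from the installation above:

# Print the APISIX version from the gateway container
kubectl -n ingress-apisix exec deploy/apisix -c apisix -- apisix version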

Step 1: Enable Prometheus Plugin#

If you also want to monitor Apache APISIX itself, create the following ApisixClusterConfig resource to enable the prometheus plugin cluster-wide.

kubectl apply -f default.yaml
# default.yaml
apiVersion: apisix.apache.org/v2
kind: ApisixClusterConfig
metadata:
  name: default
spec:
  monitoring:
    prometheus:
      enable: true
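
You can verify that the resource has been created with the command below. This is a quick check using the resource name default from the manifest above; if the resource is namespaced in your controller version, add the appropriate -n flag:

# Inspect the cluster-level plugin configuration
kubectl get apisixclusterconfig default -o yaml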

Step 2: Enable public-api Plugin#

Next, set up a basic route, and adjust it to match your own backend service information. The core idea is to use the public-api plugin to protect the route that exposes the Prometheus metrics. For more detailed configuration, refer to the example section of the public-api plugin.

kubectl apply -f prometheus-route.yaml -n ingress-apisix
# prometheus-route.yaml
apiVersion: apisix.apache.org/v2
kind: ApisixRoute
metadata:
  name: prometheus-route
spec:
  http:
    - name: public-api
      match:
        hosts:
          - test.prometheus.org
        paths:
          - /apisix/prometheus/metrics
      backends:
        ## Replace "serviceName" and "servicePort" with your actual Service name and port; the Service must be in the same namespace as the route.
        - serviceName: apisix-admin
          servicePort: 9180
      plugins:
        - name: public-api
          enable: true
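
Before collecting the metrics, you can also confirm that the route has been created in the same namespace. This is a quick check using the names from the manifest above:

# Verify the ApisixRoute resource exists
kubectl get apisixroute prometheus-route -n ingress-apisix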

Step 3: Collect the Metrics#

Use port forwarding to access the apisix-gateway service inside the cluster.

# Forward to 127.0.0.1:9080
kubectl port-forward service/apisix-gateway 9080:80 -n ingress-apisix

You can now fetch the metrics with a simple request:

curl http://127.0.0.1:9080/apisix/prometheus/metrics -H 'Host: test.prometheus.org'

The response contains the exposed metrics:

# HELP apisix_bandwidth Total bandwidth in bytes consumed per service in APISIX
# TYPE apisix_bandwidth counter
apisix_bandwidth{type="egress",route="",service="",consumer="",node=""} 1130
apisix_bandwidth{type="ingress",route="",service="",consumer="",node=""} 517
# HELP apisix_etcd_modify_indexes Etcd modify index for APISIX keys
# TYPE apisix_etcd_modify_indexes gauge
apisix_etcd_modify_indexes{key="consumers"} 0
apisix_etcd_modify_indexes{key="global_rules"} 13
apisix_etcd_modify_indexes{key="max_modify_index"} 13
apisix_etcd_modify_indexes{key="prev_index"} 13
apisix_etcd_modify_indexes{key="protos"} 0
apisix_etcd_modify_indexes{key="routes"} 0
apisix_etcd_modify_indexes{key="services"} 0
apisix_etcd_modify_indexes{key="ssls"} 0
apisix_etcd_modify_indexes{key="stream_routes"} 0
apisix_etcd_modify_indexes{key="upstreams"} 0
apisix_etcd_modify_indexes{key="x_etcd_index"} 13
# HELP apisix_etcd_reachable Config server etcd reachable from APISIX, 0 is unreachable
# TYPE apisix_etcd_reachable gauge
apisix_etcd_reachable 1
# HELP apisix_http_latency HTTP request latency in milliseconds per service in APISIX
# TYPE apisix_http_latency histogram
apisix_http_latency_bucket{type="apisix",route="",service="",consumer="",node="",le="1"} 5
apisix_http_latency_bucket{type="apisix",route="",service="",consumer="",node="",le="2"} 5
apisix_http_latency_bucket{type="apisix",route="",service="",consumer="",node="",le="5"} 5
apisix_http_latency_bucket{type="apisix",route="",service="",consumer="",node="",le="10"} 5
apisix_http_latency_bucket{type="apisix",route="",service="",consumer="",node="",le="20"} 5
apisix_http_latency_bucket{type="apisix",route="",service="",consumer="",node="",le="50"} 5
apisix_http_latency_bucket{type="apisix",route="",service="",consumer="",node="",le="100"} 5
apisix_http_latency_bucket{type="apisix",route="",service="",consumer="",node="",le="200"} 5
apisix_http_latency_bucket{type="apisix",route="",service="",consumer="",node="",le="500"} 5
apisix_http_latency_bucket{type="apisix",route="",service="",consumer="",node="",le="1000"} 5
apisix_http_latency_bucket{type="apisix",route="",service="",consumer="",node="",le="2000"} 5
apisix_http_latency_bucket{type="apisix",route="",service="",consumer="",node="",le="5000"} 5
apisix_http_latency_bucket{type="apisix",route="",service="",consumer="",node="",le="10000"} 5
apisix_http_latency_bucket{type="apisix",route="",service="",consumer="",node="",le="30000"} 5
apisix_http_latency_bucket{type="apisix",route="",service="",consumer="",node="",le="60000"} 5
apisix_http_latency_bucket{type="apisix",route="",service="",consumer="",node="",le="+Inf"} 5
apisix_http_latency_bucket{type="request",route="",service="",consumer="",node="",le="1"} 5
apisix_http_latency_bucket{type="request",route="",service="",consumer="",node="",le="2"} 5
apisix_http_latency_bucket{type="request",route="",service="",consumer="",node="",le="5"} 5
apisix_http_latency_bucket{type="request",route="",service="",consumer="",node="",le="10"} 5
apisix_http_latency_bucket{type="request",route="",service="",consumer="",node="",le="20"} 5
apisix_http_latency_bucket{type="request",route="",service="",consumer="",node="",le="50"} 5
apisix_http_latency_bucket{type="request",route="",service="",consumer="",node="",le="100"} 5
apisix_http_latency_bucket{type="request",route="",service="",consumer="",node="",le="200"} 5
apisix_http_latency_bucket{type="request",route="",service="",consumer="",node="",le="500"} 5
apisix_http_latency_bucket{type="request",route="",service="",consumer="",node="",le="1000"} 5
apisix_http_latency_bucket{type="request",route="",service="",consumer="",node="",le="2000"} 5
apisix_http_latency_bucket{type="request",route="",service="",consumer="",node="",le="5000"} 5
apisix_http_latency_bucket{type="request",route="",service="",consumer="",node="",le="10000"} 5
apisix_http_latency_bucket{type="request",route="",service="",consumer="",node="",le="30000"} 5
apisix_http_latency_bucket{type="request",route="",service="",consumer="",node="",le="60000"} 5
apisix_http_latency_bucket{type="request",route="",service="",consumer="",node="",le="+Inf"} 5
apisix_http_latency_count{type="apisix",route="",service="",consumer="",node=""} 5
apisix_http_latency_count{type="request",route="",service="",consumer="",node=""} 5
apisix_http_latency_sum{type="apisix",route="",service="",consumer="",node=""} 0
apisix_http_latency_sum{type="request",route="",service="",consumer="",node=""} 0
# HELP apisix_http_requests_total The total number of client requests since APISIX started
# TYPE apisix_http_requests_total gauge
apisix_http_requests_total 82
# HELP apisix_http_status HTTP status codes per service in APISIX
# TYPE apisix_http_status counter
apisix_http_status{code="404",route="",matched_uri="",matched_host="",service="",consumer="",node=""} 5
# HELP apisix_nginx_http_current_connections Number of HTTP connections
# TYPE apisix_nginx_http_current_connections gauge
apisix_nginx_http_current_connections{state="accepted"} 2346
apisix_nginx_http_current_connections{state="active"} 1
apisix_nginx_http_current_connections{state="handled"} 2346
apisix_nginx_http_current_connections{state="reading"} 0
apisix_nginx_http_current_connections{state="waiting"} 0
apisix_nginx_http_current_connections{state="writing"} 1
# HELP apisix_nginx_metric_errors_total Number of nginx-lua-prometheus errors
# TYPE apisix_nginx_metric_errors_total counter
apisix_nginx_metric_errors_total 0
# HELP apisix_node_info Info of APISIX node
# TYPE apisix_node_info gauge
apisix_node_info{hostname="apisix-7d6b8577b6-rqhq9"} 1
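
If you run a Prometheus server inside the same cluster and want it to scrape this endpoint directly, a minimal scrape configuration could look like the sketch below. The target address is an assumption based on the apisix-gateway Service in the ingress-apisix namespace; because the route above matches the host test.prometheus.org, you would either drop the hosts match from the ApisixRoute or make sure the scrape request is sent with that host.

# prometheus.yml (snippet) -- a sketch, not an official APISIX configuration
scrape_configs:
  - job_name: "apisix"
    metrics_path: /apisix/prometheus/metrics
    static_configs:
      # In-cluster DNS name of the gateway Service created by the Helm chart above
      - targets: ["apisix-gateway.ingress-apisix.svc.cluster.local:80"]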

Conclusion#

This article describes how to use the public-api plugin to protect the Prometheus metrics endpoint and monitor Apache APISIX. Currently, only some basic configurations are covered. We will continue to polish and upgrade it, add more metrics, and integrate data-plane APISIX metrics to improve your monitoring experience.

Of course, we welcome all interested parties to contribute to the Apache APISIX Ingress Controller project and look forward to working together to make the APISIX Ingress Controller more comprehensive.