"Kafka Configs Metrics Exporter" for Prometheus allows you to export some of the Kafka configuration as metrics.
Unlike some other systems, Kafka doesn't expose its configuration as metrics.
There are a few useful configuration parameters that are beneficial to collect in order to improve visibility and alerting over Kafka.
A good example is the `log.retention.ms` parameter per topic, which can be integrated into Kafka dashboards to extend their visibility, or used in an alerting query to create smarter alerts or automations based on topic retention.
Therefore, I decided to create a Prometheus exporter to collect those metrics.
Read more on Confluent Blog
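For instance, once the per-topic retention is exposed as a metric, it can drive a Prometheus alerting rule. Below is a minimal sketch; the metric and label names are illustrative assumptions, so check the exporter's `/metrics` output for the actual names:

```yaml
# Illustrative alerting rule; the metric and label names below are assumptions,
# not necessarily what the exporter exposes. Check /metrics for the real names.
groups:
  - name: kafka-topic-config
    rules:
      - alert: TopicRetentionBelowOneDay
        expr: kafka_topic_config_retention_ms < 86400000  # hypothetical metric name
        for: 30m
        labels:
          severity: warning
        annotations:
          summary: "Topic {{ $labels.topic }} retention is below 24 hours"
```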
- Install Go version 1.12+
- Clone this repository

  ```bash
  git clone https://github.com/EladLeev/kafka-config-metrics
  cd kafka-config-metrics
  ```

- Build the exporter binary

  ```bash
  go build -o kcm-exporter .
  ```

- Copy and edit the config file from this repository and point it at your Kafka clusters. Use topic filtering as needed.
- Deploy the binary and run the exporter

  ```bash
  cp ~/my_kcm.toml /opt/kcm/kcm.toml
  ./kcm-exporter
  ```

The exporter will use `/opt/kcm/kcm.toml` as the default config path.
- Clone this repository

  ```bash
  git clone https://github.com/EladLeev/kafka-config-metrics
  cd kafka-config-metrics
  ```

- Build the Docker image

  ```bash
  docker build . -t kcm-exporter
  ```

- Run it with your custom configuration file

  ```bash
  docker run -p 9899:9899 -v ~/my_kcm.toml:/opt/kcm/kcm.toml kcm-exporter:latest
  ```
A Helm chart is available under the `/charts` directory.
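A minimal install sketch from the local chart directory; the chart path and release name below are assumptions, so check `/charts` for the actual chart name and its configurable values:

```bash
# Chart path and release name are illustrative; verify against the /charts directory.
helm install kcm ./charts/kafka-config-metrics
```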
This project tries to follow the Prometheus community best practices:
"You should aim for an exporter that requires no custom configuration by the user beyond telling it where the application is".
In fact, you don't really need to change anything beyond the `clusters` struct.
You can still change more advanced parameters if you wish.
| Stanza | Name | Acceptable Values | Description | Default |
| :----- | :--- | :---------------- | :---------- | :------ |
| global | port | string | What port to bind. Starts with `:`. | ":9899" |
| global | timeout | int | HTTP server timeout. | 3 |
| log | level | string: info, debug, trace | Set the log level. | info |
| log | format | string: text, json | Change the log output to JSON to collect using Splunk or Logstash. | text |
| kafka | min_kafka_version | string: `<KAFKA_VERSION>` | Minimum Kafka version to use on the Sarama Go client. The minimum supported client version is the default. | 0.11.0.0 |
| kafka | admin_timeout | int | The maximum duration the administrative Kafka client will wait for ClusterAdmin operations. | 5 sec |
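Putting these options together, a minimal config file might look like the sketch below. The values shown are the documented defaults; confirm the exact key syntax against the sample config shipped in this repository:

```toml
# Sketch based on the options table above; values shown are the documented defaults.
[global]
port = ":9899"   # address to bind, must start with ":"
timeout = 3      # HTTP server timeout

[log]
level = "info"   # info, debug, or trace
format = "text"  # set to "json" for Splunk / Logstash collection

[kafka]
min_kafka_version = "0.11.0.0"  # minimum Kafka version for the Sarama client
admin_timeout = 5               # ClusterAdmin operation timeout, in seconds
```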
This struct defines the clusters to pull the configs from.
```toml
[clusters]

[clusters.prod]
brokers = ["kafka01-prod"]

[clusters.test]
brokers = ["kafka02-prod", "kafka03-prod"]
topicfilter = "^(qa-|test-).*$"

# Template
[clusters.<NAME>]
brokers = ["<BROKER_1>", "<BROKER_2>"]
topicfilter = "<REGEX_FILTER>"
```
`topicfilter` allows you to filter topics based on a regex.
For example, `"^(qa-|test-).*$"` filters all topics that start with `qa-` or `test-`.
When adding this exporter to the Prometheus targets, bear in mind that topic configs do not change that often in most use cases.
Setting a higher `scrape_interval`, say 10 minutes, will lead to a lower request rate against the Kafka cluster while keeping the exporter fully functional.
```yaml
- job_name: 'kcm'
  scrape_interval: 600s
  static_configs:
    - targets: ['kcm-prod:9899']
```
- `/metrics` - Metrics endpoint
- `/-/healthy` - Returns 200 and should be used to check the exporter's health.
- `/-/ready` - Returns 200 when the exporter is ready to serve traffic.
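A quick way to sanity-check a running exporter, assuming the default `:9899` port:

```bash
# Probe the health and readiness endpoints, then peek at the exported metrics.
curl -i http://localhost:9899/-/healthy
curl -i http://localhost:9899/-/ready
curl -s http://localhost:9899/metrics | head
```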
Please read CONTRIBUTING.md for details on submitting pull requests.
This project is licensed under the Apache License - see the LICENSE.md file for details.