Get latency and requests-per-second metrics from Prometheus for applications deployed as containers or container Pods, and expose these metrics via a REST API.
Applications are distinguished mainly by their IP address; for example, each Kubernetes Pod corresponds to one Application.
Currently, it can get applications from the Istio exporter and the Redis exporter. More exporters can be supported by implementing their addons, as sketched below.
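For illustration only, here is a minimal sketch of what such an exporter addon contract might look like; the interface name and methods are assumptions, not the project's actual API:

// Hypothetical addon contract: each exporter (Istio, Redis, ...) knows how to
// query Prometheus and map the results to applications. EntityMetric is the
// response struct defined later in this document.
type Exporter interface {
	// Name identifies the exporter category, for example "Istio" or "Redis".
	Name() string

	// Query fetches the latest metrics from Prometheus and returns one
	// EntityMetric per application, keyed by the application's IP address.
	Query() ([]*EntityMetric, error)
}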
The application metrics are served via a REST API. Access the endpoint /pod/metrics and you will get JSON data like:
{
  "status": 0,
  "message": "Success",
  "data": [{
    "uid": "10.2.1.104",
    "type": 1,
    "labels": {
      "category": "Istio",
      "ip": "10.2.1.104",
      "name": "default/httpbin-74bc86dcd5-dl745"
    },
    "metrics": {
      "latency": 0.0029995380270269887,
      "tps": 0.21142857142857138
    }
  }, {
    "uid": "10.2.2.127",
    "type": 1,
    "labels": {
      "category": "Istio",
      "ip": "10.2.2.127",
      "name": "default/httpbin-74bc86dcd5-5bz22"
    },
    "metrics": {
      "latency": 0.002993016999999995,
      "tps": 0.22285714285714286
    }
  }, {
    "uid": "10.2.2.65",
    "type": 1,
    "labels": {
      "category": "Redis",
      "ip": "10.2.2.65",
      "port": "6379"
    },
    "metrics": {
      "tps": 1.5028571428571427
    }
  }, {
    "uid": "10.2.3.31",
    "type": 1,
    "labels": {
      "category": "Redis",
      "ip": "10.2.3.31",
      "port": "6379"
    },
    "metrics": {
      "tps": 1.5028571428571427
    }
  }]
}
The output JSON format is defined as:

type EntityMetric struct {
	UID     string             `json:"uid"`
	Type    int32              `json:"type,omitempty"`
	Labels  map[string]string  `json:"labels,omitempty"`
	Metrics map[string]float64 `json:"metrics,omitempty"`
}

type MetricResponse struct {
	Status  int             `json:"status"`
	Message string          `json:"message,omitempty"`
	Data    []*EntityMetric `json:"data,omitempty"`
}
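A minimal client sketch (assuming the server is reachable on localhost:8081, the default port used in the run example below) that fetches /pod/metrics and decodes the response into the structs above:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// EntityMetric and MetricResponse are the structs defined above.

func main() {
	resp, err := http.Get("http://localhost:8081/pod/metrics")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var mr MetricResponse
	if err := json.NewDecoder(resp.Body).Decode(&mr); err != nil {
		panic(err)
	}
	for _, e := range mr.Data {
		fmt.Printf("%s (%s): latency=%v tps=%v\n",
			e.UID, e.Labels["category"], e.Metrics["latency"], e.Metrics["tps"])
	}
}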
Prerequisites:
- Kubernetes 1.7.3+
- Istio 0.3+ (with the Prometheus addon)
- Prometheus
Istio metrics, handlers, and rules are defined in a script; deploy it with:
istioctl create -f scripts/istio/ip.turbo.metric.yaml
The script defines:
- Four metrics: pod latency, pod request count, service latency, and service request count.
- One handler: a Prometheus handler that consumes the four metrics and generates metrics in Prometheus format. This server then provides a REST API to fetch those metrics from Prometheus (see the sketch after this list).
- One rule: only the HTTP-based metrics are handled by the defined handler.
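As a rough illustration, this is how a server like this could query Prometheus using the official Go client (github.com/prometheus/client_golang); the PromQL expression and metric name below are placeholders, since the real names come from the metric definitions in scripts/istio/ip.turbo.metric.yaml:

package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	// --promUrl points the server at Prometheus; http://localhost:9090 matches
	// the run example below.
	client, err := api.NewClient(api.Config{Address: "http://localhost:9090"})
	if err != nil {
		panic(err)
	}
	promAPI := v1.NewAPI(client)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Placeholder PromQL: the actual metric name is whatever the Prometheus
	// handler defined in ip.turbo.metric.yaml generates for pod request counts.
	query := `rate(pod_request_count_placeholder[1m])`
	result, warnings, err := promAPI.Query(ctx, query, time.Now())
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}
	// One sample per pod IP, ready to be mapped into EntityMetric entries.
	fmt.Println(result)
}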
Build and run this Go application:
make build
./_output/appMetric --v=3 --promUrl=http://localhost:9090 --port=8081
The server then listens on port 8081; access the REST API with:
curl http://localhost:8081/pod/metrics
{"status":0,"message:omitemtpy":"Success","data:omitempty":[{"uid":"10.0.2.3","type":1,"labels":{"ip":"10.0.2.3","name":"default/curl-1xfj"},"metrics":{"latency":133.2,"tps":12}},{"uid":"10.0.3.2","type":1,"labels":{"ip":"10.0.3.2","name":"istio/music-ftaf2"},"metrics":{"latency":13.2,"tps":10}}]}
Alternatively, run it as a Docker container:
docker run -d -p 18081:8081 beekman9527/appmetric:v2 --promUrl=http://10.10.200.34:9090 --v=3 --logtostderr
This REST API service can also be deployed in Kubernetes:
kubectl create -f scripts/k8s/deploy.yaml
# Access it in Kubernetes by service name:
curl http://appmetric.default:8081/service/metrics