Exponential Histogram support to the Prometheus exporter #6015
Conversation
Codecov Report

```diff
@@             Coverage Diff              @@
##               main    #6015      +/-   ##
============================================
- Coverage     91.11%   90.94%     -0.18%
+ Complexity     5736     5676        -60
============================================
  Files           628      626         -2
  Lines         16810    16637       -173
  Branches       1662     1649        -13
============================================
- Hits          15317    15131       -186
- Misses         1028     1050        +22
+ Partials        465      456         -9
```

☔ View full report in Codecov by Sentry.
As mentioned in the comment, the memory cost here is severe, and it collides head-on with the REUSABLE_DATA MemoryMode. I'm in the middle of implementing it, and this change of data conversion, which costs lots of allocations and, worse, hands control over to a library outside the scope of this project, makes completing MemoryMode impossible.
My suggestion, as proposed in your proposal thread, is to implement this within the existing implementation. It's a small amount of code, zero disruption to the OTel Java SDK, and most importantly: no external dependencies. My two comments in the proposal explain it in more detail.
Thanks for looking into this @asafm.
I don't think this is true. Adding support for Prometheus native histograms is not possible within the existing implementation, as they require a different data model and exposition format. This is a major refactoring in any case. Moreover, converting OpenTelemetry's exponential histograms to Prometheus' native histograms is not trivial.
I actually think the upstream dependency for the Prometheus endpoint is a good thing. First, it's not a random 3rd-party dependency: Prometheus and OpenTelemetry are both CNCF projects, and we pledged mutual compatibility in CNCF governance. The previous implementation of the Prometheus exporter copied the upstream Prometheus code into opentelemetry-java. Moreover, from a community perspective, using a common Prometheus metrics implementation will help keep the communities aligned. If we use a common Prometheus metrics implementation, we will quickly recognize incompatibilities with future changes. If OpenTelemetry keeps using its own copy of the upstream code, there is a higher risk of slight differences going unnoticed.
I'm not convinced that caching objects in memory results in better GC performance than allocating short-lived objects. After all, the GC marks live objects, and the more live objects you have, the longer the mark phase. However, if there is a way to improve GC performance in the Prometheus exporter, it would be better to fix it upstream so that both communities can benefit, rather than copying the code into opentelemetry-java. Anyway, I would appreciate it if you could consider this PR. The only alternative I see is to duplicate the upstream implementation and copy it into opentelemetry-java.
You are correct about the exposition format; your PR title is a bit misleading. It's not trivial, but it's not rocket science or any mad algorithm either.
Regarding (2): Converting an exponential histogram to a Prometheus native histogram in your PR is 20 lines of code - it doesn't seem overly complex. I looked at the Prometheus metrics library you wish to use: it's insanely expensive in memory allocations. It generates tons of objects as it translates to another model internally. My intention with MemoryMode.REUSABLE_DATA, which was detailed in my proposal, is to make sure the exporters merely take the OTel aggregated data structures and write them to the wire - no middleman objects. This is truly the case for the current Prometheus exporter. The OTLP exporter needs to be refactored to get there. Adding another exporter that lives outside this project makes that almost impossible to do.
Protocols have a tendency to become stable and not evolve much, especially metric ones. So saying "increasingly difficult to maintain" is simply not true, IMO. Protocols don't change often.
I beg to differ. We have seen it in production in Apache Pulsar - it's an absolute killer. Again, in my opinion, merging this PR kills MemoryMode, which I was already working on before you submitted your proposal. My suggestion: protocols are simple, hence it's not rocket science to have OTel implement the protobuf-based exposition format. They also don't change often, so maintenance is easy. OTel is a super impactful project that will be used in all types of applications, and as such it should remain zero-dependency, as that ensures it has full control, especially in light of future development it will introduce.
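For context on the conversion both sides are referring to: OTel exponential histograms and Prometheus native histograms use the same bucket-boundary scheme, with boundaries at powers of base = 2^(2^-scale), so the bucket mapping reduces to an index shift. A minimal sketch with hypothetical names, not the converter's actual API:

```java
final class BucketIndexSketch {
  // OTel exponential histogram bucket i covers (base^i, base^(i+1)], while a
  // Prometheus native histogram indexes a bucket by its upper bound base^i.
  // The bucket whose upper bound is base^(i+1) therefore has index i + 1.
  static int toPrometheusBucketIndex(int otelBucketIndex) {
    return otelBucketIndex + 1;
  }
}
```

The remaining work - carrying over counts and the zero bucket, and clamping larger OTel scales down to Prometheus' supported schema range of -4 to 8 - is similarly mechanical, which is why the conversion itself is short even though the surrounding refactoring is not.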
Having a dependency on https://github.com/prometheus/client_java is potentially a good thing. It's the official Prometheus Java client, and so I wouldn't generally expect to try to avoid it when exporting to Prometheus.
I think it would be great to explore this further.
I hadn't thought about this problem from that perspective. We actually take this approach with all the other built-in exporters:
However, Prometheus is the exception. As @fstab points out ("the previous implementation of the Prometheus exporter copied the upstream Prometheus code into opentelemetry-java"), the current OpenTelemetry code was based on the Prometheus TextFormat, as referenced in Serializer. But if you look at #4178, where the Prometheus serialization was originally added, part of the intent was definitely performance-oriented, to avoid extra allocations:
I think the philosophical question we need to answer is: do we have an obligation to hold ourselves to a different or higher performance bar for the built-in exporters we provide than the upstream implementations? I think probably not. But I still want to have a good answer for performance-centric use cases. In my opinion, the answer is to lean into optimizing OTLP (as I discussed in this comment) and to depend on Prometheus support for native OTLP ingest. @asafm counters that the road to native OTLP ingest could be a long one, pointing to other systems in the Prometheus ecosystem like M3DB, Cortex, and VictoriaMetrics. I did a quick search, and it looks like OTLP support is not on the radar of M3DB, is likely to be supported in Cortex, and ❗is supported in VictoriaMetrics❗ - so it might not actually take that long.
Thanks @jack-berg.
For completeness:
Thanks @jack-berg for the detailed response; it added the context I lacked here. In light of the other protocols using a 3rd-party library, it makes sense for Prometheus to follow suit. My personal preference, in answer to your philosophical question, is a bit different, but that doesn't change the fact that I agree with the chosen approach. OTel, as it stands today, is IMO a 10x better metrics library than any other, at least in the Java ecosystem. As such, it could be even better by having its implementation set a higher bar performance-wise. It's a bit like the Apple ecosystem: you own everything in the chain, hence it ultimately performs better, with the disadvantages that brings. One point that also made me lean toward your approach is that eventually the industry will consolidate into a single protocol and set of libraries. The OTLP protocol and the OTel SDK will become the de facto standard, hence usage of the Prometheus exporter will eventually decline, and it will not be the go-to exporter. Practically speaking, this means that in the Apache Pulsar documentation I'll recommend not configuring the Prometheus exporter and opting for OTLP exporters instead, whether targeted at the OTel Collector or directly at a vendor supporting OTLP.
Thanks a lot for working on this - I saw several things in the code that significantly simplify understanding of the code and make it less brittle.
```java
 * @see <a href="https://prometheus.io/docs/practices/naming/#base-units">Prometheus best practices
 *     for units</a>
 */
final class PrometheusUnitsHelper {
```
Apologies for the delayed response @jack-berg, this notification got missed - I will take a look and raise any concerns I find.
Thanks @psx95!
```java
        // Null unit
        Arguments.of(null, null),
        // Misc - unit cleanup - no case match special char
        Arguments.of("$1000", "1000"));
```
Can we adapt/retain this test to confirm the new behavior of `PrometheusUnitsHelper`?
Good point. I restored the test, but I also changed some tests that I think were wrong:
- `Arguments.of("m/m", "meters_per_minute")` → `Arguments.of("m/min", "meters_per_minute")`
- `Arguments.of("W/w", "watts_per_week")` → `Arguments.of("W/wk", "watts_per_week")`
- `Arguments.of("TBy/y", "terabytes_per_year")` → `Arguments.of("TBy/a", "terabytes_per_year")`
- `Arguments.of("g/g", "grams_per_g")` → `Arguments.of("g/x", "grams_per_x")` (this is supposed to test an unknown "per" unit, but `g` is not unknown)
- Removed all tests for removing special characters, like `Arguments.of("__test $/°C", "test_per_C")`, because I could not find any reference saying these characters should be removed.

Please let me know if I got it wrong.
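For reference, here is a toy sketch of the kind of mapping under test - hypothetical names and a deliberately tiny unit table in the collector's style (with a separate "per" unit list; the discussion below notes the SDK need not restrict itself this way), not the actual `PrometheusUnitsHelper` implementation:

```java
import java.util.Map;

final class UnitSketch {
  // Tiny illustrative subset of UCUM-style unit abbreviations.
  private static final Map<String, String> UNITS =
      Map.of("m", "meters", "W", "watts", "TBy", "terabytes", "g", "grams");
  private static final Map<String, String> PER_UNITS =
      Map.of("min", "minute", "wk", "week", "a", "year");

  // "m/min" -> "meters_per_minute"; unknown parts pass through,
  // e.g. "g/x" -> "grams_per_x".
  static String toPrometheusUnit(String ucumUnit) {
    int slash = ucumUnit.indexOf('/');
    if (slash < 0) {
      return UNITS.getOrDefault(ucumUnit, ucumUnit);
    }
    String numerator = ucumUnit.substring(0, slash);
    String denominator = ucumUnit.substring(slash + 1);
    return UNITS.getOrDefault(numerator, numerator)
        + "_per_"
        + PER_UNITS.getOrDefault(denominator, denominator);
  }
}
```

With this table, `toPrometheusUnit("m/min")` yields `meters_per_minute`, `W/wk` yields `watts_per_week`, and the unknown denominator in `g/x` passes through as `grams_per_x`, matching the corrected test expectations above.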
This seems reasonable to me. @psx95 please take a look if you can.
I just wanted to confirm the following:
- In the collector-contrib, there is a separate list of "per" units, which is why the "per" test cases were using those units specifically.
- I do not see "wk" as a specified unit in the collector. Are we okay with the discrepancy between collector and SDK?
- In the collector, certain units - for instance "mo" and "y" - are only used for "per" unit cases, i.e. only in the second part of the metric unit when split on "/". IIUC, the current implementation might use the mappings for any part of the unit. Is this intentional?
- Regarding removing unsupported symbols: this came from the OTel spec doc, OTLP → Prometheus. It mentions that the metric name is supposed to match the regex `[a-zA-Z_:]([a-zA-Z0-9_:])*`, which makes symbols like `$` invalid; they are replaced with `_` instead.
- In places where the spec was ambiguous, the unit conversion implementation in #5400 (Append unit to prometheus metric names) was done to be consistent with the collector implementation.
> I do not see "wk" as a specified unit in the collector. Are we okay with the discrepancy between collector and SDK?

While I like the idea of symmetry with the collector, the `wk` unit is defined in UCUM, so I'm OK including it. UCUM is the standard the metric semantic conventions reference for defining units.

> In the collector-contrib, there is a separate list of "per" units, which is why the "per" test cases were using those units specifically.

> In the collector, certain units - for instance "mo" and "y" - are only used for "per" unit cases, i.e. only in the second part of the metric unit when split on "/". IIUC, the current implementation might use the mappings for any part of the unit. Is this intentional?

The specification doesn't include any language that restricts the `*_per_*` conversion to a subset of units, so I think it's fine. It also makes the behavior simpler to understand.

> Regarding removing unsupported symbols: this came from the OTel spec doc, OTLP → Prometheus. It mentions that the metric name is supposed to match the regex `[a-zA-Z_:]([a-zA-Z0-9_:])*`, which makes symbols like `$` invalid; they are replaced with `_` instead.

Metric names are still sanitized according to those rules. However, we now delegate to the Prometheus client library: https://github.com/open-telemetry/opentelemetry-java/blob/main/exporters/prometheus/src/main/java/io/opentelemetry/exporter/prometheus/Otel2PrometheusConverter.java#L448

> In places where the spec was ambiguous, the unit conversion implementation in #5400 was done to be consistent with the collector implementation.

If we've diverged in any ways that aren't discussed here, I'd love to resolve those inconsistencies.
I see, thanks for clearing this up.
One thing I did notice is that the definition of illegal characters differs between the Prometheus client library and the regex defined in the OTel spec (the OTel spec does not allow `.`):
- Regex allowed by the OTel spec: `[a-zA-Z_:]([a-zA-Z0-9_:])*`
- Regex allowed by the Prometheus client library: `^[a-zA-Z_.:][a-zA-Z0-9_.:]+$`
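To make the difference concrete, here is a toy sanitizer for each regex - illustrative only, not the code used by either project:

```java
final class NameSanitizerSketch {
  // OTel spec: [a-zA-Z_:]([a-zA-Z0-9_:])* - dots are NOT allowed.
  static String sanitizeOtelSpec(String name) {
    String s = name.replaceAll("[^a-zA-Z0-9_:]", "_");
    return s.isEmpty() || Character.isDigit(s.charAt(0)) ? "_" + s : s;
  }

  // Prometheus client library: ^[a-zA-Z_.:][a-zA-Z0-9_.:]+$ - dots ARE allowed.
  static String sanitizePrometheusClient(String name) {
    String s = name.replaceAll("[^a-zA-Z0-9_.:]", "_");
    return s.isEmpty() || Character.isDigit(s.charAt(0)) ? "_" + s : s;
  }
}
```

So a name like `http.server.duration` survives the client-library rule unchanged, while under the OTel-spec regex its dots would become underscores; the comment below explains why the client library is deliberately more permissive.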
> If we've diverged in any ways that aren't discussed here, I'd love to resolve those inconsistencies.

I think we should add back the tests that were checking for the removal of special characters and of leading/trailing `_` in the unit.
- Regarding special characters: while this is not explicitly specified in the spec, it is implicitly indicated by the regex used in the Prometheus client library.
- Regarding leading/trailing `_`: removing these is not mandated by the spec, but the unit conversion implementation in the collector does not allow any leading or trailing `_`.
> illegal characters

Here's the context: the Prometheus community decided (see the 2023-09 dev summit) to allow UTF-8 characters in metric and attribute names for better OTel compatibility.
This is not implemented in the Prometheus server yet, so today the Prometheus exposition format will only produce metric and attribute names matching `[a-zA-Z_:]([a-zA-Z0-9_:])*` (without the dot).
However, we extended the Prometheus client library to allow dots in metric names, hence the new regexp that allows dots.
The idea is: users can use dots when creating metrics, so they can use metric names compliant with OpenTelemetry's semantic conventions today. Dots are implicitly converted to underscores in the exposition format. But as soon as the Prometheus server and exposition format support dots, users can enable them without having to re-instrument their applications.
TL;DR: while dots are allowed when creating metrics, they are implicitly replaced with underscores when exposing metrics, at least until dots become officially supported by the Prometheus server and exposition format.
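A small example of that behavior, assuming the client_java 1.x builder API (the exposition line shown is illustrative):

```java
import io.prometheus.metrics.core.metrics.Gauge;
import io.prometheus.metrics.model.registry.PrometheusRegistry;

public final class DottedNameExample {
  public static void main(String[] args) {
    PrometheusRegistry registry = new PrometheusRegistry();

    // A dotted, semconv-style name is accepted at registration time...
    Gauge activeRequests =
        Gauge.builder()
            .name("http.server.active.requests")
            .register(registry);
    activeRequests.set(42);

    // ...but today's exposition formats replace the dots with underscores:
    // http_server_active_requests 42.0
  }
}
```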
Thanks for the context @fstab.
Thanks a lot for the review @jack-berg. I worked through the comments, rebased, and pushed a commit addressing the review feedback. Please review again.
There are still a number of unaddressed comments (maybe they were hidden in the GitHub UI or something?), but the only non-trivial one is the question of whether to use the default registry by default.
Oh my, I didn't expand the "hidden conversations". Sorry for that, I'll work through them ASAP.
I worked through the rest of the comments; sorry for missing the hidden conversations previously.
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Looks great to me. Thanks!
Heads up @open-telemetry/java-approvers - I'd like to get this merged for the January 1.34.0 release, so please leave any comments/feedback you have with enough time allotted to respond.
Will merge tomorrow if there are no additional comments.
```java
            metricData, PrometheusType.forMetric(metricData)));
  }

  private static Stream<Arguments> provideRawMetricDataForTest() {
```
IIUC, these test cases are still valid. If that is indeed the case, is there a way we could retain them? This is mostly because the OTel spec doc places some constraints on metric names depending on the type of metric.
Check out #6138 where I restore these test cases.
@jack-berg, @fstab I left a couple of comments (mostly around tests). Let me know if they make sense to you. Apologies for the late response.
This PR adds support for exponential histograms in the Prometheus exporter as defined in the spec:

About the Implementation

The upstream Prometheus Java client library recently introduced a `prometheus-metrics-model` module, which implements immutable, read-only `MetricSnapshots`. With this PR, we introduce an `Otel2PrometheusConverter` that converts OpenTelemetry's `MetricData` to Prometheus' `MetricSnapshots`. That way, we can use the upstream HTTP exporter for exposing the `MetricSnapshots` in various exposition formats.
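As a usage sketch (not from the PR itself): the wiring below uses public opentelemetry-java APIs to expose metrics through this exporter, with a view opting histogram instruments into the base2 exponential aggregation that this PR maps to Prometheus native histograms. Names and defaults (e.g. port 9464) are the conventional ones and may differ in your setup:

```java
import io.opentelemetry.api.metrics.DoubleHistogram;
import io.opentelemetry.exporter.prometheus.PrometheusHttpServer;
import io.opentelemetry.sdk.metrics.Aggregation;
import io.opentelemetry.sdk.metrics.InstrumentSelector;
import io.opentelemetry.sdk.metrics.InstrumentType;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.opentelemetry.sdk.metrics.View;

public final class ExponentialHistogramExample {
  public static void main(String[] args) throws InterruptedException {
    SdkMeterProvider meterProvider =
        SdkMeterProvider.builder()
            // The Prometheus exporter is a MetricReader backed by the upstream HTTP server.
            .registerMetricReader(PrometheusHttpServer.builder().setPort(9464).build())
            // Route all histogram instruments to the exponential aggregation.
            .registerView(
                InstrumentSelector.builder().setType(InstrumentType.HISTOGRAM).build(),
                View.builder()
                    .setAggregation(Aggregation.base2ExponentialBucketHistogram())
                    .build())
            .build();

    DoubleHistogram duration =
        meterProvider.get("example").histogramBuilder("request.duration").build();
    duration.record(0.42);

    // Scrape http://localhost:9464/metrics. With protobuf content negotiation,
    // the exponential histogram is exposed as a Prometheus native histogram.
    Thread.sleep(60_000);
  }
}
```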
Benefits

- … `opentelemetry-java`.
- … `opentelemetry-java` once this is done, because it will be sufficient to update the HTTP server dependency.
- The `PrometheusMetricReader` class implements the Prometheus `MultiCollector` interface, which means it can be registered as a collector with the `PrometheusRegistry`. This makes it very easy to mix and match Prometheus instrumentation with OpenTelemetry SDK instrumentation when transitioning (see the sketch below).
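To make the mix-and-match point concrete, here is a sketch of registering the reader alongside native Prometheus instrumentation. The `PrometheusMetricReader` constructor shown is an assumption based on the description above, as are the client_java 1.x registry and HTTP server APIs:

```java
import io.opentelemetry.exporter.prometheus.PrometheusMetricReader;
import io.opentelemetry.sdk.metrics.SdkMeterProvider;
import io.prometheus.metrics.core.metrics.Counter;
import io.prometheus.metrics.exporter.httpserver.HTTPServer;
import io.prometheus.metrics.model.registry.PrometheusRegistry;

public final class MixedInstrumentationExample {
  public static void main(String[] args) throws java.io.IOException {
    PrometheusRegistry registry = new PrometheusRegistry();

    // The reader is a Prometheus MultiCollector, so it registers like any collector.
    // (Constructor signature assumed for illustration.)
    PrometheusMetricReader otelReader = new PrometheusMetricReader(/* otelScopeEnabled= */ true);
    registry.register(otelReader);
    SdkMeterProvider meterProvider =
        SdkMeterProvider.builder().registerMetricReader(otelReader).build();

    // Legacy, hand-rolled Prometheus instrumentation lives in the same registry.
    Counter legacyRequests =
        Counter.builder().name("legacy_requests").register(registry);
    legacyRequests.inc();

    // One scrape endpoint serves both during the transition.
    HTTPServer server = HTTPServer.builder().port(9464).registry(registry).buildAndStart();
  }
}
```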
Dependencies

This PR adds a dependency on `io.prometheus:prometheus-metrics-httpserver`. However, the upstream module does not introduce any transitive dependencies outside the `io.prometheus` group. The Prometheus protobuf support is shaded and does not introduce any external dependency. Here's the dependency tree as produced by `./gradlew :exporters:prometheus:dependencies`:

Related

This PR implements #5940 and fixes #5997. It will also make it easy to implement #6013.