
Proper Gauge support #8

Open
RubenHoms opened this issue May 28, 2018 · 13 comments
@RubenHoms

Right now gauges don't work for gathering metrics: a gauge is implemented as a counter, with no real way to create or reset one.

As of now I don't see how this would work in this aggregation gateway, since gauges should be individually settable (i.e. not aggregated). For example, if client1 writes the value 5 to a gauge, you don't want client2 overwriting it with 6 when it writes its own value.

So maybe this is by design, but is there any way to do gauges?
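
To make the failure mode concrete, here is a toy Python sketch (not the gateway's actual code) of what happens today when two clients push the same gauge: the values are summed like a counter, so the scraped result matches neither client.

```python
# Toy illustration only -- not the gateway's actual implementation.
# Today the gateway merges pushed gauge samples by summing them, the same
# way it merges counters, so the aggregated value reflects neither client.

def aggregate_like_a_counter(pushes):
    """Sum every pushed sample per metric name (current gateway behaviour)."""
    totals = {}
    for client, name, value in pushes:
        totals[name] = totals.get(name, 0) + value
    return totals

pushes = [
    ("client1", "queue_depth", 5.0),
    ("client2", "queue_depth", 6.0),
]

print(aggregate_like_a_counter(pushes))
# {'queue_depth': 11.0} -- neither client ever reported 11
```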

@sebbel

sebbel commented Aug 7, 2018

prom-aggregation-gateway sums up gauges just like counters. I'm not too deep into Prometheus, but wouldn't it be more logical to return some sort of mean value when multiple batch jobs send a gauge? Then again, you would not catch the outliers.
Maybe gauges could be sent with some kind of option label attached, such as takeHighest or takeLowest, which would then be filtered out again when scraped by Prometheus.
DISCLAIMER: Prometheus newbie :p

@Sovetnikov

A solution in principle has already been found in the Prometheus Python client's multiprocess mode: https://github.com/prometheus/client_python#multiprocess-mode-gunicorn

Gauges can carry an additional "mode":

Gauges have several modes they can run in, which can be selected with the multiprocess_mode parameter.

'all': Default. Return a timeseries per process alive or dead.
'liveall': Return a timeseries per process that is still alive.
'livesum': Return a single timeseries that is the sum of the values of alive processes.
'max': Return a single timeseries that is the maximum of the values of all processes, alive or dead.
'min': Return a single timeseries that is the minimum of the values of all processes, alive or dead.
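
For reference, a minimal sketch of how the Python client expresses this. It assumes PROMETHEUS_MULTIPROC_DIR is exported to a shared writable directory before the worker processes start; the multiprocess_mode parameter is part of the client's Gauge API.

```python
# Minimal sketch of the Python client's per-gauge aggregation modes.
# Assumes PROMETHEUS_MULTIPROC_DIR points at a writable directory shared by
# all worker processes before the interpreter starts.
from prometheus_client import CollectorRegistry, Gauge, generate_latest, multiprocess

# Each gauge declares how its per-process values should be merged on scrape.
in_progress = Gauge("in_progress_requests", "Requests currently being handled",
                    multiprocess_mode="livesum")
queue_depth = Gauge("queue_depth", "Items waiting in the queue",
                    multiprocess_mode="max")

in_progress.inc()
queue_depth.set(42)

# At scrape time a MultiProcessCollector merges the per-process values
# according to each gauge's declared mode.
registry = CollectorRegistry()
multiprocess.MultiProcessCollector(registry)
print(generate_latest(registry).decode())
```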

@snopoke

snopoke commented Jun 12, 2020

Those modes make sense for multiprocess mode, but not so much for the gateway. Facing this same problem, we are looking at encoding the aggregation method into the metric's help string: LarkTechnologies#2
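
As a rough, hypothetical illustration of that idea (the actual marker syntax is defined in the linked LarkTechnologies branch and may differ from what is shown here), the pushing client would embed the desired aggregation in the metric's help text, and the gateway would parse it back out when merging pushed samples:

```python
# Hypothetical illustration only -- the real marker syntax lives in the
# LarkTechnologies branch linked above and may differ from this sketch.
from prometheus_client import CollectorRegistry, Gauge, generate_latest

registry = CollectorRegistry()

# The idea: the client embeds the desired aggregation in the help string,
# so the pushed exposition carries a hint the gateway can parse, e.g.
#   # HELP queue_depth Items waiting in the queue (aggregate: max)
queue_depth = Gauge("queue_depth", "Items waiting in the queue (aggregate: max)",
                    registry=registry)
queue_depth.set(42)

print(generate_latest(registry).decode())
```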

@odinho

odinho commented Nov 24, 2020

@snopoke That looks nice. Maybe open that as a PR here?

@snopoke

snopoke commented Nov 24, 2020

@odinho that would include all the changes from the LarkTechnologies fork as well: https://github.com/weaveworks/prom-aggregation-gateway/compare/master...dimagi:sk/guage-aggregation?expand=1

Is that what you want or just the gauge aggregation changes?

@odinho

odinho commented Nov 24, 2020

> @odinho that would include all the changes from the LarkTechnologies fork as well: https://github.com/weaveworks/prom-aggregation-gateway/compare/master...dimagi:sk/guage-aggregation?expand=1
>
> Is that what you want or just the gauge aggregation changes?

I don't have any say in this project. :) I just read through the PR you had there, and to me this seems like a nice feature to have in the project. There's a higher chance it'll be included if the gauge support changes (just those changes) are opened as a PR here than if they aren't.

So no guarantees that it won't be shot down in review, but as a user I'd be happy about this. (Of course the configuration approach is a bit hacky, but on the other hand it's neat, since it works within what's already there. If the metrics were defined on the server instead, the preference could be set there, but that's not how it works.)

@lgoral

lgoral commented Feb 10, 2021

Hello guys,
Is there any chance of getting this released? I was using your product, but I found this bug and unfortunately need to go back to prometheus-pushgateway :(

@maxpain

maxpain commented Jun 18, 2021

any news?

@nicacioliveira

Hello, do we have any expectation of adding this fix to the code?

I reviewed that code and it looks good to me.

I really need this, and if possible I can open a PR with this solution, giving credit to @snopoke.

@snopoke

snopoke commented Jun 6, 2022

> Hello, do we have any expectation of adding this fix to the code?
>
> I reviewed that code and it looks good to me.
>
> I really need this, and if possible I can open a PR with this solution, giving credit to @snopoke.

Please go ahead.

@ltagliamonte-dd

Folks, any update on this? Cc @tomwilkie @bboreham

@bboreham
Contributor

Neither Tom nor I work on this project, or for Weaveworks, now.
Maybe @dholbach can comment.

@kingdonb

If someone wants this feature merged here, the straightest path to getting it merged is to open a PR directly on this repo. I see there still hasn't been a PR opened on this repo, only one opened against a fork at LarkTechnologies (which also hasn't received any attention to speak of).

Be forewarned: there hasn't been a maintainer working on this repo in several years, and you could be volunteering yourself as the next one! ☠️ 🏎️ ⚠️
