# Question: App-Metrics on success or failure of an invocation #245
Currently, MicroProfile Metrics does not have a simple way to track how often a method fails with different exceptions. I can bring this issue up for the next MP Metrics call and see what the opinions are. It seems doable from the API side (with annotations), but it may not be very configurable. At the moment, tracking exceptions requires additional code in the business logic. Naive example:
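A minimal sketch of such hand-rolled tracking (class, method, and metric names are all illustrative, not from the comment) using the plain MetricRegistry API:

```java
import javax.inject.Inject;
import org.eclipse.microprofile.metrics.MetricRegistry;

public class CustomerService {

    static class Customer { /* placeholder for the example entity */ }

    @Inject
    MetricRegistry registry;

    public Customer createCustomer(Customer cust) {
        try {
            return doCreateCustomer(cust);
        } catch (Exception e) {
            // one hand-maintained counter per exception class
            registry.counter("createCustomer.failures." + e.getClass().getSimpleName()).inc();
            throw e;
        }
    }

    private Customer doCreateCustomer(Customer cust) {
        // validation and persistence elided
        return cust;
    }
}
```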
---

@raymondlam Thank you for your answer and the offer to bring the idea up on a MP Metrics call. What I could think of is something like this:

```java
public interface CustomMetric {
    public void onBeforeEach(MetricRegistry mr);
    public void onAfterEach(MetricRegistry mr);
    public void onAfterSuccess(MetricRegistry mr);
    public void onAfterError(MetricRegistry mr, Exception e);
}
```

Then we could implement the imperative logic given in your example outside an annotated business method, in the respective callback methods. An interceptor could drive the callbacks roughly like this:

```java
Object result = null;
Exception error = null;
// get an instance of a CustomMetric impl.
CustomMetric customMetric = getCustomMetricFromAnnotationOf(method);
try {
    customMetric.onBeforeEach(mr);
    result = method.proceed();
} catch (Exception e) {
    error = e;
    throw e;
} finally {
    if (error != null) {
        customMetric.onAfterError(mr, error);
    } else {
        // alternatively: customMetric.onAfterSuccess(mr, result);
        customMetric.onAfterSuccess(mr);
    }
    customMetric.onAfterEach(mr);
}
```

**Related Work**

This draft is based on an

**Addendum: More Advanced Use Cases**

Sometimes it could also be interesting to pass the non-exceptional result of an invocation. Then something like `onAfterSuccess(mr, result)` (see the commented alternative in the interceptor sketch above) could be useful.

*Edit:* First draft meant to refer to
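As an illustration of the proposal (not from the thread), a hypothetical application-side implementation of the CustomMetric interface could count outcomes like this (metric names invented):

```java
import org.eclipse.microprofile.metrics.MetricRegistry;

// Hypothetical implementation of the proposed CustomMetric interface:
// counts total invocations, successes, and failures per exception class.
public class SuccessFailureMetric implements CustomMetric {

    @Override
    public void onBeforeEach(MetricRegistry mr) {
        // nothing to do before the invocation in this sketch
    }

    @Override
    public void onAfterEach(MetricRegistry mr) {
        mr.counter("invocations.total").inc();
    }

    @Override
    public void onAfterSuccess(MetricRegistry mr) {
        mr.counter("invocations.success").inc();
    }

    @Override
    public void onAfterError(MetricRegistry mr, Exception e) {
        mr.counter("invocations.failed." + e.getClass().getSimpleName()).inc();
    }
}
```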
---

What about a change that implicitly works with labels,

which would then end up in a metric "bla" without a label for all invocations, and another metric "bla" with an additional label `_reason=fail` that only gets bumped in the exception case. This could also have an attribute 'success',

which then creates a label `_reason=success` that only gets bumped when the method does not throw an exception.

Actually the attributes could also be

Btw: is this limited to

Also: should e.g.

EDIT: I did not get that you want to catch and count individual Exceptions thrown out of the called method.
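A purely illustrative sketch of the label idea; the annotation and its `failure`/`success` attributes are invented here and do not exist in MP Metrics:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Invented annotation for illustration only (not part of the spec):
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface FailureAwareCounted {
    String name();
    boolean failure() default false;   // adds a _reason=fail label, bumped on exceptions
    boolean success() default false;   // adds a _reason=success label, bumped on normal returns
}

class BusinessBean {
    // All invocations bump "bla"; throwing invocations additionally bump
    // "bla"{_reason="fail"}, non-throwing ones bump "bla"{_reason="success"}.
    @FailureAwareCounted(name = "bla", failure = true, success = true)
    public void doBusinessWork() {
        // business logic that may throw
    }
}
```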
---

@about-code How do you envision dealing with Exceptions that wrap other exceptions (at arbitrarily deep levels)?
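One conceivable approach (my sketch, not an answer given in the thread): unwrap to the root cause before deriving a counter name or tag value:

```java
final class ExceptionUnwrapping {

    // Walks the cause chain to the innermost exception; the guard stops on a
    // direct self-referencing cause.
    static Throwable rootCause(Throwable t) {
        Throwable current = t;
        while (current.getCause() != null && current.getCause() != current) {
            current = current.getCause();
        }
        return current;
    }

    // e.g. count by root cause instead of the wrapper:
    // registry.counter("failures." + rootCause(e).getClass().getSimpleName()).inc();
}
```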
---

@pilhuhn Thank you for your comments and thoughts.

I think that, compared to the interface I described earlier, your attribute-based approach would probably be almost as expressive. It would allow for counting things differently based on the coarse-grained result (success/failure). The exception case is interesting, though, and could be a differentiator.

If your proposal is to let an MP

With the interface above I'd imagine an app developer would naively implement the
---

**Return Values and Call Parameters for Application Metrics?**

In my interface proposal above I intentionally skipped most of the stuff which would be even more interesting from a business perspective than just success or failure or particular exception types. (Please let me know whether I should open another issue if you feel this is getting off topic.)

**Example: Deriving metrics from returned values**

Imagine we extend the above interface to something like

```java
public interface CustomMetric<T_RESULT> {
    public void onBeforeEach(MetricRegistry mr);
    public void onAfterEach(MetricRegistry mr, T_RESULT result);
    public void onAfterSuccess(MetricRegistry mr, T_RESULT result);
    public void onAfterError(MetricRegistry mr, Exception e);
}
```

where `T_RESULT` corresponds to the return type of the annotated business method, e.g.

```java
@Metrics
@CustomMetric(type=MyPopularCustomersMetric.class)
public Customer readCustomer(int custId) {...}
```

The same metric class applied to a method with an incompatible return type would be a type mismatch:

```java
@Metrics
@CustomMetric(type=MyPopularCustomersMetric.class)
public Dogs readDogs(int animalId) {...}
```

We'd probably need an annotation processor for compile-time checks like these.

**Example: Deriving metrics from method parameter values**

To answer questions like *when do customers of a particular age attempt to sign up*, an application might want to derive metrics from Customer entities passed into a business method.

**Example: Find out at which time of the day new customers sign up**

```java
@Metrics
@CustomMetric()
public void registerCustomer(
    @MetricParam(type=RegistrationsByTimeOfDay.class) Customer cust) {
    // ....
}
```

The interface expected by `@MetricParam` implementations then also receives the parameter value:

```java
public interface CustomMetricParam<T_PARAM, T_RESULT> {
    public void onBeforeEach(MetricRegistry mr, T_PARAM param);
    public void onAfterEach(MetricRegistry mr, T_PARAM param, T_RESULT result);
    public void onAfterSuccess(MetricRegistry mr, T_PARAM param, T_RESULT result);
    public void onAfterError(MetricRegistry mr, T_PARAM param, Exception e);
}
```

and, again, a type mismatch such as the following should be flagged:

```java
@Metrics
@CustomMetric()
public void registerCustomer (
    @MetricParam(type=RegistrationsByTimeOfDay.class) Dog cust /* 'Dog' but expected 'Customer' */
) {
    // ....
}
```

I am not sure whether these ideas align with the goals of the MP-Metrics spec. They are naive proposals intended to allow a bit more flexibility with respect to implementing business metrics. I think I could implement most of this also for my application as a supplement to MP-Metrics. Just let me know if you consider it too domain-specific to be further discussed for the spec. Thank you.
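For illustration (not from the thread), a hypothetical `RegistrationsByTimeOfDay` implementing the proposed `CustomMetricParam` could look as follows. `Customer` is the example entity from above, `Void` is my assumption for modeling the `void` return type, and the metric name and hour-of-day bucketing are invented:

```java
import java.time.LocalTime;
import org.eclipse.microprofile.metrics.MetricRegistry;

public class RegistrationsByTimeOfDay implements CustomMetricParam<Customer, Void> {

    @Override
    public void onBeforeEach(MetricRegistry mr, Customer param) {
        // not needed for this metric
    }

    @Override
    public void onAfterEach(MetricRegistry mr, Customer param, Void result) {
        // not needed for this metric
    }

    @Override
    public void onAfterSuccess(MetricRegistry mr, Customer param, Void result) {
        // bucket successful registrations by the hour of day they happen in
        mr.counter("registrations.byHourOfDay." + LocalTime.now().getHour()).inc();
    }

    @Override
    public void onAfterError(MetricRegistry mr, Customer param, Exception e) {
        // failed registrations are not counted here
    }
}
```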
---

Ideas discussed on the 05/29 call:

Give annotations the ability to track success/failure separately:

Another approach to counting separately:

Dropwizard also has the ExceptionMetered annotation: https://github.com/dropwizard/metrics/blob/4.1-development/metrics-annotation/src/main/java/com/codahale/metrics/annotation/ExceptionMetered.java
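For comparison, Dropwizard's annotation meters how often a method throws a given exception type; the annotation and its `name`/`cause` attributes are real Dropwizard API, while the class, method, and metric names in this sketch are mine:

```java
import com.codahale.metrics.annotation.ExceptionMetered;

public class CustomerService {

    // Meters the rate of invocations that throw the given cause (or a subclass).
    @ExceptionMetered(name = "createCustomerFailures", cause = IllegalArgumentException.class)
    public String createCustomer(String cust) {
        if (cust == null || cust.isEmpty()) {
            throw new IllegalArgumentException("invalid customer"); // metered
        }
        return cust;
    }
}
```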
---

This is IMO a very useful feature; I'm looking for it right now as well. But saying that every exception is a failure is not a very good solution, I think. For example, you can have one exception indicating a data error and another which indicates a system error, and you may want to count only the system errors (e.g. right now I need to count only system errors when I call a remote service from my service, but I do not want to count the data validation errors it returns).
---

@velias Maybe it's just a matter of wording. Something like

would leave it open whether the exception is a failure or not. As far as I understand things, the tag will enable counting exceptions individually by class name. So if a method throws ValidationExceptions and RuntimeExceptions, then there will be two tags: ValidationException with a count of e.g. three, and RuntimeException with a count of e.g. two.
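A sketch of that per-exception-class tagging with the MP Metrics 2.0 API (the metric name and the tag key are my assumptions):

```java
import org.eclipse.microprofile.metrics.Metadata;
import org.eclipse.microprofile.metrics.MetricRegistry;
import org.eclipse.microprofile.metrics.Tag;

public class ExceptionCounting {

    // Bumps e.g. method_exceptions{exception="ValidationException"} or
    // method_exceptions{exception="RuntimeException"}: one time series per class.
    static void countException(MetricRegistry registry, Throwable t) {
        Metadata metadata = Metadata.builder().withName("method_exceptions").build();
        registry.counter(metadata, new Tag("exception", t.getClass().getSimpleName())).inc();
    }
}
```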
---

I think a specific `ErrorRate` metric type would be useful, something like:

```java
interface ErrorRate {
    void addError();
    void addSuccess();
    ErrorRateSnapshot getSnapshot();
}

@interface ErrorRate {
    /**
     * The name of a method that takes a throwable and determines if it's actually an error.
     * By default, all exceptions are considered to be an error.
     */
    String isError() default "";

    /**
     * The name of a method that takes a response object and determines if it's actually a success.
     * By default, all response objects are considered to be a success.
     */
    String isSuccess() default "";
}

public interface ErrorRateSnapshot {
    int getErrors();

    int getSuccesses();

    default double getErrorRate() {
        return ((double) getErrors()) / getCount();
    }

    default int getCount() {
        return getErrors() + getSuccesses();
    }
}
```

WDYT? I could add a PR here and maybe to https://github.com/smallrye/smallrye-metrics
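A hypothetical usage of the proposed annotation, matching @velias's system-vs-data-error case (class and method names are invented; the handler method signature follows the annotation's javadoc):

```java
public class RemoteServiceClient {

    // Only system errors count; data validation errors from the remote
    // service are ignored.
    @ErrorRate(isError = "isSystemError")
    public String callRemoteService(String request) {
        // remote call that may throw
        return request;
    }

    // Referenced by name from the annotation above; per its javadoc the
    // method "takes a throwable and determines if it's actually an error".
    boolean isSystemError(Throwable t) {
        return !(t instanceof IllegalArgumentException);
    }
}
```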
---

@t1, looking at the Fault Tolerance spec, we might want to differentiate successes and failures in a manner similar to how Fault Tolerance does.

In particular, the Fallback annotation has `applyOn` and `skipOn` attributes. Using a similar idea for metrics annotations could give us a consistent way to indicate which thrown exceptions should be treated (and counted/timed) as failures. For example, someone could annotate a method with a
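Purely as an illustration of that idea: the annotation below is invented, with only the `applyOn`/`skipOn` semantics borrowed from Fault Tolerance:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Invented for illustration: a failure counter using Fault Tolerance's
// applyOn/skipOn style to decide which exceptions count as failures.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface FailureCounted {
    String name() default "";
    Class<? extends Throwable>[] applyOn() default {Throwable.class};
    Class<? extends Throwable>[] skipOn() default {};
}

class CustomerEndpoint {
    // A thrown IllegalArgumentException (e.g. validation) is skipped;
    // anything else is counted as a failure.
    @FailureCounted(name = "createCustomer.failures", skipOn = IllegalArgumentException.class)
    public String createCustomer(String cust) {
        return cust;
    }
}
```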
The ErrorRate interface would be more flexible than what I've described (in particular, giving full control to evaluate returned objects for success or failure), but I'm not sure we should make the metrics annotations more elaborate than the fault tolerance annotations when it comes to determining method success/failure. If needed, developers can always fall back to using the non-annotation API for arbitrarily complex cases.

I also kind of like the alternate suggestion from @about-code for its simplicity:
---

@donbourne: I have created this issue in SmallRye Metrics including a suggestion for an `ErrorRated` annotation:

```java
import static java.lang.annotation.ElementType.ANNOTATION_TYPE;
import static java.lang.annotation.ElementType.CONSTRUCTOR;
import static java.lang.annotation.ElementType.METHOD;
import static java.lang.annotation.ElementType.TYPE;
import static java.lang.annotation.RetentionPolicy.RUNTIME;

import java.lang.annotation.Documented;
import java.lang.annotation.Inherited;
import java.lang.annotation.Retention;
import java.lang.annotation.Target;

import javax.enterprise.util.Nonbinding;
import javax.interceptor.InterceptorBinding;

@Inherited
@Documented
@InterceptorBinding
@Retention(RUNTIME)
@Target({TYPE, CONSTRUCTOR, METHOD, ANNOTATION_TYPE})
public @interface ErrorRated {
    /**
     * Default handler class so the value is not required to be set all the time.
     */
    class DEFAULT implements ErrorRateHandler<Object> {
        @Override public boolean isError(Object value) {
            return false;
        }

        @Override public boolean isError(Throwable throwable) {
            return true;
        }
    }

    /**
     * Specify the error rate handler class to be used. The type parameter of the handler class
     * must be assignable to the return type of the annotated method.
     *
     * @see #applyOn()
     * @see #skipOn()
     */
    @Nonbinding
    Class<? extends ErrorRateHandler<?>> value() default DEFAULT.class;

    /**
     * The name of the meter.
     */
    @Nonbinding
    String name() default "";

    /**
     * The tags of the meter. Each {@code String} tag must be in the form of 'key=value'. If the input is empty or does
     * not contain a '=' sign, the entry is ignored.
     *
     * @see org.eclipse.microprofile.metrics.Metadata
     */
    @Nonbinding
    String[] tags() default {};

    /**
     * If {@code true}, use the given name as an absolute name. If {@code false} (default), use the given name
     * relative to the annotated class. When annotating a class, this must be {@code false}.
     */
    @Nonbinding
    boolean absolute() default false;

    /**
     * The display name of the meter.
     *
     * @see org.eclipse.microprofile.metrics.Metadata
     */
    @Nonbinding
    String displayName() default "";

    /**
     * The description of the meter.
     *
     * @see org.eclipse.microprofile.metrics.Metadata
     */
    @Nonbinding
    String description() default "";

    /**
     * The list of exception types which should be considered errors, including subclasses.
     * <p>
     * Only if an exception is <em>not</em> in this list is the {@link ErrorRateHandler} considered.
     *
     * @see #value()
     */
    @Nonbinding
    Class<? extends Throwable>[] applyOn() default {};

    /**
     * The list of exception types which should <em>not</em> be considered errors, including subclasses.
     * <p>
     * Only if an exception is <em>not</em> in this list is the {@link ErrorRateHandler} considered.
     *
     * @see #value()
     */
    @Nonbinding
    Class<? extends Throwable>[] skipOn() default {};
}
```

This would give us all the flexibility with fully dynamic `ErrorRateHandler`s.

I think a simple counter is not enough, as we need to count successes and failures, and we need snapshots so we have buckets of failure rates.
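A hypothetical usage of this proposal; the handler's shape is inferred from the DEFAULT class above, and the class, metric, and method names are invented:

```java
// Treats any non-positive result as an error, in addition to exceptions.
public class NonPositiveIsError implements ErrorRateHandler<Integer> {
    @Override public boolean isError(Integer value) {
        return value == null || value <= 0;
    }

    @Override public boolean isError(Throwable throwable) {
        return true;
    }
}

class InventoryService {
    // IllegalStateException is never counted as an error; all other outcomes
    // go through NonPositiveIsError.
    @ErrorRated(value = NonPositiveIsError.class, name = "stock-lookups",
                skipOn = IllegalStateException.class)
    public Integer lookUpStock(String sku) {
        return 42;
    }
}
```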
@t1, a few thoughts on your suggested
|
---

Just to add: I consider using the exception class name as the tag name to be a default only. Of course, it is prone to breaking metric history in case an exception class is renamed. So my recommendation for production would still be to consider using
---

I am currently studying the MicroProfile APIs and I very much like most of its specifications and how approachable its documentation is (mostly). I have no hands-on experience so far, but I am curious to find out how it could help solve some of the requirements I see at work. Something I couldn't find much about is how metrics behave in case of exceptions in the execution of an intercepted method. From that I guess there is currently no way to determine whether a counter metric should be incremented or not based on the result of an invocation.

This is the business case I see:

Let's say I have a standard CRUD REST API for a customer relationship management system. There's some method `createCustomer(Customer cust)` which validates a customer and throws a validation exception if the given object contains invalid data. Now I can imagine different metrics beyond just how often `createCustomer()` was invoked:

- How often did the `createCustomer()` method succeed (how often were customers actually created)?
- How often did `createCustomer()` fail ...?

I see that there are various other ways in which something similar could be achieved, for example obtaining such metrics within the API layer rather than in the business core. Nevertheless, I thought I could ask what the opinions are on these use cases and whether it is possible to apply/not apply metrics based on the results of invocations.
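For context, a sketch of what the issue says is expressible today: a standard `@Counted` (here with MP Metrics 1.x's `monotonic = true` for a cumulative count) counts every invocation, whether or not it throws, so success and failure are indistinguishable:

```java
import org.eclipse.microprofile.metrics.annotation.Counted;

public class CustomerEndpoint {

    static class Customer { /* placeholder for the example entity */ }

    // Counts every call, whether it returns normally or throws the
    // validation exception.
    @Counted(name = "createCustomerInvocations", monotonic = true)
    public Customer createCustomer(Customer cust) {
        validate(cust); // may throw a validation exception
        return cust;
    }

    private void validate(Customer cust) {
        if (cust == null) {
            throw new IllegalArgumentException("invalid customer");
        }
    }
}
```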