Mark more tests as flaky #3961
Conversation
Force-pushed from 1f8a634 to 80ca920.
The changes look good to me. However, I'm not sure why we need `minSuccess = 2`. It means that the test will be executed at least 2 times. I'm OK with it; however, it seems to me that `minSuccess = 1` is good as well.
Apart from that, it would be great if you could add a comment to each affected test with a link to the GitHub issue that is being closed. That way, it will later be easier to trace back why a specific test was marked as flaky.
I simply took that from our docs on flaky tests.
Good idea! I'll add that. Just note that our current policy is to leave flaky test issues open until the test is improved so that it isn't flaky any more.
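For reference, here is a minimal sketch of the annotation under discussion, assuming the rerunner-jupiter library that provides `RepeatedIfExceptionsTest` (the class and method names are hypothetical, not taken from the actual diff):

```java
import io.github.artsok.RepeatedIfExceptionsTest;

public class ExampleFlakyTest {

    // With minSuccess = 2, the test must pass twice before it counts
    // as successful overall, so it always runs at least 2 times.
    // With minSuccess = 1, a single passing run suffices.
    @RepeatedIfExceptionsTest(repeats = 4, minSuccess = 2)
    public void exampleSometimesFlakyOperation() {
        // actual test logic would go here
    }
}
```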
This adds the necessary annotation to repeat flaky tests, if necessary, for the tests described in these issues:

* JanusGraph#1457
* JanusGraph#1459
* JanusGraph#1462
* JanusGraph#1464
* JanusGraph#1465
* JanusGraph#1497
* JanusGraph#1498
* JanusGraph#2272
* JanusGraph#3142
* JanusGraph#3356
* JanusGraph#3392
* JanusGraph#3393
* JanusGraph#3651
* JanusGraph#3931
* JanusGraph#3959
* JanusGraph#3960

I went through the recent builds on `master` and checked all failed builds. I think this should handle all flaky tests that occurred in these builds, apart from the TinkerPop test where I just don't know how to deal with it: JanusGraph#3841. But we can of course just address that in another PR.

We also sometimes get failed jobs because of intermittent issues where a backend cannot be properly set up or torn down for some reason, or downloading some artifact just fails. I don't know what we can do to improve this, but I would consider that out of scope of this PR, which just deals with jobs failing because specific tests are flaky.

Hopefully, this will already lead to significantly fewer cases where we have to manually restart failed jobs (which I had to do a lot lately for all these Dependabot PRs).

I also added a link to the relevant issue to all flaky tests (a sketch of this convention follows below).

Signed-off-by: Florian Hockmann <[email protected]>
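As a sketch of that issue-link convention (the class and method names are hypothetical; the issue number is taken from the list above purely for illustration):

```java
import io.github.artsok.RepeatedIfExceptionsTest;

public class ExampleBackendTest {

    // flaky test, see https://github.com/JanusGraph/janusgraph/issues/1457
    @RepeatedIfExceptionsTest(repeats = 3)
    public void exampleFlakyBackendOperation() {
        // actual test logic would go here
    }
}
```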
Force-pushed from 80ca920 to 87911fd.
Done, I also added links to the issues for all flaky tests that already had the annotation.
Perfect, thank you @FlorianHockmann!
💔 All backports failed
Manual backport: To create the backport manually, run:
Questions? Please refer to the Backport tool documentation and see the GitHub Action logs for details.
Follow-up to #3961 where I forgot to remove this argument from these 2 tests because they are using `ParameterizedRepeatedIfExceptionsTest` instead of `RepeatedIfExceptionsTest`. Signed-off-by: Florian Hockmann <[email protected]>
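For illustration, a minimal sketch of the difference between the two annotations (method names and the parameter source are hypothetical; the exact argument that was removed is not shown in this thread):

```java
import io.github.artsok.ParameterizedRepeatedIfExceptionsTest;
import io.github.artsok.RepeatedIfExceptionsTest;
import org.junit.jupiter.params.provider.ValueSource;

public class ExampleTests {

    // A plain flaky test: retried on failure, takes no parameters.
    @RepeatedIfExceptionsTest(repeats = 3)
    public void exampleWithoutParameters() {
        // actual test logic would go here
    }

    // A parameterized flaky test: combines retry-on-failure with a
    // JUnit 5 parameter source, so the method signature takes the
    // value supplied by @ValueSource as its argument.
    @ParameterizedRepeatedIfExceptionsTest(repeats = 3)
    @ValueSource(ints = {1, 2, 3})
    public void exampleWithParameter(int value) {
        // actual test logic would go here
    }
}
```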
💚 All backports created successfully
Note: Successful backport PRs will be merged automatically after passing CI. Questions? Please refer to the Backport tool documentation.