ci: increase timeout for k8s intg tests #9929
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅
Additional details and impacted files:
@@            Coverage Diff             @@
##             main    #9929       +/-   ##
==========================================
- Coverage   59.18%   54.52%    -4.67%
==========================================
  Files         751     1252      +501
  Lines      104462   156550    +52088
  Branches     3598     3599        +1
==========================================
+ Hits        61824    85354    +23530
- Misses      42506    71064    +28558
  Partials      132      132
Flags with carried forward coverage won't be shown.
LGTM!
We should probably ticket this though, so we can devise a more permanent solution to the k8s API changes.
Force-pushed from 910c942 to e746c06
Force-pushed from e746c06 to f3fd024
LGTM
The latest versions of k8s add a new Job condition, JobFailureTarget, which signals the jobs controller to kill off the job's pods.

Our jobUpdatedCallback() function waits for the JobFailed condition, which now arrives only after the pods are fully terminated, and that takes a lot longer.

We probably need to confirm that our k8s logic is still valid and handle the additional time it takes a job to reach JobFailed, in case it affects anything other than this test. Until then, let's unblock CI for the whole team by just increasing the test time for TestExternalPodDelete and TestNodeWorkflows.
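For context, here's a minimal Go sketch of the condition ordering described above, assuming k8s.io/api/batch/v1. jobUpdated() and hasCondition() are hypothetical stand-ins for our jobUpdatedCallback(), not the actual Determined code:

```go
package k8sjobs

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
)

// hasCondition reports whether the Job carries the given condition with
// status True.
func hasCondition(job *batchv1.Job, t batchv1.JobConditionType) bool {
	for _, c := range job.Status.Conditions {
		if c.Type == t && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

// jobUpdated sketches an informer-style update handler. On recent k8s
// versions a failing Job is marked JobFailureTarget first, and JobFailed
// appears only after every pod has fully terminated, so code that waits
// on JobFailed now waits noticeably longer.
func jobUpdated(job *batchv1.Job) {
	if hasCondition(job, batchv1.JobFailureTarget) {
		// Earliest signal that the job is going to fail; pods may still
		// be running while the controller kills them off.
	}
	if hasCondition(job, batchv1.JobFailed) {
		// Terminal: pods are already gone by the time this is set.
	}
}
```

The gap between those two conditions (failure decided vs. pods fully gone) is exactly the extra time the tests now have to wait out.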
Force-pushed from f3fd024 to 282d891
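The fix itself is just a larger wait deadline in the two tests. A minimal sketch of the shape of the change, assuming a testify-style poll; jobReachedFailed() and the durations are illustrative, not the repo's actual helper or values:

```go
package integration

import (
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

// jobReachedFailed is a hypothetical stand-in for the tests' real check
// that the Job has hit the JobFailed condition.
func jobReachedFailed() bool { return true }

func TestExternalPodDelete(t *testing.T) {
	// Newer k8s inserts JobFailureTarget and full pod termination ahead
	// of JobFailed, so the poll needs more headroom than before. The
	// durations here are illustrative, not the PR's actual values.
	require.Eventually(t, jobReachedFailed, 120*time.Second, time.Second)
}
```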