
CloneSet scale down schedule-failed pods first #1515

Closed

Conversation

@veophi (Member) commented Feb 29, 2024

Ⅰ. Describe what this PR does

Consider the following scenario:

replicas: 20
partition: 10

current-revision:
   runningPods: 5
   scheduleFailedPods: 5
update-revision:
   runningPods: 5
   scheduleFailedPods: 5

When scaling the CloneSet down from 20 to 10 replicas, we expect the 10 running Pods to be kept. However, in the current kruise version, the 10 current-revision Pods (5 running and 5 schedule-failed) will be kept instead.

Ⅱ. Does this pull request fix one issue?

Ⅲ. Describe how to verify it

Ⅳ. Special notes for reviews

@kruise-bot

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please ask for approval from veophi by writing /assign @veophi in a comment. For more information see: The Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@kruise-bot added the size/L (100-499) label Feb 29, 2024
@veophi requested a review from zmberg February 29, 2024 12:28
@furykerry (Member)

If the partition is 50%, then it is reasonable to keep 10 running pods. However, if the partition is an absolute value, keeping 10 running pods will break the semantic meaning of partition. Is it ok for you to change the partition to percent form?

@veophi veophi force-pushed the cloneset-scale-schedule-failed-first branch from 288f9ad to 344c29e on February 29, 2024 12:46
@veophi (Member, Author) commented Feb 29, 2024

> If the partition is 50%, then it is reasonable to keep 10 running pods. However, if the partition is an absolute value, keeping 10 running pods will break the semantic meaning of partition. Is it ok for you to change the partition to percent form?

replicas: 20
partition: 50%

current-revision:
   runningPods: 3
   scheduleFailedPods: 7
update-revision:
   runningPods: 7
   scheduleFailedPods: 3

codecov bot commented Feb 29, 2024

Codecov Report

Attention: Patch coverage is 41.17647% with 20 lines in your changes missing coverage. Please review.

Project coverage is 49.19%. Comparing base (3b7c731) to head (344c29e).
Report is 73 commits behind head on master.

| Files | Patch % | Lines |
|-------|---------|-------|
| pkg/controller/cloneset/utils/cloneset_utils.go | 0.00% | 20 Missing ⚠️ |
Additional details and impacted files
@@            Coverage Diff             @@
##           master    #1515      +/-   ##
==========================================
- Coverage   49.21%   49.19%   -0.03%     
==========================================
  Files         157      157              
  Lines       22604    22634      +30     
==========================================
+ Hits        11125    11135      +10     
- Misses      10268    10288      +20     
  Partials     1211     1211              
| Flag | Coverage Δ |
|------|------------|
| unittests | 49.19% <41.17%> (-0.03%) ⬇️ |

Flags with carried forward coverage won't be shown.

☔ View full report in Codecov by Sentry.

@kruise-bot

@veophi: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@Spground (Contributor)

> However, if the partition is an absolute value, keeping 10 running pods

Why not, when the partition is an absolute value?

> If the partition is 50%, then it is reasonable to keep 10 running pods. However, if the partition is an absolute value, keeping 10 running pods will break the semantic meaning of partition. Is it ok for you to change the partition to percent form?
>
> replicas: 20
> partition: 50%
>
> current-revision:
>    runningPods: 3
>    scheduleFailedPods: 7
> update-revision:
>    runningPods: 7
>    scheduleFailedPods: 3

Will the extra 2 upgraded Pods be rolled back to the current revision if we prefer to scale down the schedule-failed Pods?

stale bot commented Jul 18, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

@stale bot added the wontfix (This will not be worked on) label Jul 18, 2024
@stale bot closed this Jul 26, 2024
Labels
needs-rebase, size/L (100-499), wontfix (This will not be worked on)
4 participants