
OCPBUGS-32812: When newly built images rolled out, the update progress is not displaying correctly (went 0 --> 3) #4489

Merged: 2 commits into openshift:master on Sep 11, 2024

Conversation

@djoshy (Contributor) commented Jul 24, 2024

- What I did

  • Tweaked a few things with how OCL populates MCP status numbers. More details here.
  • Removed MachineConfigNode based status population, this will have to be reworked as a separate card.
  • Beefed up a few of the node controller log messages, so it is easier to understand what node/pool the controller is working on.
  • Updated some of the OCL related unit tests to work with the fixed mechanism, but this is really more of a stopgap. I think a bigger rework needs to be done in the test suite, which I believe @cheesesashimi has alluded to. Some of the tests did not seem accurate or correctly set up, but that might be my lack of expertise with OCL.

- How to verify it
Follow the reproduction steps in the bug; there should not be any odd jumps or inaccuracies in the MCP status numbers for either the layered or non-layered MC/image rollouts.
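
As a rough illustration of the per-node comparison this rework implies, here is a minimal Go sketch. The annotation keys are written out to match the MCO's machineconfiguration.openshift.io convention (the real code uses daemonconsts constants), but the node struct, the layered flag, and isNodeUpdated are simplified stand-ins for illustration only, not the PR's actual code:

package main

import "fmt"

const (
	currentConfigKey = "machineconfiguration.openshift.io/currentConfig"
	desiredConfigKey = "machineconfiguration.openshift.io/desiredConfig"
	currentImageKey  = "machineconfiguration.openshift.io/currentImage"
	desiredImageKey  = "machineconfiguration.openshift.io/desiredImage"
)

type node struct {
	name        string
	annotations map[string]string
}

// isNodeUpdated counts a node as updated only when its desired and current
// config agree and, for layered pools, its desired and current image agree.
func isNodeUpdated(n node, layered bool) bool {
	a := n.annotations
	if a[desiredConfigKey] == "" || a[desiredConfigKey] != a[currentConfigKey] {
		return false
	}
	if layered {
		return a[desiredImageKey] != "" && a[desiredImageKey] == a[currentImageKey]
	}
	return true
}

func main() {
	// Mid-rollout in a layered pool: configs agree, but the node has not
	// yet reported a current image.
	n := node{name: "worker-0", annotations: map[string]string{
		desiredConfigKey: "rendered-worker-abc",
		currentConfigKey: "rendered-worker-abc",
		desiredImageKey:  "quay.io/example/os@sha256:123",
	}}
	fmt.Println(isNodeUpdated(n, false)) // true: the non-layered view is done
	fmt.Println(isNodeUpdated(n, true))  // false: the layered view waits for the image
}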

@openshift-ci-robot added the jira/valid-reference (indicates that this PR references a valid Jira ticket of any type) and jira/invalid-bug (indicates that a referenced Jira bug is invalid for the branch this PR is targeting) labels on Jul 24, 2024
@openshift-ci-robot (Contributor):

@djoshy: This pull request references Jira Issue OCPBUGS-32812, which is invalid:

  • expected the bug to target the "4.17.0" version, but no target version was set

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.

The bug has been updated to refer to the pull request using the external bug tracker.


@djoshy (Contributor, Author) commented Jul 24, 2024

/jira refresh

@openshift-ci-robot added the jira/valid-bug label (indicates that a referenced Jira bug is valid for the branch this PR is targeting) and removed the jira/invalid-bug label on Jul 24, 2024
@openshift-ci-robot (Contributor):

@djoshy: This pull request references Jira Issue OCPBUGS-32812, which is valid. The bug has been moved to the POST state.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.17.0) matches configured target version for branch (4.17.0)
  • bug is in the state ASSIGNED, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @sergiordlr


@openshift-ci added the approved label (indicates a PR has been approved by an approver from all required OWNERS files) on Jul 24, 2024
@djoshy force-pushed the mcn-pool-status branch 4 times, most recently from 1c90805 to 2951d96, on July 24, 2024 17:50
@djoshy (Contributor, Author) commented Jul 25, 2024

/test e2e-gcp-op
/test e2e-gcp-op-techpreview

@djoshy force-pushed the mcn-pool-status branch 3 times, most recently from cd23b3f to 84889cc, on July 26, 2024 13:19
@djoshy (Contributor, Author) commented Jul 29, 2024

/test e2e-hypershift
/test e2e-gcp-op-techpreview

@djoshy force-pushed the mcn-pool-status branch from 84889cc to 90dc2a9 on July 30, 2024 16:44
@sergiordlr commented:

Verified using IPI on AWS

We have checked that the updated nodes are correctly reported in the following scenarios:

  • Apply techpreview
  • Apply a new MC in a non-techpreview cluster
  • Apply OCL images in tech-preview

Nevertheless, we have observed that when applying the OCL image, the maxUnavailable value in the MCP is not honored; in pools without any maxUnavailable value set (default 1), 2 nodes start applying the OCL image at the same time.

We have tried to reproduce this in other clusters without this fix and were only able to reproduce it once after many attempts. Nonetheless, in clusters containing this fix the issue is observed essentially 100% of the time.

Since we were able to reproduce it once without this fix, we cannot be sure that the problem is in this fix, but it seems to be related. Please pay special attention to the behaviour we have described here while reviewing this code.

If, after the review, it is decided that this PR has no impact on the behaviour related to maxUnavailable, we can approve this PR and create a different ticket to track the new issue.

Thanks.
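
For reference, the invariant being described: with no maxUnavailable set on the pool, the default is 1, so at most one node should be unavailable (updating) at a time. A minimal Go sketch of that capacity calculation, using simplified integer inputs rather than the MCO's actual types:

package main

import "fmt"

// updateCapacity returns how many more nodes may begin updating, given the
// pool's maxUnavailable (defaulting to 1 when unset) and the number of nodes
// that are already unavailable.
func updateCapacity(maxUnavailable, unavailable int) int {
	if maxUnavailable <= 0 {
		maxUnavailable = 1 // MCP behaviour when the field is not set
	}
	if c := maxUnavailable - unavailable; c > 0 {
		return c
	}
	return 0
}

func main() {
	// With the default of 1 and one node already applying the OCL image,
	// no second node should start; two at once violates the invariant.
	fmt.Println(updateCapacity(0, 1)) // 0
	fmt.Println(updateCapacity(2, 1)) // 1
}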

@djoshy (Contributor, Author) commented Aug 26, 2024

Nevertheless, we have observed that when applying the OCL image, the maxUnavailable value in the MCP is not honored; in pools without any maxUnavailable value set (default 1), 2 nodes start applying the OCL image at the same time.

I have a fix for this in my last push. Layered nodes were incorrectly being marked ready for an update when they lacked the "current image" annotation on the node object (see the isNodeReady() call). I cleaned up the logic to account for this case. This would only happen when the pool is first opted into layering, because that is the only time the node object would be missing this annotation. So while testing, please ensure this particular scenario is checked. Thanks!
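
To make the fixed case concrete, here is a hedged Go sketch of the corrected check; the helper name is hypothetical, and the real logic lives around the isNodeReady() call mentioned above:

package main

import "fmt"

const (
	desiredImageKey = "machineconfiguration.openshift.io/desiredImage"
	currentImageKey = "machineconfiguration.openshift.io/currentImage"
)

// layeredNodeUpdated sketches the corrected comparison: a node whose object
// lacks the current-image annotation (as on first opt-in to layering) has
// not applied any layered image yet, so it must not be counted as up to date.
func layeredNodeUpdated(annotations map[string]string) bool {
	current, ok := annotations[currentImageKey]
	if !ok || current == "" {
		return false // first opt-in: nothing applied yet, an update is pending
	}
	return current == annotations[desiredImageKey]
}

func main() {
	// Pool just opted into layering: desiredImage set, currentImage absent.
	n := map[string]string{desiredImageKey: "quay.io/example/os@sha256:abc"}
	fmt.Println(layeredNodeUpdated(n)) // false, so the node is scheduled for an update
}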

@djoshy (Contributor, Author) commented Aug 26, 2024

It may be worth testing https://issues.redhat.com/browse/OCPBUGS-38869 on this PR as well. I'm not seeing the issue any longer in my testing, so it is possible some of the status rework has fixed it.

@yuqi-zhang (Contributor) left a comment:

Generally makes sense. One question inline.

/hold

To give Sergio a chance to re-verify (plus check https://issues.redhat.com/browse/OCPBUGS-38869)

// The MachineConfig annotations are loaded on boot-up by the daemon which
// isn't currently done for the image annotations, so the comparisons here
// are a bit more nuanced.
cimage, cok := node.Annotations[daemonconsts.CurrentImageAnnotationKey]
@yuqi-zhang (Contributor) commented inline:

Can you remind me if any of these can exist but be "empty"? For example, if the pool is opted into layering, and then opted out of layering, do these get cleaned up?

@sergiordlr commented Sep 10, 2024:

There is no way to opt a pool out of OCL yet, right?

@djoshy (Contributor, Author) replied:

Not at the moment, no. There is a PR here for that piece: #4284

A Member replied:

When reverted, the desiredImage and currentImage annotations should be cleared.
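
Summarizing the semantics settled in this thread, a small Go sketch that treats an empty annotation value the same as a missing key and clears both image annotations on revert; the helper names are illustrative, not the MCO's API:

package main

import "fmt"

const (
	desiredImageKey = "machineconfiguration.openshift.io/desiredImage"
	currentImageKey = "machineconfiguration.openshift.io/currentImage"
)

// imageAnnotation reads an image annotation, reporting "" as absent so a
// half-cleared node is never mistaken for a layered one.
func imageAnnotation(a map[string]string, key string) (string, bool) {
	v, ok := a[key]
	return v, ok && v != ""
}

// clearImageAnnotations does what a revert should, per the comment above:
// remove both image annotations so the node reads as non-layered again.
func clearImageAnnotations(a map[string]string) {
	delete(a, desiredImageKey)
	delete(a, currentImageKey)
}

func main() {
	a := map[string]string{desiredImageKey: "", currentImageKey: "img"}
	if _, ok := imageAnnotation(a, desiredImageKey); !ok {
		fmt.Println("empty desiredImage counts as unset")
	}
	clearImageAnnotations(a)
	fmt.Println(len(a)) // 0: both annotations removed
}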

@openshift-ci added the do-not-merge/hold label (indicates that a PR should not merge because someone has issued a /hold command) on Aug 28, 2024
@djoshy (Contributor, Author) commented Sep 6, 2024

/test all

@sergiordlr commented:

Verified using IPI on AWS

We have checked that the updated nodes are correctly reported in the following scenarios:

  • Apply techpreview
  • Apply a new MC in a non-techpreview cluster
  • Apply OCL images in tech-preview

In all cases the right order was used when applying the new configs/images, and the maxUnavailable value was honored (tested with maxUnavailable=1 and maxUnavailable=2).

Nevertheless, https://issues.redhat.com/browse/OCPBUGS-38869 is not fixed; we can still see that issue in this PR.

MCNs are not updated with the new config:

$ oc get node  -o jsonpath='{.metadata.annotations.machineconfiguration\.openshift\.io/desiredConfig}' ip-10-0-4-122.us-east-2.compute.internal
rendered-worker-048b8d2473be5ff15997817af213c185 
$ oc  get machineconfignode  -o jsonpath='{.spec.configVersion.desired}'  ip-10-0-4-122.us-east-2.compute.internal
rendered-worker-0ad05289a6c60cf6589022299b83f6f6
$ oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-5507a2319aefb31232b8dd16103766e4   True      False      False      3              3                   3                     0                      6h9m
worker   rendered-worker-048b8d2473be5ff15997817af213c185   True      False      False      4              4                   4                     0                      6h9m

/label qe-approved

@openshift-ci added the qe-approved label (signifies that QE has signed off on this PR) on Sep 10, 2024
@openshift-ci-robot removed the jira/valid-bug label on Sep 10, 2024
@openshift-ci-robot (Contributor):

@djoshy: This pull request references Jira Issue OCPBUGS-32812, which is invalid:

  • expected the bug to target either version "4.18." or "openshift-4.18.", but it targets "4.17.0" instead

Comment /jira refresh to re-evaluate validity if changes to the Jira bug are made, or edit the title of this pull request to link to a different bug.


@openshift-ci-robot added the jira/invalid-bug label on Sep 10, 2024
@djoshy (Contributor, Author) commented Sep 10, 2024

/jira refresh

/cherry-pick release-4.16 release-4.17

/unhold

@openshift-cherrypick-robot

@djoshy: once the present PR merges, I will cherry-pick it on top of release-4.16 in a new PR and assign it to you.


@openshift-ci-robot added the jira/valid-bug label and removed the jira/invalid-bug label on Sep 10, 2024
@openshift-ci-robot (Contributor):

@djoshy: This pull request references Jira Issue OCPBUGS-32812, which is valid.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target version (4.18.0) matches configured target version for branch (4.18.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, POST)

Requesting review from QA contact:
/cc @sergiordlr


@openshift-ci removed the do-not-merge/hold label on Sep 10, 2024
@yuqi-zhang (Contributor) commented:

/lgtm

@openshift-ci added the lgtm label (indicates that a PR is ready to be merged) on Sep 10, 2024
openshift-ci bot commented Sep 10, 2024

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: djoshy, yuqi-zhang


@openshift-ci-robot (Contributor) commented:

/retest-required

Remaining retests: 0 against base HEAD 9596e8f and 2 for PR HEAD da754a2 in total

openshift-ci bot commented Sep 10, 2024

@djoshy: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test: ci/prow/e2e-azure-ovn-upgrade-out-of-change
Commit: da754a2
Required: false
Rerun command: /test e2e-azure-ovn-upgrade-out-of-change


@djoshy (Contributor, Author) commented Sep 10, 2024

/test e2e-aws-ovn-upgrade

@openshift-merge-bot merged commit bade313 into openshift:master on Sep 11, 2024 (15 of 17 checks passed)
@openshift-ci-robot (Contributor):

@djoshy: Jira Issue OCPBUGS-32812: All pull requests linked via external trackers have merged:

Jira Issue OCPBUGS-32812 has been moved to the MODIFIED state.


@openshift-cherrypick-robot

@djoshy: #4489 failed to apply on top of branch "release-4.16":

Applying: controller: stop using MCN to feed MCP status
Using index info to reconstruct a base tree...
M	pkg/controller/node/status.go
Falling back to patching base and 3-way merge...
Auto-merging pkg/controller/node/status.go
CONFLICT (content): Merge conflict in pkg/controller/node/status.go
error: Failed to merge in the changes.
hint: Use 'git am --show-current-patch=diff' to see the failed patch
Patch failed at 0001 controller: stop using MCN to feed MCP status
When you have resolved this problem, run "git am --continue".
If you prefer to skip this patch, run "git am --skip" instead.
To restore the original branch and stop patching, run "git am --abort".


@djoshy (Contributor, Author) commented Sep 11, 2024

/cherry-pick release-4.17

@openshift-cherrypick-robot

@djoshy: new pull request created: #4583

