
fix status check #117

Merged
merged 10 commits into main from fix-status on Nov 21, 2023

Conversation

@eguzki (Contributor) commented Nov 20, 2023

What

Two main issues found:

  • The Limitador CR status check does not read the correct deployment name, so the Limitador CR never reaches the ready state.
  • Port configuration is not passed to the limitador process; it is only used to configure services and liveness probes. If the default port is changed, the liveness probe targets a different port than the one limitador listens on, so the deployment never becomes available.

A few minor issues found:

  • Deployment ports are not reconciled. If the ports are changed in the Limitador CR, the deployment breaks.
  • Deployment liveness and readiness probes are not reconciled. If the ports are changed in the Limitador CR, the deployment breaks.
  • Service ports are not reconciled. If the ports are changed in the Limitador CR, traffic will not reach limitador.
  • Integration tests never check that the Limitador CR status is ready; furthermore, they never check that deployments are correct and available.
  • Integration tests have issues in the AfterEach blocks, and test isolation is broken: the next test reads resources left over from previous tests.

Thus, the changes made:

  • Status reconciliation fix to read the right deployment resource name
  • Configure ports (HTTP and RLS) for the limitador operand. The mutator was already there (the command mutator)
  • Added mutators for deployment ports
  • Added mutators for deployment readiness probe
  • Added mutators for deployment liveness probe
  • Added mutators for service ports
  • Cover everything in e2e tests, highlighting the new check that the Limitador CR is Ready.
  • Each test runs in a dedicated (temporary) namespace; tearing down a test means removing that namespace. No cluster-level objects should be read/created/updated by the tests.
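The mutators listed above all follow the same reconciliation pattern: compare the desired state against the existing object, copy the desired fields over, and report whether an update is needed. A minimal sketch of that pattern, using simplified stand-in types (`ContainerPort`, `Deployment`) rather than the real `k8s.io/api` structs — the names below are illustrative, not the operator's actual code:

```go
package main

import (
	"fmt"
	"reflect"
)

// Simplified stand-ins for the k8s.io/api types; the real mutators
// operate on *appsv1.Deployment. Illustrative only.
type ContainerPort struct {
	Name          string
	ContainerPort int32
}

type Deployment struct {
	Ports []ContainerPort
}

// reconcileDeploymentPortsMutator sketches the mutator pattern: copy the
// desired ports onto the existing object and report whether anything
// changed, so the reconciler knows whether to issue an update.
func reconcileDeploymentPortsMutator(desired, existing *Deployment) bool {
	if reflect.DeepEqual(desired.Ports, existing.Ports) {
		return false // nothing to do
	}
	existing.Ports = desired.Ports
	return true // reconciler should update the resource
}

func main() {
	existing := &Deployment{Ports: []ContainerPort{{Name: "http", ContainerPort: 8080}}}
	desired := &Deployment{Ports: []ContainerPort{{Name: "http", ContainerPort: 9000}}}
	updated := reconcileDeploymentPortsMutator(desired, existing)
	fmt.Println(updated, existing.Ports[0].ContainerPort) // true 9000
}
```

Without such a mutator, the fields are only set at creation time, which is exactly why changing ports in the Limitador CR previously broke the deployment.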

@codecov-commenter

Codecov Report

Merging #117 (2ea5461) into main (49a4073) will increase coverage by 0.64%.
The diff coverage is n/a.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #117      +/-   ##
==========================================
+ Coverage   69.27%   69.91%   +0.64%     
==========================================
  Files          15       15              
  Lines        1087     1057      -30     
==========================================
- Hits          753      739      -14     
+ Misses        288      273      -15     
+ Partials       46       45       -1     
Flag Coverage Δ
unit 71.34% <ø> (ø)

Flags with carried forward coverage won't be shown.

Components Coverage Δ
api/v1alpha1 (u) 100.00% <ø> (ø)
pkg/helpers (u) ∅ <ø> (∅)
pkg/log (u) 31.81% <ø> (ø)
pkg/reconcilers (u) 46.12% <ø> (ø)
pkg/limitador (u) 88.61% <ø> (ø)
controllers (i) 66.76% <ø> (+1.67%) ⬆️
Files Coverage Δ
controllers/limitador_status.go 77.77% <ø> (+3.77%) ⬆️

... and 1 file with indirect coverage changes

@alexsnaps alexsnaps added this to the v0.7.0 milestone Nov 21, 2023
@didierofrivia didierofrivia removed their assignment Nov 21, 2023
@eguzki eguzki marked this pull request as ready for review November 21, 2023 15:48
@eguzki eguzki requested a review from a team November 21, 2023 15:51
@@ -112,7 +112,7 @@ func (r *LimitadorReconciler) checkLimitadorAvailable(ctx context.Context, limit
deployment := &appsv1.Deployment{}
dKey := client.ObjectKey{ // Its deployment is built after the same name and namespace
Namespace: limitadorObj.Namespace,
Name: limitadorObj.Name,
@eguzki (Contributor, Author) commented:

This PR originated only to fix this one line.

Comment on lines 134 to 135
var httpPortNumber int32 = 8000
var grpcPortNumber int32 = 8001
A Member commented:

One trick used in the past is to apply an offset to the default ports, e.g. limitadorv1alpha1.DefaultServiceHTTPPort + 100 … but I'm not asking to change anything. Just sharing.

@eguzki (Contributor, Author) replied:

Yeah, that's better, because it makes sure you do not conflict with the default one. Let me update that.
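The offset trick suggested above can be sketched as follows. The constant names stand in for those in the operator's limitadorv1alpha1 package, and the concrete default values shown here are assumptions for illustration:

```go
package main

import "fmt"

// Stand-ins for the defaults exported by the operator's limitadorv1alpha1
// package; the concrete values below are assumptions for illustration.
const (
	DefaultServiceHTTPPort int32 = 8080
	DefaultServiceGRPCPort int32 = 8081
)

func main() {
	// Offsetting the defaults guarantees the custom test ports can never
	// coincide with the defaults, which would otherwise mask a missing
	// reconciliation of the port fields.
	httpPortNumber := DefaultServiceHTTPPort + 100
	grpcPortNumber := DefaultServiceGRPCPort + 100
	fmt.Println(httpPortNumber, grpcPortNumber)
}
```

Hard-coded literals like 8000/8001 can silently equal a default; deriving the test values from the defaults makes the "custom port" property hold by construction.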

@alexsnaps (Member) left a comment:

LGTM (famous last words 😉)
Thanks for the refactoring!

@alexsnaps (Member) commented:

> LGTM (famous last words 😉)

Summarizing 1 Failure:
  [FAIL] Limitador controller manages ports Updating limitador object with new custom ports [It] Should modify the k8s resources with the custom ports
  /home/runner/work/limitador-operator/limitador-operator/controllers/limitador_controller_ports_test.go:212

And… I jinxed it!

@alexsnaps alexsnaps self-requested a review November 21, 2023 17:01
@alexsnaps (Member) left a comment:

🙏

@alexsnaps alexsnaps merged commit f6a01e8 into main Nov 21, 2023
11 checks passed
@alexsnaps alexsnaps deleted the fix-status branch November 21, 2023 17:05