[bitnami/keycloak] Add HPA Behavior when scaling up and down #25681
Description of the change
This PR adds the ability to configure the `behavior` of the `HorizontalPodAutoscaler` of the Keycloak StatefulSet with the new parameter block `autoscaling.behavior`.
ref: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#configurable-scaling-behavior
This allows us to set policies that control how the autoscaler scales the pods up/down, e.g., the Keycloak pods can be scaled by X units or a percentage of current pods during a `periodSeconds` window.
`stabilizationWindowSeconds` can be set to define the duration over which past recommendations should be considered while scaling.
`selectPolicy` sets the priority of the policies that the autoscaler will apply when scaling up/down. It can be `Max`, `Min`, or `Disabled`.
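As an illustration, the new parameter block could be used like this in `values.yaml` (a sketch: the replica counts are arbitrary, and the field layout assumes the chart exposes the standard `autoscaling/v2` HPA behavior spec under `autoscaling.behavior`):

```yaml
# Hypothetical values.yaml excerpt: scale down at most one pod
# every 300s, while allowing faster scale-up under load.
autoscaling:
  enabled: true
  minReplicas: 3   # illustrative values
  maxReplicas: 6
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 0
      selectPolicy: Max
      policies:
        - type: Pods
          value: 2          # add up to 2 pods...
          periodSeconds: 15 # ...per 15s window
    scaleDown:
      stabilizationWindowSeconds: 300
      selectPolicy: Max
      policies:
        - type: Pods
          value: 1           # remove at most 1 pod...
          periodSeconds: 300 # ...per 300s window
```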
Benefits
Keycloak pods are stateful due to the embedded Infinispan cache that requires cache key rebalancing across all live nodes when a pod joins or leaves the cluster. (This process happens on pod startup and shutdown).
The cache keys have a set number of owners (pods); with the default config, the caches can survive one node failure (https://www.keycloak.org/server/caching#_configuring_caches).
Adding an HPA scaleDown behavior of one pod every 300s allows the cache keys to be rebalanced across the remaining pods before termination, hence avoiding potential data loss if too many pods were terminated at the same time under the autoscaler's default behavior.
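For reference, such chart values would ultimately render into the standard `autoscaling/v2` behavior stanza on the HPA object itself, along these lines (a sketch; the resource names are illustrative, not the chart's actual generated names):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: keycloak  # illustrative name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: keycloak
  minReplicas: 3
  maxReplicas: 6
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
        - type: Pods
          value: 1            # remove at most one pod...
          periodSeconds: 300  # ...every 300s, so Infinispan can rebalance
```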
Possible drawbacks
The default values set in `values.yaml` slow down the scaleDown of pods.
Applicable issues
Additional information
The default HPA scaleDown behavior is now one pod every 300s.
If anybody wishes to go back to the previous behavior, they can set the values as follows:
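The values snippet appears to have been lost in extraction here. As a sketch only, restoring the upstream Kubernetes defaults for scale-down (which the chart previously inherited) might look like this, assuming the chart exposes the standard HPA behavior spec under `autoscaling.behavior`:

```yaml
# Hypothetical override restoring the Kubernetes HPA default
# scale-down behavior (field layout is an assumption):
autoscaling:
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300  # Kubernetes default
      selectPolicy: Max
      policies:
        - type: Percent
          value: 100        # allow removing up to 100% of pods...
          periodSeconds: 15 # ...per 15s window
```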
@javsalgar - I'm happy to set the default value to the above so that this release doesn't change the default behavior. But I figured the default values I'm proposing are reasonable for an HA deployment of Keycloak.
These are also the default values set in the Codecentric Keycloak Helm Chart
Checklist
- `Chart.yaml` version bumped according to semver. This is not necessary when the changes only affect README.md files.
- `README.md` updated using readme-generator-for-helm

Tests
- Pods start fine using default values
- Autoscaling enabled: scaleUp policy added and some values changed from the default `values.yaml`