
Different naming convention on resources created by limitador-operator #105

Closed
slopezz opened this issue Oct 5, 2023 · 0 comments · Fixed by #108
slopezz commented Oct 5, 2023

I have been successfully testing limitador-operator v0.6.0, and I have identified some inconsistencies in the names of the resources created by the operator.

I deployed the following CR called cluster:

apiVersion: limitador.kuadrant.io/v1alpha1
kind: Limitador
metadata:
  name: cluster
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
        - podAffinityTerm:
            labelSelector:
              matchLabels:
                app: limitador
                limitador-resource: cluster
            topologyKey: kubernetes.io/hostname
          weight: 100
        - podAffinityTerm:
            labelSelector:
              matchLabels:
                app: limitador
                limitador-resource: cluster
            topologyKey: topology.kubernetes.io/zone
          weight: 99
  limits:
    - conditions: []
      max_value: 400
      namespace: kuard
      seconds: 1
      variables:
        - per_hostname_per_second_burst
  listener:
    grpc:
      port: 8081
    http:
      port: 8080
  pdb:
    maxUnavailable: 1
  replicas: 3
  resourceRequirements:
    limits:
      cpu: 500m
      memory: 64Mi
    requests:
      cpu: 250m
      memory: 32Mi
  storage:
    redis:
      configSecretRef:
        name: redisconfig

I saw that most created resources follow the naming convention of the limitador- prefix plus $CR_NAME:

  • We know it is a limitador resource (in general), thanks to the limitador- prefix
  • Specifically, we know it belongs to the cluster instance, thanks to $CR_NAME

In this particular case that would be limitador-cluster, so these are 2 of the created resources:

  • Service name: limitador-cluster
  • PDB: limitador-cluster

This same logic is applied to all label selectors, which use 2 labels:

            labelSelector:
              matchLabels:
                app: limitador
                limitador-resource: cluster

However, there are 2 cases in which this naming convention is not followed:

  • Deployment: cluster (without the limitador- prefix)
    • From my point of view, it is really important to be able to identify what a pod does just from its name. Since the limitador- prefix is not being added here, pods whose name is just the CR_NAME (which can be anything) can be misleading
  • Configmap: limits-config-cluster (without the limitador- prefix)
    • From my point of view, it would be easier to know the purpose of the configmap if the same limitador- prefix were used for all created resources, including this configmap
    • Suggested name: limitador-limits-config-cluster
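A small naming helper could unify the prefix across every resource the operator creates. This is only a sketch of the idea; the function and constant names below are hypothetical and are not taken from the operator's actual code:

```go
package main

import "fmt"

// resourcePrefix is the common prefix suggested for every resource
// created by the operator (hypothetical constant).
const resourcePrefix = "limitador-"

// ServiceName and DeploymentName derive directly from the CR name,
// so a CR named "cluster" yields "limitador-cluster" for both.
func ServiceName(crName string) string    { return resourcePrefix + crName }
func DeploymentName(crName string) string { return resourcePrefix + crName }

// LimitsConfigMapName adds the purpose of the configmap between the
// shared prefix and the CR name, giving "limitador-limits-config-cluster".
func LimitsConfigMapName(crName string) string {
	return resourcePrefix + "limits-config-" + crName
}

func main() {
	fmt.Println(ServiceName("cluster"))         // limitador-cluster
	fmt.Println(DeploymentName("cluster"))      // limitador-cluster
	fmt.Println(LimitsConfigMapName("cluster")) // limitador-limits-config-cluster
}
```

With a helper like this, every resource name answers both questions from the report: the prefix identifies the component, and the suffix identifies the owning CR instance.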
@grzpiotrowski grzpiotrowski self-assigned this Oct 9, 2023
@grzpiotrowski grzpiotrowski moved this from To do to Review in Kuadrant Service Protection Oct 23, 2023
@alexsnaps alexsnaps added this to the v0.7.0 milestone Nov 13, 2023
@alexsnaps alexsnaps moved this to In Progress in Kuadrant Nov 16, 2023
@github-project-automation github-project-automation bot moved this from Review to To test in Kuadrant Service Protection Nov 20, 2023
@github-project-automation github-project-automation bot moved this from In Progress to Done in Kuadrant Nov 20, 2023