When MDCB service's type is "LoadBalancer", no pods are selected as endpoints #345

chaich-pcs opened this issue Oct 25, 2024 · 0 comments

While testing version 2.1.0, I used the following configuration in the Helm chart for MDCB:

    service:
      type: LoadBalancer
      port: 443

When I describe the service, the pod selector includes "externalTrafficPolicy=Local". This does not match the labels on the MDCB pods, so the service ends up with no endpoints:

Name:                     mdcb-svc-tyk-control-plane-tyk-mdcb
Namespace:                tyk
Labels:                   app=mdcb-tyk-control-plane-tyk-mdcb
                          app.kubernetes.io/instance=tyk-control-plane
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=tyk-mdcb
                          helm.sh/chart=tyk-mdcb-2.1.0
Annotations:              meta.helm.sh/release-name: tyk-control-plane
                          meta.helm.sh/release-namespace: tyk
Selector:                 app.kubernetes.io/instance=tyk-control-plane,app.kubernetes.io/name=tyk-mdcb,externalTrafficPolicy=Local
Type:                     LoadBalancer
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.106.232.203
IPs:                      10.106.232.203
Port:                     serviceport  443/TCP
TargetPort:               9091/TCP
NodePort:                 serviceport  32539/TCP
Endpoints:                <none>
Port:                     healthport  8181/TCP
TargetPort:               8181/TCP
NodePort:                 healthport  31070/TCP
Endpoints:                <none>
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
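
As a temporary workaround, removing the stray key from the selector by hand should let the endpoints populate without changing the service type. This is only a sketch using the service name and namespace from the output above; I have not verified it against the chart, and the next "helm upgrade" would re-render the bad selector:

    kubectl -n tyk patch service mdcb-svc-tyk-control-plane-tyk-mdcb \
      --type=json \
      -p='[{"op": "remove", "path": "/spec/selector/externalTrafficPolicy"}]'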

After running "kubectl delete" on this service, I updated the Helm chart to use this configuration:

    service:
      type: NodePort
      port: 443

I ran "helm upgrade" and described the service again. This time the pod selector does not include "externalTrafficPolicy=Local", and the endpoints are correctly populated:

Name:                     mdcb-svc-tyk-control-plane-tyk-mdcb
Namespace:                tyk
Labels:                   app=mdcb-tyk-control-plane-tyk-mdcb
                          app.kubernetes.io/instance=tyk-control-plane
                          app.kubernetes.io/managed-by=Helm
                          app.kubernetes.io/name=tyk-mdcb
                          helm.sh/chart=tyk-mdcb-2.1.0
Annotations:              meta.helm.sh/release-name: tyk-control-plane
                          meta.helm.sh/release-namespace: tyk
Selector:                 app.kubernetes.io/instance=tyk-control-plane,app.kubernetes.io/name=tyk-mdcb
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.96.28.212
IPs:                      10.96.28.212
Port:                     serviceport  443/TCP
TargetPort:               9091/TCP
NodePort:                 serviceport  30160/TCP
Endpoints:                10.244.120.70:9091
Port:                     healthport  8181/TCP
TargetPort:               8181/TCP
NodePort:                 healthport  30677/TCP
Endpoints:                10.244.120.70:8181
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

The pod selector appears to be generated incorrectly when the service type is LoadBalancer: externalTrafficPolicy is a Service spec field, not a pod label, so it should never appear in the selector.
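
For reference, this is roughly what I would expect the rendered Service to look like when the type is LoadBalancer, with the selector carrying only the pod labels. It is a hand-written sketch based on the describe output above, not taken from the chart's actual template:

    apiVersion: v1
    kind: Service
    metadata:
      name: mdcb-svc-tyk-control-plane-tyk-mdcb
      namespace: tyk
    spec:
      type: LoadBalancer
      # If "Local" is intended, it belongs here at the spec level,
      # not as an entry in the selector.
      externalTrafficPolicy: Local
      selector:
        app.kubernetes.io/instance: tyk-control-plane
        app.kubernetes.io/name: tyk-mdcb
      ports:
        - name: serviceport
          port: 443
          targetPort: 9091
        - name: healthport
          port: 8181
          targetPort: 8181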
