
limit bgp announcements #3211

Closed
patriziobassi opened this issue Sep 13, 2023 · 6 comments · Fixed by #3282
Labels
feature New network feature

Comments

@patriziobassi

patriziobassi commented Sep 13, 2023

Expected Behavior

each worker announces only its hosted pods

Actual Behavior

In a Layer 3 scenario where workers are deployed on different L3 networks, enabling the BGP speakers causes every speaker to announce all nets/pods, with no way to filter by availability zone or by the pods a worker actually hosts.

This creates problems for upstream connectivity because of asymmetric routing.

Steps to Reproduce the Problem

  1. create a cluster with workers deployed in different AZs (and, thus, on different L3 networks)
  2. create an overlay network with some pods
  3. label workers and network with bgp=on
  4. check bgp announcements
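Step 3 above can be sketched as a Subnet manifest. This is only a hedged sketch: the `ovn.kubernetes.io/bgp` annotation is the mechanism described in the kube-ovn BGP docs, and the subnet name and CIDR are illustrative:

```yaml
# Sketch only: enable BGP announcement for an overlay subnet in kube-ovn.
# The annotation key follows the kube-ovn BGP docs; the name and CIDR
# below are made up for this example.
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: overlay-demo
  annotations:
    ovn.kubernetes.io/bgp: "true"
spec:
  protocol: IPv4
  cidrBlock: 10.16.0.0/16
```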

Additional Info

  • Kubernetes version: 1.28

  • kube-ovn version: 1.11.10

@oilbeater
Collaborator

@KillMaster9 could you please take a look at this and #3212

@patriziobassi
Author

patriziobassi commented Sep 14, 2023

Hi, as an example, Calico uses the blockSize parameter so that it does not announce individual /32 addresses but aggregates them into /26 prefixes by default: https://docs.tigera.io/calico/latest/reference/resources/ippool

This optimization may affect the IPAM functionality too.
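For comparison, a sketch of the Calico IPPool resource the link describes; `blockSize: 26` makes Calico allocate per-node /26 blocks, which BIRD can then advertise as aggregates instead of individual /32s (the CIDR and name here are illustrative):

```yaml
# Sketch of a Calico IPPool with blockSize, per the linked Calico docs.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool   # illustrative name
spec:
  cidr: 192.168.0.0/16
  blockSize: 26        # per-node allocation block; advertised as an aggregate
  natOutgoing: true
```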

@KillMaster9
Contributor

> @KillMaster9 could you please take a look at this and #3212

OK!

@KillMaster9
Contributor

> Hi, as an example, Calico uses the blockSize parameter so that it does not announce individual /32 addresses but aggregates them into /26 prefixes by default: https://docs.tigera.io/calico/latest/reference/resources/ippool
>
> This optimization may affect the IPAM functionality too.

  1. Yes, Calico's IPAM mechanism divides network segments per node, and route aggregation is performed when Calico's BIRD advertises routes to the outside world.
  2. The IPAM mechanism of kube-ovn is different from Calico's: it is decentralized, so the same per-node aggregation does not apply directly.
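The aggregation idea in point 1 can be illustrated with Python's standard `ipaddress` module: contiguous /32 pod routes allocated from one per-node block collapse into a single covering prefix (the addresses are illustrative):

```python
import ipaddress

# Four contiguous /32 pod routes, as a per-node IPAM block might allocate them.
pod_routes = [ipaddress.ip_network(f"10.16.0.{i}/32") for i in range(4)]

# collapse_addresses merges adjacent/overlapping networks into covering
# prefixes — the essence of the route aggregation BIRD performs for Calico.
aggregated = list(ipaddress.collapse_addresses(pod_routes))
print(aggregated)  # [IPv4Network('10.16.0.0/30')]
```

With a decentralized IPAM, pod addresses on one node need not be contiguous, so no such covering prefix exists to announce.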

@KillMaster9
Contributor

Similar to a Kubernetes Service, we could add a TrafficPolicy attribute. If it is Local, only the Pod resources on the local node are published; if it is Cluster, all Pod resources are published, with Cluster as the default. MetalLB has also implemented this kind of option. @oilbeater
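The Service-side pattern being borrowed here already exists in Kubernetes as the `internalTrafficPolicy`/`externalTrafficPolicy` Service fields; a minimal sketch (this is not the proposed per-subnet BGP policy itself, just the existing field the analogy refers to, with illustrative names):

```yaml
# Existing Kubernetes Service traffic-policy field that the proposal mirrors.
# "Local" restricts traffic to endpoints on the receiving node; the default
# is "Cluster".
apiVersion: v1
kind: Service
metadata:
  name: demo-svc          # illustrative name
spec:
  selector:
    app: demo
  ports:
    - port: 80
  internalTrafficPolicy: Local
```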

@patriziobassi
Author

patriziobassi commented Sep 15, 2023

Hi @KillMaster9 ,

Can you please provide an example of how to set this filtering? I can give it a try now.

I could only find https://kubernetes.io/docs/concepts/services-networking/service-traffic-policy/, which refers to the Service definition, not Pods.

Thank you

@zhangzujian zhangzujian added the feature New network feature label Oct 7, 2023
@zhangzujian zhangzujian linked a pull request Oct 7, 2023 that will close this issue