
Optionally support Metal-network-internal API and ingress/service endpoints #51

Open · henrybell opened this issue Mar 9, 2021 · 1 comment
Labels
enhancement New feature or request

Comments

@henrybell (Collaborator)

Currently, clusters are created with Internet-facing Kubernetes API endpoints, and are configured such that Kubernetes ingress and service endpoints are also created with Internet-facing VIPs. This is done using kube-vip to provision Elastic IPs for control plane and ingress/service use, and managing them with BGP.

While this is perfect for demo/PoC work, to more closely replicate “real-world” deployments, it would be useful to be able to create clusters configured to expose API and/or ingress/service endpoints with Metal-network-internal IP addresses. Based on a conversation with @c0dyhi11 a while back, this may also require moving from an L3 to an L2 model for the internal network.

@displague added the enhancement (New feature or request) label on Mar 10, 2021
@displague (Member)

> expose API and/or ingress/service endpoints with Metal-network-internal IP addresses

If I am understanding this correctly, you would like the CCM to assign additional Layer3-managed private addresses (10.x.x.x), like those already in use as Node IPs, so that kube-vip can route those addresses throughout the cluster in the same way public addresses are managed and routed.

I imagine cloud-provider-equinix-metal would need to accept new hints in order to provision private addresses (see kubernetes-sigs/cloud-provider-equinix-metal#44).

I think kube-vip may be ready to route any private address over BGP, so perhaps we would get that for free.

> this may also require moving from an L3 to an L2 model for the internal network.

v0.1.0 of this project relied on Layer2 networking before it was adapted for Layer3 mode.

Equinix Metal devices may be provisioned in so-called Hybrid and Hybrid-bonded modes: in Hybrid mode, one interface of the bonding pair is Layer2 and the other Layer3, while in Hybrid-bonded mode the bonded pair carries both Layer2 and Layer3. (Hybrid-bonded mode is not available in all facilities.)

We could take advantage of Hybrid mode across all nodes, or we could introduce an option that would use Layer2 on compute nodes only. Either of these could be explored further in a separate issue. If I've captured your intent correctly in the first point, I don't think we need to delve into Layer2 networking just yet.
