Currently, clusters are created with Internet-facing Kubernetes API endpoints, and are configured such that Kubernetes ingress and service endpoints are also created with Internet-facing VIPs. This is done using kube-vip to provision Elastic IPs for control plane and ingress/service use and to manage them with BGP.
While this is perfect for demo/PoC work, to more closely replicate “real-world” deployments, it would be useful to be able to create clusters configured to expose API and/or ingress/service endpoints with Metal-network-internal IP addresses. Based on a conversation with @c0dyhi11 a while back, this may also require moving from an L3 to an L2 model for the internal network.
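For context, a minimal sketch of what a private control-plane VIP might look like with kube-vip in BGP mode. The address 10.64.0.10 and the image tag are placeholders, the BGP peering details are omitted, and the environment variable names should be checked against the kube-vip version actually in use:

```yaml
# Hypothetical kube-vip static pod advertising a Metal-internal VIP over BGP
# instead of a public Elastic IP. All addresses here are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-vip
      image: ghcr.io/kube-vip/kube-vip:v0.4.0
      args: ["manager"]
      securityContext:
        capabilities:
          add: ["NET_ADMIN", "NET_RAW"]
      env:
        - name: vip_arp
          value: "false"        # no ARP/L2 failover; rely on BGP advertisement
        - name: address
          value: "10.64.0.10"   # placeholder Metal-internal VIP for the API server
        - name: port
          value: "6443"
        - name: cp_enable
          value: "true"         # manage the control-plane VIP
        - name: bgp_enable
          value: "true"         # advertise the VIP to the upstream BGP peer
```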
> expose API and/or ingress/service endpoints with Metal-network-internal IP addresses
If I am understanding this correctly, you would like the CCM to assign additional Layer3-managed private addresses (10.x.x.x), like those already in use as Node IPs, so that kube-vip can route those addresses throughout the cluster in the same way the public addresses are managed and routed.
I think kube-vip may already be able to advertise any private address over BGP, so perhaps we would get that for free.
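If that holds, the user-facing side might be as simple as a Service that asks for an address from a private pool. The sketch below assumes a placeholder VIP 10.64.0.20 and uses the plain upstream `loadBalancerIP` field; how the CCM would allocate and track such private addresses is the part this issue would actually need to define.

```yaml
# Hypothetical Service exposed on a Metal-internal VIP rather than a public EIP.
# The address, namespace, and selector are placeholders for illustration only.
apiVersion: v1
kind: Service
metadata:
  name: internal-ingress
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  loadBalancerIP: 10.64.0.20   # placeholder private VIP advertised by kube-vip over BGP
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
    - name: https
      port: 443
      targetPort: 443
```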
> this may also require moving from an L3 to an L2 model for the internal network.
v0.1.0 relied on Layer2 networking before this project was adapted for Layer3 mode.
Equinix Metal devices may be provisioned in so-called Hybrid and Hybrid-bonded states, where either one interface of the bonded pair or the bond as a whole (respectively) is Layer2 capable alongside Layer3. (Hybrid-bonded mode is not available in all facilities.)
We could take advantage of the Hybrid mode across all nodes or we could introduce an option that would use Layer2 on compute nodes. Either of these could be explored more in a separate issue. If I captured your interest correctly in the first point, I don't think we would need to delve into Layer2 networking just yet.