Adding xDS Adapter CFP #14
Won't they need to round trip through an xDS controller / server?
It seems like one of the major advantages of an xDS control plane managing this information is that it's only handling this information, rather than being a generic workload-management API like Kubernetes.
For some cases, I don't think it will ever make sense to put xDS between Cilium and Kubernetes, but for other use cases, particularly endpoints and endpoint grouping (Clusters in xDS, Services in Kubernetes), it seems more straightforward to map these to xDS objects.
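To make that mapping concrete, here's a rough sketch (not from the CFP) of how a Service's ready endpoints, grouped by zone, could be expressed as an EDS ClusterLoadAssignment using the upstream go-control-plane types. The buildCLA helper, the cluster naming, and the zone-keyed input are purely illustrative.

```go
// Illustrative only: translate "Service + ready endpoint IPs, grouped by zone"
// into the EDS resource xDS clients consume. Only the go-control-plane types
// are real; buildCLA and its inputs are made up for this sketch.
package sketch

import (
	corev3 "github.com/envoyproxy/go-control-plane/envoy/config/core/v3"
	endpointv3 "github.com/envoyproxy/go-control-plane/envoy/config/endpoint/v3"
)

func buildCLA(clusterName string, port uint32, ipsByZone map[string][]string) *endpointv3.ClusterLoadAssignment {
	cla := &endpointv3.ClusterLoadAssignment{ClusterName: clusterName}
	for zone, ips := range ipsByZone {
		locality := &endpointv3.LocalityLbEndpoints{
			Locality: &corev3.Locality{Zone: zone}, // mirrors EndpointSlice topology
		}
		for _, ip := range ips {
			locality.LbEndpoints = append(locality.LbEndpoints, &endpointv3.LbEndpoint{
				HostIdentifier: &endpointv3.LbEndpoint_Endpoint{
					Endpoint: &endpointv3.Endpoint{
						Address: &corev3.Address{
							Address: &corev3.Address_SocketAddress{
								SocketAddress: &corev3.SocketAddress{
									Address:       ip,
									PortSpecifier: &corev3.SocketAddress_PortValue{PortValue: port},
								},
							},
						},
					},
				},
			})
		}
		cla.Endpoints = append(cla.Endpoints, locality)
	}
	return cla
}
```

The Service/EndpointSlice model has a fairly direct xDS counterpart; the interesting work is keeping the two in sync as endpoints churn, which is what the rest of this thread gets into.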
I don't understand how we can have an xDS adapter without having an implementation of the server / controller.
I agree that to validate that this works, we'll need an xDS control plane to talk to, which is a not-insignificant engineering problem in itself.
Yeah, that's a good point. What I'd really intended here was for this CFP to be about foundational building blocks/infrastructure, not features. I agree that an xDS control plane needs to be bundled with this in some way and have already added it as phase 3 of this CFP.
Has any public high-level summary been written about this that could be linked here?
Sadly, we don't have anything public yet; we're still in pretty early stages.
Just to be clear, in Cilium, an "xDS adapter" alone doesn't solve this. We will also need changes to Cilium's control plane to propagate its internal state back to the xDS adapter.
+1, this just provides the foundation to solve it. The whole idea of topology-aware routing, or really any form of routing that requires a feedback loop, is deceptively simple until you start to think through this part of it. What xDS gives us here is an established API plus patterns for completing a feedback loop, but like you're saying, we'd still need to connect the dots here.
I don't understand this problem. The Kubernetes API Server is the "control plane" of the cluster; won't we have this same problem for any "control plane" that receives the load reports from Cilium?
This is where the bidirectional nature of xDS comes in a bit more handy - every xDS communication is a message and response pair, unlike the Kubernetes API where each operation is either a write or a read. The design of the Load Reporting Service uses this to its advantage, effectively making the control plane a client instead of the server, since one "give me your load numbers" request will produce a stream of load numbers (that the control plane will simply acknowledge).
That said, I think it's important to remember that actually building these control planes is full of really hard concurrency problems. More on that under the "incremental" section in a few lines.
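For readers less familiar with LRS, here's a minimal control-plane-side sketch of that inversion using the generated go-control-plane types. Everything other than those types (the lrsServer struct, the 10s interval, the logging) is made up for illustration, and real code would also need to register this on a gRPC server and handle errors, node identity, and cluster selection.

```go
// Minimal LRS sketch: the "server" answers the client's single StreamLoadStats
// request with reporting instructions, then keeps reading load reports from the
// same stream. Names other than the generated types are illustrative.
package sketch

import (
	"log"
	"time"

	loadstatsv3 "github.com/envoyproxy/go-control-plane/envoy/service/load_stats/v3"
	"google.golang.org/protobuf/types/known/durationpb"
)

type lrsServer struct{}

// StreamLoadStats would be wired up via RegisterLoadReportingServiceServer on a
// *grpc.Server (omitted here).
func (s *lrsServer) StreamLoadStats(stream loadstatsv3.LoadReportingService_StreamLoadStatsServer) error {
	// The client (e.g. a Cilium agent) opens the stream and sends the initial request.
	if _, err := stream.Recv(); err != nil {
		return err
	}
	// The "response" is really an instruction: report load for all clusters every 10s.
	if err := stream.Send(&loadstatsv3.LoadStatsResponse{
		SendAllClusters:       true,
		LoadReportingInterval: durationpb.New(10 * time.Second),
	}); err != nil {
		return err
	}
	// From here on the roles are effectively reversed: the client streams reports
	// and the control plane just consumes them.
	for {
		req, err := stream.Recv()
		if err != nil {
			return err
		}
		for _, cs := range req.GetClusterStats() {
			log.Printf("load report: cluster=%s dropped=%d", cs.GetClusterName(), cs.GetTotalDroppedRequests())
		}
	}
}
```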
LRS providing feedback on top can only be a net increase in load. I could maybe see an argument there that kube-apiserver protocols are not as well suited to the problem, but before we even consider handling the load feedback, what about the existing load of just configuring K8s Endpoints on all of the cilium-agents?
Perhaps some guiding questions:
Let me preface this by saying that scalability was not my primary goal with this CFP, but I do think it could be a nice coincidental benefit of the approach even outside of load reporting. When we get to load reporting, I can't think of any feasible way to do it with existing k8s constructs, and I'd rather not invent something new here when LRS already exists.
I spent a fair amount of time thinking about the Kubernetes endpoint scalability problem as I was working on the EndpointSlice API, and went as far as scale testing large k8s clusters until they broke to really understand what was happening (relevant slides). This is a relatively unique problem in terms of Kubernetes APIs. Here's what we have to deal with:
Some Kubernetes APIs have to deal with the scale of EndpointSlices but are only consumed in one or two places, i.e. some kind of centralized controller(s). The Pod API is consumed by every Node thanks to Kubelet, but Kubelet is able to filter down to only the Pods that are local to it, meaning that each Pod is distributed to exactly one Node.
Let's consider a rolling update of a Deployment with 100 Pods. Each of the new Pods is going to go through a transition of unready -> ready while the old 100 Pods are transitioning from ready -> unready. Let's say that translates to 400 distinct events to process: 4 transitions (old Pod -> unready, old Pod -> terminated, new Pod -> unready, new Pod -> ready) * 100. In a naive implementation, that would mean the Kubernetes API Server would need to transmit 400 EndpointSlice updates to every node in the cluster. That becomes especially fun when you consider that some providers support 15,000 Nodes per cluster, at which point a single rolling update fans out to 400 * 15,000 = 6,000,000 individual updates.
Fortunately the EndpointSlice controller uses batching to mitigate that to some extent, but hopefully that sets the stage for the unique problem that endpoints pose to the Kubernetes API Server. Like @joestringer mentions above, from the perspective of the API Server, it would be amazing if it didn't need to distribute all of those updates to each individual Node.
As far as LRS specifically goes, +1 to everything @youngnick mentioned above. Load reporting would be especially complex with existing Kubernetes constructs but relatively straightforward with LRS. Everything that goes through the Kubernetes API Server would need to be persisted through writes to etcd. On the other hand, LRS combined with an xDS control plane could enable us to bypass that entirely.
I don't think this necessarily does that. It may have some marginal impact here, but I don't think it will be a huge one. In my opinion, a lot of the value here would be that we'd have a path to moving this load from the API Server to a separate control plane that could be scaled independently and potentially more optimized for this specific purpose. My proposal is not really focused on scalability, but I think it provides the potential for significant improvements here. For example, you might choose to deploy an xDS control plane per-zone as a solution that might improve both availability and scalability.
So in the base case where people just don't use this feature, I think there's no impact. Assuming they do, I think it entirely depends on how xDS control planes are deployed. Using the example above, if there were a separate instance of a control plane in each zone it may improve overall availability. On the other hand, if you're adding a single xDS control plane instance to a cluster that has 3 API Server replicas, you might decrease availability.
I agree that LRS is designed to do exactly this, but implementing incremental xDS is not straightforward. Unless your control plane is very carefully designed, it's nearly impossible (which is why many simpler Envoy control planes don't do it).
To be clear, the hard part is on whoever's building the control plane, not inside the Cilium Agent. But given that we'll need an open-source thing to test with at the very least, the engineering cost of building this shouldn't be underestimated.
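To illustrate why, here's a rough sketch of the per-client bookkeeping a delta/incremental xDS control plane has to keep consistent under concurrent updates. All names are invented for this sketch; none of this is proposed API.

```go
// Illustrative only: the minimum state a delta-xDS control plane tracks per
// connected client, per resource type, per stream. Getting this right under
// concurrent config updates, reconnects, and NACKs is where the difficulty lives.
package sketch

type deltaClientState struct {
	// Resources this client explicitly subscribed to (or a wildcard flag).
	subscriptions map[string]struct{}
	// Last version of each resource the client ACKed; needed to compute the
	// minimal added/changed/removed diff on the next push.
	ackedVersions map[string]string
	// Pushes sent but not yet ACKed/NACKed, keyed by nonce. On a NACK the
	// server must decide whether to retry, roll back, or surface an error.
	pendingByNonce map[string]pendingPush
}

type pendingPush struct {
	added   map[string]string // resource name -> version included in the push
	removed []string          // resource names withdrawn in the push
}
```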
Yes, which is why the scope of the control plane work should be clearly defined and approached in a phased manner. Implementing something for e2e testing is much simpler than building one that's usable in production.
The discussion about not running this in production gives me a bit of concern. How will Cilium users benefit from this functionality if the reference implementation isn't built with production in mind?
To clarify my previous comment, I am not suggesting we should build an experimental control plane. Instead, we should agree on the scope of the work as an initial deliverable. Building a production-level control plane that handles all the edge cases can be challenging. There should definitely be a reference implementation that is usable outside of tests.
I think the minimum bar here in terms of initial development would be an OSS/CNCF reference xDS control plane that is sufficiently reliable to run for e2e testing and development, and at least one control plane that is production-ready. Those will probably be the same project, but in the odd chance we are successful at building a good ecosystem here, it's at least theoretically possible that there will be multiple OSS production-ready options and we won't need the reference/testing implementation to also be production-ready. Our e2e tests here could become something that could be paired with any compatible xDS control plane in the future.
What is it about this centralized xDS control plane that is more efficient?
Probably also worth contrasting with kvstoremesh (see CiliumCon @ KubeCon NA 2023 or https://arthurchiao.art/blog/trip-first-step-towards-cloud-native-security/#222-kvstoremesh).
cilium/cilium#30283 describes the benefits of centralization in a different context. Just wanted to share this for reviewers not aware of the adjacent proposal.
Could you link to the Cilium API this is modeled to target?
This is currently referring to Cilium's Service cache, but I expect that at least some details may change a bit if we're building a common interface around that as part of phase 1: https://github.com/cilium/cilium/blob/0632058f820c05013dfcb010d6bda0911b1269d7/pkg/service/service.go#L73-L97
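As a purely hypothetical illustration of what that phase-1 common interface could look like, something as thin as an observer-style contract between the service cache and the xDS adapter might be enough to start. The names below are invented for this sketch and are not existing Cilium APIs.

```go
// Hypothetical sketch of a phase-1 boundary: the xDS adapter implements this
// observer, and Cilium's service cache (or a wrapper around it) calls it on
// every change. None of these names exist in Cilium today.
package sketch

type ServiceEvent struct {
	Name     string   // e.g. "default/foo"
	Port     uint32   // frontend port
	Backends []string // ready backend IPs
}

type ServiceObserver interface {
	OnServiceUpsert(ServiceEvent) // create or update the corresponding xDS resources
	OnServiceDelete(name string)  // withdraw them
}
```

The adapter would then translate those events into EDS updates along the lines of the ClusterLoadAssignment sketch earlier in the thread.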
It's a bit confusing with so many Service types in the code, but can this be mapped 1:1 to loadbalancer.SVC?