[DEV] Add CoreDHCP to Helm #86
I'm not sure how much we considered the needs of the DHCP server for the original dnsmasq Deployment. It was running, but IDK if we ever had a proof of concept for anything talking to it.

The DHCP server will handle requests for hosts outside the Kubernetes network. Since DHCP discovery relies on broadcast, normal broadcast delivery won't reach a Pod inside the cluster, and we'd need to forward traffic to it. This is apparently how CSM handles DHCP also--it has a Kea DHCP instance exposed via a LoadBalancer, with metallb BGP peering to node networks and forwarding rules to the LB address on the node network (see CSM's PXE troubleshooting and DHCP troubleshooting guides). I'm unsure where all of that gets configured for full CSM setups, but I found a basic example of minimal configuration for such a setup.

I don't think there's any way to handle dynamic population of the […]. We oddly had an existing […]
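For concreteness, a minimal sketch of that pattern with current metallb CRDs; every address, ASN, and name below is a placeholder I've made up, not anything taken from CSM:

```yaml
# Placeholder pool/peer values -- adjust to the node network in question.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: dhcp-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.0.0.100/32          # the LB address DHCP relays would forward to
---
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: node-network-router
  namespace: metallb-system
spec:
  myASN: 64500
  peerASN: 64501
  peerAddress: 10.0.0.1      # router on the node network
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: dhcp-adv
  namespace: metallb-system
spec:
  ipAddressPools:
    - dhcp-pool
---
apiVersion: v1
kind: Service
metadata:
  name: coredhcp
spec:
  type: LoadBalancer
  selector:
    app: coredhcp
  ports:
    - name: dhcp
      port: 67
      protocol: UDP
```

The "forwarding rules" half would then be DHCP relay (`ip helper-address` or equivalent) on the node-network routers, pointing broadcast traffic at that LB address.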
#87 provides a basic "it runs!" set of values and templates, with some caveats: […]
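For context on what CoreDHCP wants: it's configured as a plugin chain, so the chart values would presumably carry something like the config.yml below. Addresses and the lease range are illustrative, and I've left the SMD integration out since I don't know its plugin syntax offhand:

```yaml
server4:
  listen:
    - "0.0.0.0:67"
  plugins:
    - lease_time: 3600s
    - server_id: 10.0.0.100      # illustrative server identifier
    - dns: 10.0.0.53             # illustrative resolver
    - router: 10.0.0.1           # illustrative gateway
    - netmask: 255.255.255.0
    - range: leases.txt 10.0.0.10 10.0.0.90 60s   # illustrative dynamic range
```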
Does the latest version, v0.0.5, work for you? I examined it, and the […]
At this point, we have not enabled TLS at the SMD level; we rely on the API gateway for TLS termination and on signed tokens for authN/authZ. That said, we have the ACME pieces running, so we could create and rotate TLS certificates for the microservices using that, or we could protect them with an mTLS service mesh. This matters more for k8s deployments than it does for our Podman deployments.

Do you have a proposal for mTLS within k8s for SMD that doesn't preclude the current operations?
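(For a concrete strawman of what "doesn't preclude" could look like: assuming an Istio-style mesh, a PERMISSIVE peer authentication policy lets sidecars speak mTLS to each other while still accepting the current plaintext/gateway-terminated traffic. Namespace and labels below are placeholders.)

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: smd
  namespace: openchami        # placeholder namespace
spec:
  selector:
    matchLabels:
      app: smd                # placeholder label
  mtls:
    mode: PERMISSIVE          # mTLS where possible, plaintext still accepted
```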
You're driving at the right stuff here. We may need to explore options outside standard k8s networking to get this working reliably. I've never understood how networking could bring DHCP properly into a remote k8s cluster without complex and unpleasant VLAN incantations. The solution in CSM only works because of direct connections to the worker nodes and plenty of VLAN tagging.
@rainest Ah, I found the issue. We were originally pushing […]. Thanks for reporting the issue!
I will update the quickstart docker-compose recipe to use the correct container. |
The above PR also fixes an issue where 'permission denied' would be seen when binding to port 67. Fixed in coresmd v0.0.6. |
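For reference, I'd expect the relevant quickstart bits to end up looking roughly like this; the image path is my guess at the registry layout, and everything else is illustrative:

```yaml
services:
  coredhcp:
    image: ghcr.io/openchami/coresmd:v0.0.6   # assumed registry path; v0.0.6 per the fix above
    network_mode: host                        # DHCP broadcasts need the host network
    cap_add:
      - NET_BIND_SERVICE                      # lets a non-root process bind privileged port 67
    volumes:
      - ./config.yml:/etc/coredhcp/config.yml:ro   # illustrative config mount
```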
TIL TFTP is one of those archaic protocols that doesn't much care for the realities of NAT and plays fun games with ports. Per RFC 1350, the server sends its responses from an arbitrary ephemeral port (the transfer ID) instead of port 69, so there's no matching entry in the iptables CNI goop, and the responses get dropped. I'm not exactly sure how CSM's existing TFTP handles this, since I don't see much of obvious interest in its Service definition. The BOS diagram doesn't go into NAT, and the PXE overview doesn't cover much of anything aside from the special switch routes.

On my KinD instance, […] is the internal single-Pod IP. metallb is between the client and the Pod, but the problem is after that, within the cluster network, AFAICT. From within the kind container, tcpdump shows inbound requests to the Pod from 10.244.0.1 (IDK enough about the CNI innards to properly say what this is), with attempted outbound traffic on random high ports that goes nowhere: […]

Code state looks like what I'd expect: it successfully reads the file, but the […]
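One workaround that might sidestep this entirely (no claim that it's what CSM does): run the TFTP server with hostNetwork, so replies from ephemeral ports never traverse the CNI NAT. Names and image below are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tftp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tftp
  template:
    metadata:
      labels:
        app: tftp
    spec:
      # With hostNetwork, replies sent from random high ports leave the node
      # directly instead of hitting a conntrack entry that only knows port 69.
      hostNetwork: true
      containers:
        - name: tftp
          image: example.invalid/tftp:latest   # placeholder image
          ports:
            - containerPort: 69
              protocol: UDP
```

The kernel's TFTP conntrack helper (nf_conntrack_tftp) also exists for exactly this port dance, so loading it on the nodes might be another angle, though I haven't tried it here.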
Short Description
Remove the existing dnsmasq Deployment from the chart and replace it with CoreDHCP, for OpenCHAMI/roadmap#50
Definition of Done
Additional context
Ref #78 and #84 for equivalent work on the Podman side.
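For the shape of the change, a rough sketch of the template that would replace the dnsmasq Deployment; all names and values keys here are illustrative, not taken from #87:

```yaml
# templates/coredhcp-deployment.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-coredhcp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coredhcp
  template:
    metadata:
      labels:
        app: coredhcp
    spec:
      containers:
        - name: coredhcp
          image: "{{ .Values.coredhcp.image }}"   # illustrative values key
          ports:
            - containerPort: 67
              protocol: UDP
          securityContext:
            capabilities:
              add: ["NET_BIND_SERVICE"]           # bind port 67 without root
          volumeMounts:
            - name: config
              mountPath: /etc/coredhcp            # config.yml from a ConfigMap
      volumes:
        - name: config
          configMap:
            name: {{ .Release.Name }}-coredhcp
```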