[Idea/Suggestion] Use private IP with rancher in different network #45
Comments
Hi, sorry if my questions are silly, but I have zero experience with Rancher. So you are basically suggesting that when … That being said, I have no idea whether … So the question really is: is this within scope for a docker-machine driver, or is this something that should happen after Docker provisioning? @mxschmitt you are the Rancher expert here, any thoughts?
A new flag would make more sense in this case, yes. I forgot about port 2376, but that port could be open just like 22 etc.; since it's protected with cert authentication it shouldn't be a big security concern. Rancher/Kubernetes would then use the private IP for etcd/controlplane communication and other services, which would make firewall management easier. There are some workarounds right now, namely the one done here, but in that case the traffic still goes through the public interface.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.
Hi, any update on this? I opened a related issue about this in mxschmitt/ui-driver-hetzner#104.
@boris-savic Hi, did you find a workaround? Thanks
I think this is the relevant line: docker-machine-driver-hetzner/driver.go, line 353 (commit 16ecd2b).
Why does it use the private IP if the private network is enabled? It looks like changing that line to use the public IP would be enough? @JonasProgrammer @mxschmitt
Unfortunately I did not.
OK. To be honest I am not familiar with Go, but I am going to fork and try changing that line if the flag …
Hi, as stated before I'm still unsure whether this is something the lower-level docker provisioner (i.e. …) should handle. As @boris-savic suggested, perhaps adding a flag to force only …
Hi @JonasProgrammer, what are the implications if we return the public IP on that line, only if ForcePublicSSH or some such flag is set to true?
If the private IP is returned there, Rancher will connect to the nodes using the private IP, meaning that Rancher and all the clusters need to be in the same private network and thus the same project. If we just return the public IP there, I think the issue is solved.
The IPAddress field is not part of the Hetzner-specific driver code, but of the base code for all docker-machine drivers. Existing APIs, such as … The author of the PR that introduced the line in question originally intended the flag to be used so you have a 'provisioning machine' within the network, IIRC, so yes, even returning it as the SSH host to connect to makes sense for that use case. IMHO just having a flag to override the SSH host is the way to go.
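For context, here is a paraphrased excerpt (from memory, not a verbatim copy) of the address-related methods every docker-machine driver implements in the shared libmachine code; these are the hooks the thread keeps coming back to:

```go
// Excerpt of the generic docker-machine driver interface (paraphrased);
// only the address-related methods are shown.
type Driver interface {
	// GetIP returns the address docker-machine/Rancher uses to reach the node.
	GetIP() (string, error)
	// GetSSHHostname returns the host used for SSH during provisioning.
	GetSSHHostname() (string, error)
	// GetURL returns the Docker daemon endpoint, e.g. tcp://<ip>:2376.
	GetURL() (string, error)
}
```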
I've added a flag and am doing this:

```go
func (d *Driver) GetSSHHostname() (string, error) {
	if d.ForcePublicSSH {
		return d.PublicIPAddress, nil
	} else {
		return d.GetIP()
	}
}
```

Would this be OK?
I have added the public IP property too. I am new to Go, so please let me know if there is a more elegant way.
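As a rough illustration of how such a flag could be plumbed through docker-machine's flag handling: a minimal sketch, assuming the flag name `hetzner-force-public-ssh` and the `ForcePublicSSH`/`PublicIPAddress` fields from this thread, which are not the driver's actual API:

```go
package hetzner

import (
	"github.com/docker/machine/libmachine/drivers"
	"github.com/docker/machine/libmachine/mcnflag"
)

// Driver is trimmed to the fields relevant for this sketch.
type Driver struct {
	*drivers.BaseDriver
	ForcePublicSSH  bool   // hypothetical flag discussed in this thread
	PublicIPAddress string // hypothetical field holding the server's public IPv4
}

// GetCreateFlags exposes the new boolean as a CLI flag / env var.
func (d *Driver) GetCreateFlags() []mcnflag.Flag {
	return []mcnflag.Flag{
		mcnflag.BoolFlag{
			Name:   "hetzner-force-public-ssh",
			EnvVar: "HETZNER_FORCE_PUBLIC_SSH",
			Usage:  "Always use the public IP for SSH, even with a private network attached",
		},
		// ...the driver's existing flags would stay here
	}
}

// SetConfigFromFlags copies the parsed value onto the driver instance.
func (d *Driver) SetConfigFromFlags(opts drivers.DriverOptions) error {
	d.ForcePublicSSH = opts.Bool("hetzner-force-public-ssh")
	return nil
}
```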
Uhm, I made that change and was going to test it that way, but I just checked the code at https://github.com/mxschmitt/ui-driver-hetzner and I don't understand how it works, because it doesn't seem to set flags for the driver... So how/where does this happen? 🤔
The values here https://github.com/mxschmitt/ui-driver-hetzner/blob/7932c861aeaa7ded4873dce3ba0c323afc7662dc/component/component.js#L48-L55 get converted to kebab case and then passed to the driver as CLI parameters. You want to add a new text field?
Hi @mxschmitt, thanks for the clarification. Those variables seemed to be in a different format, so I was confused :) I just need to pass a boolean flag "forcePublicSSH" according to the changes I want to test in the driver. How should I name the variable then, "forcePublicSSH" or "forcePublicSsh"?
Update: I first changed the driver so it only uses the public IP for the SSH hostname, as you suggested. It worked in the beginning when creating the cluster, but then it failed because something (I think Rancher) was trying to connect to port 2376 on the private IPs. Just to try, I changed the code to always return the public IP of the machines, and everything worked just fine. The cluster was deployed with Rancher in a different project, and I could configure Kubernetes to use the private interface for the traffic between nodes only. So... is it possible to ensure that the connections to port 2376 are made to the public IP (when the aforementioned flag is set)? If yes, how? That would fix the remaining issue. Thanks @JonasProgrammer @mxschmitt
I gave up because I want to use the node driver now. I would have preferred to keep things in separate projects, but I can live with a single project for now. @LKaemmerling also recommends this... I set up Rancher and a cluster with the docker driver and the node driver as they are now, and everything seems to work.
It seems that we have to return the public IP in GetIP and GetURL when we set the flag. https://github.com/rancher/machine/blob/master/libmachine/drivers/drivers.go
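A minimal sketch of what that could look like, assuming the field and flag names used earlier in this thread (they are not the driver's real identifiers); the point is that if GetURL builds the daemon endpoint from GetIP, forcing the public IP in GetIP also moves the port 2376 connections to the public address:

```go
package hetzner

import (
	"fmt"
	"net"
)

// Hypothetical fields for illustration only.
type Driver struct {
	ForcePublicSSH    bool
	UsePrivateNetwork bool
	PublicIPAddress   string
	PrivateIPAddress  string
}

// GetIP: with the flag set, always hand out the public address so Rancher
// reaches the node from outside the private network.
func (d *Driver) GetIP() (string, error) {
	if d.ForcePublicSSH || !d.UsePrivateNetwork {
		return d.PublicIPAddress, nil
	}
	return d.PrivateIPAddress, nil
}

// GetURL: docker-machine drivers commonly build the Docker daemon endpoint
// from GetIP on port 2376, which is why the 2376 connections mentioned above
// follow whatever address GetIP returns.
func (d *Driver) GetURL() (string, error) {
	ip, err := d.GetIP()
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("tcp://%s", net.JoinHostPort(ip, "2376")), nil
}
```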
Together with: https://github.com/hetznercloud/hcloud-cloud-controller-manager
Finally, everyone, I think I found a solution that works for the use case we all want: use the external IP for Rancher communication while using the private network and the internal IP for in-cluster communication. Set up a node template with a private network, but don't set the private network as first. Then set up a cluster using the Hetzner cloud controller and define the network to be used in the secret. I added a pull request for this use case. You don't have to use the pod network feature of the cloud controller in that case, but it will set the external and internal IPs of the nodes correctly. @vitobotta I hope this is what you were looking for. This way we can use a central Rancher instance that talks to the public IPs, while the cluster uses the Hetzner network internally. Please correct me if I am wrong; I am exhausted from reading shitty docs and spaghetti open source code all weekend. But thank god for open source.
I wrote down my setup here: https://tech.mecodia.cloud/2021/01/12/running-a-tightly-integrated-hetzner-cloud-kubernetes-with-rancher/
Hi,
looking at the code and behaviour of the driver, I have some suggestions that would, in my opinion, make it a little bit more user friendly.
What would be the end result?
When deploying a new cluster, the user can attach a private network and select that they want to use that network for inter-cluster communication (for etcd, controlplane, network overlay, etc.). This would mean that in cloud-init the user could simply set up the firewall to allow all traffic on 10.0.0.0/8 and only expose ports 22, 80, 443 and 6443 to the outside world.
When setting up the node, Rancher would still SSH to the node via the public IP.
How?
Based on the docs, for Debian/Ubuntu the first private network interface will be attached as ens10, the second as ens11, etc. RHEL distributions will use eth1, eth2, ...
When the user selects that they want to use the private IP, this IP should only be used for nodes in that cluster - SSH should still be done via the public IP. This would require changing getSSHHostname to always return the public IP - I think; unfortunately I don't have any experience with Go, so I could be wrong here.
This should enable Rancher to SSH into the machine via the public IP, and etcd/controlplane inside the new cluster to use the private IP.
To enforce that all internal traffic goes via the internal IP in the new Kubernetes cluster (I think this is part of the UI plugin), the cloud.yaml config should replace … with: …