Add Nginx Reverse proxy #21
base: master
Conversation
Hi, I have two questions about the Nginx container:
Is VIRTUAL_HOST used to set a container's IP? In my tests, when running "docker-compose up", all the containers' IP addresses change and related containers stop working. Thanks
I don't think there is anything to set in the reverse proxy for SSO. For SSO, all that matters is that the URL is accessible for login and for obtaining the JWT token; it has to be handled at the application level. There are solutions where the web server performs authentication with the given web token, but I think that's out of scope for this solution.
Not at all: only the port has to be accessible on Docker's given network, so it depends on the docker-compose settings. The VIRTUAL_HOST variable is a marker for the nginx-gen container to pick up; it generates a configuration in which nginx maps the exposed ports to the given virtual domain.
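As a minimal sketch (the service name, port, and domain are illustrative, not taken from the PR), a container opts into the generated proxy configuration simply by defining VIRTUAL_HOST:

```yaml
version: "3"

services:
  nodered:
    image: nodered/node-red
    environment:
      # Marker picked up by the nginx-gen container; nginx will route
      # requests for this domain to the container's exposed port.
      - VIRTUAL_HOST=nodered.192.168.1.10.xip.io
    expose:
      - "1880"
```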
This is good, I was actually looking into this last year. I didn't get anywhere near as far as you have, though. My only question is: why use xip.io? We want to avoid relying on internet services (after the initial setup, anyway). When I did it, I used PiHole as the DNS server and placed my Pi's IP into PiHole's hosts file so that…
For me it was the simplest, because I'm using this only on the local network, and only to avoid having to memorize container ports. I don't use PiHole because I have 5 MikroTik routers/APs, so DNS management has to live there. I have a suggestion: the default configuration should depend on the container settings. For example, if somebody uses PiHole, then in that case DNS is provided by it and all env files can be generated accordingly. Or we can provide helper scripts that modify the services' (generated) configs with sed, which can be called from setup.
You caught the point! I would like to be able to access all the other containers after a Single Sign-On / token authentication, to have something more secure than a separate user/password for each container.
Ok, things are becoming clear... it's good to have this automatic generation.
From my point of view, I prefer having a static IP address set in the compose file. I know my way is not so "plug container and play", because some additional configuration needs to be done in the generated docker-compose.yml before running it, but it is the best I could do (starting from 0% knowledge of Docker ;) ). Now, using this nginx container, will I have issues with this kind of network setup? Thanks,
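For reference, the static-IP setup described above can be expressed in compose roughly like this (the network name, subnet, and address are placeholders, not the asker's actual values):

```yaml
version: "3"

services:
  nodered:
    image: nodered/node-red
    networks:
      iotstack:
        # Pin the container to a fixed address on the user-defined network,
        # so it survives "docker-compose up" without changing.
        ipv4_address: 172.20.0.10

networks:
  iotstack:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/24
```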
Okay. Because we have a template in nginx-gen which can be overridden, it may be possible for the SSO configuration to be part of the virtual host generation process. But to achieve that, I think we have to create another feature request. In the last month I've done OpenID and SAML SSO integration with Keycloak (same goal as Authelia), so I think I can check it, but not this week.
As I understand it (maybe I'm wrong), you are using the 172.x.x.x network addresses on your network to access the Docker services. I don't think that's best practice. If you want the DuckDNS and OpenVPN ports to be constantly accessible, I recommend using the host machine's IP. If you use 'host' networking mode for OpenVPN and PiHole in docker-compose and expose the required ports, the Docker network can stay dynamic, and communication between Docker instances can use the internal domain name, which is the container name. We use this approach in production environments without any problem.
Honestly, I'm using the bridged network because the first time I opened Portainer, all the IOTstack containers were placed in the bridge network.
I've done some reading and now it's clear: by default the bridge network is used, which is why I initially found the containers attached to the bridge network and why I kept it this way. Then I moved to a user-defined bridge network, statically setting all the IPs to resolve my issue as described above. As stated in Docker's manual:
This allows containers to automatically resolve the IPs of other containers in the same user-defined bridge network, as shown in their tutorial. With this mechanism (a user-defined bridge network), you could remove the dependency on the xip.io service, keeping only the container name.
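For illustration (the network and service names are hypothetical), a user-defined bridge lets one service reach another by container name, with no static IPs:

```yaml
version: "3"

services:
  nodered:
    image: nodered/node-red
    networks: [iotstack]

  influxdb:
    image: influxdb
    networks: [iotstack]
    # nodered can reach this service at http://influxdb:8086 via
    # Docker's embedded DNS on the user-defined bridge.

networks:
  iotstack:
    driver: bridge
```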
By the way, you can make hosts appear in a Docker instance's hosts file by putting this in the compose file:
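A minimal sketch of such an entry, using compose's `extra_hosts` key (the hostnames and IPs below are placeholders):

```yaml
services:
  nodered:
    image: nodered/node-red
    extra_hosts:
      # Each entry becomes a line in the container's /etc/hosts.
      - "pihole.local:192.168.1.2"
      - "homeassistant.local:192.168.1.3"
```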
When I set PiHole as my DNS, I was able to…
We have to differentiate name resolution inside the bridge from the outside world. The xip.io resolution is needed to resolve all subdomains to the same IP; the requested domain still contains the virtual domain, so nginx can decide which host to use inside the bridge. This is important because I (and many other users) don't use IOTstack as a DNS server, so the whole xip.io (or other domain) resolution is required to differentiate the virtual domains on the outside. xip.io can be avoided when the local DNS server can resolve the host machine's (the Pi's) domain name. I chose xip.io because it requires no configuration on any DNS server. Maybe we should add an option in setup.sh where the domain can be replaced during generation, similar to how TZ is set.
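The trick xip.io relies on can be illustrated with a tiny parser (illustrative Python, not part of the PR): any name of the form `<service>.<A.B.C.D>.xip.io` resolves to `A.B.C.D`, so every subdomain reaches the same host, while the full hostname still carries the service label that nginx matches on.

```python
import re

def split_xip_host(hostname):
    """Split a *.xip.io hostname into (service, ip).

    xip.io answers any <anything>.<ip>.xip.io query with <ip>, so all
    service subdomains point at the same machine; nginx then uses the
    Host header to pick the right virtual host inside the bridge.
    """
    m = re.fullmatch(
        r"(?P<service>[^.]+)\.(?P<ip>(?:\d{1,3}\.){3}\d{1,3})\.xip\.io",
        hostname,
    )
    if not m:
        raise ValueError(f"not a service.xip.io name: {hostname}")
    return m.group("service"), m.group("ip")

print(split_xip_host("nodered.192.168.1.10.xip.io"))
# → ('nodered', '192.168.1.10')
```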
Thank you both for this discussion. I'll have to study extra_hosts, which seems interesting, because I'd like to keep bridge networking on my RPi instead of moving to host networking. About nginx, if I use it on my RPi in the future, I'll ask you for some help :)
Hi, thanks for your time!
There are some differences in how the containers are initialized. Some of them do not work with Docker's default networking: their ports may be mapped on the host directly, which cannot be mixed with other networks, and there are UDP-port-based services. This can lead to situations where IP firewall and routing rules have to be changed on the host machine, which is not a trivial task, and we would like to avoid that type of complexity in IOTstack. When I have time I will investigate a solution that can be applied to all of our existing containers. I think nginx will be replaced; instead, a load balancer (like Traefik or HAProxy) will be used, which can handle UDP/TCP ports as well, not only HTTP.
I noticed in file…
Hi, any update on this? Seems you have some git conflicts too. |
Hi everyone, |
Add support for an nginx reverse proxy. It tracks all containers where the VIRTUAL_HOST env is defined and
automatically generates the nginx proxy config for them.
We use separate containers, as described in https://github.com/nginx-proxy/nginx-proxy.
The HTTPS implementation follows what is documented here: https://medium.com/@francoisromain/host-multiple-websites-with-https-inside-docker-containers-on-a-single-server-18467484ab95
By default, xip.io is used to provide subdomains for an IP address. So, for example, the nodered service can be accessed at:
nodered.X.X.X.X.xip.io
where X.X.X.X is the IP address of the IOTstack host.
Any other domain can be used; in that case, please replace the VIRTUAL_HOST env variable of the given instance with the corresponding value.
HTTPS can be used, but the xip.io method is not suitable for it. As the linked article describes, any container can be exposed with the HTTPS protocol, but a real domain has to be defined.
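A sketch of an HTTPS-enabled service following the linked article's pattern (the domain and email are placeholders; this assumes the letsencrypt-nginx-proxy-companion container from that article is also running):

```yaml
services:
  nodered:
    image: nodered/node-red
    environment:
      - VIRTUAL_HOST=nodered.example.com
      # Picked up by the letsencrypt companion container, which requests
      # and renews a certificate for this domain.
      - LETSENCRYPT_HOST=nodered.example.com
      - LETSENCRYPT_EMAIL=admin@example.com
    expose:
      - "1880"
```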