Real IP not forwarded, tcp-services, metallb #9711
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the appropriate label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. |
@todeb the new issue template asks questions, and the answers are used as data for analyzing the issue. Please look at the template for a new issue, answer those questions, and then re-open this issue. Info like the details of the controller installation, including the service, version, and logs, is useful when it is extracted from the live state of the cluster and can be related to the curl or other request sent to the ingress-controller. /remove-kind bug |
@longwuyuan: Closing this issue. In response to this:
What happened: ingress-nginx forwards the controller IP.
What you expected to happen: ingress-nginx forwards the client's real IP.
NGINX Ingress controller version:
Kubernetes version:
Environment:
How to reproduce this issue:
@longwuyuan @k8s-ci-robot Could you /open ? |
/reopen |
/re-open |
@longwuyuan: Reopened this issue. In response to this:
I don't really understand your configuration and output. |
I had even changed the LoadBalancer service type to ClusterIP, so:
and did:
|
and when I execute curl directly against the ClusterIP svc, it shows the client IP correctly.
So when it is not working, the only additional hop is the ingress controller. |
I just replicated the same behavior in minikube:
Curl against the ClusterIP service of the ingress-nginx controller returns the ingress-nginx controller IP.
|
|
I don't know how to use the controller with a service of type ClusterIP, because
a ClusterIP is not reachable from a client outside the cluster.
It would be better to discuss this in the Kubernetes Slack, where there are more
people. You are not showing a client outside the cluster sending a request to
the external IP of the controller's LoadBalancer service, like a real-world
use case. You are not showing the client's IP address from outside the cluster.
I can't see the real live state of the controller's LoadBalancer service with
the key externalTrafficPolicy set to the value Local. I can't see the logs of
your curl in the controller logs. But I have shown all this info in my test.
So wait for an expert to comment and solve your problem, or talk in Slack,
since you say you cannot understand the test I did. Not sure how to proceed.
Hope someone solves your problem soon.
…On Fri, 10 Mar, 2023, 5:30 am todeb, ***@***.***> wrote:
kubectl get no -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
minikube Ready control-plane 12m v1.26.1 192.168.49.2 <none> Ubuntu 20.04.5 LTS 5.4.0-135-generic docker://20.10.23
|
@longwuyuan Are you sure that this is not connected with #9685? BTW, the title you set there is not relevant, IMO. |
@Azbesciak post the output of commands like |
@longwuyuan I just deployed the ClusterIP service to show you that the IP is replaced even when you don't use a LoadBalancer. Here is the installation of metallb:
Here is the upgraded NIC using a LoadBalancer:
Here is the result of curl from the host running minikube:
IP of the host running minikube:
It is reproducible anytime, and I gave you all the commands to reproduce it. /kind bug |
You can also check the documentation, which says ClusterIP traffic is not source-NATed, same as a LoadBalancer with service.spec.externalTrafficPolicy=Local. So to me it does look like a bug in NIC that it replaces the original source IP with its own. |
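For reference, a minimal sketch of what a controller Service with externalTrafficPolicy: Local could look like. The Service name, namespace, port, and selector below are illustrative, not taken from this thread's cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # illustrative name
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  # Local preserves the original client source IP by skipping the
  # extra SNAT hop that kube-proxy performs with the default Cluster policy.
  externalTrafficPolicy: Local
  ports:
    - name: tcp-8080
      port: 8080
      targetPort: 8080
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
```

Note that with Local, only nodes actually running a controller pod answer for the external IP, which is why this setting alone preserves the source IP up to the controller, but not necessarily past it.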
You can deploy source-ip-app in the same cluster as I do and execute curl from it: Or you have a 2nd option: deploy the loadbalancer as in the example above and then curl its IP and the port that is pointing in NIC to source-ip-app: |
After your repeated update, you still have not shown the data that traces the
curl request. The first data needed is the controller logs.
…On Fri, 10 Mar, 2023, 4:55 pm todeb, ***@***.***> wrote:
clusterIP is not reachable from a client outside the cluster.
You can deploy source-ip-app as I do and execute curl from it:
kubectl exec deployment/source-ip-app -n todetest -- curl
nginx-ingress-test-ingress-nginx-controller:8080
Or you have a 2nd option with deploying loadbalancer as in above example
and then curl its IP and port that are porinting in NIC to source-ip-app:
curl 192.168.49.10:8080
|
What do you mean by that? I can run any command if you provide it.
But they don't seem to be propagated/forwarded to the source-ip-app. |
I think that is because TCP connections are not implemented like HTTP/HTTPS: the ingress-nginx controller just opens a port to the backend. If it were a feature-rich LB like AWS's, there might be options/flags in the LB to retain the real_client_ip all the way to the backend pod. Also, I think a developer expert has to comment on whether other well-known options like X-Forwarded-For and Proxy Protocol work with TCP. |
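For TCP streams specifically, the ingress-nginx tcp-services ConfigMap format documented in the exposing-tcp-udp-services user guide allows a PROXY suffix to enable Proxy Protocol per exposed port. A sketch, where the namespace todetest and service source-ip-app follow this thread's example and the service port 80 is an assumption:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # Format: <external port>: "<namespace>/<service>:<service port>[:[PROXY]:[PROXY]]"
  # The first optional PROXY decodes Proxy Protocol on the client side;
  # the second encodes Proxy Protocol toward the backend.
  # Here: no decode from the client, encode toward the backend:
  "8080": "todetest/source-ip-app:80::PROXY"
```

With the trailing PROXY, the backend must itself be configured to accept Proxy Protocol, otherwise the connection will appear garbled to it.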
If I'm doing just a telnet to the LB IP:
I also see that the connection is established from the nginx ingress controller IP and not the real client IP.
OK, so waiting for a developer comment on that. |
@todeb I am not an nginx master, but did you look into the generated nginx config inside the controller? Maybe you will spot something there. |
What I can see in the config files:
And this is probably the function being executed, although I only suspect so; I do not know how those .lua scripts work.
|
@todeb, your expectation is well understood, but kindly do not mark this as a bug. The reason is that there is no data here showing that the client information will traverse to the backend pod under the current implementation of TCP/UDP connections, from a client outside the cluster to a backend pod inside the cluster. There are 2 simple facts that matter here. (1) When the request is over HTTP/HTTPS, if the client information is in a header of the client request, then those headers can be retained by terminating HTTP/HTTPS on the controller and sending the connection directly to the backend pod, without the extra hops that kube-proxy conducts (externalTrafficPolicy: Local). (2) If proxy-protocol https://kubernetes.github.io/ingress-nginx/user-guide/miscellaneous/#proxy-protocol is enabled on all the proxies between the client and the backend pod, then client information like the source IP address is not lost as the packet travels over the different hops/proxies. In this issue, I have seen you post And on top of that, there is no data here showing that you enabled proxy-protocol on all the proxies/hops. Metallb is not even capable of enabling proxy-protocol, so as far as my assessment is concerned: several users get the real client IP address by enabling proxy-protocol, so there is no problem with the controller code in that regard. If getting the real client IP address had been broken by a recent release of the controller, then many, many users would be reporting it. |
/remove-kind bug |
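The proxy-protocol route described above is switched on via the controller's main ConfigMap using the documented use-proxy-protocol key. A minimal sketch; the ConfigMap name and namespace assume a default Helm install and may differ:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # default Helm chart name; may differ
  namespace: ingress-nginx
data:
  # Documented ingress-nginx option: expect Proxy Protocol on incoming
  # connections. Every proxy in front of the controller (the external LB)
  # must then also send Proxy Protocol, or connections will fail to parse.
  use-proxy-protocol: "true"
```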
I'm doing curl via HTTP and telnet, although I did it through port 8080, which is configured as a TCP stream, not an HTTP proxy, on Ingress NGINX.
There is no point in a metallb issue here, as I'm using externalTrafficPolicy=Local, which naturally preserves the original IP. So from what you are saying, we should use the proxy protocol, and then the backend app has to support it; there is no default behavior of preserving the real IP on ingress. Just to be sure: it is just a setting? |
It seems that when using the proxy protocol, the client IP is kept.
And the backend client config (nginx example):
|
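For reference, a minimal sketch of what such a backend nginx config could look like when the upstream (here the ingress controller) sends Proxy Protocol; the listen port and the trusted CIDR are illustrative assumptions, not values from this thread:

```nginx
server {
    # accept Proxy Protocol on incoming connections
    listen 80 proxy_protocol;

    # trust Proxy Protocol addresses only from the controller's range
    set_real_ip_from 10.42.0.0/16;   # illustrative pod CIDR
    # take the client address from the Proxy Protocol header
    real_ip_header proxy_protocol;

    location / {
        # $remote_addr now reflects the original client, not the proxy
        return 200 "client_address=$remote_addr\n";
    }
}
```

The set_real_ip_from and real_ip_header directives come from nginx's realip module; without them the backend still sees the proxy's address in $remote_addr even though the Proxy Protocol header carries the client's.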
Environment:
Metallb: image: quay.io/metallb/controller:v0.12.1
NIC:
Installation:
NIC:
TESTAPP:
Deployed:
Testing:
curl http://10.92.3.42:8080
Result:
Or from:
kubectl logs -n todetest deployment/source-ip-app
10.42.6.221 - - [09/Mar/2023:12:44:28 +0000] "GET / HTTP/1.1" 200 388 "-" "curl/7.58.0"
Expected result:
My IP
Comment:
I see that the returned client_address=10.42.6.221 belongs to nginx-ingress-test-ingress-nginx-controller-4n6cg, although I expected to see the IP of the client from which I sent the request.
I had also tried adding additional parameters, although that did not help either.