TCP-services and proxy-protocol without LoadBalancer on microk8s - client IP replaced with controller internal IP #9685
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the `triage/accepted` label. |
/remove-kind bug
Show the output of:
```
kubectl get svc,ing -A -o wide
kubectl describe svc -n $ingresscontrollernamespace
```
|
disable/... -> I did it now; I even removed the whole deployment and deployed it again. It did not work. I started with that, and I tried every permutation (on/off) of the options I described. And I had … No, I do not have MetalLB. The microk8s status is below; as I mentioned, the ingress is 100% based on the attached helm chart.
|
And as I mentioned, that IP comes from the ingress controller; please see the image I attached, it is inside it. The log on the left is from my app. I also changed it back to …
No difference |
BTW, that is the nginx logging pattern, in the mentioned service
|
The ingress-controller status is pending, so none of your curl/test data is valid. Please fix that and then test.
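(For readers on bare metal: a `LoadBalancer` service stays pending until some load-balancer implementation, commonly MetalLB, is installed; on microk8s that is `microk8s enable metallb:<ip-range>`. A minimal sketch, assuming MetalLB v0.13+ CRD-based configuration; the address range reuses this thread's 10.20.18.x network and is only an example:)
```yaml
# Assumption: MetalLB v0.13+ (CRD-based config). Adjust the range to addresses
# that are actually free on your network.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: ingress-pool
  namespace: metallb-system
spec:
  addresses:
    - 10.20.18.30-10.20.18.40
---
# Announce the pool on the local L2 segment (ARP/NDP).
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: ingress-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - ingress-pool
```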
|
@longwuyuan I changed to … and, according to https://stackoverflow.com/a/44112285/9658307, that is all I can do; I can assign the IP myself, which I also did:
It also did not change anything (I undeployed the whole chart, waited some time, and deployed again, so there was no grace period at work; the service was reachable from outside, and the IP is still invalid). I have literally this config:
```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    {{- include "ingress.labels" . | nindent 4 }}
  name: ingress-nginx-controller
  namespace: {{ .Release.Namespace }}
spec:
  externalTrafficPolicy: Local
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  ports:
    # I manage 80/443 via tcp services; the default 80/443 is overridden in the controller to 7998/7999; the service I mention operates on 8000
    {{- range .Values.endpoints }}
    - port: {{ .port }}
      targetPort: {{ .port }}
      name: {{ .name }}
      protocol: {{ .protocol }}
    {{- end }}
  selector:
    app.kubernetes.io/component: controller
    {{- include "ingress.selectorLabels" . | nindent 4 }}
  type: LoadBalancer
  externalIPs:
    - 10.20.18.30
```
and the config map for it (deployed runtime version):
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
  uid: f37f2643-0d6f-4248-b66e-0567f222aa31
  resourceVersion: '17375231'
  creationTimestamp: '2023-03-04T04:34:26Z'
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 0.1.0
    helm.sh/chart: ingress-0.1.0
  annotations:
    meta.helm.sh/release-name: ingress-nginx
    meta.helm.sh/release-namespace: ingress-nginx
  managedFields:
    - manager: helm
      operation: Update
      apiVersion: v1
      time: '2023-03-04T04:34:26Z'
      fieldsType: FieldsV1
      fieldsV1:
        f:data:
          .: {}
          f:allow-snippet-annotations: {}
        f:metadata:
          f:annotations:
            .: {}
            f:meta.helm.sh/release-name: {}
            f:meta.helm.sh/release-namespace: {}
          f:labels:
            .: {}
            f:app.kubernetes.io/component: {}
            f:app.kubernetes.io/instance: {}
            f:app.kubernetes.io/managed-by: {}
            f:app.kubernetes.io/name: {}
            f:app.kubernetes.io/part-of: {}
            f:app.kubernetes.io/version: {}
            f:helm.sh/chart: {}
data:
  allow-snippet-annotations: 'true'
```
As I mentioned, I have just a bare-metal server where I installed Kubernetes and deployed ingress-nginx. |
@Azbesciak I think you have provided information as per your thought process and your own convenience. The information that is related to the client IP is, to begin with, as follows:
You can delete the other information from this issue, as it has no relevance to the issue. Also, you need to factor in that layer-7 inspection of the headers in the client's request, which contain the client IP address, will not happen for a TCP/UDP port that has been exposed in the service of type LoadBalancer via this project's ingress-nginx controller config.
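(For reference: the tcp-services ConfigMap format does have per-port proxy-protocol switches. If I read the documented format right, the two optional trailing fields of each entry enable PROXY decoding on the listen side and PROXY encoding towards the upstream. A minimal sketch; the namespace/service names are placeholders, and the backend must itself be configured to parse the PROXY header:)
```yaml
# Sketch of a tcp-services entry that forwards the PROXY header to the backend.
# "my-namespace/my-app" is a placeholder; the backend must expect proxy-protocol.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  # <namespace/service>:<port>:[PROXY]:[PROXY] - decode from client / encode to upstream
  "8000": "my-namespace/my-app:8000::PROXY"
```
|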
I added a new section to the initial ticket (Update with request tracing). BTW, the initial info was the same as expected in the template; I only removed the last section, because the whole helm deployment is based on your deployment (mentioned), and I also gave the whole config and the helm chart itself. It also looked fine. I also want to note that the app as a whole was migrated from docker-compose, and it has the same architecture except that there is now k8s in between. In docker-compose everything worked fine; I was able to see client IPs (I mean that we did not change anything outside). Also, you mentioned …
Can you elaborate on that? Please note that I also changed the service type to … Thank you for your time and support. |
@Azbesciak after reading all the content here, my opinion is that you are repeatedly providing information and updates that reflect your opinion and point of view, and you are paying less attention to the details of the requests for information and to the related info for triaging this issue. You could be trying to help sincerely, but somehow I am not able to make the progress that I wish I could, or think I can. I am not an expert, but I can surely help triage this. I have experienced some odd error messages while testing this on release v1.6.4 of the controller. So I was hoping to get on the same page with you, but it's not happening. Here are some significant observations:
|
And I have just now tested the controller on minikube, and I can get the real client IP address in the logs of the controller, so there is no problem to be solved in the controller code related to getting the real client IP address. |
@longwuyuan
So why does the controller receive the client IP, while on my app's side I see the controller's internal IP? |
@longwuyuan now, look into my app logs. No surprise when I … inside … |
What is the real, complete URL you are using to access your app? |
And the whole traffic on a given port is redirected to the app. So it does not matter whether it is … |
Where is the IP address 10.20.18.30? |
Yes, we are in a private network. But this is a separate server, not my laptop or anything like that. |
Well, I hope someone can solve your issue. I am not getting an answer to a simple question like "where is the IP address?". I really would like to understand where the IP address is, because you mentioned you have the controller listening on a NodePort, so I expected that you would need the node's IP address + NodePort in your URL. On a completely different note, I think you should get on the K8S Slack and discuss this there, as there are more people there. NodePort is never a good choice for real use. The interface on which you terminate your connection needs to be capable of working with a layer-7 process that can understand proxy-protocol and forward the headers to the upstream. In the case of cloud environments like AWS etc., the service provider offers configurable parameters to enable proxy-protocol attributes, like preserving the real client IP address while forwarding traffic to the upstream.
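(To make that concrete: the documented `use-proxy-protocol` ConfigMap key makes the controller's nginx expect the PROXY header on every incoming connection, so it only helps when something in front, a cloud LB or an L4 proxy such as HAProxy, actually sends that header; with clients connecting directly, the connection fails to parse. A sketch:)
```yaml
# Only enable when an upstream LB/proxy really sends the PROXY header;
# direct client connections will otherwise fail.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
```
|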
So I did not get you, sorry; I thought you meant geolocation. That IP address belongs to the main cluster machine; it is entirely hosted on our own servers (no AWS, Azure, or the like). |
Ok, but... since …
And when I look at the internal nginx config inside that controller …
I suppose the problem is there. I know that the headers are set in the main section, but maybe something does not work there? |
It seems to me that you are not using the documented and supported install: https://kubernetes.github.io/ingress-nginx/deploy/#microk8s. I don't see data here that points to a problem in the controller. |
@longwuyuan And btw, the installation I have comes from your repo, as also mentioned. |
@longwuyuan BTW, I found the same issue, from 8 April 2021: #7022. You were also included there. |
The problem here is that we are unable to have a data-founded discussion about the headers.
|
I added printing of … in my nginx app.
I know that only …
|
/retitle proxy-protocol without LoadBalancer on microk8s |
This is stale, but we won't close it automatically; just bear in mind the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach #ingress-nginx-dev on Kubernetes Slack. |
I managed to fix this issue in my microk8s v1.30 Kubernetes cluster, where the NGINX ingress controller is installed using the microk8s ingress addon. To do so, I edited the … and added:
```yaml
data:
  enable-real-ip: "true"
```
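(Expanded into a full manifest for readers; the ConfigMap name and namespace below are what I understand to be the microk8s ingress addon defaults, so verify with `kubectl get cm -n ingress` before editing:)
```yaml
# Assumed name/namespace of the microk8s ingress addon's controller ConfigMap;
# verify in your own cluster before applying.
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-load-balancer-microk8s-conf
  namespace: ingress
data:
  enable-real-ip: "true"
```
|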
The project is deprecating TCP/UDP forwarding (#11666), so there is no action item to be tracked in this issue. Hence, closing the issue. /close |
@longwuyuan: Closing this issue. In response to this:
|
What happened:
I am exposing my services via the TCP config map, not the normal way the ingress does on 80 (although one service is on that too, but with the default 80/443 remapped to 7998 and 7999); all mapping goes through it.
I need to retrieve my client IP.
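(For readers: the TCP mapping mechanism referenced throughout this issue is the controller's `--tcp-services-configmap` flag pointing at a ConfigMap whose keys are the externally exposed ports; a sketch with placeholder names:)
```yaml
# Illustrative only: external port 8000 forwarded to a backend Service.
# The controller must be started with
# --tcp-services-configmap=ingress-nginx/tcp-services for this to take effect.
apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: ingress-nginx
data:
  "8000": "my-namespace/my-app:8000"
```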
I have the following config in the controller's config map
The controller's service is of type `LoadBalancer` and has `externalTrafficPolicy: Local`; in general everything works: I can access Grafana on 3000, the Kubernetes dashboard on 9000, and my services on their desired ports. That is fine.
What does not work: I cannot, even with the above config, retrieve my client IP.
I checked the nginx config inside the controller; find it attached (sorry for the extension, GitHub does not support .conf):
[ingress-ngnix.txt]
The most interesting fragment is below, for Grafana; as you see, the proxy config is not complete compared to the section directly in `http.server.listen`. I see in `http.server.listen` that there is a redirect, but as mentioned I still get an invalid IP as the client IP. Instead I get the controller's internal IP (10.1.38.72, for example).
What you expected to happen:
I want to see my client IP
I also checked with `v1.5.1`; no difference.
Kubernetes version (use `kubectl version`):
Environment:
`uname -a`:
that chart: ingress-helm.zip
with `kubectl version`, `kubectl get nodes -o wide` (the ingress, as you see, is pinned to `maptest01`)
How to reproduce this issue:
I suppose microk8s does not cause a problem there; you have the whole helm chart attached. My service which expects the client IP is also another nginx (a web application; that one serves static files), but as mentioned I get the controller's internal IP there, and it also changes when I restart the controller. (I also checked `enable-real-ip`; no difference, except that it was set to 0.0.0.0 in the `stream.server`.)
Anything else we need to know:
I checked out, for example, #6163 or #6136 (and the config map doc); no help.
If it is not a bug, please excuse me and give me some hints on how to solve it. I cannot change the way I use these TCP services.
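(Related to the `enable-real-ip` observation above: that key works together with `proxy-real-ip-cidr`, which controls which source addresses are trusted for real-IP substitution and defaults to 0.0.0.0/0, matching the `set_real_ip_from 0.0.0.0/0` seen in the generated stream config. A sketch; the CIDR below is an assumption based on this thread's 10.20.18.x network:)
```yaml
# Trust only the LB/node subnet instead of the default 0.0.0.0/0.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  enable-real-ip: "true"
  proxy-real-ip-cidr: "10.20.18.0/24"
```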
Update with request tracing
Windows' `ifconfig` relevant part (I am connected via VPN, but there were no problems with the docker-compose solution that way, and nothing has changed in our architecture since that time, except that we replaced docker-compose with k8s).
The request comes from the web app, from Chrome. Below is the generated curl for bash:
I also used tcpflow to see how it looks on the server side (the same node where `ingress-nginx-controller` is placed); find it below.
Response headers from Chrome: same as above, but copied via Chrome's 'copy response headers'.
kubectl logs $ingresscontrollerpodname -n $ingresscontrollernamespace
All requests above have the same IP. BTW, not every request lands there; I executed a couple more and these were not appended. I tried in general to use that, but I do not know what it really is. I also tried to check the access log (I even enabled it with `enable-access-log-for-default-backend: "true"`; no difference). And just to be sure, inside `ingress-nginx-controller` I invoked:
kubectl get svc,ing -A -o wide
As I mentioned in the comments, `service/ingress-nginx-controller` was both `LoadBalancer` and `NodePort`; no difference. It also has `externalTrafficPolicy: Local`.
kubectl describe pod $ingresscontrollerpodname -n $ingresscontrollernamespace
kubectl describe svc $ingresscontrollersvcname -n $ingresscontrollernamespace
kubectl -n $appnamespace describe svc $svcname
kubectl -n $appnamespace describe ing $ingressname
...I have no Ingress in the app namespace because, as I mentioned, I am using TCP services, which redirect directly to the given service.
In general, `kubectl describe ing -A` gives `No resources found` (my app is working fine on 8000, others on 3000, 9000, etc.).
kubectl -n $appnamespace logs $apppodname
Only the relevant ones; all IP addresses are the same. The nginx log format is:
Just for my own purposes, I created a simple nginx docker-compose with a config that contains only one path (the log pattern is the same as above, i.e. I get `$remote_addr`); it returns the IP `10.20.18.1`, so the same as in `ingress-nginx-controller`.
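(A hypothetical reconstruction of that standalone test, assuming a stock nginx image; it is published directly on the host so requests never traverse the cluster, which is what makes the logged `$remote_addr` a useful baseline:)
```yaml
# docker-compose.yml - bare nginx whose default access log prints $remote_addr.
services:
  echo-ip:
    image: nginx:1.25
    ports:
      - "8080:80"
```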