[bitnami/elasticsearch] GKE Ingress backend unhealthy. #16599
Comments
Quick update: I verified that the landing page ES serves (root, i.e. /) does return a 401. Just FYI.
A couple of things to note with GKE:
Any ideas?
Ah, perhaps I can add an anonymous user and then make the health check URL /_cluster/health? How could I add this user through the values YAML?
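For reference, one way to do that in the values YAML is a rough sketch like the one below. It assumes the chart exposes an extraConfig value that is rendered into elasticsearch.yml, uses placeholder user/role names, and relies on Elasticsearch's built-in anonymous access pointing at a role that carries the monitor cluster privilege (the role itself still has to be created):

extraConfig:
  xpack.security.authc.anonymous:
    # "anonymous_hc" and "health_check" are placeholder names, not chart defaults.
    username: anonymous_hc
    # The role must grant the "monitor" cluster privilege so GET /_cluster/health returns 200.
    roles: health_check
    # With false, requests outside the anonymous role's permissions get a 401
    # (and can be retried with credentials) instead of a 403.
    authz_exception: false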
I was able to add the anonymous user and allow it to see /_cluster/health without auth, and this now works as the health check URL. Unfortunately, I am now seeing: "upstream connect error or disconnect/reset before headers. retried and the latest reset reason: connection termination". Any ideas?
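As an aside, that "connection termination" message is commonly seen when the Google load balancer speaks plain HTTP to a backend that only accepts TLS, as is the case here with REST encryption enabled. One possible fix, untested and with the port name below only a guess (check the name of the REST port in the Service the chart actually renders), is to mark the backend protocol as HTTPS on the Service:

service:
  type: NodePort
  annotations:
    # "tcp-rest-api" is an assumed port name; replace it with the real name of the
    # REST port in the rendered Service. This makes the GCLB speak HTTPS to the backend.
    cloud.google.com/app-protocols: '{"tcp-rest-api": "HTTPS"}'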
@corico44? Any suggestions?
Hello @mmosierteamvelocity, sorry for the late response. I have opened an internal task to handle this problem. Thank you very much for reporting it! We will notify you in this ticket with any updates on the task.
If you had
Hi everyone, I reproduced the issue installing the Elasticsearch chart on a GKE cluster using the values below:

security:
  enabled: true
  elasticPassword: some-password
  tls:
    restEncryption: true
    autoGenerated: true
    usePemCerts: true
ingress:
  enabled: true
  pathType: ImplementationSpecific
  hostname: blah.blah.blah
service:
  type: NodePort
  annotations:
    cloud.google.com/backend-config: '{"default": "elasticsearch"}'
extraDeploy:
  - apiVersion: cloud.google.com/v1
    kind: BackendConfig
    metadata:
      name: elasticsearch
    spec:
      healthCheck:
        type: HTTPS
        requestPath: /_cluster/health?local=true
        port: 9200

As you can see, I added a Google BackendConfig to adapt the health check used by GCP. However, the backend service is still listed as "UNHEALTHY" on GCP. The reason? There are actually two issues:
Successful probes on port 9200 can be achieved using a "curl" command such as the one below:

$ kubectl port-forward svc/elasticsearch 9200:9200 &
$ curl -i -k --user elastic:some-password https://127.0.0.1:9200/_cluster/health?local=true
HTTP/1.1 200 OK
X-elastic-product: Elasticsearch
content-type: application/json
content-length: 383

{"cluster_name":"elastic","status":"green","timed_out":false,"number_of_nodes":3,"number_of_data_nodes":3,"active_primary_shards":0,"active_shards":0,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}

As you can see, the probe only succeeds when valid credentials are supplied (and certificate verification is skipped with -k), neither of which the GCP health check can do. With this in mind, I don't know if it would be possible to make this work with GKE Ingress, but I'd recommend asking the Google support team about possible alternatives.
Name and Version
bitnami/elasticsearch
What architecture are you using?
amd64
What steps will reproduce the bug?
1.) Deploy the Helm chart to GKE using gcloud, with readiness and liveness probes enabled, basic auth enabled, and Ingress enabled.
Everything comes up great; I can reach ES from the endpoints but not from the Ingress. I have the Ingress enabled as follows:
Are you using any custom parameters or values?
- Auth enabled with generated TLS
- Ingress enabled with GKE annotations
- Readiness and liveness probes enabled
What is the expected behavior?
Backend healthy
What do you see instead?
The backend on GKE for the LB is not healthy with basic auth enabled. I have tried changing the path to /login but that did not work. The GKE backend for the Ingress/LB requires a 200 response but does not get one, even though I can access ES and log in via the endpoints. I suspect it does not return a 200 because of the little pop-up login dialog. I got around this with Kibana by setting the backend health check URL to /login.
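For comparison, the Kibana workaround mentioned above typically takes the form of a BackendConfig that points the load balancer's health check at /login, which Kibana serves without credentials. A sketch only: the resource name, port, and plain-HTTP assumption are placeholders and need to match the actual Kibana Service.

apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  # Hypothetical name; the Kibana Service's cloud.google.com/backend-config
  # annotation has to reference it.
  name: kibana
spec:
  healthCheck:
    # Assumes Kibana is served over plain HTTP on its default port.
    type: HTTP
    requestPath: /login
    port: 5601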
Additional information
No response