[bitnami/mongodb] mongosh in replicaset architecture does not run as expected #24783
Comments
Hi @mrfzy00, are you using the Bitnami images? |
Hi @Mauraza, I mirror the images through Harbor to avoid Docker's pull rate limit, and yes, they are the Bitnami MongoDB images. |
I am having the same issue and I am using the Bitnami images directly. This is my config:
architecture: replicaset
auth:
  enabled: true
  usernames:
    - my_db_user
  databases:
    - my_db
  existingSecret: my-mongodb-auth-secret
replicaSetName: rs0
replicaCount: 2
resources:
  limits: {}
  requests:
    cpu: 2
    memory: 1024Mi
persistence:
  enabled: true
  storageClass: managed-csi
  size: 15Gi |
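For reference, the existingSecret above is expected to carry the chart's credential keys. A minimal sketch: the mongodb-root-password and mongodb-replica-set-key names match the kubectl commands later in this thread, while the mongodb-passwords key for custom users is an assumption from the chart's documented values:
apiVersion: v1
kind: Secret
metadata:
  name: my-mongodb-auth-secret
type: Opaque
stringData:
  mongodb-root-password: "<root password>"
  mongodb-passwords: "<password for my_db_user>"  # assumption: comma-separated, one entry per user in auth.usernames
  mongodb-replica-set-key: "<replica set key>"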
Same issue with replicaset today (2024-04-05), with the latest chart 15.1.3 and Docker image 7.0.8-debian-12-r1. |
The following command hit the same issue, with the arbiter restarted once: |
It works today?? |
I have the same error; my small debug:
A workaround is to set:
|
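Later comments identify @WMP's workaround as disabling the readiness probe. As a values sketch, assuming readinessProbe.enabled is the chart's standard probe toggle:
readinessProbe:
  enabled: false   # assumption: standard Bitnami toggle; skips the failing mongosh readiness check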
Seeing the same issue:
|
Just in case anyone else had the same issue as me: I realised belatedly that, on top of this, you may need to completely destroy all resources (this includes the PersistentVolume and PersistentVolumeClaim), because the credentials are set only once during creation and are not updated subsequently when the chart is upgraded (ref.). I used chart version
|
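A sketch of the full teardown @nhs-work describes, assuming a release named mongodb in the default namespace (the label selector is an assumption and may differ):
helm uninstall mongodb
kubectl delete pvc -l app.kubernetes.io/instance=mongodb   # credentials live in the PV data, so the PVCs must go too
kubectl get pv | grep mongodb                              # verify no released PVs linger before reinstalling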
@nhs-work interesting, I will try to research this one, thanks. |
I tried the previous approach and it didn't work for me. For now the only valid approach (to be able to have MongoDB up and running) is what @WMP mentioned about disabling the readiness probe. This may work until someone at Bitnami fixes this issue, which seems to be related to Bitnami's MongoDB Docker image. |
After trying different options, I realized that it does not work even with the readiness probe disabled. The only possible workaround, in my case, is to deploy it as standalone. |
I also ran into this problem. Digging into the scripts in the container (at
This gave me the impression that the first line makes the response invalid JSON (which makes Kubernetes fail the probes). The warning is triggered because mongosh cannot write to its home directory. My workaround: set the home directory to
Or as a oneliner: |
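A sketch of the same idea using the chart's extraEnvVars to point mongosh's home at a writable path (the variable and path are assumptions, not necessarily the poster's exact one-liner):
extraEnvVars:
  - name: HOME
    value: /tmp   # assumption: any writable path silences the mongosh home-directory warning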
Hi, I am trying to reproduce the issue but I am not having any luck. These are the steps I am following:
architecture: replicaset
auth:
  enabled: true
  usernames:
    - my_db_user
  databases:
    - my_db
replicaSetName: rs0
replicaCount: 2
persistence:
  enabled: true
$ export MONGODB_RS_KEY=$(kubectl get secret --namespace default mongodb -o jsonpath="{.data.mongodb-replica-set-key}" | base64 -d)
$ export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 -d)
$ helm upgrade mongodb oci://registry-1.docker.io/bitnamicharts/mongodb --version 15.1.5 -f values.yaml \
    --set auth.rootPassword=$MONGODB_ROOT_PASSWORD --set auth.replicaSetKey=$MONGODB_RS_KEY --set image.debug=true
Pulled: registry-1.docker.io/bitnamicharts/mongodb:15.1.5
Digest: sha256:d711f7e3e7959a5e4dd5cdcdef1581b4d2836646d4deda2d12a7e5021e3389e4
Release "mongodb" has been upgraded. Happy Helming!
NAME: mongodb
LAST DEPLOYED: Wed Apr 24 14:43:58 2024
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
CHART NAME: mongodb
CHART VERSION: 15.1.5
APP VERSION: 7.0.8
...
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
mongodb-arbiter-0 1/1 Running 0 77s
mongodb-1 1/1 Running 0 62s
mongodb-0 1/1 Running 0 35s
Does anyone have concrete steps to reproduce the issue? |
Having the same issue with the current 7.0.8 version.
|
What is the output of |
For me it happened on an upgrade. I'll try with a fresh setup. |
Setting up a new dev system (with our complete backend) shows:
My values:
|
I tried to reproduce it as you did, @fmulero, but did not succeed. I went further by imitating my setup, running mongodb as a subchart, and I can observe the problem now. Steps to reproduce (a sketch of the subchart layout follows this comment):
Following these steps I can see the failing *ness probes:
|
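As a rough sketch of running mongodb as a subchart of a wrapper chart (the wrapper name and versions are illustrative, not the commenter's exact setup):
# Chart.yaml of a hypothetical wrapper chart
apiVersion: v2
name: my-backend
version: 0.1.0
dependencies:
  - name: mongodb
    version: 15.1.5
    repository: oci://registry-1.docker.io/bitnamicharts
The mongodb values from earlier comments would then nest under a mongodb: key in the wrapper's values.yaml.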
This is tricky. I am deploying via ArgoCD. If I deploy using the name mongodb it fails but if I deploy using mongodb-rs it works well. |
Thanks a lot @maon-fp! I followed the steps you shared but I am not able to reproduce the problem 😞. I can see the warnings and the events you shared (they can be expected during startup) but the pods get ready.
Could you try adding this value to your config and share the output?
extraVolumeMounts:
  - name: empty-dir
    mountPath: /.mongodb
    subPath: mongodb-home |
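If the chart version in use does not already define the empty-dir volume referenced by that mount, it would also need declaring; a sketch (this is an assumption; newer chart versions ship the volume themselves, per PR #25397 mentioned below):
extraVolumes:
  - name: empty-dir
    emptyDir: {}   # assumption: backing volume for the empty-dir mount above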
My fault, I needed to delete the PVC before deploying again. In my case it is working properly. |
I also face the warning when using a customized startup probe. The service still works, but it would be better if there were no warning. Warning in the mongodb log:
|
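For context, a sketch of the kind of customized startup probe meant here, using the chart's customStartupProbe value (the probe command and timings are illustrative, not the commenter's exact ones):
customStartupProbe:
  exec:
    command:
      - mongosh
      - --quiet          # suppresses banner output, though the home-directory warning can still appear
      - --eval
      - db.adminCommand('ping').ok
  initialDelaySeconds: 10
  periodSeconds: 10
  failureThreshold: 30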
I agree. The volume I mentioned above was added in PR #25397 to avoid the warning. |
With the latest chart (v15.5.2) I could not observe this behavior anymore. Thanks a lot @fmulero! |
I'll close the issue then. Thanks for the output. |
Name and Version
bitnami/mongodb v15.1.0
What architecture are you using?
amd64
What steps will reproduce the bug?
Are you using any custom parameters or values?
What do you see instead?