
[bitnami/mongodb] mongosh in replicaset architecture does not run as expected. #24783

Closed
mrfzy00 opened this issue Apr 2, 2024 · 25 comments
Labels: mongodb, solved, tech-issues (The user has a technical issue about an application)


mrfzy00 commented Apr 2, 2024

Name and Version

bitnami/mongodb v15.1.0

What architecture are you using?

amd64

What steps will reproduce the bug?

  1. In my values.yaml I set the replicaset architecture.
  2. Tried immediate recovery by deleting the other replicas, but the error kept appearing.
  3. Deleted all my resources and redeployed, but the nodes keep reporting not ready.

Are you using any custom parameters or values?

global:
  imageRegistry: "proxy.harbor.id/oo"
  storageClass: "standard"
externalAccess:
  enabled: true
  service:
    type: LoadBalancer
    loadBalancerIPs:
      - xxx.xxx.xxx.xxx
      - xxx.xxxx.xxx.xxx
auth:
  enabled: true
  rootUser: root
  rootPassword: "myPassword"
initdbScripts:
  setup_replicaset_script.js: |
    rs.add("mongodb-ci-replica-0.mongodb-ologi-ci-replica.default.svc.cluster.local:27017")
    rs.add("mongodb-ci-replica-1.mongodb-ologi-ci-replica.default.svc.cluster.local:27017")
architecture: replicaset
replicaSetHostnames: true
replicaCount: 2

What do you see instead?

Readiness probe failed: Warning: Could not access file: ENOENT: no such file or directory, mkdir '/.mongodb/mongosh' Error: Not ready
mrfzy00 added the tech-issues label on Apr 2, 2024
github-actions bot added and then removed the triage label on Apr 2, 2024
github-actions bot assigned Mauraza and unassigned carrodher on Apr 2, 2024
Mauraza (Contributor) commented Apr 2, 2024

Hi @mrfzy00,

Are you using the bitnami images?

mrfzy00 (Author) commented Apr 2, 2024

Hi @Mauraza

I proxy the images through Harbor to avoid Docker Hub pull limits, and yes, I am using the Bitnami MongoDB images.


julianmina00 commented Apr 4, 2024

I am having the same issue and I am using the Bitnami images directly. This is my config:

architecture: replicaset
auth:
  enabled: true
  usernames:
    - my_db_user
  databases:
    - my_db
  existingSecret: my-mongodb-auth-secret
replicaSetName: rs0
replicaCount: 2
resources:
  limits: {}
  requests:
    cpu: 2
    memory: 1024Mi
persistence:
  enabled: true
  storageClass: managed-csi
  size: 15Gi

tigerpeng2001 commented:

Same issue for replicaset today (2024-04-05) with the latest chart 15.1.3 and Docker image 7.0.8-debian-12-r1.


tigerpeng2001 commented Apr 5, 2024

The following command produced the same issue:

helm upgrade --install --namespace "myspace" "mongodb" bitnami/mongodb --version 15.1.3 --set architecture=replicaset

The arbiter restarted once:

NAME                READY   STATUS    RESTARTS        AGE
mongodb-0           0/1     Running   0               3m32s
mongodb-arbiter-0   1/1     Running   1 (2m39s ago)   3m32s

tigerpeng2001 commented:

It works today??

WMP (Contributor) commented Apr 7, 2024

I have the same error; here is a small debug session:

kubectl exec -ti mongodb-0 bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
Defaulted container "mongodb" out of: mongodb, metrics
I have no name!@mongodb-0:/$ mongosh --port 27017 --eval 'if (!(db.hello().isWritablePrimary || db.hello().secondary)) { throw new Error("Not ready") }'
Warning: Could not access file: ENOENT: no such file or directory, mkdir '/.mongodb/mongosh'
Error: Not ready
I have no name!@mongodb-0:/$ mongosh --port 27017      
Warning: Could not access file: ENOENT: no such file or directory, mkdir '/.mongodb/mongosh'
Current Mongosh Log ID: 6612fa7471a9d37472ef634a
Connecting to:          mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.2.3
Using MongoDB:          7.0.8
Using Mongosh:          2.2.3

For mongosh info see: https://docs.mongodb.com/mongodb-shell/


To help improve our products, anonymous usage data is collected and sent to MongoDB periodically (https://www.mongodb.com/legal/privacy-policy).
You can opt-out by running the disableTelemetry() command.

 
Error: Could not open history file.
REPL session history will not be persisted.
test> db.hello();
{
  topologyVersion: {
    processId: ObjectId('6612f88f0c7cad7c96a38555'),
    counter: Long('0')
  },
  isWritablePrimary: false,
  secondary: false,
  info: 'Does not have a valid replica set config',
  isreplicaset: true,
  maxBsonObjectSize: 16777216,
  maxMessageSizeBytes: 48000000,
  maxWriteBatchSize: 100000,
  localTime: ISODate('2024-04-07T19:56:39.226Z'),
  logicalSessionTimeoutMinutes: 30,
  connectionId: 354,
  minWireVersion: 0,
  maxWireVersion: 21,
  readOnly: false,
  ok: 1
}
test>

Workaround is to set:

    readinessProbe:
      enabled: false
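
For a quick test, the same workaround can also be applied straight from the command line (a sketch using the chart's standard readinessProbe.enabled flag; adapt the release name and namespace to your setup):

    helm upgrade --install mongodb bitnami/mongodb \
      --set architecture=replicaset \
      --set readinessProbe.enabled=false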

NandoTheessen commented:

Seeing the same issue:

I have no name!@mongodb-0:/$ ls -la
total 88
drwxr-xr-x   1 root root 4096 Apr 11 10:40 .
drwxr-xr-x   1 root root 4096 Apr 11 10:40 ..
-rw-rw-r--   1 root root    0 Nov 17 19:23 .dbshell
drwxrwxr-x   2 root root 4096 Nov 17 19:23 .mongodb
-rw-rw-r--   1 root root    0 Nov 17 19:23 .mongorc.js
-rw-rw-r--   1 root root    0 Nov 17 19:23 .mongoshrc.js
drwxr-xr-x   2 root root 4096 Nov 17 19:23 bin
drwxr-xr-x   1 root root 4096 Apr 11 10:40 bitnami
drwxr-xr-x   2 root root 4096 Apr 18  2023 boot
drwxr-xr-x   5 root root  360 Apr 11 10:40 dev
drwxrwxr-x   2 root root 4096 Nov 17 19:23 docker-entrypoint-initdb.d
drwxr-xr-x   1 root root 4096 Apr 11 10:40 etc
drwxr-xr-x   2 root root 4096 Apr 18  2023 home
drwxr-xr-x   7 root root 4096 Nov 17 19:23 lib
drwxr-xr-x   2 root root 4096 Apr 18  2023 lib64
drwxr-xr-x   2 root root 4096 Apr 18  2023 media
drwxr-xr-x   2 root root 4096 Apr 18  2023 mnt
drwxrwxr-x   3 root root 4096 Nov 17 19:23 opt
dr-xr-xr-x 402 root root    0 Apr 11 10:40 proc
drwx------   2 root root 4096 Apr 18  2023 root
drwxr-xr-x   4 root root 4096 Apr 18  2023 run
drwxr-xr-x   2 root root 4096 Nov 17 19:23 sbin
drwxr-xr-x   2 root root 4096 Apr 11 10:40 scripts
drwxr-xr-x   2 root root 4096 Apr 18  2023 srv
dr-xr-xr-x  13 root root    0 Apr 11 10:40 sys
drwxrwsrwx   2 root 1001 4096 Apr 11 10:40 tmp
drwxrwxr-x  11 root root 4096 Nov 17 19:23 usr
drwxr-xr-x  11 root root 4096 Nov 17 19:23 var
I have no name!@int1-mongodb-0:/$ ls .mongodb
I have no name!@int1-mongodb-0:/$ ls -la .mongodb
total 8
drwxrwxr-x 2 root root 4096 Nov 17 19:23 .
drwxr-xr-x 1 root root 4096 Apr 11 10:40 ..

nhs-work commented:

Just in case anyone else has the same issue as me: I realised belatedly that auth.rootPassword and auth.replicaSetKey need to be set in order for the replica (non-primary) to connect correctly if authentication is enabled, ref.

On top of this, you may need to completely destroy all resources (including the PersistentVolume and PersistentVolumeClaim), because the credentials are set only once during creation and are not updated when the chart is upgraded, ref. (See the command sketch after the config below.)

I used chart version 15.1.4 without touching any of the default image tags and had the following configs (stripped out configs unique to my setup):

auth:
  rootPassword: rootPassword
  replicaSetKey: replicaSetKey
architecture: replicaset
replicaCount: 2
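
To completely destroy the resources as described above, something along these lines should work (a sketch: helm uninstall does not remove PVCs created by the StatefulSet, and the label selector is an assumption based on the chart's standard labels, so verify with kubectl get pvc first):

helm uninstall mongodb
kubectl get pvc                                        # check which claims belong to the release
kubectl delete pvc -l app.kubernetes.io/name=mongodb   # assumed label; adjust to what you see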

mrfzy00 (Author) commented Apr 17, 2024

@nhs-work interesting, I will try to research this one. Thanks.


julianmina00 commented Apr 18, 2024

Hi @nhs-work and @mrfzy00,

I tried the previous approach and it didn't work for me. For now the only valid approach (to have MongoDB up and running) is what @WMP mentioned about disabling the readiness probe. This may work until someone at Bitnami fixes this issue, which seems to be related to Bitnami's MongoDB Docker image.

julianmina00 commented:

After trying different options, I realized that it does not work even with the readiness probe disabled. The only possible workaround, in my case, is to deploy it as standalone.
I will be watching this issue looking for news.

carrodher assigned fmulero and unassigned Mauraza on Apr 22, 2024

maon-fp commented Apr 23, 2024

I also ran into this problem. Digging into the scripts in the container (at /bitnami/scripts), I ended up at a liveness probe like mongosh $TLS_OPTIONS --port $MONGODB_PORT_NUMBER --eval "db.adminCommand('ping')". Executing the script got me:

I have no name!@maon-mongo-0:/bitnami/scripts$ ./ping-mongodb.sh 
Warning: Could not access file: ENOENT: no such file or directory, mkdir '/.mongodb/mongosh'
{
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1713873381, i: 1 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1713873381, i: 1 })
}

This gave me the impression that the first line makes the response invalid JSON (which makes Kubernetes fail the probes). The warning is triggered because mongosh wants to write its history to the home directory; since we have readOnlyRootFilesystem enabled, this fails.

My workaround: set the home directory to /tmp, where we have write access. I have not done this in the chart values yet, only manually in the StatefulSet:

...
         - name: MONGODB_ENABLE_DIRECTORY_PER_DB
           value: "no"
         - name: HOME
           value: /tmp
...

Or as a one-liner: kubectl patch statefulset <statefulset-name> --type='json' -p='[{"op": "add", "path": "/spec/template/spec/containers/0/env/-", "value": {"name": "HOME", "value": "/tmp"}}]' (not tested!)
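
The same effect should be achievable through the chart values instead of hand-patching the StatefulSet (a sketch, assuming the chart's standard extraEnvVars support):

extraEnvVars:
  - name: HOME
    value: /tmp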

fmulero (Collaborator) commented Apr 24, 2024

Hi

I am trying to reproduce the issue but I am not having any luck. These are the steps I am following:

  1. Install chart version 15.1.5 (helm install mongodb oci://registry-1.docker.io/bitnamicharts/mongodb --version 15.1.5 -f values.yaml) with the following values:
architecture: replicaset
auth:
  enabled: true
  usernames:
    - my_db_user
  databases:
    - my_db
replicaSetName: rs0
replicaCount: 2
persistence:
  enabled: true
  2. Retrieve the replicaset key and root password:
$ export MONGODB_RS_KEY=$(kubectl get secret --namespace default mongodb -o jsonpath="{.data.mongodb-replica-set-key}" | base64 -d)
$ export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace default mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 -d)
  3. Upgrade the chart following the instructions mentioned in the documentation, setting image.debug=true to force changes in the statefulsets:
$ helm upgrade mongodb oci://registry-1.docker.io/bitnamicharts/mongodb --version 15.1.5 -f values.yaml \
    --set auth.rootPassword=$MONGODB_ROOT_PASSWORD --set auth.replicaSetKey=$MONGODB_RS_KEY --set image.debug=true
Pulled: registry-1.docker.io/bitnamicharts/mongodb:15.1.5
Digest: sha256:d711f7e3e7959a5e4dd5cdcdef1581b4d2836646d4deda2d12a7e5021e3389e4
Release "mongodb" has been upgraded. Happy Helming!
NAME: mongodb
LAST DEPLOYED: Wed Apr 24 14:43:58 2024
NAMESPACE: default
STATUS: deployed
REVISION: 2
TEST SUITE: None
NOTES:
CHART NAME: mongodb
CHART VERSION: 15.1.5
APP VERSION: 7.0.8
...
$ kubectl get pod
NAME                READY   STATUS    RESTARTS   AGE
mongodb-arbiter-0   1/1     Running   0          77s
mongodb-1           1/1     Running   0          62s
mongodb-0           1/1     Running   0          35s

Does anyone have concrete steps to reproduce the issue?

go-native commented:

Having the same issue with the current 7.0.8 version.
Here is my values.yml:

architecture: replicaset

auth:
  enabled: true
  rootPassword: "blabla"
  username: "blabla"
  password: "blabla"
  database: "blabla"

replicaCount: 1
replicaSetName: rs0

persistence:
  enabled: true
  storageClass: local-storage
  accessModes:
    - ReadWriteOnce
  size: 5Gi


maon-fp commented Apr 25, 2024

> $ kubectl get pod
> NAME                READY   STATUS    RESTARTS   AGE
> mongodb-arbiter-0   1/1     Running   0          77s
> mongodb-1           1/1     Running   0          62s
> mongodb-0           1/1     Running   0          35s

What is the output of kubectl describe pod mongodb-1? I could see the failed probes there, but not initially in kubectl get pods.

> Does anyone have concrete steps to reproduce the issue?

For me it happened on an upgrade. I'll try with a fresh setup.


maon-fp commented Apr 25, 2024

Setting up a new dev system (with our complete backend) shows:

$ kdp dev-mongo-1   # kdp: alias for kubectl describe pod
...
Events:
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Normal   Scheduled         5m9s                   default-scheduler  Successfully assigned default/dev-mongo-1 to minikube
  Normal   Created           5m8s                   kubelet            Created container mongodb
  Normal   Started           5m8s                   kubelet            Started container mongodb
  Normal   Created           5m8s                   kubelet            Created container metrics
  Normal   Started           5m8s                   kubelet            Started container metrics
  Warning  Unhealthy         4m55s (x2 over 4m57s)  kubelet            Readiness probe failed: Warning: Could not access file: ENOENT: no such file or directory, mkdir '/.mongodb/mongosh'
Error: Not ready
  Warning  Unhealthy  4m36s (x2 over 4m46s)  kubelet  Readiness probe failed: Warning: Could not access file: ENOENT: no such file or directory, mkdir '/.mongodb/mongosh'

My values:

mongo: # <- aliased from mongodb
  architecture: replicaset
  auth:
    enabled: false
  podAnnotations:
    fluentbit.io/parser: mongodb
  persistence:
    # Will be overwritten by production pipeline with adequate size for production
    size: 8Gi

  metrics:
    enabled: true
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/path: /
    prometheus.io/port: '9216'


maon-fp commented Apr 25, 2024

I tried to reproduce as you did @fmulero but did not succeed. I tried further by imitating my setup, running mongodb as a subchart, and I can observe the problem now. Steps to reproduce:

  1. mkdir /tmp/test_chart
  2. Download files: Chart.txt values.txt (see the sketch after this list)
  3. mv Chart.txt /tmp/test_chart/Chart.yaml
  4. mv values.txt /tmp/test_chart/values.yaml
  5. cd /tmp/test_chart
  6. helm dependency build
  7. helm upgrade --install --namespace test-mongo test . --set mongo.service.type=ClusterIP --set mongo.persistence.size=10Gi
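
For reference, a hypothetical Chart.yaml along these lines should reproduce the aliased-subchart layout (the alias matches the mongo: key used in the values above; the exact contents of the attached Chart.txt may differ):

apiVersion: v2
name: test-chart
version: 0.1.0
dependencies:
  - name: mongodb
    alias: mongo
    version: 15.1.5
    repository: oci://registry-1.docker.io/bitnamicharts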

Following these steps I can see the failing liveness/readiness probes:

$ kubectl describe pod test-mongo-0
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  7m9s                 default-scheduler  0/1 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling..
  Normal   Scheduled         7m8s                 default-scheduler  Successfully assigned test-mongo/test-mongo-0 to minikube
  Normal   Pulled            7m8s                 kubelet            Container image "docker.io/bitnami/mongodb:7.0.8-debian-12-r2" already present on machine
  Normal   Created           7m8s                 kubelet            Created container mongodb
  Normal   Started           7m8s                 kubelet            Started container mongodb
  Normal   Pulled            7m8s                 kubelet            Container image "docker.io/bitnami/mongodb-exporter:0.40.0-debian-12-r15" already present on machine
  Normal   Created           7m8s                 kubelet            Created container metrics
  Normal   Started           7m8s                 kubelet            Started container metrics
  Warning  Unhealthy         7m1s (x2 over 7m2s)  kubelet            Readiness probe failed: Warning: Could not access file: ENOENT: no such file or directory, mkdir '/.mongodb/mongosh'

julianmina00 commented:

This is tricky. I am deploying via ArgoCD. If I deploy using the name mongodb it fails but if I deploy using mongodb-rs it works well.

fmulero (Collaborator) commented Apr 29, 2024

Thanks a lot @maon-fp!

I followed the steps you shared but I am not able to reproduce the problem 😞. I can see the warnings and the events you shared (they can be expected during startup), but the pods get ready.

Name:             test-mongo-1
Namespace:        test-mongo
Priority:         0
Service Account:  test-mongo
Node:             k3d-k3s-default-server-0/192.168.112.2
Start Time:       Mon, 29 Apr 2024 09:36:56 +0200
Labels:           app.kubernetes.io/component=mongodb
                  app.kubernetes.io/instance=test
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=mongo
                  app.kubernetes.io/version=7.0.8
                  controller-revision-hash=test-mongo-5dfb9b6645
                  helm.sh/chart=mongo-15.1.5
                  statefulset.kubernetes.io/pod-name=test-mongo-1
Annotations:      fluentbit.io/parser: mongodb
Status:           Running
IP:               10.42.0.43
IPs:
  IP:           10.42.0.43
Controlled By:  StatefulSet/test-mongo
Containers:
  mongodb:
    Container ID:    containerd://6fed830f06d99a8966bdb404370236549d722b1f19445452508983a1d8fcd2cc
    Image:           docker.io/bitnami/mongodb:7.0.8-debian-12-r2
    Image ID:        docker.io/bitnami/mongodb@sha256:3163c3842bfd29afdad249416641d7e13fe3801436b5e48756c13ea234d4cd74
    Port:            27017/TCP
    Host Port:       0/TCP
    SeccompProfile:  RuntimeDefault
    Command:
      /scripts/setup.sh
    State:          Running
      Started:      Mon, 29 Apr 2024 09:36:56 +0200
    Ready:          True
...
...
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  86s                default-scheduler  Successfully assigned test-mongo/test-mongo-1 to k3d-k3s-default-server-0
  Normal   Pulled     86s                kubelet            Container image "docker.io/bitnami/mongodb:7.0.8-debian-12-r2" already present on machine
  Normal   Created    86s                kubelet            Created container mongodb
  Normal   Started    86s                kubelet            Started container mongodb
  Normal   Pulled     86s                kubelet            Container image "docker.io/bitnami/mongodb-exporter:0.40.0-debian-12-r15" already present on machine
  Normal   Created    86s                kubelet            Created container metrics
  Normal   Started    86s                kubelet            Started container metrics
  Warning  Unhealthy  74s (x2 over 79s)  kubelet            Readiness probe failed: Warning: Could not access file: ENOENT: no such file or directory, mkdir '/.mongodb/mongosh'
Error: Not ready
  Warning  Unhealthy  53s (x2 over 63s)  kubelet  Readiness probe failed: Warning: Could not access file: ENOENT: no such file or directory, mkdir '/.mongodb/mongosh'

Could you try adding this value to your config and share the output?

  extraVolumeMounts:
    - name: empty-dir
      mountPath: /.mongodb
      subPath: mongodb-home
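
In case the chart version in use does not already define the empty-dir volume that this mount refers to (recent versions do), a matching extraVolumes entry may be needed as well, a hedged sketch:

  extraVolumes:
    - name: empty-dir
      emptyDir: {}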

julianmina00 commented:

> This is tricky. I am deploying via ArgoCD. If I deploy using the name mongodb it fails but if I deploy using mongodb-rs it works well.

My fault, I needed to delete the PVC before deploying again. In my case it is working properly.


chary1112004 commented May 9, 2024

I also face this warning when using a customized startup probe. The service still works, but it would be better if there were no warning.

Warning in the mongodb log:

Warning: Could not access file: ENOENT: no such file or directory, mkdir '/.mongodb/mongosh'

My customized startup probe:
apiVersion: v1
kind: ConfigMap
metadata:
  name: customize-mongodb-common-scripts
  {{- if .Values.commonAnnotations }}
  annotations:
  {{ toYaml .Values.commonAnnotations | nindent 4 }}
  {{- end }}
data:
  startup-probe.sh: |
    #!/bin/bash
    /bitnami/scripts/startup-probe.sh
    ...
    create user script
    ...
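
Building on @maon-fp's earlier workaround, one untested sketch is to point HOME at a writable path at the top of the custom probe script before calling the stock probe:

  startup-probe.sh: |
    #!/bin/bash
    # Assumption: /tmp is writable, so mongosh can create its .mongodb/mongosh directory there
    export HOME=/tmp
    /bitnami/scripts/startup-probe.sh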

fmulero (Collaborator) commented May 9, 2024

I agree. The volume I mentioned above was added in PR #25397 to avoid the warning.


maon-fp commented May 23, 2024

With the latest chart (v15.5.2) I could not observe this behavior anymore. Thanks a lot @fmulero!

fmulero (Collaborator) commented May 27, 2024

I'll close the issue then. Thanks for the output.
