
Corrupted filesystem on a PVC with fsRepair=false will not prevent the pod from starting #416

Open
valleedelisle opened this issue Sep 24, 2024 · 6 comments


@valleedelisle

The pod starts and, instead of writing to the volume, writes directly into the ephemeral folder (e.g. /var/lib/kubelet/pods/28e492a8-e01e-41fb-a8e0-06b9dce20db1/volumes/kubernetes.io~csi/pvc-d72a0458-d8aa-4ed4-b503-f38ac4d2914a/mount).

If the volume is not mountable, the pod shouldn't write to ephemeral storage; it simply shouldn't start.
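As a sanity check, the node (or an init container) could verify that the publish path is an actual mount point before letting the pod write anything. A minimal sketch, assuming a shell on the node; the publish path below is a stand-in for the pod's real volume path:

```shell
# A directory is a mount point iff it sits on a different device than
# its parent directory; a plain directory on the node's root disk
# (ephemeral storage) shares its parent's device.
is_mountpoint() {
  [ "$(stat -c %d "$1")" != "$(stat -c %d "$1/..")" ]
}

# stand-in; use /var/lib/kubelet/pods/<uid>/volumes/kubernetes.io~csi/<pvc>/mount
publish_path="${1:-/proc}"
if is_mountpoint "$publish_path"; then
  echo "$publish_path is a real mount"
else
  echo "$publish_path is ephemeral node storage; the pod should not start"
fi
```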

@datamattsson
Collaborator

Thanks for reporting this issue. I'll forward it to engineering.

@AnushaY1916
Contributor

@valleedelisle can you please provide logs? May I know if the CSI driver reported a mount failure when this pod was created?

@datamattsson
Collaborator

Can we also get the StorageClass? (filesystem, etc.)

@valleedelisle
Author

Here are some of the logs I have. The pod is minio and the operation I was trying to perform was a restart (oc delete pod minio-4).

[1] Pod creation
[2] Pod deletion starts
[3] One of the volumes successfully detached
[4] And was re-attached to the new pod
[5] This is where the broken volume timed out and the csi-attacher failed to attach it

Now that I'm deep-diving into the logs, it looks like the detach hadn't completed when the attach started, and this is probably what caused these I/O errors. Also, we got a MountVolume.WaitForAttach succeeded message even though the attach wasn't actually complete.

I believe that, in those cases, we should fail the pod.

For reference:

  • failed volume: 0635370e1e208cb2220000000000000000000011bc pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff
  • successful volume: 0635370e1e208cb2220000000000000000000011ba pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08
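On the detach/attach race: instead of trusting the kubelet's WaitForAttach message, one could cross-check the VolumeAttachment objects' status.attached before allowing the mount. A minimal sketch below, using the two PVs from this issue as canned sample data (against a live cluster you would feed it real `oc get volumeattachment` output, as shown in the comment):

```shell
# Canned "PV ATTACHED" pairs for the two volumes above; on a live
# cluster this would come from:
#   oc get volumeattachment --no-headers \
#     -o custom-columns=PV:.spec.source.persistentVolumeName,ATTACHED:.status.attached
sample_attachments() {
  cat <<'EOF'
pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08 true
pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff false
EOF
}

# exit non-zero if any required volume is not actually attached
if sample_attachments | awk '$2 != "true" { bad = 1 } END { exit bad }'; then
  echo "all volumes attached"
else
  echo "at least one volume is not attached; the pod should be failed"
fi
```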

And here's the StorageClass. I've since enabled the fsRepair flag, but it wasn't enabled before. After deep-diving into the logs, I wonder if that's a good idea, because it might break things even more considering the volume was double-connected.

allowVolumeExpansion: true
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-minio
parameters:
  accessProtocol: iscsi
  csi.storage.k8s.io/controller-expand-secret-name: xxx-alletra6k-01
  csi.storage.k8s.io/controller-expand-secret-namespace: hpe-storage
  csi.storage.k8s.io/controller-publish-secret-name: xxx-alletra6k-01
  csi.storage.k8s.io/controller-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/fstype: xfs
  csi.storage.k8s.io/node-publish-secret-name: xxx-alletra6k-01
  csi.storage.k8s.io/node-publish-secret-namespace: hpe-storage
  csi.storage.k8s.io/node-stage-secret-name: xxx-alletra6k-01
  csi.storage.k8s.io/node-stage-secret-namespace: hpe-storage
  csi.storage.k8s.io/provisioner-secret-name: xxx-alletra6k-01
  csi.storage.k8s.io/provisioner-secret-namespace: hpe-storage
  dedupeEnabled: "false"
  description: Volume created by the HPE CSI Driver for Minio
  encrypted: "false"
  folder: k8s-minio
  fsRepair: "true"
  performancePolicy: ceph
provisioner: csi.hpe.com
reclaimPolicy: Delete
volumeBindingMode: Immediate

[1]

Sep 24 13:02:08 w01.example.com bash[13921]: I0924 13:02:08.553855   13921 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011ba\") pod \"minio-4\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") " pod="minio/minio-4"
Sep 24 13:02:08 w01.example.com bash[13921]: I0924 13:02:08.553961   13921 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc\") pod \"minio-4\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") " pod="minio/minio-4"
Sep 24 13:02:08 w01.example.com bash[13921]: I0924 13:02:08.559400   13921 operation_generator.go:1581] "Controller attach succeeded for volume \"pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc\") pod \"minio-4\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") device path: \"\"" pod="minio/minio-4"
Sep 24 13:02:08 w01.example.com bash[13921]: I0924 13:02:08.559467   13921 operation_generator.go:1581] "Controller attach succeeded for volume \"pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011ba\") pod \"minio-4\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") device path: \"\"" pod="minio/minio-4"
Sep 24 13:02:08 w01.example.com bash[13921]: I0924 13:02:08.655190   13921 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011ba\") pod \"minio-4\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") " pod="minio/minio-4"
Sep 24 13:02:08 w01.example.com bash[13921]: I0924 13:02:08.655275   13921 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc\") pod \"minio-4\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") " pod="minio/minio-4"
Sep 24 13:02:08 w01.example.com bash[13921]: I0924 13:02:08.655392   13921 operation_generator.go:623] "MountVolume.WaitForAttach entering for volume \"pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc\") pod \"minio-4\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") DevicePath \"\"" pod="minio/minio-4"
Sep 24 13:02:08 w01.example.com bash[13921]: I0924 13:02:08.655439   13921 operation_generator.go:623] "MountVolume.WaitForAttach entering for volume \"pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011ba\") pod \"minio-4\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") DevicePath \"\"" pod="minio/minio-4"
Sep 24 13:02:08 w01.example.com bash[13921]: I0924 13:02:08.658402   13921 operation_generator.go:633] "MountVolume.WaitForAttach succeeded for volume \"pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc\") pod \"minio-4\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") DevicePath \"csi-0c705594e99e6ed63ddaace092a25496598408e505d4f9bf1ae79926c5198705\"" pod="minio/minio-4"
Sep 24 13:02:08 w01.example.com bash[13921]: I0924 13:02:08.658439   13921 operation_generator.go:633] "MountVolume.WaitForAttach succeeded for volume \"pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011ba\") pod \"minio-4\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") DevicePath \"csi-f06ebff7d27800558240a3bf43b734c49c4c5815c69da124bc032b5d3f541e98\"" pod="minio/minio-4"
Sep 24 13:02:08 w01.example.com kernel: scsi 10:0:0:9: Direct-Access     Nimble   Server           1.0  PQ: 0 ANSI: 5
Sep 24 13:02:08 w01.example.com kernel: scsi 10:0:0:9: alua: supports implicit TPGS
Sep 24 13:02:08 w01.example.com kernel: scsi 10:0:0:9: alua: device eui.1ec360188e74db7b6c9ce9003cbcbb15 port group 1 rel port 4
Sep 24 13:02:08 w01.example.com kernel: sd 10:0:0:9: Attached scsi generic sg14 type 0
Sep 24 13:02:08 w01.example.com kernel: sd 10:0:0:9: Power-on or device reset occurred
Sep 24 13:02:08 w01.example.com kernel: scsi 9:0:0:9: Direct-Access     Nimble   Server           1.0  PQ: 0 ANSI: 5
Sep 24 13:02:08 w01.example.com kernel: scsi 9:0:0:9: alua: supports implicit TPGS
Sep 24 13:02:08 w01.example.com kernel: scsi 9:0:0:9: alua: device eui.1ec360188e74db7b6c9ce9003cbcbb15 port group 1 rel port 3
Sep 24 13:02:08 w01.example.com kernel: sd 9:0:0:9: Attached scsi generic sg15 type 0
Sep 24 13:02:08 w01.example.com kernel: sd 9:0:0:9: Power-on or device reset occurred
Sep 24 13:02:08 w01.example.com kernel: sd 10:0:0:9: alua: transition timeout set to 60 seconds
Sep 24 13:02:08 w01.example.com kernel: sd 10:0:0:9: alua: port group 01 state A non-preferred supports tolusna
Sep 24 13:02:08 w01.example.com kernel: sd 10:0:0:9: [sdn] 3565158400 512-byte logical blocks: (1.83 TB/1.66 TiB)
Sep 24 13:02:08 w01.example.com kernel: sd 10:0:0:9: [sdn] Write Protect is off
Sep 24 13:02:08 w01.example.com kernel: sd 10:0:0:9: [sdn] Mode Sense: 9b 00 00 08
Sep 24 13:02:08 w01.example.com kernel: sd 10:0:0:9: [sdn] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Sep 24 13:02:08 w01.example.com kernel: sd 9:0:0:9: [sdo] 3565158400 512-byte logical blocks: (1.83 TB/1.66 TiB)
Sep 24 13:02:08 w01.example.com kernel: sd 9:0:0:9: [sdo] Write Protect is off
Sep 24 13:02:08 w01.example.com kernel: sd 9:0:0:9: [sdo] Mode Sense: 9b 00 00 08
Sep 24 13:02:08 w01.example.com kernel: sd 9:0:0:9: [sdo] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Sep 24 13:02:08 w01.example.com kernel: sd 10:0:0:9: [sdn] Attached SCSI disk
Sep 24 13:02:08 w01.example.com kernel: sd 9:0:0:9: [sdo] Attached SCSI disk
Sep 24 13:02:08 w01.example.com multipathd[1608]: mpathdr: addmap [0 3565158400 multipath 1 queue_if_no_path 1 alua 1 1 service-time 0 1 1 8:208 1]
Sep 24 13:02:08 w01.example.com multipathd[1608]: sdn [8:208]: path added to devmap mpathdr
Sep 24 13:02:09 w01.example.com multipathd[1608]: mpathdr: performing delayed actions
Sep 24 13:02:09 w01.example.com multipathd[1608]: mpathdr: reload [0 3565158400 multipath 1 queue_if_no_path 1 alua 1 1 service-time 0 2 1 8:208 1 8:224 1]
Sep 24 13:02:09 w01.example.com kernel: XFS (dm-6): Mounting V5 Filesystem
Sep 24 13:02:10 w01.example.com kernel: XFS (dm-6): Ending clean mount
Sep 24 13:02:10 w01.example.com bash[13921]: I0924 13:02:10.042825   13921 operation_generator.go:665] "MountVolume.MountDevice succeeded for volume \"pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011ba\") pod \"minio-4\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.hpe.com/92b2ca12f3460e8b7c5f75f24b3d763b1230b8907a5e45d8d43e1ff07493a2a4/globalmount\"" pod="minio/minio-4"
Sep 24 13:02:10 w01.example.com kernel: scsi 10:0:0:8: Direct-Access     Nimble   Server           1.0  PQ: 0 ANSI: 5
Sep 24 13:02:10 w01.example.com kernel: scsi 10:0:0:8: alua: supports implicit TPGS
Sep 24 13:02:10 w01.example.com kernel: scsi 10:0:0:8: alua: device eui.6404eb6b077709696c9ce9003cbcbb15 port group 1 rel port 4
Sep 24 13:02:10 w01.example.com kernel: sd 10:0:0:8: Attached scsi generic sg24 type 0
Sep 24 13:02:10 w01.example.com kernel: sd 10:0:0:8: Power-on or device reset occurred
Sep 24 13:02:10 w01.example.com kernel: scsi 9:0:0:8: Direct-Access     Nimble   Server           1.0  PQ: 0 ANSI: 5
Sep 24 13:02:10 w01.example.com kernel: scsi 9:0:0:8: alua: supports implicit TPGS
Sep 24 13:02:10 w01.example.com kernel: scsi 9:0:0:8: alua: device eui.6404eb6b077709696c9ce9003cbcbb15 port group 1 rel port 3
Sep 24 13:02:10 w01.example.com kernel: sd 9:0:0:8: Attached scsi generic sg25 type 0
Sep 24 13:02:10 w01.example.com kernel: sd 9:0:0:8: Power-on or device reset occurred
Sep 24 13:02:10 w01.example.com kernel: sd 10:0:0:8: alua: transition timeout set to 60 seconds
Sep 24 13:02:10 w01.example.com kernel: sd 10:0:0:8: alua: port group 01 state A non-preferred supports tolusna
Sep 24 13:02:10 w01.example.com kernel: sd 10:0:0:8: [sdw] 3565158400 512-byte logical blocks: (1.83 TB/1.66 TiB)
Sep 24 13:02:10 w01.example.com kernel: sd 10:0:0:8: [sdw] Write Protect is off
Sep 24 13:02:10 w01.example.com kernel: sd 10:0:0:8: [sdw] Mode Sense: 9b 00 00 08
Sep 24 13:02:10 w01.example.com kernel: sd 10:0:0:8: [sdw] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Sep 24 13:02:10 w01.example.com kernel: sd 9:0:0:8: [sdx] 3565158400 512-byte logical blocks: (1.83 TB/1.66 TiB)
Sep 24 13:02:10 w01.example.com kernel: sd 9:0:0:8: [sdx] Write Protect is off
Sep 24 13:02:10 w01.example.com kernel: sd 9:0:0:8: [sdx] Mode Sense: 9b 00 00 08
Sep 24 13:02:10 w01.example.com kernel: sd 9:0:0:8: [sdx] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Sep 24 13:02:10 w01.example.com kernel: sd 10:0:0:8: [sdw] Attached SCSI disk
Sep 24 13:02:10 w01.example.com kernel: sd 9:0:0:8: [sdx] Attached SCSI disk
Sep 24 13:02:10 w01.example.com multipathd[1608]: mpathdo: addmap [0 3565158400 multipath 1 queue_if_no_path 1 alua 1 1 service-time 0 1 1 65:96 1]
Sep 24 13:02:10 w01.example.com multipathd[1608]: sdw [65:96]: path added to devmap mpathdo
Sep 24 13:02:10 w01.example.com bash[13921]: I0924 13:02:10.269551   13921 operation_generator.go:722] "MountVolume.SetUp succeeded for volume \"pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011ba\") pod \"minio-4\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") " pod="minio/minio-4"
Sep 24 13:02:11 w01.example.com multipathd[1608]: mpathdo: performing delayed actions

[2]

Sep 24 13:53:20 w01.example.com ovs-vswitchd[1971]: ovs|612622|connmgr|INFO|br-ex<->unix#3949769: 2 flow_mods in the last 0 s (2 adds)
Sep 24 13:53:20 w01.example.com bash[13811]: time="2024-09-24 13:53:20.798573387Z" level=info msg="Stopped pod sandbox: 8edc321d679d150d87dd5de4f49e9a7037bc773dc8fb8b9aef2f390fea22976a" id=d69957ba-1f92-48fb-b6a9-f1c04c2b971a name=/runtime.v1.RuntimeService/StopPodSandbox
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.881743   13921 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vhqfm\" (UniqueName: \"kubernetes.io/projected/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-kube-api-access-vhqfm\") pod \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") "
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.881833   13921 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"libminio\" (UniqueName: \"kubernetes.io/configmap/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-libminio\") pod \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") "
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.881870   13921 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"minio-update-host\" (UniqueName: \"kubernetes.io/configmap/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-minio-update-host\") pod \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") "
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.881926   13921 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"empty-dir\" (UniqueName: \"kubernetes.io/empty-dir/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-empty-dir\") pod \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") "
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.881965   13921 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"minio-run\" (UniqueName: \"kubernetes.io/configmap/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-minio-run\") pod \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") "
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.881991   13921 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"minio-creds\" (UniqueName: \"kubernetes.io/secret/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-minio-creds\") pod \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") "
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.882119   13921 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data-1\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011ba\") pod \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") "
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.882196   13921 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"podnetinfo\" (UniqueName: \"kubernetes.io/downward-api/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-podnetinfo\") pod \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") "
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.882267   13921 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"data-0\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc\") pod \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\" (UID: \"c8d568e4-d76d-4642-8bfa-9ced4ec4412f\") "
Sep 24 13:53:20 w01.example.com bash[13921]: W0924 13:53:20.908206   13921 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/c8d568e4-d76d-4642-8bfa-9ced4ec4412f/volumes/kubernetes.io~configmap/libminio: clearQuota called, but quotas disabled
Sep 24 13:53:20 w01.example.com bash[13921]: W0924 13:53:20.908206   13921 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/c8d568e4-d76d-4642-8bfa-9ced4ec4412f/volumes/kubernetes.io~configmap/minio-update-host: clearQuota called, but quotas disabled
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.908333   13921 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-libminio" (OuterVolumeSpecName: "libminio") pod "c8d568e4-d76d-4642-8bfa-9ced4ec4412f" (UID: "c8d568e4-d76d-4642-8bfa-9ced4ec4412f"). InnerVolumeSpecName "libminio". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.908397   13921 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-minio-update-host" (OuterVolumeSpecName: "minio-update-host") pod "c8d568e4-d76d-4642-8bfa-9ced4ec4412f" (UID: "c8d568e4-d76d-4642-8bfa-9ced4ec4412f"). InnerVolumeSpecName "minio-update-host". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 24 13:53:20 w01.example.com bash[13921]: W0924 13:53:20.909110   13921 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/c8d568e4-d76d-4642-8bfa-9ced4ec4412f/volumes/kubernetes.io~configmap/minio-run: clearQuota called, but quotas disabled
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.909214   13921 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-minio-run" (OuterVolumeSpecName: "minio-run") pod "c8d568e4-d76d-4642-8bfa-9ced4ec4412f" (UID: "c8d568e4-d76d-4642-8bfa-9ced4ec4412f"). InnerVolumeSpecName "minio-run". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.909825   13921 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-podnetinfo" (OuterVolumeSpecName: "podnetinfo") pod "c8d568e4-d76d-4642-8bfa-9ced4ec4412f" (UID: "c8d568e4-d76d-4642-8bfa-9ced4ec4412f"). InnerVolumeSpecName "podnetinfo". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.909970   13921 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-kube-api-access-vhqfm" (OuterVolumeSpecName: "kube-api-access-vhqfm") pod "c8d568e4-d76d-4642-8bfa-9ced4ec4412f" (UID: "c8d568e4-d76d-4642-8bfa-9ced4ec4412f"). InnerVolumeSpecName "kube-api-access-vhqfm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.924266   13921 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011ba" (OuterVolumeSpecName: "data-1") pod "c8d568e4-d76d-4642-8bfa-9ced4ec4412f" (UID: "c8d568e4-d76d-4642-8bfa-9ced4ec4412f"). InnerVolumeSpecName "pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08". PluginName "kubernetes.io/csi", VolumeGidValue ""
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.943260   13921 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc" (OuterVolumeSpecName: "data-0") pod "c8d568e4-d76d-4642-8bfa-9ced4ec4412f" (UID: "c8d568e4-d76d-4642-8bfa-9ced4ec4412f"). InnerVolumeSpecName "pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff". PluginName "kubernetes.io/csi", VolumeGidValue ""
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.943825   13921 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-minio-creds" (OuterVolumeSpecName: "minio-creds") pod "c8d568e4-d76d-4642-8bfa-9ced4ec4412f" (UID: "c8d568e4-d76d-4642-8bfa-9ced4ec4412f"). InnerVolumeSpecName "minio-creds". PluginName "kubernetes.io/secret", VolumeGidValue ""
Sep 24 13:53:20 w01.example.com bash[13921]: W0924 13:53:20.943970   13921 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/c8d568e4-d76d-4642-8bfa-9ced4ec4412f/volumes/kubernetes.io~empty-dir/empty-dir: clearQuota called, but quotas disabled
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.944187   13921 operation_generator.go:888] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-empty-dir" (OuterVolumeSpecName: "empty-dir") pod "c8d568e4-d76d-4642-8bfa-9ced4ec4412f" (UID: "c8d568e4-d76d-4642-8bfa-9ced4ec4412f"). InnerVolumeSpecName "empty-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.983963   13921 reconciler_common.go:300] "Volume detached for volume \"minio-update-host\" (UniqueName: \"kubernetes.io/configmap/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-minio-update-host\") on node \"w01.example.com\" DevicePath \"\""
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.983993   13921 reconciler_common.go:300] "Volume detached for volume \"empty-dir\" (UniqueName: \"kubernetes.io/empty-dir/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-empty-dir\") on node \"w01.example.com\" DevicePath \"\""
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.984011   13921 reconciler_common.go:300] "Volume detached for volume \"minio-run\" (UniqueName: \"kubernetes.io/configmap/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-minio-run\") on node \"w01.example.com\" DevicePath \"\""
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.984025   13921 reconciler_common.go:300] "Volume detached for volume \"minio-creds\" (UniqueName: \"kubernetes.io/secret/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-minio-creds\") on node \"w01.example.com\" DevicePath \"\""
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.984062   13921 reconciler_common.go:293] "operationExecutor.UnmountDevice started for volume \"pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011ba\") on node \"w01.example.com\" "
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.984075   13921 reconciler_common.go:300] "Volume detached for volume \"podnetinfo\" (UniqueName: \"kubernetes.io/downward-api/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-podnetinfo\") on node \"w01.example.com\" DevicePath \"\""
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.984095   13921 reconciler_common.go:293] "operationExecutor.UnmountDevice started for volume \"pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc\") on node \"w01.example.com\" "
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.984109   13921 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-vhqfm\" (UniqueName: \"kubernetes.io/projected/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-kube-api-access-vhqfm\") on node \"w01.example.com\" DevicePath \"\""
Sep 24 13:53:20 w01.example.com bash[13921]: I0924 13:53:20.984120   13921 reconciler_common.go:300] "Volume detached for volume \"libminio\" (UniqueName: \"kubernetes.io/configmap/c8d568e4-d76d-4642-8bfa-9ced4ec4412f-libminio\") on node \"w01.example.com\" DevicePath \"\""

[3]

Sep 24 13:53:52 w01.example.com kernel: XFS (dm-6): Unmounting Filesystem
Sep 24 13:53:59 w01.example.com multipathd[1608]: libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: table ioctl on mpathdr  failed: No such device or address
Sep 24 13:53:59 w01.example.com bash[13921]: I0924 13:53:59.165678   13921 operation_generator.go:1002] UnmountDevice succeeded for volume "pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08" (UniqueName: "kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011ba") on node "w01.example.com"
Sep 24 13:53:59 w01.example.com bash[13921]: I0924 13:53:59.194383   13921 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011ba\") pod \"minio-4\" (UID: \"8ba70c16-31c5-4ebb-993e-8bcd01a2adb7\") " pod="minio/minio-4"
Sep 24 13:53:59 w01.example.com bash[13921]: I0924 13:53:59.194671   13921 operation_generator.go:623] "MountVolume.WaitForAttach entering for volume \"pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011ba\") pod \"minio-4\" (UID: \"8ba70c16-31c5-4ebb-993e-8bcd01a2adb7\") DevicePath \"csi-f06ebff7d27800558240a3bf43b734c49c4c5815c69da124bc032b5d3f541e98\"" pod="minio/minio-4"
Sep 24 13:53:59 w01.example.com bash[13921]: I0924 13:53:59.197053   13921 operation_generator.go:633] "MountVolume.WaitForAttach succeeded for volume \"pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011ba\") pod \"minio-4\" (UID: \"8ba70c16-31c5-4ebb-993e-8bcd01a2adb7\") DevicePath \"csi-f06ebff7d27800558240a3bf43b734c49c4c5815c69da124bc032b5d3f541e98\"" pod="minio/minio-4"
Sep 24 13:53:59 w01.example.com kernel: scsi 10:0:0:9: Direct-Access     Nimble   Server           1.0  PQ: 0 ANSI: 5
Sep 24 13:53:59 w01.example.com kernel: scsi 10:0:0:9: alua: supports implicit TPGS
Sep 24 13:53:59 w01.example.com kernel: scsi 10:0:0:9: alua: device eui.1ec360188e74db7b6c9ce9003cbcbb15 port group 1 rel port 4
Sep 24 13:53:59 w01.example.com kernel: sd 10:0:0:9: Attached scsi generic sg14 type 0
Sep 24 13:53:59 w01.example.com kernel: sd 10:0:0:9: [sdy] 3565158400 512-byte logical blocks: (1.83 TB/1.66 TiB)
Sep 24 13:53:59 w01.example.com kernel: sd 10:0:0:9: [sdy] Write Protect is off
Sep 24 13:53:59 w01.example.com kernel: sd 10:0:0:9: [sdy] Mode Sense: 9b 00 00 08
Sep 24 13:53:59 w01.example.com kernel: sd 10:0:0:9: [sdy] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Sep 24 13:53:59 w01.example.com kernel: scsi 9:0:0:9: Direct-Access     Nimble   Server           1.0  PQ: 0 ANSI: 5
Sep 24 13:53:59 w01.example.com kernel: scsi 9:0:0:9: alua: supports implicit TPGS
Sep 24 13:53:59 w01.example.com kernel: scsi 9:0:0:9: alua: device eui.1ec360188e74db7b6c9ce9003cbcbb15 port group 1 rel port 3
Sep 24 13:53:59 w01.example.com kernel: sd 9:0:0:9: Attached scsi generic sg15 type 0
Sep 24 13:53:59 w01.example.com kernel: sd 9:0:0:9: [sdz] 3565158400 512-byte logical blocks: (1.83 TB/1.66 TiB)
Sep 24 13:53:59 w01.example.com kernel: sd 9:0:0:9: [sdz] Write Protect is off
Sep 24 13:53:59 w01.example.com kernel: sd 9:0:0:9: [sdz] Mode Sense: 9b 00 00 08
Sep 24 13:53:59 w01.example.com kernel: sd 9:0:0:9: [sdz] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Sep 24 13:53:59 w01.example.com kernel: sd 10:0:0:9: [sdy] Attached SCSI disk
Sep 24 13:53:59 w01.example.com kernel: sd 9:0:0:9: [sdz] Attached SCSI disk
Sep 24 13:53:59 w01.example.com multipathd[1608]: mpathdr: reload [0 3565158400 multipath 1 queue_if_no_path 1 alua 1 1 service-time 0 1 1 8:224 1]
Sep 24 13:53:59 w01.example.com multipathd[1608]: libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: reload ioctl on mpathdr  failed: No such device or address
Sep 24 13:53:59 w01.example.com multipathd[1608]: dm_addmap: libdm task=1 error: No such device or address
Sep 24 13:53:59 w01.example.com multipathd[1608]: mpathdr: failed in domap for removal of path sdn
Sep 24 13:53:59 w01.example.com kernel: scsi 10:0:0:9: alua: Detached
Sep 24 13:53:59 w01.example.com multipathd[1608]: libdevmapper: ioctl/libdm-iface.c(1980): device-mapper: table ioctl on mpathdr  failed: No such device or address
Sep 24 13:53:59 w01.example.com multipathd[1608]: mpathdr: map flushed
Sep 24 13:53:59 w01.example.com multipathd[1608]: mpathdr: removed map after removing all paths
Sep 24 13:53:59 w01.example.com kernel: scsi 9:0:0:9: alua: Detached
Sep 24 13:53:59 w01.example.com multipathd[1608]: mpathdr: addmap [0 3565158400 multipath 1 queue_if_no_path 1 alua 1 1 service-time 0 1 1 65:144 1]
Sep 24 13:53:59 w01.example.com multipathd[1608]: mpathdr: already waiting for events on device
Sep 24 13:53:59 w01.example.com multipathd[1608]: sdz [65:144]: path added to devmap mpathdr

[4]

Sep 24 13:54:00 w01.example.com kernel: XFS (dm-6): Mounting V5 Filesystem
Sep 24 13:54:00 w01.example.com kernel: XFS (dm-6): Ending clean mount
Sep 24 13:54:00 w01.example.com bash[13921]: I0924 13:54:00.853141   13921 operation_generator.go:665] "MountVolume.MountDevice succeeded for volume \"pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011ba\") pod \"minio-4\" (UID: \"8ba70c16-31c5-4ebb-993e-8bcd01a2adb7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.hpe.com/92b2ca12f3460e8b7c5f75f24b3d763b1230b8907a5e45d8d43e1ff07493a2a4/globalmount\"" pod="minio/minio-4"
Sep 24 13:54:00 w01.example.com multipathd[1608]: mpathdr: performing delayed actions
Sep 24 13:54:00 w01.example.com multipathd[1608]: mpathdr: reload [0 3565158400 multipath 1 queue_if_no_path 1 alua 1 1 service-time 0 2 1 65:144 1 65:128 1]
Sep 24 13:54:01 w01.example.com bash[13921]: I0924 13:54:01.322125   13921 operation_generator.go:722] "MountVolume.SetUp succeeded for volume \"pvc-c3503b7e-ff65-446a-8b91-36d6b1d6da08\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011ba\") pod \"minio-4\" (UID: \"8ba70c16-31c5-4ebb-993e-8bcd01a2adb7\") " pod="minio/minio-4"

[5]

Sep 24 13:55:14 w01.example.com multipathd[1608]: mpathdo: removing map by alias
Sep 24 13:55:14 w01.example.com multipath[1886850]: dm-11 is not a multipath map
Sep 24 13:55:14 w01.example.com kernel: scsi 10:0:0:8: alua: Detached
Sep 24 13:55:14 w01.example.com kernel: scsi 9:0:0:8: alua: Detached
Sep 24 13:55:19 w01.example.com kernel: device-mapper: ioctl: Target type does not support messages
Sep 24 13:55:19 w01.example.com multipath[1887407]: dm-11 is not a multipath map
Sep 24 13:55:20 w01.example.com bash[13921]: E0924 13:55:20.996795   13921 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc podName: nodeName:}" failed. No retries permitted until 2024-09-24 13:55:21.49676056 +0000 UTC m=+9005516.128061589 (durationBeforeRetry 500ms). Error: UnmountDevice failed for volume "pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff" (UniqueName: "kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc") on node "w01.example.com" : kubernetes.io/csi: attacher.UnmountDevice failed: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Sep 24 13:55:21 w01.example.com bash[13921]: I0924 13:55:21.018537   13921 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc\") pod \"minio-4\" (UID: \"8ba70c16-31c5-4ebb-993e-8bcd01a2adb7\") " pod="minio/minio-4"
Sep 24 13:55:21 w01.example.com bash[13921]: I0924 13:55:21.018632   13921 operation_generator.go:623] "MountVolume.WaitForAttach entering for volume \"pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc\") pod \"minio-4\" (UID: \"8ba70c16-31c5-4ebb-993e-8bcd01a2adb7\") DevicePath \"csi-0c705594e99e6ed63ddaace092a25496598408e505d4f9bf1ae79926c5198705\"" pod="minio/minio-4"
Sep 24 13:55:21 w01.example.com bash[13921]: I0924 13:55:21.021393   13921 operation_generator.go:633] "MountVolume.WaitForAttach succeeded for volume \"pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc\") pod \"minio-4\" (UID: \"8ba70c16-31c5-4ebb-993e-8bcd01a2adb7\") DevicePath \"csi-0c705594e99e6ed63ddaace092a25496598408e505d4f9bf1ae79926c5198705\"" pod="minio/minio-4"
Sep 24 13:55:21 w01.example.com kernel: scsi 10:0:0:8: Direct-Access     Nimble   Server           1.0  PQ: 0 ANSI: 5
Sep 24 13:55:21 w01.example.com kernel: scsi 10:0:0:8: alua: supports implicit TPGS
Sep 24 13:55:21 w01.example.com kernel: scsi 10:0:0:8: alua: device eui.6404eb6b077709696c9ce9003cbcbb15 port group 1 rel port 4
Sep 24 13:55:21 w01.example.com kernel: sd 10:0:0:8: Attached scsi generic sg24 type 0
Sep 24 13:55:21 w01.example.com kernel: sd 10:0:0:8: [sdn] 3565158400 512-byte logical blocks: (1.83 TB/1.66 TiB)
Sep 24 13:55:21 w01.example.com kernel: sd 10:0:0:8: [sdn] Write Protect is off
Sep 24 13:55:21 w01.example.com kernel: sd 10:0:0:8: [sdn] Mode Sense: 9b 00 00 08
Sep 24 13:55:21 w01.example.com kernel: scsi 9:0:0:8: Direct-Access     Nimble   Server           1.0  PQ: 0 ANSI: 5
Sep 24 13:55:21 w01.example.com kernel: sd 10:0:0:8: [sdn] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Sep 24 13:55:21 w01.example.com kernel: scsi 9:0:0:8: alua: supports implicit TPGS
Sep 24 13:55:21 w01.example.com kernel: scsi 9:0:0:8: alua: device eui.6404eb6b077709696c9ce9003cbcbb15 port group 1 rel port 3
Sep 24 13:55:21 w01.example.com kernel: sd 9:0:0:8: Attached scsi generic sg25 type 0
Sep 24 13:55:21 w01.example.com kernel: sd 9:0:0:8: [sdo] 3565158400 512-byte logical blocks: (1.83 TB/1.66 TiB)
Sep 24 13:55:21 w01.example.com kernel: sd 9:0:0:8: [sdo] Write Protect is off
Sep 24 13:55:21 w01.example.com kernel: sd 9:0:0:8: [sdo] Mode Sense: 9b 00 00 08
Sep 24 13:55:21 w01.example.com kernel: sd 9:0:0:8: [sdo] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Sep 24 13:55:21 w01.example.com ovs-vswitchd[1971]: ovs|612633|connmgr|INFO|br-ex<->unix#3949828: 2 flow_mods in the last 0 s (2 adds)
Sep 24 13:55:21 w01.example.com kernel: sd 10:0:0:8: alua: transition timeout set to 60 seconds
Sep 24 13:55:21 w01.example.com kernel: sd 10:0:0:8: alua: port group 01 state A non-preferred supports tolusna
Sep 24 13:55:21 w01.example.com kernel: sd 9:0:0:8: [sdo] Attached SCSI disk
Sep 24 13:55:21 w01.example.com kernel: sd 10:0:0:8: [sdn] Attached SCSI disk
Sep 24 13:55:21 w01.example.com multipathd[1608]: mpathdo: reload [0 3565158400 multipath 1 queue_if_no_path 1 alua 1 1 service-time 0 1 1 8:224 1]
Sep 24 13:55:21 w01.example.com multipathd[1608]: sdo [8:224]: path added to devmap mpathdo
Sep 24 13:55:22 w01.example.com multipathd[1608]: mpathdo: reload [0 3565158400 multipath 1 queue_if_no_path 1 alua 1 1 service-time 0 2 1 8:224 1 8:208 1]
Sep 24 13:55:22 w01.example.com multipathd[1608]: sdn [8:208]: path added to devmap mpathdo
Sep 24 13:55:24 w01.example.com bash[13921]: E0924 13:55:24.252057   13921 kubelet.go:1948] "Unable to attach or mount volumes for pod; skipping pod" err="unmounted volumes=[data-0], unattached volumes=[], failed to process volumes=[]: timed out waiting for the condition" pod="minio/minio-4"
Sep 24 13:55:24 w01.example.com bash[13921]: E0924 13:55:24.252092   13921 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[data-0], unattached volumes=[], failed to process volumes=[]: timed out waiting for the condition" pod="minio/minio-4" podUID=8ba70c16-31c5-4ebb-993e-8bcd01a2adb7
Sep 24 13:55:24 w01.example.com multipathd[1608]: mpathdo: removing map by alias
Sep 24 13:55:24 w01.example.com multipath[1888119]: dm-11 is not a multipath map
Sep 24 13:55:24 w01.example.com kernel: scsi 9:0:0:8: alua: Detached
Sep 24 13:55:24 w01.example.com kernel: scsi 10:0:0:8: alua: Detached
Sep 24 13:55:29 w01.example.com kernel: device-mapper: ioctl: Target type does not support messages
Sep 24 13:55:29 w01.example.com multipath[1888832]: dm-11 is not a multipath map
Sep 24 13:55:34 w01.example.com kernel: device-mapper: ioctl: Target type does not support messages
Sep 24 13:55:35 w01.example.com multipath[1889479]: dm-11 is not a multipath map
Sep 24 13:55:36 w01.example.com ovs-vswitchd[1971]: ovs|612634|connmgr|INFO|br-ex<->unix#3949837: 2 flow_mods in the last 0 s (2 adds)
Sep 24 13:55:40 w01.example.com kernel: device-mapper: ioctl: Target type does not support messages
Sep 24 13:55:40 w01.example.com multipath[1890113]: dm-11 is not a multipath map
Sep 24 13:55:45 w01.example.com kernel: device-mapper: ioctl: Target type does not support messages
Sep 24 13:55:45 w01.example.com multipath[1890711]: dm-11 is not a multipath map
Sep 24 13:55:50 w01.example.com kernel: device-mapper: ioctl: Target type does not support messages
Sep 24 13:55:50 w01.example.com multipath[1891303]: dm-11 is not a multipath map
Sep 24 13:55:51 w01.example.com ovs-vswitchd[1971]: ovs|612635|connmgr|INFO|br-ex<->unix#3949841: 2 flow_mods in the last 0 s (2 adds)
Sep 24 13:55:52 w01.example.com kernel: XFS (dm-11): Unmounting Filesystem
Sep 24 13:55:55 w01.example.com kernel: device-mapper: ioctl: Target type does not support messages
Sep 24 13:55:55 w01.example.com multipath[1891920]: dm-11 is not a multipath map
Sep 24 13:56:00 w01.example.com kernel: device-mapper: ioctl: Target type does not support messages
Sep 24 13:56:00 w01.example.com multipath[1892655]: dm-11 is not a multipath map
Sep 24 13:56:05 w01.example.com kernel: device-mapper: ioctl: Target type does not support messages
Sep 24 13:56:05 w01.example.com multipath[1893386]: dm-11 is not a multipath map
Sep 24 13:56:06 w01.example.com ovs-vswitchd[1971]: ovs|612636|connmgr|INFO|br-ex<->unix#3949850: 2 flow_mods in the last 0 s (2 adds)
Sep 24 13:56:06 w01.example.com kernel: I/O error, dev dm-11, sector 1782738266 op 0x1:(WRITE) flags 0x9800 phys_seg 1 prio class 2
Sep 24 13:56:06 w01.example.com kernel: XFS (dm-11): log I/O error -5
Sep 24 13:56:06 w01.example.com kernel: XFS (dm-11): Log I/O Error (0x2) detected at xlog_ioend_work+0x6e/0x70 [xfs] (fs/xfs/xfs_log.c:1378).  Shutting down filesystem.
Sep 24 13:56:06 w01.example.com kernel: XFS (dm-11): Please unmount the filesystem and rectify the problem(s)
Sep 24 13:56:09 w01.example.com kernel: I/O error, dev dm-11, sector 0 op 0x0:(READ) flags 0x1000 phys_seg 1 prio class 2
Sep 24 13:56:09 w01.example.com kernel: XFS (dm-11): SB validate failed with error -5.
Sep 24 13:56:09 w01.example.com kernel: I/O error, dev dm-11, sector 3565158272 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 2
Sep 24 13:56:09 w01.example.com kernel: I/O error, dev dm-11, sector 3565158272 op 0x0:(READ) flags 0x0 phys_seg 1 prio class 2
Sep 24 13:56:09 w01.example.com kernel: Buffer I/O error on dev dm-11, logical block 445644784, async page read
Sep 24 13:56:09 w01.example.com kernel: ext4: Unknown parameter 'nouuid'
Sep 24 13:56:09 w01.example.com kernel: ext3: Unknown parameter 'nouuid'
Sep 24 13:56:09 w01.example.com kernel: ext2: Unknown parameter 'nouuid'
Sep 24 13:56:09 w01.example.com kernel: fuseblk: Unknown parameter 'nouuid'
Sep 24 13:56:09 w01.example.com kernel: I/O error, dev dm-11, sector 0 op 0x0:(READ) flags 0x1000 phys_seg 1 prio class 2
Sep 24 13:56:09 w01.example.com kernel: I/O error, dev dm-11, sector 0 op 0x0:(READ) flags 0x800 phys_seg 129 prio class 2
Sep 24 13:56:09 w01.example.com bash[13921]: E0924 13:56:09.654389   13921 csi_attacher.go:364] kubernetes.io/csi: attacher.MountDevice failed: rpc error: code = Internal desc = Failed to stage volume 0635370e1e208cb2220000000000000000000011bc, err: Filesystem issues has been detected and will not be repaired for the volume 0635370e1e208cb2220000000000000000000011bc as the fsRepair parameter is not set in the StorageClass
Sep 24 13:56:09 w01.example.com bash[13921]: E0924 13:56:09.654609   13921 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc podName: nodeName:}" failed. No retries permitted until 2024-09-24 13:56:10.154590919 +0000 UTC m=+9005564.785891942 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff" (UniqueName: "kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc") pod "minio-4" (UID: "8ba70c16-31c5-4ebb-993e-8bcd01a2adb7") : rpc error: code = Internal desc = Failed to stage volume 0635370e1e208cb2220000000000000000000011bc, err: Filesystem issues has been detected and will not be repaired for the volume 0635370e1e208cb2220000000000000000000011bc as the fsRepair parameter is not set in the StorageClass
Sep 24 13:56:10 w01.example.com bash[13921]: I0924 13:56:10.181897   13921 reconciler_common.go:231] "operationExecutor.MountVolume started for volume \"pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc\") pod \"minio-4\" (UID: \"8ba70c16-31c5-4ebb-993e-8bcd01a2adb7\") " pod="minio/minio-4"
Sep 24 13:56:10 w01.example.com bash[13921]: I0924 13:56:10.181994   13921 operation_generator.go:623] "MountVolume.WaitForAttach entering for volume \"pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc\") pod \"minio-4\" (UID: \"8ba70c16-31c5-4ebb-993e-8bcd01a2adb7\") DevicePath \"csi-0c705594e99e6ed63ddaace092a25496598408e505d4f9bf1ae79926c5198705\"" pod="minio/minio-4"
Sep 24 13:56:10 w01.example.com bash[13921]: I0924 13:56:10.184289   13921 operation_generator.go:633] "MountVolume.WaitForAttach succeeded for volume \"pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc\") pod \"minio-4\" (UID: \"8ba70c16-31c5-4ebb-993e-8bcd01a2adb7\") DevicePath \"csi-0c705594e99e6ed63ddaace092a25496598408e505d4f9bf1ae79926c5198705\"" pod="minio/minio-4"
Sep 24 13:56:10 w01.example.com kernel: scsi 10:0:0:8: Direct-Access     Nimble   Server           1.0  PQ: 0 ANSI: 5
Sep 24 13:56:10 w01.example.com kernel: scsi 10:0:0:8: alua: supports implicit TPGS
Sep 24 13:56:10 w01.example.com kernel: scsi 10:0:0:8: alua: device eui.6404eb6b077709696c9ce9003cbcbb15 port group 1 rel port 4
Sep 24 13:56:10 w01.example.com kernel: sd 10:0:0:8: Attached scsi generic sg24 type 0
Sep 24 13:56:10 w01.example.com kernel: sd 10:0:0:8: [sdn] 3565158400 512-byte logical blocks: (1.83 TB/1.66 TiB)
Sep 24 13:56:10 w01.example.com kernel: sd 10:0:0:8: [sdn] Write Protect is off
Sep 24 13:56:10 w01.example.com kernel: sd 10:0:0:8: [sdn] Mode Sense: 9b 00 00 08
Sep 24 13:56:10 w01.example.com kernel: sd 10:0:0:8: [sdn] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Sep 24 13:56:10 w01.example.com kernel: scsi 9:0:0:8: Direct-Access     Nimble   Server           1.0  PQ: 0 ANSI: 5
Sep 24 13:56:10 w01.example.com kernel: scsi 9:0:0:8: alua: supports implicit TPGS
Sep 24 13:56:10 w01.example.com kernel: scsi 9:0:0:8: alua: device eui.6404eb6b077709696c9ce9003cbcbb15 port group 1 rel port 3
Sep 24 13:56:10 w01.example.com kernel: sd 9:0:0:8: Attached scsi generic sg25 type 0
Sep 24 13:56:10 w01.example.com kernel: sd 9:0:0:8: [sdo] 3565158400 512-byte logical blocks: (1.83 TB/1.66 TiB)
Sep 24 13:56:10 w01.example.com kernel: sd 9:0:0:8: [sdo] Write Protect is off
Sep 24 13:56:10 w01.example.com kernel: sd 9:0:0:8: [sdo] Mode Sense: 9b 00 00 08
Sep 24 13:56:10 w01.example.com kernel: sd 9:0:0:8: [sdo] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
Sep 24 13:56:10 w01.example.com kernel: sd 9:0:0:8: [sdo] Attached SCSI disk
Sep 24 13:56:10 w01.example.com kernel: sd 10:0:0:8: alua: transition timeout set to 60 seconds
Sep 24 13:56:10 w01.example.com kernel: sd 10:0:0:8: alua: port group 01 state A non-preferred supports tolusna
Sep 24 13:56:10 w01.example.com kernel: sd 10:0:0:8: [sdn] Attached SCSI disk
Sep 24 13:56:10 w01.example.com multipathd[1608]: mpathdo: reload [0 3565158400 multipath 1 queue_if_no_path 1 alua 1 1 service-time 0 1 1 8:224 1]
Sep 24 13:56:10 w01.example.com bash[13811]: time="2024-09-24 13:56:10.332439960Z" level=warning msg="Found defunct process with PID 1893831 (consul)"
Sep 24 13:56:10 w01.example.com multipathd[1608]: sdo [8:224]: path added to devmap mpathdo
Sep 24 13:56:11 w01.example.com kernel: XFS (dm-11): Mounting V5 Filesystem
Sep 24 13:56:11 w01.example.com kernel: XFS (dm-11): Starting recovery (logdev: internal)
Sep 24 13:56:11 w01.example.com kernel: XFS (dm-11): Ending recovery (logdev: internal)
Sep 24 13:56:11 w01.example.com multipathd[1608]: mpathdo: reload [0 3565158400 multipath 1 queue_if_no_path 1 alua 1 1 service-time 0 2 1 8:224 1 8:208 1]
Sep 24 13:56:11 w01.example.com multipathd[1608]: sdn [8:208]: path added to devmap mpathdo
Sep 24 13:56:11 w01.example.com bash[13921]: I0924 13:56:11.474693   13921 operation_generator.go:665] "MountVolume.MountDevice succeeded for volume \"pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc\") pod \"minio-4\" (UID: \"8ba70c16-31c5-4ebb-993e-8bcd01a2adb7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/csi.hpe.com/80da37719934a5d9a1765da953ecbcd7aa3a41d1f0ad151f6b747c312f445bc7/globalmount\"" pod="minio/minio-4"
Sep 24 13:56:11 w01.example.com bash[13921]: I0924 13:56:11.715620   13921 operation_generator.go:722] "MountVolume.SetUp succeeded for volume \"pvc-90aba057-181e-408d-9eef-adf0b8d5d0ff\" (UniqueName: \"kubernetes.io/csi/csi.hpe.com^0635370e1e208cb2220000000000000000000011bc\") pod \"minio-4\" (UID: \"8ba70c16-31c5-4ebb-993e-8bcd01a2adb7\") " pod="minio/minio-4"
Sep 24 13:56:11 w01.example.com bash[13921]: I0924 13:56:11.872984   13921 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio/minio-4"
Sep 24 13:56:11 w01.example.com bash[13811]: time="2024-09-24 13:56:11.874156511Z" level=info msg="Running pod sandbox: minio/minio-4/POD" id=ea480fcd-dd7c-4d40-a7f6-25619b1be1ed name=/runtime.v1.RuntimeService/RunPodSandbox
Sep 24 13:56:11 w01.example.com bash[13811]: time="2024-09-24 13:56:11.874218894Z" level=warning msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
Sep 24 13:56:11 w01.example.com bash[13811]: time="2024-09-24 13:56:11.886554769Z" level=info msg="Got pod network &{Name:minio-4 Namespace:minio ID:2d715a8c972f30e726ca16829683ef6983553d2c1e0cc52ff68f60eeb3834153 UID:8ba70c16-31c5-4ebb-993e-8bcd01a2adb7 NetNS:/var/run/netns/27353c27-9f06-4c80-8787-1838509dabb5 Networks:[] RuntimeConfig:map[multus-cni-network:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
Sep 24 13:56:11 w01.example.com bash[13811]: time="2024-09-24 13:56:11.886585795Z" level=info msg="Adding pod minio_minio-4 to CNI network \"multus-cni-network\" (type=multus-shim)"
Sep 24 13:56:11 w01.example.com NetworkManager[2068]: <info>  [1727186171.9459] manager: (2d715a8c972f30e): new Veth device (/org/freedesktop/NetworkManager/Devices/1080)
Sep 24 13:56:11 w01.example.com kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Sep 24 13:56:11 w01.example.com kernel: IPv6: ADDRCONF(NETDEV_CHANGE): 2d715a8c972f30e: link becomes ready
Sep 24 13:56:11 w01.example.com NetworkManager[2068]: <info>  [1727186171.9471] device (2d715a8c972f30e): carrier: link connected
Sep 24 13:56:11 w01.example.com kernel: device 2d715a8c972f30e entered promiscuous mode
Sep 24 13:56:11 w01.example.com ovs-vswitchd[1971]: ovs|612637|bridge|INFO|bridge br-int: added interface 2d715a8c972f30e on port 458
Sep 24 13:56:11 w01.example.com NetworkManager[2068]: <info>  [1727186171.9739] manager: (2d715a8c972f30e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/1081)
Sep 24 13:56:12 w01.example.com kernel: iavf 0000:d8:09.4: Reset indication received from the PF
Sep 24 13:56:12 w01.example.com kernel: iavf 0000:d8:09.4: Scheduling reset task
Sep 24 13:56:12 w01.example.com kernel: i40e 0000:d8:00.0: VF 60 is now trusted
Sep 24 13:56:12 w01.example.com kernel: iavf 0000:d8:09.4 temp_50: renamed from ens3f0v60
Sep 24 13:56:12 w01.example.com NetworkManager[2068]: <info>  [1727186172.3282] device (ens3f0v60): interface index 50 renamed iface from 'ens3f0v60' to 'temp_50'
Sep 24 13:56:12 w01.example.com kernel: iavf 0000:d8:09.4 net1: renamed from temp_50
Sep 24 13:56:12 w01.example.com kernel: iavf 0000:d8:0a.2: Reset indication received from the PF
Sep 24 13:56:12 w01.example.com kernel: iavf 0000:d8:0a.2: Scheduling reset task
Sep 24 13:56:12 w01.example.com kernel: iavf 0000:d8:09.4 net1: NIC Link is Up Speed is 25 Gbps Full Duplex
Sep 24 13:56:12 w01.example.com kernel: i40e 0000:d8:00.1: VF 2 is now trusted
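As a quick way to confirm the symptom from the issue description (writes landing in the ephemeral folder instead of the PVC), one can check on the node whether the kubelet CSI publish path is actually a mountpoint. This is a generic sketch, not part of the CSI driver; the path layout is quoted from the issue description, and the pod UID / PVC name placeholders are hypothetical:

```shell
#!/bin/sh
# If the kubelet CSI publish path is NOT a mountpoint, the container is
# writing straight into the node's ephemeral storage.
#
# Note: this parent-device comparison misses "/" and bind mounts within the
# same filesystem; mountpoint(1) or findmnt(8) are more thorough if available.
is_mountpoint() {
  dir=$1
  [ "$(stat -c %d "$dir")" != "$(stat -c %d "$dir/..")" ]
}

# Hypothetical pod UID / PVC name, following the path quoted in the issue:
csi_path="/var/lib/kubelet/pods/<pod-uid>/volumes/kubernetes.io~csi/<pvc>/mount"

# Dry-run demonstration on a plain temp directory (not a mountpoint):
d=$(mktemp -d)
if is_mountpoint "$d"; then echo "mountpoint"; else echo "plain directory"; fi
rmdir "$d"
```

On an affected node you would pass `$csi_path` to `is_mountpoint` instead of the temp directory.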

@datamattsson
Collaborator

I've spent some time on this today and I'm unable to reproduce it with the methods we used to test the fsRepair feature.

Do you mind sharing the manifests for your workload? It looks like you're using MinIO; is that controlled by a StatefulSet or a Deployment?

@valleedelisle
Author

Sure, it's a StatefulSet [1]. Before this crash, we had 13 million small objects in there. We were hitting this issue when we restarted the pods, so we had to apply this workaround. We're also passing some SR-IOV virtual functions, mostly for internode traffic, but that shouldn't change anything here since the volume is connected on another NIC, from the host.

[1]

kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: minio
  namespace: minio
  labels:
    helm.sh/chart: minio-14.3.2
spec:
  serviceName: minio-headless
  revisionHistoryLimit: 10
  persistentVolumeClaimRetentionPolicy:
    whenDeleted: Retain
    whenScaled: Retain
  volumeClaimTemplates:
    - kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: data-0
        creationTimestamp: null
        labels:
          app.kubernetes.io/instance: minio
          app.kubernetes.io/name: minio
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1700Gi
        storageClassName: hpe-minio
        volumeMode: Filesystem
      status:
        phase: Pending
    - kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: data-1
        creationTimestamp: null
        labels:
          app.kubernetes.io/instance: minio
          app.kubernetes.io/name: minio
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1700Gi
        storageClassName: hpe-minio
        volumeMode: Filesystem
      status:
        phase: Pending
  template:
    metadata:
      creationTimestamp: null
      labels:
        app.kubernetes.io/instance: minio
        app.kubernetes.io/managed-by: Helm
        app.kubernetes.io/name: minio
        app.kubernetes.io/version: 2024.4.28
        helm.sh/chart: minio-14.3.2
      annotations:
        k8s.v1.cni.cncf.io/networks: |-
          [
            {
              "name": "internode-minio",
              "namespace": "minio",
              "capabilities": { "ips": true }
            },
            {
              "name": "s3-minio",
              "namespace": "minio",
              "capabilities": { "ips": true }
            }
          ]
    spec:
      restartPolicy: Always
      serviceAccountName: minio
      schedulerName: default-scheduler
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    app.kubernetes.io/instance: minio
                    app.kubernetes.io/name: minio
                topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 30
      securityContext:
        seLinuxOptions:
          type: spc_t
        fsGroup: 1001
        fsGroupChangePolicy: OnRootMismatch
      containers:
        - resources:
            limits:
              cpu: '16'
              ephemeral-storage: 1Gi
              memory: 128Gi
            requests:
              cpu: '16'
              ephemeral-storage: 1Gi
              memory: 128Gi
          readinessProbe:
            tcpSocket:
              port: minio-api
            initialDelaySeconds: 45
            timeoutSeconds: 1
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          terminationMessagePath: /dev/termination-log
          name: minio
          livenessProbe:
            httpGet:
              path: /minio/health/live
              port: minio-api
              scheme: HTTP
            initialDelaySeconds: 45
            timeoutSeconds: 5
            periodSeconds: 5
            successThreshold: 1
            failureThreshold: 5
          env:
            - name: BITNAMI_DEBUG
              value: 'false'
            - name: MINIO_DISTRIBUTED_MODE_ENABLED
              value: 'yes'
            - name: MINIO_DISTRIBUTED_NODES
              value: 'minio-{0...5}.minio-headless.minio.svc.cluster.local:9000/bitnami/minio/data-{0...1}'
            - name: MINIO_SCHEME
              value: http
            - name: MINIO_FORCE_NEW_KEYS
              value: 'no'
            - name: MINIO_ROOT_USER
              valueFrom:
                secretKeyRef:
                  name: minio
                  key: root-user
            - name: MINIO_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: minio
                  key: root-password
            - name: MINIO_SKIP_CLIENT
              value: 'yes'
            - name: MINIO_BROWSER
              value: 'on'
            - name: MINIO_PROMETHEUS_AUTH_TYPE
              value: public
            - name: MINIO_DATA_DIR
              value: /bitnami/minio/data-0
            - name: MINIO_BROWSER_SESSION_DURATION
              value: 365d
            - name: MINIO_SCANNER_SPEED
              value: slowest
            - name: NFV_IPPOOL
              value: 192.168.0.0-16
            - name: NFV_SLEEP
              value: '45'
            - name: NFV_ARGS
              value: 'http://minio-{0...5}-data:9000/bitnami/minio/data-{0...1}'
          securityContext:
            runAsGroup: 1001
            runAsUser: 0
            seccompProfile:
              type: RuntimeDefault
            readOnlyRootFilesystem: false
            runAsNonRoot: false
            privileged: true
            capabilities:
              drop:
                - ALL
            seLinuxOptions:
              type: spc_t
            allowPrivilegeEscalation: true
          ports:
            - name: minio-api
              containerPort: 9000
              protocol: TCP
            - name: minio-console
              containerPort: 9001
              protocol: TCP
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: empty-dir
              mountPath: /tmp
              subPath: tmp-dir
            - name: empty-dir
              mountPath: /opt/bitnami/minio/tmp
              subPath: app-tmp-dir
            - name: empty-dir
              mountPath: /.mc
              subPath: app-mc-dir
            - name: data-0
              mountPath: /bitnami/minio/data-0
            - name: data-1
              mountPath: /bitnami/minio/data-1
            - name: minio-creds
              readOnly: true
              mountPath: /minio-creds-config.json
              subPath: config.json
            - name: minio-run
              readOnly: true
              mountPath: /opt/bitnami/scripts/minio/run.sh
              subPath: run.sh
            - name: minio-update-host
              readOnly: true
              mountPath: /opt/bitnami/scripts/minio-update-host.sh
              subPath: minio-update-host.sh
            - name: libminio
              readOnly: true
              mountPath: /opt/bitnami/scripts/libminio.sh
              subPath: libminio.sh
          terminationMessagePolicy: File
          envFrom:
            - secretRef:
                name: minio-extra
          image: 'minio:2024.4.28-debian-12-r0'
      automountServiceAccountToken: true
      serviceAccount: minio
      volumes:
        - name: empty-dir
          emptyDir: {}
        - name: minio-creds
          secret:
            secretName: minio-creds
            items:
              - key: config.json
                path: config.json
            defaultMode: 420
        - name: minio-run
          configMap:
            name: minio-run
            items:
              - key: minio-run.sh
                path: run.sh
            defaultMode: 511
        - name: libminio
          configMap:
            name: minio-run
            items:
              - key: libminio.sh
                path: libminio.sh
            defaultMode: 511
        - name: minio-update-host
          configMap:
            name: minio-run
            items:
              - key: minio-update-host.sh
                path: minio-update-host.sh
            defaultMode: 511
      dnsPolicy: ClusterFirst
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
  podManagementPolicy: Parallel
  replicas: 6
  updateStrategy:
    type: RollingUpdate
  selector:
    matchLabels:
      app.kubernetes.io/instance: minio
      app.kubernetes.io/name: minio
status:
  observedGeneration: 43
  availableReplicas: 6
  updateRevision: minio-5ccdcd5c7
  currentRevision: minio-5ccdcd5c7
  currentReplicas: 6
  updatedReplicas: 6
  replicas: 6
  collisionCount: 0
  readyReplicas: 6
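For context on the `fsRepair` parameter named in the stage error above, a minimal sketch of what an `hpe-minio` StorageClass with repair enabled might look like. The `fsRepair` parameter name is taken from the CSI driver's error message and the provisioner name from the log lines; all other fields are assumptions, and the backend secret parameters a real class requires are omitted:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hpe-minio
provisioner: csi.hpe.com
parameters:
  csi.storage.k8s.io/fstype: xfs   # the logs show an XFS filesystem on dm-11
  fsRepair: "true"                 # let the node plugin attempt repair at stage time
```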
