Kafka Pod Stuck in Terminating State During Manual Rolling Update with OpenEBS-LVM-LocalPV #10826
-
I'm not sure what exactly from the StackOverflow solution helped you; it is not really clear from the answer. But in general, you would need to dig into the logs and events to understand what makes the pod stuck. That is the only way to solve it.
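For example (a minimal sketch; the pod name and namespace are taken from this thread, adjust as needed), the pod's events and status usually reveal what is holding termination up:

# Look at Events, deletionTimestamp, and finalizers of the stuck pod
kubectl describe pod kafka-kafka-0 -n ot

# Recent events involving the pod (e.g. FailedKillPod, volume unmount errors)
kubectl get events -n ot --field-selector involvedObject.name=kafka-kafka-0 --sort-by=.lastTimestamp

# Logs of the previous container instance, if the current one is already gone
kubectl logs kafka-kafka-0 -n ot --previous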
-
Thank you for your response. I ran the following command:
[root@master-1]/# kubectl annotate strimzipodset kafka-kafka strimzi.io/manual-rolling-update=true
strimzipodset.core.strimzi.io/kafka-kafka annotated
The pod stays in the Terminating state. The relevant logs:
[root@master-1]/# k logs kafka-kafka-0
unable to retrieve container logs for cri-o://1772614287ed089724095f4da7be80b4ba979b4ac9101d0a55030c56bbf5f6b8
[root@master-1]/# k logs strimzi-cluster-operator-6ff69d57b-czdh9
....
2024-11-13 07:42:37 INFO StrimziPodSetController:414 - Reconciliation #26(watch) StrimziPodSet(ot/kafka-zookeeper): reconciled
2024-11-13 07:42:40 INFO AbstractOperator:400 - Reconciliation #4(timer) Kafka(ot/kafka): Reconciliation is in progress
2024-11-13 07:42:49 INFO KafkaRoller:396 - Reconciliation #4(timer) Kafka(ot/kafka): Will temporarily skip verifying pod kafka-kafka-0/0 is up-to-date due to ForceableProblem: Pod kafka-kafka-0 is the active controller and there are other pods to verify first, retrying after at least 32000ms
2024-11-13 07:42:50 INFO KafkaRoller:476 - Reconciliation #4(timer) Kafka(ot/kafka): Rolling Pod kafka-kafka-2/2 due to [Pod was manually annotated to be rolled]
2024-11-13 07:42:50 INFO PodOperator:54 - Reconciliation #4(timer) Kafka(ot/kafka): Rolling pod kafka-kafka-2
2024-11-13 07:42:50 INFO StrimziPodSetController:378 - Reconciliation #27(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:42:50 INFO StrimziPodSetController:414 - Reconciliation #27(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:42:50 INFO StrimziPodSetController:378 - Reconciliation #28(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:42:50 INFO StrimziPodSetController:414 - Reconciliation #28(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:42:51 INFO StrimziPodSetController:378 - Reconciliation #29(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:42:51 INFO StrimziPodSetController:414 - Reconciliation #29(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:42:51 INFO StrimziPodSetController:378 - Reconciliation #30(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:42:51 INFO StrimziPodSetController:414 - Reconciliation #30(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:42:51 INFO StrimziPodSetController:378 - Reconciliation #31(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:42:51 INFO StrimziPodSetController:414 - Reconciliation #31(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:42:53 INFO StrimziPodSetController:378 - Reconciliation #32(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:42:53 INFO StrimziPodSetController:414 - Reconciliation #32(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:42:53 INFO StrimziPodSetController:378 - Reconciliation #33(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:42:53 INFO StrimziPodSetController:414 - Reconciliation #33(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:42:53 INFO StrimziPodSetController:378 - Reconciliation #34(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:42:53 INFO StrimziPodSetController:414 - Reconciliation #34(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:42:53 INFO StrimziPodSetController:378 - Reconciliation #35(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:42:53 INFO StrimziPodSetController:414 - Reconciliation #35(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:42:53 INFO StrimziPodSetController:378 - Reconciliation #36(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:42:53 INFO StrimziPodSetController:414 - Reconciliation #36(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:42:53 INFO StrimziPodSetController:378 - Reconciliation #37(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:42:53 INFO StrimziPodSetController:414 - Reconciliation #37(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:42:53 INFO StrimziPodSetController:378 - Reconciliation #38(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:42:53 INFO StrimziPodSetController:414 - Reconciliation #38(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:42:54 INFO StrimziPodSetController:378 - Reconciliation #39(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:42:54 INFO StrimziPodSetController:414 - Reconciliation #39(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:42:58 INFO StrimziPodSetController:378 - Reconciliation #40(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:42:58 INFO StrimziPodSetController:414 - Reconciliation #40(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:42:59 INFO StrimziPodSetController:378 - Reconciliation #41(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:42:59 INFO StrimziPodSetController:414 - Reconciliation #41(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:43:23 INFO StrimziPodSetController:378 - Reconciliation #42(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:43:23 INFO StrimziPodSetController:414 - Reconciliation #42(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:43:23 INFO StrimziPodSetController:378 - Reconciliation #43(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:43:23 INFO StrimziPodSetController:414 - Reconciliation #43(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:43:23 INFO KafkaAvailability:135 - Reconciliation #4(timer) Kafka(ot/kafka): xxxx.topics.metadata/0 will be under-replicated (ISR={0,1}, replicas=[0,1,2], min.insync.replicas=2) if broker 0 is restarted.
2024-11-13 07:43:23 INFO KafkaRoller:396 - Reconciliation #4(timer) Kafka(ot/kafka): Will temporarily skip verifying pod kafka-kafka-0/0 is up-to-date due to io.strimzi.operator.cluster.operator.resource.KafkaRoller$UnforceableProblem: Pod kafka-kafka-0 cannot be updated right now., retrying after at least 64000ms
2024-11-13 07:43:40 INFO ClusterOperator:142 - Triggering periodic reconciliation for namespace ot
2024-11-13 07:43:40 INFO AbstractOperator:400 - Reconciliation #4(timer) Kafka(ot/kafka): Reconciliation is in progress
2024-11-13 07:44:27 INFO KafkaRoller:476 - Reconciliation #4(timer) Kafka(ot/kafka): Rolling Pod kafka-kafka-0/0 due to [Pod was manually annotated to be rolled]
2024-11-13 07:44:27 INFO PodOperator:54 - Reconciliation #4(timer) Kafka(ot/kafka): Rolling pod kafka-kafka-0
2024-11-13 07:44:27 INFO StrimziPodSetController:378 - Reconciliation #45(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:44:27 INFO StrimziPodSetController:414 - Reconciliation #45(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:44:27 INFO StrimziPodSetController:378 - Reconciliation #46(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:44:27 INFO StrimziPodSetController:414 - Reconciliation #46(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:44:28 INFO StrimziPodSetController:378 - Reconciliation #47(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:44:28 INFO StrimziPodSetController:414 - Reconciliation #47(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:44:29 INFO StrimziPodSetController:378 - Reconciliation #48(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:44:29 INFO StrimziPodSetController:414 - Reconciliation #48(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:44:29 INFO StrimziPodSetController:378 - Reconciliation #49(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:44:29 INFO StrimziPodSetController:414 - Reconciliation #49(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:44:30 INFO StrimziPodSetController:378 - Reconciliation #50(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:44:30 INFO StrimziPodSetController:414 - Reconciliation #50(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:44:30 INFO StrimziPodSetController:378 - Reconciliation #51(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:44:30 INFO StrimziPodSetController:414 - Reconciliation #51(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:44:30 INFO StrimziPodSetController:378 - Reconciliation #52(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:44:30 INFO StrimziPodSetController:414 - Reconciliation #52(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:44:30 INFO StrimziPodSetController:378 - Reconciliation #53(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:44:30 INFO StrimziPodSetController:414 - Reconciliation #53(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:44:30 INFO StrimziPodSetController:378 - Reconciliation #54(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:44:30 INFO StrimziPodSetController:414 - Reconciliation #54(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:44:32 INFO StrimziPodSetController:378 - Reconciliation #55(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:44:32 INFO StrimziPodSetController:414 - Reconciliation #55(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:44:35 INFO StrimziPodSetController:378 - Reconciliation #56(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:44:35 INFO StrimziPodSetController:414 - Reconciliation #56(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:44:36 INFO StrimziPodSetController:378 - Reconciliation #57(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:44:36 INFO StrimziPodSetController:414 - Reconciliation #57(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:44:40 INFO AbstractOperator:400 - Reconciliation #4(timer) Kafka(ot/kafka): Reconciliation is in progress
2024-11-13 07:45:00 INFO StrimziPodSetController:378 - Reconciliation #58(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:45:00 INFO StrimziPodSetController:414 - Reconciliation #58(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:45:00 INFO StrimziPodSetController:378 - Reconciliation #59(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:45:00 INFO StrimziPodSetController:414 - Reconciliation #59(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:45:00 INFO StrimziPodSetController:378 - Reconciliation #60(watch) StrimziPodSet(ot/kafka-kafka): StrimziPodSet will be reconciled
2024-11-13 07:45:00 INFO StrimziPodSetController:414 - Reconciliation #60(watch) StrimziPodSet(ot/kafka-kafka): reconciled
2024-11-13 07:45:01 INFO KafkaRoller:748 - Reconciliation #4(timer) Kafka(ot/kafka): Dynamic update of pod kafka-kafka-0/0 was successful.
2024-11-13 07:45:01 INFO KafkaRoller:396 - Reconciliation #4(timer) Kafka(ot/kafka): Will temporarily skip verifying pod kafka-kafka-1/1 is up-to-date due to ForceableProblem: Pod kafka-kafka-1 is the active controller and there are other pods to verify first, retrying after at least 250ms
2024-11-13 07:45:02 INFO KafkaRoller:748 - Reconciliation #4(timer) Kafka(ot/kafka): Dynamic update of pod kafka-kafka-2/2 was successful.
2024-11-13 07:45:02 INFO KafkaRoller:748 - Reconciliation #4(timer) Kafka(ot/kafka): Dynamic update of pod kafka-kafka-1/1 was successful.
2024-11-13 07:45:03 INFO AbstractOperator:537 - Reconciliation #4(timer) Kafka(ot/kafka): reconciled
[root@master-1]/# journalctl -u crio.service -f
-- Logs begin at Tue 2024-11-12 11:59:08 CST. --
Nov 13 01:52:17 master-1 crio[16091]: time="2024-11-13 01:52:17.067916126-06:00" level=info msg="Checking image status: registry-nexus.yyyy.xxxx.com/yyyy-u/keycloak:84.4206.1" id=b97ee157-258f-4c2e-8b48-3c42654e90ed name=/runtime.v1alpha2.ImageService/ImageStatus
Nov 13 01:52:17 master-1 crio[16091]: time="2024-11-13 01:52:17.069699872-06:00" level=info msg="Image status: &{0xc0005aa930 map[]}" id=b97ee157-258f-4c2e-8b48-3c42654e90ed name=/runtime.v1alpha2.ImageService/ImageStatus
Nov 13 01:52:17 master-1 crio[16091]: time="2024-11-13 01:52:17.101280131-06:00" level=info msg="Checking image status: registry-nexus.yyyy.xxxx.com/yyyy-u/keycloak:84.4206.1" id=5a731955-5225-459e-9f74-5fe4891e5318 name=/runtime.v1alpha2.ImageService/ImageStatus
Nov 13 01:52:17 master-1 crio[16091]: time="2024-11-13 01:52:17.102620335-06:00" level=info msg="Image status: &{0xc0000372d0 map[]}" id=5a731955-5225-459e-9f74-5fe4891e5318 name=/runtime.v1alpha2.ImageService/ImageStatus
Nov 13 01:52:17 master-1 crio[16091]: time="2024-11-13 01:52:17.103366963-06:00" level=info msg="Creating container: ot/authentication-7696567bc4-n7tg9/keycloak" id=f7419989-dd10-49d2-903d-91f02bfc23f5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 13 01:52:17 master-1 crio[16091]: time="2024-11-13 01:52:17.902683769-06:00" level=info msg="Removing container: 222f7f2543a418edcc0cd9570562fffb036b901d9ce7045d8dbb7be509a93535" id=9219679a-5b6b-40be-8596-39376b4e0ee2 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
Nov 13 01:52:18 master-1 crio[16091]: time="2024-11-13 01:52:18.620779385-06:00" level=info msg="Removed container 222f7f2543a418edcc0cd9570562fffb036b901d9ce7045d8dbb7be509a93535: ot/authentication-7696567bc4-n7tg9/keycloak" id=9219679a-5b6b-40be-8596-39376b4e0ee2 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
Nov 13 01:52:18 master-1 crio[16091]: time="2024-11-13 01:52:18.689895592-06:00" level=info msg="Created container 3c866f8eab8f6deb186bd4df302998d98f8278b3a74beb9a31d4fb25d6b0003c: ot/authentication-7696567bc4-n7tg9/keycloak" id=f7419989-dd10-49d2-903d-91f02bfc23f5 name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 13 01:52:18 master-1 crio[16091]: time="2024-11-13 01:52:18.690331071-06:00" level=info msg="Starting container: 3c866f8eab8f6deb186bd4df302998d98f8278b3a74beb9a31d4fb25d6b0003c" id=0640d8a4-90f1-4b78-9ab2-3f7cd40c59b4 name=/runtime.v1alpha2.RuntimeService/StartContainer
Nov 13 01:52:18 master-1 crio[16091]: time="2024-11-13 01:52:18.695721436-06:00" level=info msg="Started container" PID=545849 containerID=3c866f8eab8f6deb186bd4df302998d98f8278b3a74beb9a31d4fb25d6b0003c description=ot/authentication-7696567bc4-n7tg9/keycloak id=0640d8a4-90f1-4b78-9ab2-3f7cd40c59b4 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=05f7a311717cf3a60462b9b619d440edfedb58042d4cfa9e26e392a85fe799c6
Nov 13 01:52:38 master-1 crio[16091]: time="2024-11-13 01:52:38.976452536-06:00" level=info msg="Stopping container: e5935caffaca90acfec6a316713533ce83fece7fa5b6dbae4015f243a4ffe1f0 (timeout: 30s)" id=cbcd77d3-42ee-47ad-822e-1af7da69e327 name=/runtime.v1alpha2.RuntimeService/StopContainer
Nov 13 01:52:40 master-1 crio[16091]: time="2024-11-13 01:52:40.708147829-06:00" level=info msg="Stopped container e5935caffaca90acfec6a316713533ce83fece7fa5b6dbae4015f243a4ffe1f0: kube-system/coredns-5c4ddcdb86-665ll/coredns" id=cbcd77d3-42ee-47ad-822e-1af7da69e327 name=/runtime.v1alpha2.RuntimeService/StopContainer
Nov 13 01:52:40 master-1 crio[16091]: time="2024-11-13 01:52:40.708776671-06:00" level=info msg="Checking image status: registry-nexus.yyyy.xxxx.com/coredns:v1.8.4" id=d8ad04d3-9bff-403e-bb54-473361dba2e5 name=/runtime.v1alpha2.ImageService/ImageStatus
Nov 13 01:52:40 master-1 crio[16091]: time="2024-11-13 01:52:40.709481021-06:00" level=info msg="Image status: &{0xc0003021c0 map[]}" id=d8ad04d3-9bff-403e-bb54-473361dba2e5 name=/runtime.v1alpha2.ImageService/ImageStatus
Nov 13 01:52:40 master-1 crio[16091]: time="2024-11-13 01:52:40.709894696-06:00" level=info msg="Checking image status: registry-nexus.yyyy.xxxx.com/coredns:v1.8.4" id=1b49c6a6-63e4-4f8b-bde4-26c2a39bdcbb name=/runtime.v1alpha2.ImageService/ImageStatus
Nov 13 01:52:40 master-1 crio[16091]: time="2024-11-13 01:52:40.710289966-06:00" level=info msg="Image status: &{0xc000302c40 map[]}" id=1b49c6a6-63e4-4f8b-bde4-26c2a39bdcbb name=/runtime.v1alpha2.ImageService/ImageStatus
Nov 13 01:52:40 master-1 crio[16091]: time="2024-11-13 01:52:40.710807179-06:00" level=info msg="Creating container: kube-system/coredns-5c4ddcdb86-665ll/coredns" id=33ec7d9d-ba35-4e8d-9ebf-76ab0077fc0a name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 13 01:52:40 master-1 crio[16091]: time="2024-11-13 01:52:40.735073367-06:00" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/vfs/dir/529a95f79ce6581824281103b156ea4add50ae36e223ad5b0ae841f317b4903c/etc/passwd: no such file or directory"
Nov 13 01:52:40 master-1 crio[16091]: time="2024-11-13 01:52:40.735110739-06:00" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/vfs/dir/529a95f79ce6581824281103b156ea4add50ae36e223ad5b0ae841f317b4903c/etc/group: no such file or directory"
Nov 13 01:52:40 master-1 crio[16091]: time="2024-11-13 01:52:40.800646553-06:00" level=info msg="Created container 9391c082bbb05949745338ad9190862075024077fcbb026efe6861205030ff10: kube-system/coredns-5c4ddcdb86-665ll/coredns" id=33ec7d9d-ba35-4e8d-9ebf-76ab0077fc0a name=/runtime.v1alpha2.RuntimeService/CreateContainer
Nov 13 01:52:40 master-1 crio[16091]: time="2024-11-13 01:52:40.801002361-06:00" level=info msg="Starting container: 9391c082bbb05949745338ad9190862075024077fcbb026efe6861205030ff10" id=4539b3a5-7e71-4004-bca5-7293fcd5dc20 name=/runtime.v1alpha2.RuntimeService/StartContainer
Nov 13 01:52:40 master-1 crio[16091]: time="2024-11-13 01:52:40.804786296-06:00" level=info msg="Started container" PID=547078 containerID=9391c082bbb05949745338ad9190862075024077fcbb026efe6861205030ff10 description=kube-system/coredns-5c4ddcdb86-665ll/coredns id=4539b3a5-7e71-4004-bca5-7293fcd5dc20 name=/runtime.v1alpha2.RuntimeService/StartContainer sandboxID=ccc149961c13cd1fd11de06cc2da4195a2e15281276390a9e3918ce15c4f0416
Nov 13 01:52:40 master-1 crio[16091]: time="2024-11-13 01:52:40.950215995-06:00" level=info msg="Removing container: 325883d3bf9b14ced9dc8645da9d98225b0dd060ddcb932988225fd228e3d7f0" id=01f8856e-db72-4599-a3e6-59f7d412bcc5 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
Nov 13 01:52:40 master-1 crio[16091]: time="2024-11-13 01:52:40.974979171-06:00" level=info msg="Removed container 325883d3bf9b14ced9dc8645da9d98225b0dd060ddcb932988225fd228e3d7f0: kube-system/coredns-5c4ddcdb86-665ll/coredns" id=01f8856e-db72-4599-a3e6-59f7d412bcc5 name=/runtime.v1alpha2.RuntimeService/RemoveContainer
Nov 13 01:52:46 master-1 crio[16091]: time="2024-11-13 01:52:46.232556398-06:00" level=info msg="Stopping container: 3a2efac1d3ae8a1e83b6c8585afd1376274d9f6cad188a51832568ba90b7b8e5 (timeout: 30s)" id=490d51c7-823f-4248-9413-6d02590735e5 name=/runtime.v1alpha2.RuntimeService/StopContainer
[root@master-1]/# journalctl -u kubelet.service -b
Nov 12 12:02:13 master-1 systemd[1]: Started kubelet: The Kubernetes Node Agent.
Nov 12 12:02:13 master-1 kubelet[16073]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet>
Nov 12 12:02:13 master-1 kubelet[16073]: Flag --kubelet-cgroups has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubel>
Nov 12 12:02:13 master-1 kubelet[16073]: I1112 12:02:13.306881 16073 server.go:199] "Warning: For remote container runtime, --pod-infra-container-image is ignored in kubelet, which should be set in that remote runtime instead"
Nov 12 12:02:13 master-1 kubelet[16073]: Flag --cgroup-driver has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet>
Nov 12 12:02:13 master-1 kubelet[16073]: Flag --kubelet-cgroups has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubel>
Nov 12 12:02:13 master-1 kubelet[16073]: I1112 12:02:13.329941 16073 server.go:440] "Kubelet version" kubeletVersion="v1.22.2"
Nov 12 12:02:13 master-1 kubelet[16073]: I1112 12:02:13.330170 16073 server.go:868] "Client rotation is on, will bootstrap in background"
Nov 12 12:02:13 master-1 kubelet[16073]: I1112 12:02:13.331671 16073 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 12 12:02:13 master-1 kubelet[16073]: W1112 12:02:13.332317 16073 manager.go:159] Cannot detect current cgroup on cgroup v2
Nov 12 12:02:13 master-1 kubelet[16073]: I1112 12:02:13.332317 16073 dynamic_cafile_content.go:155] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 12:02:18 master-1 kubelet[16073]: W1112 12:02:18.337049 16073 fs.go:214] stat failed on /dev/mapper/luks-dcceefb4-487b-4489-9e5b-66d4e08881bd with error: no such file or directory
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.343594 16073 server.go:687] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.343746 16073 container_manager_linux.go:280] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.343809 16073 container_manager_linux.go:285] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName:/systemd/system.slice SystemCgroupsName: Kubel>
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.343824 16073 topology_manager.go:133] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.343832 16073 container_manager_linux.go:320] "Creating device plugin manager" devicePluginEnabled=true
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.343851 16073 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.343977 16073 kubelet.go:418] "Attempting to sync node with API server"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.343991 16073 kubelet.go:279] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.344006 16073 kubelet.go:290] "Adding apiserver pod source"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.344021 16073 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.346699 16073 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="cri-o" version="1.22.0" apiVersion="v1alpha2"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.347174 16073 server.go:1213] "Started kubelet"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.347212 16073 server.go:149] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.347212 16073 kubelet.go:1343] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memor>
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.348657 16073 server.go:409] "Adding debug handlers to kubelet server"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.349149 16073 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.350120 16073 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.350161 16073 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.350837 16073 kubelet.go:2332] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI conf>
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.380929 16073 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv4
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.394816 16073 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode13ad08d5519ddff403971867e9487c0.slice/crio-26aea>
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.395231 16073 manager.go:1123] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4119c90f57ca5c751b2682a8dbca328c.slice/crio->
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.398427 16073 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb84888fb480b903adc739957041e3f0.slice/crio-d8658>
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.398723 16073 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07ef45f1b8d7967256f3402f964e80dd.slice/crio-739b6>
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.398954 16073 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4119c90f57ca5c751b2682a8dbca328c.slice/crio-9e748>
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.400581 16073 manager.go:1123] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode13ad08d5519ddff403971867e9487c0.slice/crio->
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.403842 16073 manager.go:1123] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07ef45f1b8d7967256f3402f964e80dd.slice/crio->
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.404538 16073 manager.go:1123] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb84888fb480b903adc739957041e3f0.slice/crio->
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.408680 16073 kubelet_network_linux.go:56] "Initialized protocol iptables rules." protocol=IPv6
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.408704 16073 status_manager.go:158] "Starting to sync pod status with apiserver"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.408719 16073 kubelet.go:1967] "Starting kubelet main sync loop"
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.408896 16073 kubelet.go:1991] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.451803 16073 kubelet_node_status.go:71] "Attempting to register node" node="master-1"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.457458 16073 kubelet_node_status.go:109] "Node was previously registered" node="master-1"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.457544 16073 kubelet_node_status.go:74] "Successfully registered node" node="master-1"
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.509567 16073 kubelet.go:1991] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.709725 16073 kubelet.go:1991] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.726251 16073 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode13ad08d5519ddff403971867e9487c0.slice/crio-26aea>
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.726906 16073 manager.go:1123] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4119c90f57ca5c751b2682a8dbca328c.slice/crio->
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.727655 16073 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07ef45f1b8d7967256f3402f964e80dd.slice/crio-739b6>
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.728079 16073 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb84888fb480b903adc739957041e3f0.slice/crio-d8658>
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.728368 16073 manager.go:1123] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode13ad08d5519ddff403971867e9487c0.slice/crio->
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.728644 16073 manager.go:1123] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb84888fb480b903adc739957041e3f0.slice/crio->
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.728795 16073 manager.go:1123] Failed to create existing container: /pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07ef45f1b8d7967256f3402f964e80dd.slice/crio->
Nov 12 12:02:18 master-1 kubelet[16073]: E1112 12:02:18.729050 16073 manager.go:1123] Failed to create existing container: /kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4119c90f57ca5c751b2682a8dbca328c.slice/crio-9e748>
Nov 12 12:02:18 master-1 kubelet[16073]: W1112 12:02:18.729274 16073 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07ef45f1b8d7967256f3402f964e80dd.slice>
Nov 12 12:02:18 master-1 kubelet[16073]: W1112 12:02:18.729462 16073 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4119c90f57ca5c751b2682a8dbca328c.slice>
Nov 12 12:02:18 master-1 kubelet[16073]: W1112 12:02:18.729648 16073 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode13ad08d5519ddff403971867e9487c0.slice>
Nov 12 12:02:18 master-1 kubelet[16073]: W1112 12:02:18.729818 16073 manager.go:1176] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb84888fb480b903adc739957041e3f0.slice>
Nov 12 12:02:18 master-1 kubelet[16073]: W1112 12:02:18.729991 16073 manager.go:1176] Failed to process watch event {EventType:0 Name:/pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07ef45f1b8d7967256f3402f964e80dd.>
Nov 12 12:02:18 master-1 kubelet[16073]: W1112 12:02:18.730168 16073 manager.go:1176] Failed to process watch event {EventType:0 Name:/pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4119c90f57ca5c751b2682a8dbca328c.>
Nov 12 12:02:18 master-1 kubelet[16073]: W1112 12:02:18.730359 16073 manager.go:1176] Failed to process watch event {EventType:0 Name:/pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode13ad08d5519ddff403971867e9487c0.>
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.730398 16073 cpu_manager.go:209] "Starting CPU manager" policy="none"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.730421 16073 cpu_manager.go:210] "Reconciling" reconcilePeriod="10s"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.730438 16073 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.730567 16073 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.730579 16073 state_mem.go:96] "Updated CPUSet assignments" assignments=map[]
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.730585 16073 policy_none.go:49] "None policy: Start"
Nov 12 12:02:18 master-1 kubelet[16073]: W1112 12:02:18.730586 16073 manager.go:1176] Failed to process watch event {EventType:0 Name:/pids/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb84888fb480b903adc739957041e3f0.>
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.730930 16073 memory_manager.go:168] "Starting memorymanager" policy="None"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.730949 16073 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.731064 16073 state_mem.go:75] "Updated machine memory state"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.733790 16073 manager.go:607] "Failed to retrieve checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 12:02:18 master-1 kubelet[16073]: I1112 12:02:18.734148 16073 plugin_manager.go:114] "Starting Kubelet Plugin Manager"
Nov 12 12:02:19 master-1 kubelet[16073]: I1112 12:02:19.110922 16073 topology_manager.go:200] "Topology Admit Handler"
Nov 12 12:02:19 master-1 kubelet[16073]: I1112 12:02:19.111029 16073 topology_manager.go:200] "Topology Admit Handler"
Nov 12 12:02:19 master-1 kubelet[16073]: I1112 12:02:19.111070 16073 topology_manager.go:200] "Topology Admit Handler"
Nov 12 12:02:19 master-1 kubelet[16073]: I1112 12:02:19.111120 16073 topology_manager.go:200] "Topology Admit Handler"
Nov 12 12:02:19 master-1 kubelet[16073]: E1112 12:02:19.116035 16073 kubelet.go:1701] "Failed creating a mirror pod for" err="pods \"etcd-master-1\" already exists" pod="kube-system/etcd-master-1"
Nov 12 12:02:19 master-1 kubelet[16073]: I1112 12:02:19.154267 16073 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-pki\" (UniqueName: \"kubernetes.io/host-path/e13ad08d5519ddff403971867e9>
Nov 12 12:02:19 master-1 kubelet[16073]: I1112 12:02:19.154300 16073 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb84888fb480b903adc7399570>
Nov 12 12:02:19 master-1 kubelet[16073]: I1112 12:02:19.154319 16073 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/eb84888fb480b903adc7>
Nov 12 12:02:19 master-1 kubelet[16073]: I1112 12:02:19.154337 16073 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb84888fb480b903adc739957>
Nov 12 12:02:19 master-1 kubelet[16073]: I1112 12:02:19.154366 16073 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb84888fb480b903adc73995>
Nov 12 12:02:19 master-1 kubelet[16073]: I1112 12:02:19.154381 16073 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/07ef45f1b8d7967256f3402f>
Nov 12 12:02:19 master-1 kubelet[16073]: I1112 12:02:19.154395 16073 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/07ef45f1b8d7967256f3402f9>
Nov 12 12:02:19 master-1 kubelet[16073]: I1112 12:02:19.154414 16073 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-pki\" (UniqueName: \"kubernetes.io/host-path/eb84888fb480b903adc73995704>
Nov 12 12:02:19 master-1 kubelet[16073]: I1112 12:02:19.154443 16073 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4119c90f57ca5c751b2682a8>
Nov 12 12:02:19 master-1 kubelet[16073]: I1112 12:02:19.154457 16073 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e13ad08d5519ddff403971867e>
Nov 12 12:02:19 master-1 kubelet[16073]: I1112 12:02:19.154471 16073 reconciler.go:224] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e13ad08d5519ddff403971867>
[root@master-1]/# kubectl get pods -l strimzi.io/name=kafka-kafka -o jsonpath='{range .items[*]}{@.metadata.name}{" finalizers: "}{@.metadata.finalizers}{"\n"}{end}'
kafka-kafka-0 finalizers:
kafka-kafka-1 finalizers:
kafka-kafka-2 finalizers:
[root@master-1]/# kubectl get kafka kafka -o jsonpath='{.metadata.finalizers}'
[root@master-1]/# kubectl get strimzipodset kafka-kafka -o jsonpath='{.metadata.finalizers}'
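Since the volumes are provisioned by OpenEBS-LVM-LocalPV, another check we can run is whether volume teardown is blocking the deletion (standard kubectl commands; the PVC label is assumed to match the pod selector used above):

# CSI VolumeAttachment objects that are still attached or detaching
kubectl get volumeattachment

# PVCs backing the brokers
kubectl get pvc -n ot -l strimzi.io/name=kafka-kafka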
The rolling update only completes if I force-delete the Kafka pods (example below). Thanks again for your help.
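For reference, the force deletion was done with something like this (it bypasses graceful termination, so it is a workaround rather than a fix):

[root@master-1]/# kubectl delete pod kafka-kafka-0 -n ot --grace-period=0 --force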
-
Hello,
I need your assistance with an issue we're encountering. We operate a multi-node cluster with 3 Kafka replicas and 3 Zookeeper replicas, where volumes are provisioned by OpenEBS-LVM-LocalPV. Recently, we triggered a manual rolling update to test our upgrade procedures.
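For context, here is a minimal sketch of the kind of Kafka resource in play; the cluster name and namespace come from this thread, while the storage class name and sizes are assumptions:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: kafka
  namespace: ot
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: persistent-claim
      size: 100Gi          # assumption: actual size not stated
      class: openebs-lvmpv # assumption: OpenEBS LVM LocalPV storage class name
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 10Gi           # assumption
      class: openebs-lvmpv # assumption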
Upon initiating the update, we observed logs in the Strimzi operator indicating the start of the process. However, the first Kafka pod remains stuck in a terminating state, preventing the other pods from restarting as expected.
We faced a similar issue with another tool, which we managed to resolve by following this link: https://stackoverflow.com/a/65792674/7652996. Unfortunately, the parameters mentioned in that solution aren't configurable in Strimzi, or we're unsure of how to set them in this context. Additionally, we haven't found any relevant logs in either the Kafka pods or kubelet.service to diagnose further.
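If the parameter in question is the pod's terminationGracePeriodSeconds, this is where we would expect to set it in the Kafka resource (assuming Strimzi's pod template exposes it; the value below is only an example):

spec:
  kafka:
    template:
      pod:
        terminationGracePeriodSeconds: 120  # example value; the default is 30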
Could you provide guidance on resolving this issue?
Thanks in advance for your help!