
[bitnami/helm-kafka] Pod in CrashLoopBackOff state when SASL_SSL configuration is enabled #26567

Closed
zeeshan018 opened this issue May 30, 2024 · 6 comments
Labels: kafka · solved · stale (15 days without activity) · tech-issues (the user has a technical issue about an application) · triage (triage is needed)

Comments


zeeshan018 commented May 30, 2024

Name and Version

Bitnami Kafka Helm chart: 29.0.3

What architecture are you using?

None

What steps will reproduce the bug?

When I set up the sasl_ssl configuration in the values.yaml file, the pods go into a CrashLoopBackOff state. Upon investigation using the command:

kubectl logs pod-name -n namespace -c kafka-init

it indicates that the keystore file is not found.

Are you using any custom parameters or values?

kafka:
  install: true
  listeners:
    client: 
      protocol: SASL_SSL
      sslClientAuth: "required"
    controller:
      protocol: SASL_PLAINTEXT
    interbroker:
      protocol: SASL_SSL
    external:
      protocol: SASL_SSL
  sasl:
    interBrokerMechanism: PLAIN
    controllerMechanism: PLAIN
    interbroker:
      user: broker_user
      password: qawsedrf!
    controller:  
      user: controller_user
      password: frdeswaq!
    client:
      users:
        - user1
      passwords: password
  tls:
    keystorePassword: 1q2w3e4r
    truststorePassword: 1q2w3e4r
    keyPassword: 1q2w3e4r
    existingSecret: kafka-secret
    jksKeystoreKey: "<base64_encoded_keystore_data>"
    endpointIdentificationAlgorithm: ""
    jksTruststoreKey: "<base64_encoded_truststore_data>"
  controllerQuorumVoters: 0@kafka:9093
  controller:
    replicaCount: 1
    controllerOnly: true
    automountServiceAccountToken: true
  broker:
    replicaCount: 1
    automountServiceAccountToken: true
  externalAccess:
    enabled: true
    autoDiscovery:
      enabled: true
  rbac:
    create: true

Additional information

If anyone has faced this issue, please guide me; I am stuck. I have checked the documentation but did not find information related to this error. I am using a load balancer for external access; you can see my configuration in the values.yaml above.

@zeeshan018 zeeshan018 added the tech-issues The user has a technical issue about an application label May 30, 2024
@github-actions github-actions bot added the triage Triage is needed label May 30, 2024
@zeeshan018 (Author)

@carrodher I already checked the documentation but did not find information about this.

@carrodher carrodher added the kafka label Jun 2, 2024
@carrodher (Member)

Can you check the logs of the pod and the description of the pod?


zeeshan018 commented Jun 2, 2024

@carrodher Yes, I checked; see the following:

$ kubectl get po -n <namespace>

my-release-kafka-broker-0              0/1     Init:CrashLoopBackOff   42 (3m ago)     3h13m

Describe:

Warning  BackOff  7m43s (x852 over 3h12m)  kubelet  Back-off restarting failed container kafka-init in pod my-release-kafka-broker-0_kafka(f56f31bf-7409-435d-85b1-0e3b4c585a4d)

I'm using this to access Kafka outside the cluster:

Using LoadBalancer services:
Use random load balancer IPs using an initContainer that waits for the IPs to be ready and discovers them automatically.

When I investigate using this command:

$ kubectl logs pod-name -n namespace -c kafka-init

Error: keystore file not found
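One way to cross-check this error (a sketch; the secret name and namespace are assumed from the values.yaml above) is to list which data keys the TLS secret actually contains:

```shell
# Hedged sketch: print the data keys present in the TLS secret; they should
# match the tls.jksKeystoreKey / tls.jksTruststoreKey names in values.yaml.
kubectl get secret kafka-secret -n <namespace> -o jsonpath='{.data}'
```

If the keystore/truststore entries are missing or named differently, the kafka-init container would not find the files it expects to mount.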

@zeeshan018 (Author)

@carrodher I have a thought I haven't tried yet: could adding the keystore to the init container's volume, and then setting the secret name in the pull-secret environment variable, work?


This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.

@github-actions github-actions bot added the stale 15 days without activity label Jun 19, 2024

Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.

@bitnami-bot bitnami-bot closed this as not planned (won't fix, can't repro, duplicate, stale) Jun 25, 2024