Whenever I set an `existingSecret` for `externalPostgresql` in my values file, the Sentry web and worker pods do not start and fail their health checks. No logs whatsoever are written by these pods.
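For reference, the failing configuration looked roughly like this. This is a minimal sketch, not my exact file: the secret name `db-user-pass` and the `host`/`password` keys are taken from the env shown in the pod description below, the port is the value shown there, and the exact field names under `externalPostgresql` are my reading of the chart's values schema.

```yaml
# Sketch of the relevant values.yaml fragment (sentry Helm chart).
# Secret name and key names match the env vars in the pod description;
# the existingSecretKeys field names are an assumption about the chart schema.
externalPostgresql:
  username: sentry
  database: sentry
  port: 3306            # as shown in the pod env (POSTGRES_PORT)
  existingSecret: db-user-pass
  existingSecretKeys:
    password: password  # key 'password' in secret 'db-user-pass'
    host: host          # key 'host' in secret 'db-user-pass'
```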
Expected behavior
The web and worker pods should eventually come up and then the rest of the hooks should fire.
Steps to reproduce
I appreciate it might be hard to re-create this setup exactly, but running `kubectl apply -f sentry.yaml` in a cluster with ArgoCD should reproduce it. Mostly, though, I am looking for tips on getting any kind of debug information out of the pods, or anything else I can try.
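For anyone trying to reproduce, these are the standard kubectl commands I used to try to pull debug information out of the crashing pods (nothing chart-specific; the pod name is from the description below, and `deploy/sentry-web` is assumed from the owning ReplicaSet name):

```shell
# Probe failures, restarts, and exit codes
kubectl -n sentry describe pod -l role=web

# Logs from the current and the previously killed container instance
kubectl -n sentry logs deploy/sentry-web
kubectl -n sentry logs sentry-web-c7d558d99-68ffw --previous

# Confirm the secret-derived env vars are actually set inside the pod
kubectl -n sentry exec deploy/sentry-web -- env | grep POSTGRES

# Check raw TCP connectivity to Postgres from inside the pod
kubectl -n sentry exec deploy/sentry-web -- python -c \
  "import os, socket; socket.create_connection((os.environ['POSTGRES_HOST'], int(os.environ['POSTGRES_PORT'])), timeout=5); print('ok')"
```

None of these produced anything beyond what is shown below; the containers die without writing logs.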
Screenshots
No response
Logs
There are no logs from the web or worker pods. This is the description of the web pod:
```
Name:             sentry-web-c7d558d99-68ffw
Namespace:        sentry
Priority:         0
Service Account:  sentry-web
Node:             ip-10-224-52-214.eu-west-1.compute.internal/10.224.52.214
Start Time:       Tue, 10 Dec 2024 23:32:56 +0000
Labels:           app=sentry
                  pod-template-hash=c7d558d99
                  release=sentry
                  role=web
Annotations:      checksum/config.yaml: 9ea15b23df10c4f4ea41d56073417b5b04dd08f92adb876fdcaa2484052487e4
                  checksum/configYml: 44136fa355b3678a1146ad16f7e8649e94fb4fc21fe77e8310c060f61caaff8a
                  checksum/sentryConfPy: d5f85a6a8afbc55eebe23801e1a51a0fb4c0428c9a73ef6708d8dc83e079cd49
Status:           Running
IP:               10.224.52.189
IPs:
  IP:           10.224.52.189
Controlled By:  ReplicaSet/sentry-web-c7d558d99
Containers:
  sentry-web:
    Container ID:  containerd://13c1c0b77539aced575dc64bbd5d65542e23c9efe6c6aeed99607137c80bc815
    Image:         getsentry/sentry:24.9.0
    Image ID:      docker.io/getsentry/sentry@sha256:1830c64c38383ff8e317bc7ba2274d27d176a113987fdc67e8a5202a67a70bad
    Port:          9000/TCP
    Host Port:     0/TCP
    Command:
      sentry
    Args:
      run
      web
    State:          Running
      Started:      Tue, 10 Dec 2024 23:40:57 +0000
    Last State:     Terminated
      Reason:       Error
      Exit Code:    137
      Started:      Tue, 10 Dec 2024 23:39:37 +0000
      Finished:     Tue, 10 Dec 2024 23:40:57 +0000
    Ready:          False
    Restart Count:  6
    Limits:
      ephemeral-storage:  2Gi
      memory:             3Gi
    Requests:
      cpu:                400m
      ephemeral-storage:  1Gi
      memory:             2Gi
    Liveness:   http-get http://:9000/_health/ delay=10s timeout=2s period=10s #success=1 #failure=5
    Readiness:  http-get http://:9000/_health/ delay=10s timeout=2s period=10s #success=1 #failure=5
    Environment:
      SNUBA:                        http://sentry-snuba:1218
      VROOM:                        http://sentry-vroom:8085
      SENTRY_SECRET_KEY:            <set to the key 'key' in secret 'sentry-sentry-secret'>  Optional: false
      POSTGRES_PASSWORD:            <set to the key 'password' in secret 'db-user-pass'>     Optional: false
      POSTGRES_USER:                sentry
      POSTGRES_NAME:                sentry
      POSTGRES_HOST:                <set to the key 'host' in secret 'db-user-pass'>         Optional: false
      POSTGRES_PORT:                3306
      AWS_STS_REGIONAL_ENDPOINTS:   regional
      AWS_DEFAULT_REGION:           eu-west-1
      AWS_REGION:                   eu-west-1
      AWS_ROLE_ARN:                 arn:aws:iam::999999999999:role/SentryRole
      AWS_WEB_IDENTITY_TOKEN_FILE:  /var/run/secrets/eks.amazonaws.com/serviceaccount/token
    Mounts:
      /etc/sentry from config (ro)
      /var/lib/sentry/files from sentry-data (rw)
      /var/run/secrets/eks.amazonaws.com/serviceaccount from aws-iam-token (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-mjmxv (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  aws-iam-token:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  86400
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      sentry-sentry
    Optional:  false
  sentry-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  kube-api-access-mjmxv:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m1s                    default-scheduler  Successfully assigned sentry/sentry-web-c7d558d99-68ffw to ip-10-224-52-214.eu-west-1.compute.internal
  Normal   Killing    8m10s                   kubelet            Container sentry-web failed liveness probe, will be restarted
  Normal   Pulled     7m40s (x2 over 9m)      kubelet            Container image "getsentry/sentry:24.9.0" already present on machine
  Normal   Created    7m40s (x2 over 9m)      kubelet            Created container sentry-web
  Normal   Started    7m40s (x2 over 9m)      kubelet            Started container sentry-web
  Warning  Unhealthy  7m20s (x7 over 8m50s)   kubelet            Liveness probe failed: Get "http://10.224.52.189:9000/_health/": dial tcp 10.224.52.189:9000: connect: connection refused
  Warning  Unhealthy  3m50s (x35 over 8m50s)  kubelet            Readiness probe failed: Get "http://10.224.52.189:9000/_health/": dial tcp 10.224.52.189:9000: connect: connection refused
```
Additional context
I can jump into the web/worker pods before they are killed, and the right values from the secret are present in the env. If I try to run `sentry run worker` in the pod, it just hangs. Turning on debug-level logs does spit out some lines like `def send_activity_notifications_to_slack_threads...`, but nothing helpful, and then it also just hangs.
I have tried both the ExternalSecret provider and a pre-created `Secret` object; the outcome is the same.
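For the pre-created variant, the Secret looked roughly like this. This is a sketch reconstructed from the secret name and keys referenced in the pod env above; the values themselves are placeholders, not my real ones:

```yaml
# Sketch of the pre-created Secret; keys match what the pod env references.
apiVersion: v1
kind: Secret
metadata:
  name: db-user-pass
  namespace: sentry
type: Opaque
stringData:
  host: postgres.example.internal  # placeholder hostname
  password: "<redacted>"           # placeholder password
```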
The only thing that helps is hardcoding the `externalPostgresql` values; then the web and worker pods eventually start and the db-init hook fires as expected.
Helm chart version
26.8.0