Cilium agent cannot start when installing in a fresh cluster (Ubuntu 24.04.1 nodes) #30231
Comments
In the vendor's helm chart there is a workaround container for this. It uses a certain binary utility.
Could this security update be the issue?
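If the security update in question is Ubuntu 24.04's AppArmor restriction on unprivileged user namespaces (an assumption on my part, since the linked update is not preserved in this thread), the host setting can be checked from inside the cluster with a throwaway privileged pod along these lines; the pod name and image are illustrative only:

```yaml
# Sketch only: inspects the Ubuntu 24.04 AppArmor userns restriction.
# Pod name and image are illustrative, not taken from the thread.
apiVersion: v1
kind: Pod
metadata:
  name: apparmor-userns-check
spec:
  restartPolicy: Never
  hostNetwork: true        # the CNI is down in this scenario, so avoid pod networking
  tolerations:
    - operator: Exists     # tolerate the NotReady node taints
  containers:
    - name: check
      image: busybox:1.36
      securityContext:
        privileged: true
      # Prints 1 when unprivileged user namespaces are restricted on the host.
      command: ["cat", "/proc/sys/kernel/apparmor_restrict_unprivileged_userns"]
```

A value of 1 would point at the restriction, though it does not by itself prove that Cilium's init containers are tripping over it.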
As a workaround, I started the pod in privileged mode. I also had to adjust the tolerations for the operator. These are the values I used:
```yaml
---
extraDeploy: []
clusterName: poc-cluster
azure:
  enabled: false
aws:
  enabled: false
gcp:
  enabled: false
agent:
  cniPlugin:
    install: true
    uninstall: false
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      cpu: 375m
      memory: 384Mi
  tolerations:
    - operator: Exists
  containerSecurityContext:
    enabled: true
    ## Running with spc_t type (designed for privileged operations, check capabilities)
    ## since the container is not executed as a privileged container by default. This
    ## should prevent issues with SELinux policies.
    seLinuxOptions:
      level: 's0'
      type: 'spc_t'
    runAsUser: 0
    runAsGroup: 0
    runAsNonRoot: false
    readOnlyRootFilesystem: true
    # > Changed
    privileged: true
    # > Changed
    allowPrivilegeEscalation: true
    capabilities:
      add:
        - BPF
        - CHOWN
        - DAC_OVERRIDE
        - FOWNER
        - KILL
        - NET_ADMIN
        - NET_RAW
        - IPC_LOCK
        - PERFMON
        - SETGID
        - SETUID
        - SYS_ADMIN
        - SYS_MODULE
        - SYS_RESOURCE
      drop: ["ALL"]
    seccompProfile:
      type: "RuntimeDefault"
operator:
  replicaCount: 1
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      cpu: 375m
      memory: 384Mi
  tolerations:
    - operator: Exists
envoy:
  useDaemonSet: false
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      cpu: 375m
      memory: 384Mi
hubble:
  tls:
    enabled: true
  relay:
    enabled: true
    replicaCount: 1
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 150m
        memory: 192Mi
  ui:
    enabled: true
    replicaCount: 1
    service:
      # type: LoadBalancer
      type: ClusterIP
      annotations: {}
    ingress:
      enabled: false
      pathType: ImplementationSpecific
      hostname: hubble.local
      ingressClassName: ""
      path: /
      annotations: {}
      tls: false
      selfSigned: false
    frontend:
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 150m
          memory: 192Mi
    backend:
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 150m
          memory: 192Mi
etcd:
  enabled: true
  replicaCount: 3
```
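Going by the `# > Changed` markers above, the functional changes relative to the chart defaults appear to be only the two security-context keys plus the tolerations. A minimal override file (my reduction of the values above, not a configuration tested here) would look like:

```yaml
# Minimal workaround overrides; everything else stays at chart defaults.
agent:
  containerSecurityContext:
    privileged: true               # changed from the chart default
    allowPrivilegeEscalation: true # changed from the chart default
  tolerations:
    - operator: Exists
operator:
  tolerations:
    - operator: Exists
```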
Hi @zentavr, did you check whether this works for you? If so, it would be great if you could create a pull request adding it. The Bitnami team would be excited to review your submission and offer feedback. You can find the contributing guidelines here.
Hello @dgomezleon. The container should contain that certain binary, and I have no idea whether it is present in Bitnami's images.
That binary is present inside the container, so technically what is needed is just to adjust the helm chart.
Sorry @zentavr, I did not notice that. I will create a task to add this logic.
@dgomezleon it's actually there, as I said.
Yes @zentavr. I created a task to update the chart logic for these cases.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
The issue is not resolved. |
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary. |
Name and Version
bitnami/cilium 1.2.5
What architecture are you using?
None
What steps will reproduce the bug?
I have an Ubuntu-based K8S cluster installed using kubeadm:
If I deploy the `bitnami/cilium` helm chart, nothing happens and the nodes stay in the `NotReady` state.

Are you using any custom parameters or values?
What is the expected behavior?
Cluster up and running
What do you see instead?
The logs from the `cilium-agent-spmcc` pod:

Additional information