docker-rootful: Increase inotify limits by default #1179
base: master
Conversation
Thanks, but please sign the commit for DCO.
```
# from crash looping.
echo 'fs.inotify.max_user_watches = 524288' >> /etc/sysctl.conf
echo 'fs.inotify.max_user_instances = 512' >> /etc/sysctl.conf
sysctl --system
```
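As a quick sketch of how a user could confirm that the provisioning step took effect inside the VM (these are standard procfs/sysctl interfaces, not part of this PR):

```shell
#!/bin/sh
# Read the effective inotify limits from procfs. After `sysctl --system`
# has applied the values appended to /etc/sysctl.conf above, these should
# report 524288 and 512 respectively.
watches=$(cat /proc/sys/fs/inotify/max_user_watches)
instances=$(cat /proc/sys/fs/inotify/max_user_instances)
echo "max_user_watches=${watches}"
echo "max_user_instances=${instances}"
```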
Can we replicate this to docker.yaml, podman*.yaml, k8s.yaml, k3s.yaml too?
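For reference, replicating it would mean adding the same provision block to each of those templates; a sketch (file names taken from the comment above, values from this PR's diff):

```yaml
# Hypothetical addition to e.g. docker.yaml or podman.yaml -- the same
# block this PR adds to docker-rootful.yaml:
- mode: system
  script: |
    #!/bin/bash
    # Increase inotify limits to prevent nested Kubernetes control planes
    # from crash looping.
    echo 'fs.inotify.max_user_watches = 524288' >> /etc/sysctl.conf
    echo 'fs.inotify.max_user_instances = 512' >> /etc/sysctl.conf
    sysctl --system
```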
good idea!
Sure; I'll try to make these changes some time between tomorrow and Friday. -- Carlos
In examples/docker-rootful.yaml:

```diff
@@ -54,6 +54,14 @@ provision:
       fi
       export DEBIAN_FRONTEND=noninteractive
       curl -fsSL https://get.docker.com | sh
+- mode: system
+  script: |
+    #!/bin/bash
+    # Increase inotify limits to prevent nested Kubernetes control planes
+    # from crash looping.
+    echo 'fs.inotify.max_user_watches = 524288' >> /etc/sysctl.conf
+    echo 'fs.inotify.max_user_instances = 512' >> /etc/sysctl.conf
+    sysctl --system
```
This seems needlessly complicated for the k3s and k8s examples, since they use VMs as nodes (not containers). If I understand correctly, it is only needed for running containerd-in-docker or containerd-in-podman, as part of "kind".
This resolves lima-vm#1178 and allows users to create multiple local Kubernetes clusters through Kind or the Cluster API Docker provider.

Signed-off-by: Carlos Nunez <[email protected]>
Force-pushed from 0ff03e9 to 047e703.
✅ Please sign off the commit for DCO: https://github.com/apps/dco

I'm not sure if Podman needs this treatment, as it uses […]. Can that be a separate pull request, given that this behavior is known for containerd-based engines?
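One way to check whether a host is actually near the instance limit (a generic Linux diagnostic, not something from this PR): each open inotify file descriptor appears as an anonymous inode in procfs, so the instances in use can be counted and compared against the limit.

```shell
#!/bin/sh
# Count inotify file descriptors across all visible processes; compare
# against fs.inotify.max_user_instances (which is a per-user limit).
count=$(find /proc/[0-9]*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l)
limit=$(cat /proc/sys/fs/inotify/max_user_instances)
echo "inotify instances in use: ${count} (limit per user: ${limit})"
```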
```
  script: |
    #!/bin/bash
    # Increase inotify limits to prevent nested Kubernetes control planes
    # from crash looping.
```
Is this needed for k3s? If so, it should be needed for k8s.yaml too?
As far as I know, it is only needed for k3d and kind, not for k3s and k8s.
Ah, this looks great; I've been doing something similar for ages.
This resolves #1178.