Regardless of the DNS issue, an unprivileged user currently can't mount NFS.
We probably need some privileged helper daemon for the persistent volume handling.
Or maybe we can use some FUSE implementation of NFS.
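For context, a minimal sketch of what fails today, assuming a reachable server at 10.0.0.5 with an /export share (both placeholders); the trailing comments describe the expected behaviour, not captured output:

```sh
# Even as "root" inside a user namespace, a plain NFS mount is refused:
# the nfs filesystem type is not marked FS_USERNS_MOUNT, so the kernel
# requires CAP_SYS_ADMIN in the *initial* namespace.
unshare --user --map-root-user --mount \
  mount -t nfs 10.0.0.5:/export /mnt
# -> fails with "Operation not permitted" for the unprivileged caller,
#    which is why a privileged helper or a FUSE NFS client is needed.
```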
I would say the best bet for this is a FUSE-based NFS client, since it is not likely that user-namespace root is going to be allowed to mount an NFS share any time soon. NFS and user namespaces also don't work well together if you are going to have multiple processes changing UIDs inside an environment.
Having UID 1234 chown a file to UID 5678 is blocked on the server side when done inside a user namespace: NFS enforces permissions on the server, which has no concept of the namespace's CAP_CHOWN or CAP_DAC_OVERRIDE.
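Rough illustration, assuming an export already mounted at /mnt/nfs by something privileged, with the caller being namespace root mapped from an unprivileged host UID (paths and UIDs are made up):

```sh
# Inside the namespace we are "root" and hold CAP_CHOWN there, but the NFS
# client forwards the request to the server under the host-side identity.
chown 5678 /mnt/nfs/somefile
# -> rejected by the server (EPERM): the server-side permission check knows
#    nothing about the namespace's CAP_CHOWN or CAP_DAC_OVERRIDE.
```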
Another potential option would be to set up an automounter; the host kernel could then mount directories on demand when a containerized process enters the mount point.
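A hypothetical host-side autofs setup along those lines; the mount point, map file, and nfsserver:/export are placeholders, and the container would simply bind-mount the autofs directory:

```sh
# /etc/auto.master: delegate /mnt/nfs to the automounter,
# unmounting idle entries after 60 seconds.
echo '/mnt/nfs  /etc/auto.nfs  --timeout=60' | sudo tee -a /etc/auto.master

# /etc/auto.nfs: map the key "share" to an NFS export.
echo 'share  -fstype=nfs,rw  nfsserver:/export' | sudo tee /etc/auto.nfs

sudo systemctl restart autofs

# Any access to /mnt/nfs/share -- e.g. from a container that bind-mounts
# /mnt/nfs -- makes the host kernel mount the export on demand.
ls /mnt/nfs/share
```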
Hey, great project! I tried to get the Helm nfs-server-provisioner chart running and ran into some roadblocks on a GCE CentOS 8 machine.
First, I just started the Helm chart with this config (run through envsubst):
Afterwards, the necessary PV is created and a PVC is bound to nfs:
`$nfspvcname` is set to the PVC created by NFS. Now the nfs pod crashes continuously:
Since the error seems to be related to missing DNS services, I tried to set up kube-dns via https://github.com/kelseyhightower/kubernetes-the-hard-way/blob/master/docs/12-dns-addon.md.
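The relevant knob there is the add-on Service's clusterIP, which has to sit inside the cluster's service CIDR and match the kubelet's clusterDNS setting; a simplified, hypothetical sketch of just that part (not the guide's exact manifest):

```sh
# Simplified sketch of the DNS add-on Service with the clusterIP moved into
# my 10.0.0.0/24 service range (the guide's default is 10.32.0.10).
# This is only the piece that carries the IP, not the full add-on manifest.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
spec:
  clusterIP: 10.0.0.10
  selector:
    k8s-app: kube-dns
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
EOF

# The kubelet has to hand the same address to pods, e.g. in its config:
#   clusterDNS:
#   - "10.0.0.10"
```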
I had to change the IP from 10.32.0.10 to 10.0.0.10, but the DNS pod also fails:
Stderr of run.sh: