Set up NFS to create an NFS export location for Veeam Kasten

The aim of this procedure is to configure NFS to be used as an export location for Veeam Kasten.

Below we assume that the NFS server runs on IP 10.10.10.10 and is reachable from your worker nodes.

This procedure needs to be adapted to your own environment.

Install NFS server

sudo apt-get update
sudo apt-get install nfs-common nfs-kernel-server -y

Create and configure directory to share

sudo mkdir -p /data/nfs
sudo chown nobody:nogroup /data/nfs
sudo chmod 2770 /data/nfs

Configure exports

Be sure to change the IP/subnet below; the export must allow access from the network your worker nodes are on:

echo -e "/data/nfs\t10.10.10.10/24(rw,sync,no_subtree_check,no_root_squash)" | sudo tee -a /etc/exports

Optionally, to mount the share automatically at boot on a client machine, add an entry to /etc/fstab:

echo "10.10.10.10:/data/nfs /mnt/nfs_share nfs defaults 0 0" | sudo tee -a /etc/fstab
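
If you want to apply the new entry right away without rebooting, you can create the mount point and mount everything listed in /etc/fstab (the mount point name is taken from the entry above):

sudo mkdir -p /mnt/nfs_share
sudo mount -a
df -h /mnt/nfs_share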

Apply the modifications and restart the service

sudo exportfs -av
sudo systemctl restart nfs-kernel-server
sudo systemctl status nfs-kernel-server

Check your export details

Do not forget to replace the IP below with your NFS server's IP:

/sbin/showmount -e 10.10.10.10

Install NFS client packages on K8s nodes

Reminder: all nodes must have the NFS client packages installed.

sudo apt update
sudo apt install nfs-common -y
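
Optionally, you can do a quick manual mount from a worker node to confirm the export is reachable and writable before installing the provisioner (the mount point /mnt/nfs-test below is just a temporary example):

sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs 10.10.10.10:/data/nfs /mnt/nfs-test
sudo touch /mnt/nfs-test/write-test && sudo rm /mnt/nfs-test/write-test
sudo umount /mnt/nfs-test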

Install NFS provisioner

Do not forget to replace the IP below with your NFS server's IP:

helm repo add nfs-subdir-external-provisioner https://kubernetes-sigs.github.io/nfs-subdir-external-provisioner
kubectl create namespace nfs-storage
helm upgrade --install -n nfs-storage --create-namespace nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
    --set nfs.server=10.10.10.10 \
    --set nfs.path=/data/nfs \
    --set storageClass.name=nfs \
    --set storageClass.archiveOnDelete=false
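
Before moving on, you can check that the provisioner pod is running and that the nfs storage class exists:

kubectl get pods -n nfs-storage
kubectl get storageclass nfs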

Create a PV on the exported NFS share

Do not forget to replace the IP below with your NFS server's IP:

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolume
metadata:
   name: nfs-pv
spec:
   capacity:
      storage: 10Gi
   volumeMode: Filesystem
   accessModes:
      - ReadWriteMany
   persistentVolumeReclaimPolicy: Retain
   storageClassName: nfs
   mountOptions:
      - hard
      - nfsvers=4.1
   nfs:
      path: /data/nfs
      server: 10.10.10.10
EOF
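
You can verify that the PV has been created (it will be listed as Available until a claim binds to it):

kubectl get pv nfs-pv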

Create the NFS PVC

kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
   name: nfs-pvc
   namespace: kasten-io
spec:
   storageClassName: nfs
   accessModes:
      - ReadWriteMany
   resources:
      requests:
         storage: 10Gi
EOF
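
Check that the claim gets bound (the STATUS column should show Bound):

kubectl get pvc nfs-pvc -n kasten-io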

Create the NFS location profile for Veeam Kasten

kubectl apply -f - <<EOF
kind: Profile
apiVersion: config.kio.kasten.io/v1alpha1
metadata:
  name: nfs
  namespace: kasten-io
spec:
  locationSpec:
    type: FileStore
    fileStore:
      claimName: nfs-pvc
      path: /
    credential:
      secretType: ""
      secret:
        apiVersion: ""
        kind: ""
        name: ""
        namespace: ""
  type: Location
EOF
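
To confirm the profile was created and validated by Kasten, you can list it and inspect its status (the validation result should appear in the object's status):

kubectl get profiles.config.kio.kasten.io -n kasten-io
kubectl get profiles.config.kio.kasten.io nfs -n kasten-io -o yaml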

Troubleshooting

If you want to make sure you can access your NFS storage and write to the exported share, you can start a pod that mounts the nfs-pvc:

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: basic-app
  namespace: kasten-io
  labels:
    app: basic-app
spec:
  strategy:
    type: Recreate
  replicas: 1
  selector:
    matchLabels:
      app: basic-app
  template:
    metadata:
      labels:
        app: basic-app
    spec:
      containers:
      - name: basic-app-container   
        image: docker.io/alpine:latest
        resources:
            requests:
              memory: 256Mi
              cpu: 100m
        command: ["tail"]
        args: ["-f", "/dev/null"]         
        volumeMounts:
        - name: data
          mountPath: /data        
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: nfs-pvc
EOF

Then connect to it and navigate to /data, create and delete a file there:

pod=$(kubectl get po -n kasten-io | grep basic-app | awk '{print $1}')
kubectl exec -n kasten-io -it $pod -- sh
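
Inside the pod, a quick write test could look like this (the file name is just an example):

cd /data
touch testfile
ls -l
rm testfile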

Type "exit" to exit the pod and do the cleanup:

pod=$(kubectl get po -n kasten-io |grep basic-app | awk '{print $1}' )
kubectl delete po $pod -n kasten-io
