
Using FlashArray Snapshots with Kubernetes / OpenShift

Introduction

The Pure Service Orchestrator Kubernetes FlexVolume driver integration includes support for FlashArray snapshots. This allows Kubernetes end users to capture point-in-time copies of their FlashArray-backed persistent volume claims and mount those copies in other Kubernetes Pods. This enables several use cases, including:

  1. Test and develop against copies of production data quickly (no need to copy large amounts of data).
  2. Back up and restore production volumes.

The native Kubernetes snapshot API is not yet available and is still under development. Pure's snapshot integration is therefore delivered as a separate command-line tool that lets developers manage snapshots. See the examples below.

The "snapshot" CLI

The snapshot CLI has the following format:

snapshot create -n <namespace> <pvc-name>

Inputs:
  namespace: Kubernetes namespace in which the PVC was created.
  pvc-name: Name of the PVC whose backing volume you want to snapshot.

Output: Name of the snapshot.

snapshot delete <snapshotname>

Inputs:
  snapshotname: String returned by the snapshot create command.

Output: None. Exit code 0 means success; otherwise an error message is printed.

Running the snapshot CLI

The snapshot CLI is deployed as a binary inside Pure's dynamic provisioner pod. You do not need to download this binary to your computer; instead, you run it with kubectl exec. To create a snapshot, you first need the name of the Pure provisioner pod in the Kubernetes cluster. This name is randomly generated by Kubernetes and can be retrieved by running:

# kubectl get -o name -l app=pure-provisioner pod | cut -d/ -f2

Now you can execute the snapshot command using the <pure-podname> you just discovered:

# kubectl exec <pure-podname> -- snapshot create -n <namespace> <pvc-name>

To delete a snapshot use the following command:

# kubectl exec <pure-podname> -- snapshot delete <snapshotname>
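
If you prefer not to copy the pod name by hand, the lookup and the snapshot commands can be combined in a small shell snippet. This is a minimal sketch: <namespace> and <pvc-name> are placeholders, the shell variables PURE_POD and SNAP are just illustrative names, and it assumes the create command prints only the snapshot name, as shown in the examples below.

# PURE_POD=$(kubectl get -o name -l app=pure-provisioner pod | cut -d/ -f2)
# SNAP=$(kubectl exec "$PURE_POD" -- snapshot create -n <namespace> <pvc-name>)
# kubectl exec "$PURE_POD" -- snapshot delete "$SNAP"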

Examples

Creating snapshots

# kubectl exec pure-provisioner-6d9878fd47-wp41b -- snapshot create -n k8s_ns1 pvc1

Output: k8s-pvc-b9dd0972-c8b3-11e7-9ee8-fa163eb1e272.883661

where k8s-pvc-b9dd0972-c8b3-11e7-9ee8-fa163eb1e272.883661 is the new snapshot name.

Deleting snapshots

# kubectl exec pure-provisioner-6d9878fd47-wp41b -- snapshot delete k8s-pvc-b9dd0972-c8b3-11e7-9ee8-fa163eb1e272.883661

Example workflow

Step 1: Running your app with data on a FlashArray volume

There are two steps to run an app with data on a FlashArray volume:

  1. Create a persistent volume claim

 Example of a FlashArray PVC yaml file (e.g. pvc-fa.yaml):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  # Referenced in pod.yaml for the volume spec
  name: pure-fa-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  # Matches the name defined in deployment/storageclass.yaml
  storageClassName: pure-block

 Create the PVC and make sure it is bound:

  # kubectl create -f pvc-fa.yaml
  # kubectl get pvc pure-fa-claim
  2. Create the app with the PVC

 Example of a yaml file for an app (e.g. nginx) using the PVC (nginx.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  namespace: default
spec:
  # Specify a volume that uses the claim defined in pvc-fa.yaml
  volumes:
  - name: pure-vol
    persistentVolumeClaim:
      claimName: pure-fa-claim
  containers:
  - name: nginx
    image: nginx
    # Configure a mount for the volume we defined above
    volumeMounts:
    - name: pure-vol
      mountPath: /data
    ports:
    - containerPort: 80

Create the nginx app:

 # kubectl create -f nginx.yaml
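
Before taking the snapshot, it can be helpful to write a small test file onto the mounted volume so you can verify the copy later. This is purely illustrative: /data/hello.txt is an arbitrary, hypothetical file, and the example assumes the nginx image provides a shell.

 # kubectl get pod nginx
 # kubectl exec nginx -- sh -c 'echo "hello from the FlashArray volume" > /data/hello.txt'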

Step 2: Creating a snapshot of your data

 # kubectl exec pure-provisioner-6d9878fd47-wp41b -- snapshot create -n default pure-fa-claim

Output on success (the snapshot name):

k8s-pvc-b9dd0972-c8b3-11e7-9ee8-fa163eb1e272.883661

Step 3: Mounting the snapshot volume

There are two steps to mount the snapshot volume:

  1. Create a FlashArray volume (PVC) from a snapshot

 Example of a yaml file (e.g. pure-fa-snapshot-pvc.yaml):

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  # Referenced in nginx-snapshot.yaml for the volume spec
  name: pure-fa-snapshot-claim
  annotations:
    snapshot.beta.purestorage.com/name: k8s-pvc-b9dd0972-c8b3-11e7-9ee8-fa163eb1e272.883661
spec:
  accessModes:
    - ReadWriteOnce
  # Storage size must be exactly the same as the snapshot
  resources:
    requests:
      storage: 10Gi
  # Matches the name defined in deployment/storageclass.yaml
  storageClassName: pure-block

 Create the PVC and make sure it is bound:

  # kubectl create -f pure-fa-snapshot-pvc.yaml
  # kubectl get pvc pure-fa-snapshot-claim
  2. Mount the snapshot volume into an app

 Example of a yaml file (e.g. nginx-snapshot.yaml) for an app using the snapshot volume (PVC):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-snapshot
  namespace: default
spec:
  # Specify a volume that uses the claim defined in pure-fa-snapshot-pvc.yaml
  volumes:
  - name: pure-vol-snapshot
    persistentVolumeClaim:
      claimName: pure-fa-snapshot-claim
  containers:
  - name: nginx-snapshot
    image: nginx
    # Configure a mount for the volume we defined above
    volumeMounts:
    - name: pure-vol-snapshot
      mountPath: /data
    ports:
    - containerPort: 80

 Create the nginx app:

# kubectl create -f nginx-snapshot.yaml
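
If you wrote the test file in Step 1, you can check that the snapshot copy contains it. This is an illustrative check only; /data/hello.txt is the hypothetical file from the earlier sketch.

# kubectl exec nginx-snapshot -- cat /data/hello.txt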

Step 4: Cleaning up the snapshot volume

There are two steps to clean up the snapshot volume:

  1. Delete the app which mounts the snapshot volume
 # kubectl delete pod nginx-snapshot
  2. Delete the snapshot volume
 # kubectl delete pvc pure-fa-snapshot-claim
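
To confirm the cleanup, you can check that both objects are gone. These are ordinary kubectl lookups; each should report NotFound once deletion has completed.

 # kubectl get pod nginx-snapshot
 # kubectl get pvc pure-fa-snapshot-claim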

Step 5: Cleaning up the snapshot

 # kubectl exec pure-provisioner-6d9878fd47-wp41b -- snapshot delete k8s-pvc-b9dd0972-c8b3-11e7-9ee8-fa163eb1e272.883661

Notes:

  1. Application consistency: The snapshot CLI does not provide any application-consistency functionality. If an application-consistent snapshot is needed, the application pods must be frozen/quiesced from an I/O perspective before the snapshot CLI is called, and unquiesced again after the snapshot has been taken; a minimal sketch of one way to do this follows these notes.

  2. Migration to the native Kubernetes snapshot API: Once Kubernetes releases native snapshot support, Pure will provide a non-disruptive migration path from the current snapshot CLI to the native Kubernetes interface.
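
As a sketch of the quiesce step from note 1, one possible approach is fsfreeze from util-linux, run against the volume's mount path. This assumes the container image ships fsfreeze, that the container has sufficient privileges to freeze the filesystem, and that a filesystem-level freeze is acceptable for your application; substitute your application's own quiesce mechanism where appropriate.

 # kubectl exec nginx -- fsfreeze --freeze /data
 # kubectl exec <pure-podname> -- snapshot create -n default pure-fa-claim
 # kubectl exec nginx -- fsfreeze --unfreeze /data

Unfreeze promptly after the snapshot completes, since writes to /data will block while the filesystem is frozen.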