---

copyright:
  years:

lastupdated: "2021-03-22"

keywords: kubernetes, iks, local persistent storage

subcollection: containers

---
{:shortdesc: .shortdesc}
{:codeblock: .codeblock}
{:pre: .pre}
{:screen: .screen}
{:note: .note}
{:tip: .tip}
{:important: .important}
{:external: target="_blank" .external}
# Getting started with Portworx
{: #getting-started-with-portworx}
Review the following information to verify your Portworx installation and get started with adding highly available local persistent storage to your containerized apps.
{: shortdesc}
## Verifying your Portworx installation
{: #px-verify-installation}

Verify that your Portworx installation completed successfully and that all your local disks were recognized and added to the Portworx storage layer.
{: shortdesc}
Before you begin:
- Make sure that you installed the latest version of the {{site.data.keyword.cloud_notm}} CLI and the {{site.data.keyword.containerlong_notm}} CLI plug-in.
- Log in to your account. If applicable, target the appropriate resource group. Set the context for your cluster. See the example commands after this list.
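For reference, the following commands cover these prerequisites. This is a minimal sketch: the resource group `default` and the cluster name `mycluster` are example values only, so replace them with your own.

```
# Log in to IBM Cloud. Add --sso if your account uses single sign-on.
ibmcloud login

# Optional: target the resource group that contains your cluster ("default" is an example name).
ibmcloud target -g default

# Download the kubeconfig for your cluster and set the kubectl context ("mycluster" is an example name).
ibmcloud ks cluster config --cluster mycluster
```
{: pre}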
To verify your installation:

1. From the {{site.data.keyword.cloud_notm}} resource list, find the Portworx service that you created.
2. Review the **Status** column to see whether the installation succeeded or failed. The status might take a few minutes to update.
   - If the **Status** changes to `Provision failure`, follow the instructions to start troubleshooting why your installation failed.
   - If the **Status** changes to `Provisioned`, continue with the following steps to verify that your Portworx installation completed successfully and that all your local disks were recognized and added to the Portworx storage layer.
3. List the Portworx pods in the `kube-system` namespace. The installation is successful when you see one or more `portworx`, `stork`, and `stork-scheduler` pods. The number of pods equals the number of worker nodes that are included in your Portworx cluster. All pods must be in a `Running` state.
   ```
   kubectl get pods -n kube-system | grep 'portworx\|stork'
   ```
   {: pre}

   Example output:
   ```
   portworx-594rw                    1/1       Running     0          20h
   portworx-rn6wk                    1/1       Running     0          20h
   portworx-rx9vf                    1/1       Running     0          20h
   stork-6b99cf5579-5q6x4            1/1       Running     0          20h
   stork-6b99cf5579-slqlr            1/1       Running     0          20h
   stork-6b99cf5579-vz9j4            1/1       Running     0          20h
   stork-scheduler-7dd8799cc-bl75b   1/1       Running     0          20h
   stork-scheduler-7dd8799cc-j4rc9   1/1       Running     0          20h
   stork-scheduler-7dd8799cc-knjwt   1/1       Running     0          20h
   ```
   {: screen}
4. Log in to one of your `portworx` pods and list the status of your Portworx cluster. To select a Portworx pod programmatically instead of copying a pod name, see the sketch after this procedure.
   ```
   kubectl exec <portworx_pod> -it -n kube-system -- /opt/pwx/bin/pxctl status
   ```
   {: pre}

   Example output:
   ```
   Status: PX is operational
   License: Enterprise
   Node ID: 10.176.48.67
        IP: 10.176.48.67
        Local Storage Pool: 1 pool
        POOL  IO_PRIORITY  RAID_LEVEL  USABLE   USED     STATUS  ZONE   REGION
        0     LOW          raid0       20 GiB   3.0 GiB  Online  dal10  us-south
        Local Storage Devices: 1 device
        Device  Path                                            Media Type               Size    Last-Scan
        0:1     /dev/mapper/3600a09803830445455244c4a38754c66   STORAGE_MEDIUM_MAGNETIC  20 GiB  17 Sep 18 20:36 UTC
        total   -                                               20 GiB
   Cluster Summary
        Cluster ID: mycluster
        Cluster UUID: a0d287ba-be82-4aac-b81c-7e22ac49faf5
        Scheduler: kubernetes
        Nodes: 2 node(s) with storage (2 online), 1 node(s) without storage (1 online)
        IP            ID            StorageNode  Used     Capacity  Status  StorageStatus   Version          Kernel             OS
        10.184.58.11  10.184.58.11  Yes          3.0 GiB  20 GiB    Online  Up              1.5.0.0-bc1c580  4.4.0-133-generic  Ubuntu 16.04.5 LTS
        10.176.48.67  10.176.48.67  Yes          3.0 GiB  20 GiB    Online  Up (This node)  1.5.0.0-bc1c580  4.4.0-133-generic  Ubuntu 16.04.5 LTS
        10.176.48.83  10.176.48.83  No           0 B      0 B       Online  No Storage      1.5.0.0-bc1c580  4.4.0-133-generic  Ubuntu 16.04.5 LTS
   Global Storage Pool
        Total Used     :  6.0 GiB
        Total Capacity :  40 GiB
   ```
   {: screen}
5. Verify that all worker nodes that you want to include in your Portworx storage layer are included by reviewing the **StorageNode** column in the **Cluster Summary** section of your CLI output. Worker nodes that are included in the storage layer are displayed with `Yes` in the **StorageNode** column.

   Because Portworx runs as a daemon set in your cluster, new worker nodes that you add to your cluster are automatically inspected for raw block storage and added to the Portworx data layer.
   {: note}
6. Verify that each storage node is listed with the correct amount of raw block storage by reviewing the **Capacity** column in the **Cluster Summary** section of your CLI output.
7. Review the Portworx I/O classification that was assigned to the disks that are part of the Portworx cluster. During the setup of your Portworx cluster, every disk is inspected to determine the performance profile of the device. The profile classification depends on the network speed of your worker node and the type of storage device that you have. Disks of SDS worker nodes are classified as `high`. If you manually attach disks to a virtual worker node, these disks are classified as `low` because of the lower network speed that comes with virtual worker nodes.
   ```
   kubectl exec -it <portworx_pod> -n kube-system -- /opt/pwx/bin/pxctl cluster provision-status
   ```
   {: pre}

   Example output:
   ```
   NODE          NODE STATUS  POOL  POOL STATUS  IO_PRIORITY  SIZE     AVAILABLE  USED     PROVISIONED  RESERVEFACTOR  ZONE   REGION    RACK
   10.184.58.11  Up           0     Online       LOW          20 GiB   17 GiB     3.0 GiB  0 B          0              dal12  us-south  default
   10.176.48.67  Up           0     Online       LOW          20 GiB   17 GiB     3.0 GiB  0 B          0              dal10  us-south  default
   10.176.48.83  Up           0     Online       HIGH         3.5 TiB  3.5 TiB    10 GiB   0 B          0              dal10  us-south  default
   ```
   {: screen}
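If you prefer not to copy pod names by hand, you can select a Portworx pod programmatically and run the same `pxctl` checks against it. This is a minimal sketch that assumes the Portworx daemon set pods carry the `name=portworx` label; confirm the label in your cluster with `kubectl get pods -n kube-system --show-labels` before you rely on it.

```
# Pick the first Portworx pod in the kube-system namespace.
# Assumes the pods are labeled name=portworx; verify with --show-labels first.
PX_POD=$(kubectl get pods -n kube-system -l name=portworx -o jsonpath='{.items[0].metadata.name}')

# Reuse the pod name for the status and provisioning checks.
kubectl exec -it -n kube-system "$PX_POD" -- /opt/pwx/bin/pxctl status
kubectl exec -it -n kube-system "$PX_POD" -- /opt/pwx/bin/pxctl cluster provision-status
```
{: pre}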
## Creating a Portworx volume
{: #px-add-storage}

Start creating Portworx volumes by using Kubernetes dynamic provisioning.
{: shortdesc}
1. List available storage classes in your cluster and check whether you can use an existing Portworx storage class that was set up during the Portworx installation. The pre-defined storage classes are optimized for database usage and to share data across pods.
   ```
   kubectl get storageclasses | grep portworx
   ```
   {: pre}

   To view the details of a storage class, run `kubectl describe storageclass <storageclass_name>`.
   {: tip}
2. If you don't want to use an existing storage class, create a customized storage class. For a full list of supported options that you can specify in your storage class, see Using Dynamic Provisioning{: external}.
   1. Create a configuration file for your storage class. For a filled-in example, see the sketch after this procedure.
      ```
      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      metadata:
        name: <storageclass_name>
      provisioner: kubernetes.io/portworx-volume
      parameters:
        repl: "<replication_factor>"
        secure: "<true_or_false>"
        priority_io: "<io_priority>"
        shared: "<true_or_false>"
      ```
      {: codeblock}
      Understanding the YAML file components

      | Parameter | Description |
      | --- | --- |
      | `metadata.name` | Enter a name for your storage class. |
      | `parameters.repl` | Enter the number of replicas for your data that you want to store across different worker nodes. Allowed numbers are `1`, `2`, or `3`. For example, if you enter `3`, your data is replicated across three different worker nodes in your Portworx cluster. To make your data highly available, use a multizone cluster and replicate your data across three worker nodes in different zones. Note: You must have enough worker nodes to fulfill your replication requirement. For example, if you have two worker nodes but you specify three replicas, the creation of the PVC with this storage class fails. |
      | `parameters.secure` | Specify whether you want to encrypt the data in your volume with {{site.data.keyword.keymanagementservicelong_notm}}. Choose between the following options. `true`: Enable encryption for your Portworx volumes. To encrypt volumes, you must have an {{site.data.keyword.keymanagementservicelong_notm}} service instance and a Kubernetes secret that holds your customer root key. For more information about how to set up encryption for Portworx volumes, see [Encrypting your Portworx volumes](/docs/containers?topic=containers-portworx#encrypt_volumes). `false`: Your Portworx volumes are not encrypted. |
      | `parameters.priority_io` | Enter the Portworx I/O priority that you want to request for your data. Available options are `high`, `medium`, and `low`. During the setup of your Portworx cluster, every disk is inspected to determine the performance profile of the device. The profile classification depends on the network bandwidth of your worker node and the type of storage device that you have. Disks of SDS worker nodes are classified as `high`. If you manually attach disks to a virtual worker node, these disks are classified as `low` because of the lower network speed that comes with virtual worker nodes. When you create a PVC with a storage class, the number of replicas that you specify in `parameters.repl` takes precedence over the I/O priority. For example, if you specify three replicas that you want to store on high-speed disks but you have only one worker node with a high-speed disk in your cluster, your PVC creation still succeeds. Your data is replicated across both high-speed and low-speed disks. |
      | `parameters.shared` | Define whether you want to allow multiple pods to access the same volume. Choose between the following options. `true`: Multiple pods that are distributed across worker nodes in different zones can access the same volume. `false`: You can access the volume from multiple pods only if the pods are deployed onto the worker node that attaches the physical disk that backs the volume. If your pod is deployed onto a different worker node, the pod cannot access the volume. |
   2. Create the storage class.
      ```
      kubectl apply -f storageclass.yaml
      ```
      {: pre}
   3. Verify that the storage class is created.
      ```
      kubectl get storageclasses
      ```
      {: pre}
3. Create a persistent volume claim (PVC).
   1. Create a configuration file for your PVC.
      ```
      kind: PersistentVolumeClaim
      apiVersion: v1
      metadata:
        name: mypvc
      spec:
        accessModes:
          - <access_mode>
        resources:
          requests:
            storage: <size>
        storageClassName: portworx-shared-sc
      ```
      {: codeblock}
      Understanding the YAML file components

      | Parameter | Description |
      | --- | --- |
      | `metadata.name` | Enter a name for your PVC, such as `mypvc`. |
      | `spec.accessModes` | Enter the [Kubernetes access mode](https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes){: external} that you want to use. |
      | `resources.requests.storage` | Enter the amount of storage in gigabytes that you want to assign from your Portworx cluster. For example, to assign 2 gigabytes from your Portworx cluster, enter `2Gi`. The amount of storage that you can specify is limited by the amount of storage that is available in your Portworx cluster. If you specified a replication factor in your storage class that is higher than 1, the amount of storage that you specify in your PVC is reserved on multiple worker nodes. |
      | `spec.storageClassName` | Enter the name of the storage class that you chose or created earlier and that you want to use to provision your PV. The example YAML file uses the `portworx-shared-sc` storage class. |
   2. Create your PVC.
      ```
      kubectl apply -f pvc.yaml
      ```
      {: pre}
   3. Verify that your PVC is created and bound to a persistent volume (PV). This process might take a few minutes.
      ```
      kubectl get pvc
      ```
      {: pre}
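The following sketch shows the storage class and PVC steps end to end with example values filled in. The storage class name `px-ha-sc`, the replication factor of `3`, the `high` I/O priority, and the 2 Gi size are illustrative assumptions only; adjust them to your cluster and replication requirements.

```
# Create a custom Portworx storage class with three replicas, high I/O priority,
# shared access, and no encryption (example values).
kubectl apply -f - <<'EOF'
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: px-ha-sc
provisioner: kubernetes.io/portworx-volume
parameters:
  repl: "3"
  secure: "false"
  priority_io: "high"
  shared: "true"
EOF

# Create a PVC that requests 2 Gi from the storage class above.
kubectl apply -f - <<'EOF'
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: px-ha-sc
EOF

# Confirm that the PVC binds to a PV.
kubectl get pvc mypvc
```
{: pre}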
## Mounting the volume to your app
{: #px-mount-pvc}

To access the storage from your app, you must mount the PVC to your app.
{: shortdesc}
1. Create a configuration file for a deployment that mounts the PVC. For a filled-in example, see the sketch after this procedure.

   For tips on how to deploy a stateful set with Portworx, see StatefulSets{: external}. The Portworx documentation also includes examples for how to deploy Cassandra{: external}, Kafka{: external}, ElasticSearch with Kibana{: external}, and WordPress with MySQL{: external}.
   {: tip}

   ```
   apiVersion: apps/v1
   kind: Deployment
   metadata:
     name: <deployment_name>
     labels:
       app: <deployment_label>
   spec:
     selector:
       matchLabels:
         app: <app_name>
     template:
       metadata:
         labels:
           app: <app_name>
       spec:
         schedulerName: stork
         containers:
         - image: <image_name>
           name: <container_name>
           volumeMounts:
           - name: <volume_name>
             mountPath: /<file_path>
         volumes:
         - name: <volume_name>
           persistentVolumeClaim:
             claimName: <pvc_name>
         securityContext:
           fsGroup: <group_ID>
   ```
   {: codeblock}
   Understanding the YAML file components

   | Parameter | Description |
   | --- | --- |
   | `metadata.labels.app` | A label for the deployment. |
   | `spec.selector.matchLabels.app` and `spec.template.metadata.labels.app` | A label for your app. |
   | `template.metadata.labels.app` | A label for the deployment. |
   | `spec.schedulerName` | Use [Stork](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/stork/){: external} as the scheduler for your Portworx cluster. With Stork, you can co-locate pods with their data. Stork also provides seamless migration of pods in case of storage errors and makes it easier to create and restore snapshots of Portworx volumes. |
   | `spec.containers.image` | The name of the image that you want to use. To list available images in your {{site.data.keyword.registrylong_notm}} account, run `ibmcloud cr image-list`. |
   | `spec.containers.name` | The name of the container that you want to deploy to your cluster. |
   | `spec.containers.securityContext.fsGroup` | Optional: To access your storage with a non-root user, specify the [security context](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/){: external} for your pod and define the set of users that you want to grant access in the `fsGroup` section of your deployment YAML. For more information, see [Accessing Portworx volumes with a non-root user](https://docs.portworx.com/portworx-install-with-kubernetes/storage-operations/create-pvcs/access-via-non-root-users/){: external}. |
   | `spec.containers.volumeMounts.mountPath` | The absolute path of the directory where the volume is mounted inside the container. If you want to share a volume between different apps, you can specify [volume sub paths](https://kubernetes.io/docs/concepts/storage/volumes/#using-subpath){: external} for each of your apps. |
   | `spec.containers.volumeMounts.name` | The name of the volume to mount to your pod. |
   | `volumes.name` | The name of the volume to mount to your pod. Typically, this name is the same as `volumeMounts.name`. |
   | `volumes.persistentVolumeClaim.claimName` | The name of the PVC that binds the PV that you want to use. |
2. Create your deployment.
   ```
   kubectl apply -f deployment.yaml
   ```
   {: pre}
3. Verify that the PV is successfully mounted to your app.
   ```
   kubectl describe deployment <deployment_name>
   ```
   {: pre}

   The mount point is in the **Volume Mounts** field and the volume is in the **Volumes** field.
   ```
   Volume Mounts:
         /var/run/secrets/kubernetes.io/serviceaccount from default-token-tqp61 (ro)
         /volumemount from myvol (rw)
   ...
   Volumes:
     myvol:
       Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
       ClaimName:  mypvc
       ReadOnly:   false
   ```
   {: screen}
4. Verify that you can write data to your Portworx cluster.
   1. Log in to the pod that mounts your PV.
      ```
      kubectl exec -it <pod_name> -- bash
      ```
      {: pre}
   2. Navigate to the volume mount path that you defined in your app deployment.
   3. Create a text file.
      ```
      echo "This is a test" > test.txt
      ```
      {: pre}
   4. Read the file that you created.
      ```
      cat test.txt
      ```
      {: pre}
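The following sketch ties the deployment template to the PVC from the previous section. The deployment name `pxtest`, the public `nginx` image, and the `/data` mount path are assumptions for illustration; substitute your own app image, names, and paths.

```
# Deploy a test app that mounts the PVC "mypvc" at /data and is scheduled by Stork.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pxtest
  labels:
    app: pxtest
spec:
  selector:
    matchLabels:
      app: pxtest
  template:
    metadata:
      labels:
        app: pxtest
    spec:
      schedulerName: stork
      containers:
      - image: nginx
        name: pxtest
        volumeMounts:
        - name: pxvol
          mountPath: /data
      volumes:
      - name: pxvol
        persistentVolumeClaim:
          claimName: mypvc
EOF

# Check that the volume is mounted, then write and read a test file.
kubectl describe deployment pxtest
POD=$(kubectl get pods -l app=pxtest -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it "$POD" -- sh -c 'echo "This is a test" > /data/test.txt && cat /data/test.txt'
```
{: pre}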
## Cleaning up your Portworx volumes and cluster
{: #portworx_cleanup}

Remove a Portworx volume, a storage node, or the entire Portworx cluster if you do not need it anymore.
{: shortdesc}
### Removing Portworx volumes from your apps
{: #remove_pvc}

When you add storage from your Portworx cluster to your app, you have three main components: the Kubernetes persistent volume claim (PVC) that requested the storage, the Kubernetes persistent volume (PV) that is mounted to your pod and described in the PVC, and the Portworx volume that reserves space on the physical disks of your Portworx cluster. To remove storage from your app, you must remove all three components. A condensed example of the cleanup commands follows this procedure.
{: shortdesc}
1. List the PVCs in your cluster and note the **NAME** of the PVC and the name of the PV that is bound to the PVC and shown as **VOLUME**.
   ```
   kubectl get pvc
   ```
   {: pre}

   Example output:
   ```
   NAME     STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
   px-pvc   Bound    pvc-06886b77-102b-11e8-968a-f6612bb731fb   20Gi       RWO           px-high        78d
   ```
   {: screen}
2. Review the `ReclaimPolicy` for the storage class.
   ```
   kubectl describe storageclass <storageclass_name>
   ```
   {: pre}

   If the reclaim policy says `Delete`, your PV and the data on your physical storage in your Portworx cluster are removed when you remove the PVC. If the reclaim policy says `Retain`, or if you provisioned your storage without a storage class, your PV and your data are not removed when you remove the PVC. You must remove the PVC, PV, and the data separately.
3. Remove any pods that mount the PVC.
   1. List the pods that mount the PVC.
      ```
      kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.volumes[*]}{.persistentVolumeClaim.claimName}{" "}{end}{end}' | grep "<pvc_name>"
      ```
      {: pre}

      Example output:
      ```
      blockdepl-12345-prz7b:	claim1-block-bronze
      ```
      {: screen}

      If no pod is returned in your CLI output, you do not have a pod that uses the PVC.
   2. Remove the pod that uses the PVC.

      If the pod is part of a deployment, remove the deployment.
      {: tip}

      ```
      kubectl delete pod <pod_name>
      ```
      {: pre}
   3. Verify that the pod is removed.
      ```
      kubectl get pods
      ```
      {: pre}
4. Remove the PVC.
   ```
   kubectl delete pvc <pvc_name>
   ```
   {: pre}
5. Review the status of your PV. Use the name of the PV that you retrieved earlier as **VOLUME**.
   ```
   kubectl get pv <pv_name>
   ```
   {: pre}

   When you remove the PVC, the PV that is bound to the PVC is released. Depending on how you provisioned your storage, your PV goes into a `Deleting` state if the PV is deleted automatically, or into a `Released` state if you must manually delete the PV. Note: For PVs that are automatically deleted, the status might briefly say `Released` before the PV is deleted. Rerun the command after a few minutes to see whether the PV is removed.
6. If your PV is not deleted, manually remove the PV.
   ```
   kubectl delete pv <pv_name>
   ```
   {: pre}
7. Verify that the PV is removed.
   ```
   kubectl get pv
   ```
   {: pre}
8. Verify that your Portworx volume is removed. Log in to one of your Portworx pods in your cluster to list your volumes. To find available Portworx pods, run `kubectl get pods -n kube-system | grep portworx`.
   ```
   kubectl exec <portworx-pod> -it -n kube-system -- /opt/pwx/bin/pxctl volume list
   ```
   {: pre}
9. If your Portworx volume is not removed, manually remove the volume.
   ```
   kubectl exec <portworx-pod> -it -n kube-system -- /opt/pwx/bin/pxctl volume delete <volume_ID>
   ```
   {: pre}
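If you prefer to run the cleanup from a terminal in one pass, the following sketch chains the same commands. The names `pxtest` and `mypvc` and the `name=portworx` pod label are assumptions carried over from the earlier examples; replace them with your own deployment, PVC, PV, and volume names.

```
# Delete the app deployment that mounts the PVC (assumed name: pxtest).
kubectl delete deployment pxtest

# Note the bound PV name, then delete the PVC.
PV_NAME=$(kubectl get pvc mypvc -o jsonpath='{.spec.volumeName}')
kubectl delete pvc mypvc

# Check whether the PV was reclaimed automatically; delete it manually if it remains.
kubectl get pv "$PV_NAME" || echo "PV already removed"

# List the remaining Portworx volumes from one of the Portworx pods
# (assumes the pods are labeled name=portworx).
PX_POD=$(kubectl get pods -n kube-system -l name=portworx -o jsonpath='{.items[0].metadata.name}')
kubectl exec -it -n kube-system "$PX_POD" -- /opt/pwx/bin/pxctl volume list
```
{: pre}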
### Removing a worker node from the Portworx cluster or removing the entire Portworx cluster
{: #remove_storage_node_cluster}

You can exclude worker nodes from your Portworx cluster or remove the entire Portworx cluster if you do not want to use Portworx anymore.
{: shortdesc}

Removing your Portworx cluster removes all the data from your Portworx cluster. Make sure to create a snapshot of your data and save this snapshot to the cloud{: external}.
{: important}
- Remove a worker node from the Portworx cluster: If you want to remove a worker node that runs Portworx and stores data in your Portworx cluster, you must migrate existing pods to the remaining worker nodes and then uninstall Portworx from the node, as sketched after this list. For more information, see Decommission a Portworx node in Kubernetes{: external}.
- Remove the entire Portworx cluster: When you remove a Portworx cluster, you can decide whether you want to remove all your data at the same time. For more information, see Uninstall from Kubernetes cluster{: external}.
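Before you decommission a storage node, move its workloads to the remaining worker nodes. The following minimal sketch uses standard Kubernetes commands; the node name `10.176.48.83` is an example value, and the actual Portworx decommission steps are described in the linked Portworx documentation.

```
# Stop new pods from being scheduled on the node (example node name).
kubectl cordon 10.176.48.83

# Evict existing pods so that they are rescheduled on the remaining nodes.
# Daemon set pods, such as the Portworx pod itself, are skipped.
kubectl drain 10.176.48.83 --ignore-daemonsets

# Continue with the decommission steps from the Portworx documentation.
```
{: pre}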
## Getting help and support
{: #portworx_help}

If you run into an issue with using Portworx, you can open an issue in the Portworx Service Portal{: external}. You can also submit a request by sending an e-mail to `[email protected]`. If you do not have an account on the Portworx Service Portal, send an e-mail to `[email protected]`.