diff --git a/README.md b/README.md index ad039253..7c034cf1 100644 --- a/README.md +++ b/README.md @@ -1,8 +1,8 @@ -## HPE Docker Volume Plugin for HPE 3PAR StoreServ +## HPE 3PAR Volume Plugin for Docker HPE Docker Volume Plugin is an open source project that provides persistent storage and features for your containerized applications using HPE 3PAR StoreServ Storage arrays. -The HPE Docker Volume Plugin supports popular container platforms like Docker, Kubernetes, OpenShift and SuSE CaaS/CAP +The HPE Docker Volume Plugin supports popular container platforms like Docker, Kubernetes, and OpenShift ## HPE Docker Volume Plugin Overview @@ -40,6 +40,10 @@ Here is an example of the HPE Docker Volume plugin being used in an OpenShift en * snapshot mount * mount_conflict_delay * concurrent volume access + * replication + * snapshot schedule + * file system permissions and ownership + * multiple backends ## Usage @@ -48,3 +52,53 @@ See the [usage guide](/docs/usage.md) for details on the supported operations an ## Troubleshooting Troubleshooting issues with the plugin can be performed using these [tips](/docs/troubleshooting.md) + + +## SPOCK Link for HPE 3PAR Volume Plugin for Docker + +* [SPOCK Link](https://spock.corp.int.hpe.com/spock/utility/document.aspx?docurl=Shared%20Documents/hw/3par/3par_volume_plugin_for_docker.pdf) + +## Limitations +- A list of known issues with the containerized/managed version of the plugin is maintained at https://github.com/hpe-storage/python-hpedockerplugin/issues + +- ``$ docker volume prune`` is not supported by the volume plugin; instead, use ``$ docker volume rm $(docker volume ls -q -f "dangling=true")`` to clean up orphaned volumes. + +- Shared volumes are supported only for containers running on the same host. + +- To upgrade the plugin from an older version to the current release, unmount all volumes and follow the standard + upgrade procedure described in the Docker guide.
+ +- Volumes created using older plugins (2.0.2 or below) do not have a snap_cpg associated with them. When the plugin is upgraded to 2.1 and clone/snapshot operations are to be performed on these old volumes, the snap_cpg must first be set for the + corresponding volumes using the 3PAR CLI or any other tool. + +- When inspecting a snapshot, its provisioning field is set to the parent volume's provisioning type; on 3PAR, however, it is shown as 'snp'. + +- Mounting a QoS-enabled volume can take longer than mounting a volume without QoS, for both the FC and iSCSI protocols. + +- For a cloned volume with the same size as the source volume, the comment field is not populated on 3PAR. + +- A 3PAR legacy volume cannot be imported while it is in use (active VLUN). + +- A managed parent volume cannot be deleted until all of its child snapshots have been explicitly managed. + +- A volume that is already managed by another Docker host (i.e. a volume whose name starts with 'dcv-') cannot be managed again. + +- Avoid importing a legacy volume that has schedules associated with it. If such a volume must be imported, remove the existing schedule on 3PAR first and then import the legacy volume. + +- Snapshot schedule creation can take longer than the Docker CLI timeout; the schedule may still get created in the background. If the Docker daemon times out while creating a snapshot schedule, follow the two steps below. + +```Inspect the snapshot to verify that the snapshot schedule got created: docker volume inspect <snapshot_name> This should display snapshot details with snapshot schedule information.
+ +Verify that the schedule got created on the array using the 3PAR CLI command: +$ showsched +``` + +- If a mount fails due to a dangling LUN, use this section of the troubleshooting guide: [Removing Dangling LUN](https://github.com/hpe-storage/python-hpedockerplugin/blob/master/docs/troubleshooting.md#removing-dangling-lun) + +- If two or more backends are defined with the same name, the last backend is picked up and the rest are ignored. + +- If the symlinks for a device are not populated in /dev/disk/by-path after a SCSI rescan, the plugin will not function correctly during mount operations. + +- For upper limits on volume size, refer to the 3PAR documentation. + diff --git a/ansible_3par_docker_plugin/README.md b/ansible_3par_docker_plugin/README.md index 5daf7b72..14d0b056 100644 --- a/ansible_3par_docker_plugin/README.md +++ b/ansible_3par_docker_plugin/README.md @@ -1,79 +1,164 @@ # Automated Installer for 3PAR Docker Volume plugin (Ansible) -These are Ansible playbooks to automate the install of the HPE 3PAR Docker Volume Plug-in for Docker for use within Kubernetes/OpenShift environments. +These are Ansible playbooks to automate the install of the HPE 3PAR Docker Volume Plug-in for Docker for use within standalone Docker environments or Kubernetes/OpenShift environments. -If you are not using Kubernetes or OpenShift, we recommend you take a look at the [Quick Start guide](/docs/quick_start_guide.md) for using the HPE 3PAR Docker Volume Plug-in in a standalone Docker environment. - ->**NOTE:** The Ansible installer only supports RHEL/CentOS. If you are using another distribution of Linux, you will need to modify the playbooks to support your application manager (apt, etc.) and the pre-requisite packages. +>**NOTE:** The Ansible installer only supports Ubuntu/RHEL/CentOS. If you are using another distribution of Linux, you will need to modify the playbooks to support your package manager (apt, etc.) and the prerequisite packages.
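The distribution gate in the note above can be sketched as a small helper (hypothetical, not part of the installer); the IDs checked correspond to the `ID` field of `/etc/os-release` on the supported distributions:

```python
# Hypothetical helper mirroring the note above: the playbooks only ship
# package tasks for apt/yum based distributions, so anything else needs
# manual changes to the playbooks first.
SUPPORTED_DISTROS = {"ubuntu", "rhel", "centos"}

def distro_supported(os_release_id: str) -> bool:
    """Return True if the Ansible installer handles this distribution
    out of the box; the argument is the ID field from /etc/os-release."""
    return os_release_id.strip().lower() in SUPPORTED_DISTROS

print(distro_supported("CentOS"))  # True
print(distro_supported("sles"))    # False
```

On an unsupported distribution, the equivalent check in the playbooks is the `ansible_distribution` condition on each package task.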
### Getting Started These playbooks perform the following tasks on the Master/Worker nodes as defined in the Ansible [hosts](/ansible_3par_docker_plugin/hosts) file. * Configure the Docker Services for the HPE 3PAR Docker Volume Plug-in -* Deploys a 3-node Highly Available etcd cluster * Deploys the config files (iSCSI or FC) to support your environment -* Installs the HPE 3PAR Docker Volume Plug-in (Containerized version) for Kubernetes/OpenShift -* Deploys the HPE FlexVolume Drivers +* Installs the HPE 3PAR Docker Volume Plug-in (Containerized version) +* For standalone Docker environments: + * Deploys an etcd cluster +* For Kubernetes/OpenShift: + * Deploys a Highly Available etcd cluster used by the HPE 3PAR Docker Volume plugin + * Supports single-node deployment (use only for testing purposes) or multi-node deployment (HA) as defined in the Ansible hosts file + * Deploys the HPE FlexVolume Driver ### Prerequisites: - - - Install Ansible per [Installation Guide](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) - - Login to 3PAR to create known_hosts file - > **Note:** Entries for the Master and Worker nodes should already exist within the //.ssh/known_hosts file from the OpenShift installation. If not, you will need to log into each of the Master and Worker nodes as well to prevent connection errors from Ansible. - - - modify files/hpe.conf ([iSCSI](/ansible_3par_docker_plugin/files/iSCSI_hpe.conf) or [FC](/ansible_3par_docker_plugin/files/FC_hpe.conf)) based on your HPE 3PAR Storage array configuration.
An example can be found here: [sample_hpe.conf](/ansible_3par_docker_plugin/files/sample_hpe.conf) + - Install Ansible 2.5 or above per the [Installation Guide](https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html) + + - Login to 3PAR via SSH to create an entry in the /\<user\>/.ssh/known_hosts file + > **Note:** Entries for the Master and Worker nodes should already exist within the /\<user\>/.ssh/known_hosts file from the OpenShift installation. If not, you will need to log into each of the Master and Worker nodes as well to prevent connection errors from Ansible. + + - Clone the python-hpedockerplugin repository + ``` + git clone https://github.com/hpe-storage/python-hpedockerplugin + cd python-hpedockerplugin/ansible_3par_docker_plugin + ``` + + - Add the [plugin configuration properties - sample](/ansible_3par_docker_plugin/properties/plugin_configuration_properties_sample.yml) at `properties/plugin_configuration_properties.yml` based on your HPE 3PAR Storage array configuration. Some of the properties are mandatory and must be specified in the properties file, while others are optional.
+ + | Property | Mandatory | Default Value | Description | + | ------------- | ------------- | ------------- | ------------- | + | ```hpedockerplugin_driver``` | Yes | No default value | iSCSI/FC driver (hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver/hpedockerplugin.hpe.hpe_3par_fc.HPE3PARFCDriver) | + | ```hpe3par_ip``` | Yes | No default value | IP address of the 3PAR array | + | ```hpe3par_username``` | Yes | No default value | 3PAR username | + | ```hpe3par_password``` | Yes | No default value | 3PAR password | + | ```hpe3par_cpg``` | Yes | No default value | Primary user CPG | + | ```volume_plugin``` | Yes | No default value | Name of the docker volume image (only required with DEFAULT backend) | + | ```encryptor_key``` | No | No default value | Encryption key string for the 3PAR password | + | ```logging``` | No | ```INFO``` | Log level | + | ```hpe3par_debug``` | No | No default value | 3PAR log level | + | ```suppress_requests_ssl_warning``` | No | ```True``` | Suppress request SSL warnings | + | ```hpe3par_snapcpg``` | No | ```hpe3par_cpg``` | Snapshot CPG | + | ```hpe3par_iscsi_chap_enabled``` | No | ```False``` | iSCSI CHAP toggle | + | ```hpe3par_iscsi_ips``` | No | No default value | Comma-separated iSCSI port IPs (only applicable if the driver is iSCSI based); if not provided, all iSCSI IPs will be read from the array and populated in hpe.conf | + | ```use_multipath``` | No | ```False``` | Multipath toggle | + | ```enforce_multipath``` | No | ```False``` | Forcefully enforce multipath | + | ```ssh_hosts_key_file``` | No | ```/root/.ssh/known_hosts``` | Path to the hosts key file | + | ```quorum_witness_ip``` | No | No default value | Quorum witness IP | + | ```mount_prefix``` | No | No default value | Alternate mount path prefix | + | ```vlan_tag``` | No | False | Populates the iscsi_ips which are VLAN tagged; only applicable if ```hpe3par_iscsi_ips``` is not specified | + | ```replication_device``` | No | No default value | Replication backend properties | + + - The etcd ports can be modified in [etcd cluster properties](/ansible_3par_docker_plugin/properties/etcd_cluster_properties.yml) as follows: + + | Property | Mandatory | Default Value | + | ------------- | ------------- | ------------- | + | ```etcd_peer_port``` | Yes | 23800 | + | ```etcd_client_port_1``` | Yes | 23790 | + | ```etcd_client_port_2``` | Yes | 40010 | + + > **Note:** Please ensure that the ports specified above are unoccupied before installation. If the ports are not available on a particular node, the etcd installation will fail. + + > **Limitation:** The installer currently cannot add or remove nodes in the etcd cluster. If an etcd node stops responding or goes down, admitting it back into the cluster is beyond the installer's scope. Please follow the [etcd documentation](https://coreos.com/etcd/docs/latest/etcd-live-cluster-reconfiguration.html) to do so manually. + + - It is recommended that the properties file be [encrypted using Ansible Vault](/ansible_3par_docker_plugin/encrypt_properties.md). - Modify [hosts](/ansible_3par_docker_plugin/hosts) file to define your Master/Worker nodes as well as where you want to deploy your etcd cluster + +### Working with proxies: + +Set `http_proxy` and `https_proxy` in the [inventory hosts file](/ansible_3par_docker_plugin/hosts) when installing the plugin on a Kubernetes/Openshift setup.
For setting proxies in the standalone plugin installation, see [inventory hosts file for standalone plugin installation](/ansible_3par_docker_plugin/hosts_standalone_nodes) ### Usage Once the prerequisites are complete, run the following command: +- Fresh installation on standalone docker environment: ``` -$ ansible-playbook -i hosts install_hpe_3par_volume_driver.yml -``` - -Once complete you will be ready to start using the HPE 3PAR Docker Volume Plug-in within Kubernetes/OpenShift. - -Please refer to the Kubernetes/OpenShift section in the [Usage Guide](/docs/usage.md#k8_usage) on how to create and deploy some sample SCs, PVCs, and Pods with persistent volumes using the HPE 3PAR Docker Volume Plug-in. - - -
- - -### Known Issues - -Ansible on some Linux Distros (i.e. CentOS and Ubuntu) may throw an error about missing the `docker` module. - -``` -TASK [run etcd container] ****************************************************************************************************************************************** -fatal: [192.168.1.35]: FAILED! => {"changed": false, "msg": "Failed to import docker-py - No module named docker. Try `pip install docker-py`"} -``` - -Run: - -``` -pip install docker +$ cd python-hpedockerplugin/ansible_3par_docker_plugin +$ ansible-playbook -i hosts_standalone_nodes install_standalone_hpe_3par_volume_driver.yml --ask-vault-pass ``` ------------------------------------------------------------------------------------ - -On Ansible 2.6 and later, per https://github.com/ansible/ansible/issues/42162, `docker-py` has been deprecated and when running the Ansible playbook, you may see the following error: - +- Fresh installation on Openshift/Kubernetes environment: ``` -docker_container: create_host_config() got an unexpected keyword argument 'init' +$ cd python-hpedockerplugin/ansible_3par_docker_plugin +$ ansible-playbook -i hosts install_hpe_3par_volume_driver.yml --ask-vault-pass ``` +> **Note:** ```--ask-vault-pass``` is required only when the properties file is encrypted + + +Once complete you will be ready to start using the HPE 3PAR Docker Volume Plug-in. + +- Update the array backends in Standalone/Openshift/Kubernetes environment: + * Modify the [plugin configuration properties - sample](/ansible_3par_docker_plugin/properties/plugin_configuration_properties_sample.yml) at `properties/plugin_configuration_properties.yml` based on the updated HPE 3PAR Storage array configuration. Additional backends may be added or removed from the existing configuration. Individual attributes of the existing array configuration may also be modified. 
+ + * Update array backend on standalone docker environment: + ``` + $ cd python-hpedockerplugin/ansible_3par_docker_plugin + $ ansible-playbook -i hosts_standalone_nodes install_standalone_hpe_3par_volume_driver.yml --ask-vault-pass + ``` + + * Update array backend on Openshift/Kubernetes environment: + ``` + $ cd python-hpedockerplugin/ansible_3par_docker_plugin + $ ansible-playbook -i hosts install_hpe_3par_volume_driver.yml --ask-vault-pass + ``` + > **Note:** It is not recommended to change the etcd information or the array encryption password during the backend update process + +- Upgrade the docker volume plugin + * Modify the `volume_plugin` in [plugin configuration properties - sample](/ansible_3par_docker_plugin/properties/plugin_configuration_properties_sample.yml) and point it to the latest image from Docker Hub + * Update plugin on standalone docker environment: + ``` + $ cd python-hpedockerplugin/ansible_3par_docker_plugin + $ ansible-playbook -i hosts_standalone_nodes install_standalone_hpe_3par_volume_driver.yml --ask-vault-pass + ``` + * Update plugin on Openshift/Kubernetes environment: + ``` + $ cd python-hpedockerplugin/ansible_3par_docker_plugin + $ ansible-playbook -i hosts install_hpe_3par_volume_driver.yml --ask-vault-pass + ``` + > **Note:** + - Ensure that all the nodes in the cluster are present in the inventory [hosts](/ansible_3par_docker_plugin/hosts) file + - The docker volume plugin will be restarted during the process, and users will not be able to create volumes while it restarts + + * A successful upgrade removes the old plugin container and replaces it with the new plugin container specified in the plugin properties file + +- Install the docker volume plugin on additional nodes in the cluster + * Add the new nodes in the respective sections of the inventory [hosts](/ansible_3par_docker_plugin/hosts) file + * Only the new nodes' IPs or hostnames must be present in the hosts file + * Do not change the etcd hosts from the existing setup.
Do not add or remove nodes in the etcd section + + * Install plugin on new nodes on Openshift/Kubernetes environment: + ``` + $ cd python-hpedockerplugin/ansible_3par_docker_plugin + $ ansible-playbook -i hosts install_hpe_3par_volume_driver.yml --ask-vault-pass + ``` + + * Uninstall plugin on nodes on Openshift/Kubernetes environment: + ``` + $ cd python-hpedockerplugin/ansible_3par_docker_plugin + $ ansible-playbook -i hosts uninstall/uninstall_hpe_3par_volume_driver.yml --ask-vault-pass + ``` + + * Uninstall plugin along with etcd on nodes on Openshift/Kubernetes environment: + ``` + $ cd python-hpedockerplugin/ansible_3par_docker_plugin + $ ansible-playbook -i hosts uninstall/uninstall_hpe_3par_volume_driver_etcd.yml --ask-vault-pass + ``` + + > **Note:** This process only adds or removes the docker volume plugin and/or etcd on nodes in an existing cluster. It does not add or remove nodes in the Kubernetes/Openshift cluster + * After the plugin is successfully added to new nodes, the additional nodes will have a running docker volume plugin container + * After the plugin is successfully removed from the specified nodes, the docker volume plugin container will be removed from them + +Please refer to the [Usage Guide](/docs/usage.md) on how to perform volume-related actions in a standalone docker environment. -`docker-py` is no longer supported and has been deprecated in favor of the `docker` module. - -If `docker-py` is installed, run: - -``` -pip uninstall docker-py -``` +Please refer to the Kubernetes/OpenShift section in the [Usage Guide](/docs/usage.md#k8_usage) on how to create and deploy some sample SCs, PVCs, and Pods with persistent volumes using the HPE 3PAR Docker Volume Plug-in. -Run: -``` -pip install docker -```
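Before running any of the playbooks above, the etcd ports listed in the prerequisites (23800 for peers, 23790 and 40010 for clients, by default) must be free on every node, since etcd installation fails otherwise. A pre-flight check along these lines (hypothetical, not shipped with the installer) can confirm that:

```python
# Hypothetical pre-flight check for the default etcd ports used by the
# installer. A port is considered free if we can bind a listening socket
# to it; an existing listener makes the bind fail with EADDRINUSE.
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Try to bind the port; if the bind succeeds, the port is free."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

for port in (23790, 23800, 40010):
    print(port, "free" if port_is_free(port) else "in use")
```

Run this on each node named in the hosts file; any port reported as "in use" must be released or remapped in `properties/etcd_cluster_properties.yml` before installation.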
diff --git a/ansible_3par_docker_plugin/container/Dockerfile b/ansible_3par_docker_plugin/container/Dockerfile new file mode 100644 index 00000000..bec01dd3 --- /dev/null +++ b/ansible_3par_docker_plugin/container/Dockerfile @@ -0,0 +1,29 @@ +FROM alpine:3.8 + +MAINTAINER Farhan Nomani + +RUN echo "===> Installing sudo to emulate normal OS behavior..." && \ + apk --update add sudo && \ + \ + \ + echo "===> Adding Python runtime..." && \ + apk --no-cache add ca-certificates && \ + apk --update add python py-pip openssl unzip && \ + apk --update add --virtual build-dependencies wget \ + openssh-keygen openssh-server openssh-client \ + python-dev libffi-dev openssl-dev build-base && \ + pip install --upgrade pip pycrypto cffi && \ + \ + \ + echo "===> Installing Ansible..." && \ + pip install ansible && \ + \ + wget https://github.com/hpe-storage/python-hpedockerplugin/archive/master.zip + +RUN unset http_proxy +RUN unset https_proxy +RUN unzip master.zip + +RUN rm -f master.zip +RUN mv python-hpedockerplugin-master/ansible_3par_docker_plugin/ . +RUN rm -rf python-hpedockerplugin-master/ diff --git a/ansible_3par_docker_plugin/container/README.md b/ansible_3par_docker_plugin/container/README.md new file mode 100644 index 00000000..14b8a7c2 --- /dev/null +++ b/ansible_3par_docker_plugin/container/README.md @@ -0,0 +1,14 @@ +# Docker Volume Plugin Installer - Docker Image + +This is an Alpine-based image with Ansible and its dependencies installed, along with the latest HPE 3PAR docker volume plugin ansible installer tasks and playbooks.
+ +Usage: + +- Run the latest docker image from Docker Hub; the command below runs the pre-built container and opens a shell + - `docker run -it hpestorage/legacyvolumeplugininstaller /bin/sh` + +- Set the proxy (if required) + - `export http_proxy=<proxy_host>:<port>` + - `export https_proxy=<proxy_host>:<port>` + +- Follow [this link](/ansible_3par_docker_plugin/README.md) to set the node information and backend properties, and run the installation playbook diff --git a/ansible_3par_docker_plugin/container/plugin_installer.sh b/ansible_3par_docker_plugin/container/plugin_installer.sh new file mode 100755 index 00000000..c8a04bef --- /dev/null +++ b/ansible_3par_docker_plugin/container/plugin_installer.sh @@ -0,0 +1,6 @@ +docker rmi -f hpestorage/legacyvolumeplugininstaller:3.1 +docker rmi -f container +docker image build --build-arg http_proxy=$1 --build-arg https_proxy=$2 --no-cache -t hpestorage/legacyvolumeplugininstaller:3.1 -t hpestorage/legacyvolumeplugininstaller:latest . +#docker tag container hpestorage/legacyvolumeplugininstaller:3.1 +docker push hpestorage/legacyvolumeplugininstaller:3.1 +docker push hpestorage/legacyvolumeplugininstaller:latest diff --git a/ansible_3par_docker_plugin/encrypt_properties.md b/ansible_3par_docker_plugin/encrypt_properties.md new file mode 100644 index 00000000..c86195ac --- /dev/null +++ b/ansible_3par_docker_plugin/encrypt_properties.md @@ -0,0 +1,27 @@ +# Encrypting the ansible inventory properties file before installing the docker volume plugin + +Currently, the properties file is a plain-text YAML file containing the settings and properties that are used to create the hpe.conf file on each Kubernetes/Openshift node. + +Though the array passwords could be encrypted in the respective hpe.conf files using the py-3parencryptor utility, the properties file would still contain the credentials in plain text. + +To solve this problem, Ansible Vault is used, which can encrypt the complete file.
In this case, the properties file can be encrypted and a password set on it. Once encrypted, the contents cannot be viewed without the password to decrypt them. + +How to create the properties file: +``` +ansible-vault create ansible_3par_docker_plugin/properties/plugin_configuration_properties.yml +``` +This will prompt you to set the password and will encrypt the file. The contents can then be written to the file and saved; it will no longer be possible to view the contents of the file in an editor. + + +How to edit the properties file: +``` +ansible-vault edit ansible_3par_docker_plugin/properties/plugin_configuration_properties.yml +``` + +The contents of the properties file, including the array credentials, are now encrypted and safe. + +How to execute the playbook with the vaulted properties file: +The playbooks can be run by adding ```--ask-vault-pass``` to the playbook execution command +``` +ansible-playbook -i hosts install_script.yml --ask-vault-pass +``` diff --git a/ansible_3par_docker_plugin/files/dory/doryd b/ansible_3par_docker_plugin/files/dory/doryd deleted file mode 100644 index 52ef4e3d..00000000 Binary files a/ansible_3par_docker_plugin/files/dory/doryd and /dev/null differ diff --git a/ansible_3par_docker_plugin/files/dory/hpe b/ansible_3par_docker_plugin/files/dory/hpe deleted file mode 100644 index 859e37a7..00000000 Binary files a/ansible_3par_docker_plugin/files/dory/hpe and /dev/null differ diff --git a/ansible_3par_docker_plugin/files/dory/hpe.json b/ansible_3par_docker_plugin/files/dory/hpe.json deleted file mode 100644 index 86ecd377..00000000 --- a/ansible_3par_docker_plugin/files/dory/hpe.json +++ /dev/null @@ -1,8 +0,0 @@ -{ - "dockerVolumePluginSocketPath": "/run/docker/plugins/hpe.sock", - "logDebug": true, - "supportsCapabilities": true, - "stripK8sFromOptions": true, - "createVolumes": true, - "listOfStorageResourceOptions": [ "size" ] -} diff --git a/ansible_3par_docker_plugin/files/dory/install
b/ansible_3par_docker_plugin/files/dory/install deleted file mode 100644 index 437058aa..00000000 --- a/ansible_3par_docker_plugin/files/dory/install +++ /dev/null @@ -1,29 +0,0 @@ -#!/bin/bash - -#TODO dump the EULA using more or less - -while true; do - read -p "Proceed with installation of dory/doryd binaries ? " yn - case $yn in - [Yy]* ) break;; - [Nn]* ) exit;; - * ) echo "Please answer yes or no.";; - esac -done - -installDir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/ - -if [ -d ${installDir} ] -then - echo "Upgrading flexvolumedriver." -else - echo "Installing flexvolumedriver." - mkdir -p $installDir -fi - -mv ./* "${installDir}" - -rm ${installDir}/install - -exit 0 - diff --git a/ansible_3par_docker_plugin/hosts b/ansible_3par_docker_plugin/hosts index 3ba4beb2..a4c54b3d 100644 --- a/ansible_3par_docker_plugin/hosts +++ b/ansible_3par_docker_plugin/hosts @@ -1,7 +1,7 @@ -[all] -192.168.1.51 -192.168.1.52 -192.168.1.53 +#Enable and populate the proxies here +#[all:vars] +#http_proxy= +#https_proxy= [masters] 192.168.1.51 diff --git a/ansible_3par_docker_plugin/hosts_standalone_nodes b/ansible_3par_docker_plugin/hosts_standalone_nodes new file mode 100644 index 00000000..eb37af00 --- /dev/null +++ b/ansible_3par_docker_plugin/hosts_standalone_nodes @@ -0,0 +1,8 @@ +[all] +10.10.10.1 +10.10.10.2 +10.10.10.3 + +#[all:vars] +#http_proxy= +#https_proxy= diff --git a/ansible_3par_docker_plugin/install_hpe_3par_volume_driver.yml b/ansible_3par_docker_plugin/install_hpe_3par_volume_driver.yml index 01acef5c..b0308d1c 100644 --- a/ansible_3par_docker_plugin/install_hpe_3par_volume_driver.yml +++ b/ansible_3par_docker_plugin/install_hpe_3par_volume_driver.yml @@ -1,8 +1,48 @@ --- -- name: Set MountFlags in docker service - hosts: all +- name: Install sshpass locally + hosts: localhost + become: root + environment: + http_proxy: "{{ http_proxy | default('')}}" + https_proxy: "{{ https_proxy | default('') }}" + tasks: + - name: install sshpass + 
package: + name: sshpass + state: present + +- name: Install prerequisites + hosts: masters,workers,etcd become: root + environment: + http_proxy: "{{ http_proxy | default('') }}" + https_proxy: "{{ https_proxy | default('') }}" + tasks: + - name: load plugin settings + include_vars: 'properties/plugin_configuration_properties.yml' + + - name: Install prerequisites + include: tasks/install_prerequisites_on_all.yml + + - name: Install prerequisites + include: tasks/install_prerequisites.yml +- name: Install prerequisites + hosts: etcd + become: root + environment: + http_proxy: "{{ http_proxy | default('') }}" + https_proxy: "{{ https_proxy | default('') }}" + tasks: + - name: load plugin settings + include_vars: 'properties/plugin_configuration_properties.yml' + + - name: Install prerequisites + include: tasks/install_prerequisites_on_all.yml + +- name: Set MountFlags in docker service + hosts: masters,workers,etcd + become: root tasks: - name: Configure docker service include: tasks/configure_docker_service.yml @@ -15,26 +55,25 @@ - name: Create etcd cluster for 3PAR Docker Volume plugin include: tasks/create_etcd_container.yml - - name: Install HPE 3PAR Volume Driver for Kubernetes/OpenShift - hosts: all + hosts: masters,workers,etcd become: root vars: driver_path: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/" - vars_prompt: - - name: "storage_config" - prompt: "Configure for FC or iSCSI? Type: 'fc' or 'iscsi'." 
- private: no - default: "fc" - tasks: - name: Configure multipath include: tasks/configure_multipath.yml - - name: Copy hpe.conf - include: tasks/copy_hpe_conf.yml + - name: load plugin settings + include_vars: 'properties/plugin_configuration_properties.yml' + + - name: load etcd settings + include_vars: 'properties/etcd_cluster_properties.yml' + + - name: Create hpe.conf + include: tasks/create_conf_file.yml - name: Create 3PAR Docker Volume plugin include: tasks/create_3par_docker_volume_plugin.yml @@ -42,6 +81,25 @@ - name: Create the hpe_sock files include: tasks/hpe_sock.yml +- name: Copy config file into admin.conf + hosts: masters + become: root + + vars: + driver_path: "/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/" + + tasks: + - name: Copy config file into admin.conf + include: tasks/copy_doryd_config.yml + +- name: Deploy FlexVolume drivers + hosts: masters,workers,etcd + become: root + environment: + http_proxy: "{{ http_proxy | default('') }}" + https_proxy: "{{ https_proxy | default('') }}" + + tasks: - name: Deploy FlexVolume drivers include: tasks/deploy_FlexVolume_driver.yml @@ -52,3 +110,4 @@ tasks: - name: Start Dynamic Provisioner (doryd) on Master node include: tasks/configure_doryd_service.yml + diff --git a/ansible_3par_docker_plugin/install_standalone_hpe_3par_volume_driver.yml b/ansible_3par_docker_plugin/install_standalone_hpe_3par_volume_driver.yml new file mode 100644 index 00000000..84ed8605 --- /dev/null +++ b/ansible_3par_docker_plugin/install_standalone_hpe_3par_volume_driver.yml @@ -0,0 +1,153 @@ +- hosts: localhost + become: root + environment: + http_proxy: "{{ http_proxy | default('')}}" + https_proxy: "{{ https_proxy | default('')}}" + tasks: + - name: install sshpass + package: + name: sshpass + state: present + +- hosts: all + + environment: + http_proxy: "{{ http_proxy | default('')}}" + https_proxy: "{{ https_proxy | default('')}}" + + tasks: + + - name: load plugin settings + include_vars: 
'properties/plugin_configuration_properties.yml' + + - name: Install prerequisites + include: tasks/install_prerequisites_on_all.yml + + - name: Install prerequisites + include: tasks/install_prerequisites.yml + + - name: Install packages on Ubuntu + package: + name: "{{ item }}" + state: present + with_items: + - open-iscsi + - multipath-tools + when: ansible_distribution == 'Ubuntu' + become: yes + + - name: Install packages on CentOS/RedHat + package: + name: "{{ item }}" + state: present + with_items: + - iscsi-initiator-utils + - device-mapper-multipath + when: ansible_distribution == 'CentOS' or ansible_distribution == 'RedHat' + become: yes + + - name: Copy multipath configuration file + copy: + src: multipath.conf + dest: /etc/multipath.conf + when: ansible_distribution == 'CentOS' or ansible_distribution == 'RedHat' + become: yes + + - name: Change MountFlags + ini_file: + dest: /usr/lib/systemd/system/docker.service + section: Service + option: MountFlags + value: shared + no_extra_spaces: true + backup: yes + when: ansible_distribution == 'CentOS' or ansible_distribution == 'RedHat' + become: yes + + - name: Reload systemd daemon + systemd: + daemon_reload: yes + become: yes + + - name: Restart Services + systemd: + name: "{{ item }}" + state: restarted + with_items: + - open-iscsi + - multipath-tools + - docker + when: ansible_distribution == 'Ubuntu' + become: yes + + - name: Restart Services + systemd: + name: docker.service + state: restarted + when: ansible_distribution == 'CentOS' or ansible_distribution == 'RedHat' + become: yes + + - name: Enable the services + systemd: + name: "{{ item }}" + state: started + enabled: True + with_items: + - iscsid + - multipathd + when: ansible_distribution == 'CentOS' or ansible_distribution == 'RedHat' + become: yes + + - name: load etcd settings + include_vars: 'properties/etcd_cluster_properties.yml' + + - name: run etcd container + docker_container: + name: etcd + image: "{{ etcd_image }}" + state: started +
+      detach: true
+      ports:
+        - "{{ etcd_peer_port }}:{{ etcd_peer_port }}"
+        - "{{ etcd_client_port_1 }}:{{ etcd_client_port_1 }}"
+        - "{{ etcd_client_port_2 }}:{{ etcd_client_port_2 }}"
+      env:
+        ETCD_NAME: etcd0
+        ETCD_ADVERTISE_CLIENT_URLS: "{{ etcd_advertise_client_url_1 }},{{ etcd_advertise_client_url_2 }}"
+        ETCD_LISTEN_CLIENT_URLS: "{{ etcd_listen_client_url_1 }},{{ etcd_listen_client_url_2 }}"
+        ETCD_INITIAL_ADVERTISE_PEER_URLS: "{{ etcd_initial_advertise_peer_urls }}"
+        ETCD_LISTEN_PEER_URLS: "{{ etcd_listen_peer_urls }}"
+        ETCD_INITIAL_CLUSTER: "etcd0=http://{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}:{{ etcd_peer_port }}"
+        ETCD_INITIAL_CLUSTER_TOKEN: "{{ etcd_initial_cluster_token }}"
+        ETCD_INITIAL_CLUSTER_STATE: "{{ etcd_initial_cluster_state }}"
+      restart_policy: always
+    become: yes
+
+  - name: Create conf file
+    include: tasks/create_conf_file.yml
+
+  - name: create hpedockerplugin container
+    docker_container:
+      name: plugin_container
+      image: "{{ INVENTORY['DEFAULT']['volume_plugin'] }}"
+      privileged: true
+      network_mode: host
+      state: started
+      detach: true
+      volumes:
+        - /dev:/dev
+        - /run/lock:/run/lock
+        - /var/lib:/var/lib
+        - /var/run/docker/plugins:/var/run/docker/plugins:rw
+        - /etc:/etc
+        - /root/.ssh:/root/.ssh
+        - /sys:/sys
+        - /root/plugin/certs:/root/plugin/certs
+        - /sbin/iscsiadm:/sbin/ia
+        - /lib/modules:/lib/modules
+        - /lib64:/lib64
+        - /var/run/docker.sock:/var/run/docker.sock
+        - /opt/hpe/data:/opt/hpe/data:rshared
+      restart_policy: on-failure
+    become: yes
+
diff --git a/ansible_3par_docker_plugin/properties/etcd_cluster_properties.yml b/ansible_3par_docker_plugin/properties/etcd_cluster_properties.yml
index 8dec0810..61de0903 100644
--- a/ansible_3par_docker_plugin/properties/etcd_cluster_properties.yml
+++ b/ansible_3par_docker_plugin/properties/etcd_cluster_properties.yml
@@ -29,5 +29,4 @@ etcd_listen_client_url_2: "{{ etcd_url_scheme }}://0.0.0.0:{{ etcd_client_port_2
 etcd_initial_advertise_peer_urls: "{{ etcd_url_scheme }}://{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}:{{ etcd_peer_port }}"
 etcd_listen_peer_urls: "{{ etcd_url_scheme }}://0.0.0.0:{{ etcd_peer_port }}"
 etcd_initial_cluster_token: etcd-cluster-1
-etcd_initial_cluster: "{{ hostvars[groups['etcd'][0]]['ansible_default_ipv4']['address'] }}={{ etcd_url_scheme }}://{{ hostvars[groups['etcd'][0]]['ansible_default_ipv4']['address'] }}:{{ etcd_peer_port }},{{ hostvars[groups['etcd'][1]]['ansible_default_ipv4']['address'] }}={{ etcd_url_scheme }}://{{ hostvars[groups['etcd'][1]]['ansible_default_ipv4']['address'] }}:{{ etcd_peer_port }},{{ hostvars[groups['etcd'][2]]['ansible_default_ipv4']['address'] }}={{ etcd_url_scheme }}://{{ hostvars[groups['etcd'][2]]['ansible_default_ipv4']['address'] }}:{{ etcd_peer_port }}"
 etcd_initial_cluster_state: "new"
diff --git a/ansible_3par_docker_plugin/properties/etcd_properties.yml b/ansible_3par_docker_plugin/properties/etcd_properties.yml
deleted file mode 100644
index fc5fabdf..00000000
--- a/ansible_3par_docker_plugin/properties/etcd_properties.yml
+++ /dev/null
@@ -1,35 +0,0 @@
-#---
-# defaults for etcd
-etcd_version: 'v2.2.0'
-etcd_image: "quay.io/coreos/etcd:{{ etcd_version }}"
-
-# Default etcd "docker run" command per HPE 3PAR Docker Volume plugin
-#-------------------------------------------------------------------------------------------------------------------
-# docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 40010:40010 -p 23800:23800 -p 23790:23790 \
-#   --name etcd quay.io/coreos/etcd:v2.2.0 \
-#   -name etcd0 \
-#   -advertise-client-urls http://${HostIP}:23790,http://${HostIP}:40010 \
-#   -listen-client-urls http://0.0.0.0:23790,http://0.0.0.0:40010 \
-#   -initial-advertise-peer-urls http://${HostIP}:23800 \
-#   -listen-peer-urls http://0.0.0.0:23800 \
-#   -initial-cluster-token etcd-cluster-1 \
-#   -initial-cluster etcd0=http://${HostIP}:23800 \
-#   -initial-cluster-state new
-
-etcd_client_port_1: 23790
-etcd_client_port_2: 40010
-etcd_url_scheme: http
-etcd_peer_port: 23800
-etcd_initial_cluster_name: etcd0
-
-#etcd_name: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"
-etcd_name: "{{ etcd_initial_cluster_name }}"
-etcd_advertise_client_url_1: "{{ etcd_url_scheme }}://{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}:{{ etcd_client_port_1 }}"
-etcd_advertise_client_url_2: "{{ etcd_url_scheme }}://{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}:{{ etcd_client_port_2 }}"
-etcd_listen_client_url_1: "{{ etcd_url_scheme }}://0.0.0.0:{{ etcd_client_port_1 }}"
-etcd_listen_client_url_2: "{{ etcd_url_scheme }}://0.0.0.0:{{ etcd_client_port_2 }}"
-etcd_initial_advertise_peer_urls: "{{ etcd_url_scheme }}://{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}:{{ etcd_peer_port }}"
-etcd_listen_peer_urls: "{{ etcd_url_scheme }}://0.0.0.0:{{ etcd_peer_port }}"
-etcd_initial_cluster_token: etcd-cluster-1
-etcd_initial_cluster: "{{ etcd_initial_cluster_name }}={{ etcd_url_scheme }}://{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}:{{ etcd_peer_port }}"
-etcd_initial_cluster_state: "new"
diff --git a/ansible_3par_docker_plugin/properties/plugin_configuration_properties_sample.yml b/ansible_3par_docker_plugin/properties/plugin_configuration_properties_sample.yml
new file mode 100644
index 00000000..85117476
--- /dev/null
+++ b/ansible_3par_docker_plugin/properties/plugin_configuration_properties_sample.yml
@@ -0,0 +1,68 @@
+INVENTORY:
+  DEFAULT:
+#Mandatory Parameters-----------------------------------------------------------------------------------
+
+    # Specify the port to be used by HPE 3PAR plugin etcd cluster
+    host_etcd_port_number: 23790
+    # Plugin Driver - iSCSI
+    hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
+    hpe3par_ip: 192.168.1.50
+    hpe3par_username: 3paradm
+    hpe3par_password: 3pardata
+    hpe3par_cpg: FC_r6
+
+    # Plugin version - Required only in DEFAULT backend
+    volume_plugin: hpestorage/legacyvolumeplugin:3.0
+
+#Optional Parameters------------------------------------------------------------------------------------
+
+    # Uncomment to encrypt passwords in hpe.conf using defined passphrase
+    #encryptor_key: < encrypt_key1 >
+
+    #ssh_hosts_key_file: '/root/.ssh/known_hosts'
+    logging: DEBUG
+    #hpe3par_debug: True
+    #suppress_requests_ssl_warning: True
+    #hpe3par_snapcpg: FC_r6
+    #hpe3par_iscsi_chap_enabled: True
+    #use_multipath: False
+    #enforce_multipath: False
+    #vlan_tag: True
+
+#Optional Replication Parameters------------------------------------------------------------------------
+    #replication_device:
+    #  backend_id: remote_3PAR
+    #  replication_mode: synchronous
+    #  cpg_map: "local_CPG:remote_CPG"
+    #  snap_cpg_map: "local_copy_CPG:remote_copy_CPG"
+    #  hpe3par_ip: 192.168.2.50
+    #  hpe3par_username: 3paradm
+    #  hpe3par_password: 3pardata
+    #  vlan_tag: False
+
+#Additional Backend (Optional)--------------------------------------------------------------------------
+
+  3PAR1:
+#Mandatory Parameters-----------------------------------------------------------------------------------
+
+    # Specify the port to be used by HPE 3PAR plugin etcd cluster
+    host_etcd_port_number: 23790
+    # Plugin Driver - Fibre Channel
+    hpedockerplugin_driver: hpedockerplugin.hpe.hpe_3par_fc.HPE3PARFCDriver
+    hpe3par_ip: 192.168.2.50
+    hpe3par_username: 3paradm
+    hpe3par_password: 3pardata
+    hpe3par_cpg: FC_r6
+
+#Optional Parameters------------------------------------------------------------------------------------
+
+    # Uncomment to encrypt passwords in hpe.conf using defined passphrase
+    #encryptor_key: < encrypt_key2 >
+
+    #ssh_hosts_key_file: '/root/.ssh/known_hosts'
+    logging: DEBUG
+    #hpe3par_debug: True
+    #suppress_requests_ssl_warning: True
+    hpe3par_snapcpg: FC_r6
+    #use_multipath: False
+    #enforce_multipath: False
diff --git a/ansible_3par_docker_plugin/properties/sample_etcd_properties.yml b/ansible_3par_docker_plugin/properties/sample_etcd_properties.yml
deleted file mode 100644
index 839128c2..00000000
--- a/ansible_3par_docker_plugin/properties/sample_etcd_properties.yml
+++ /dev/null
@@ -1,52 +0,0 @@
----
-# defaults file for zookeeper
-etcd_version: 'latest'
-etcd_image: "quay.io/coreos/etcd:{{etcd_version}}"
-etcd_peers_group: etcd_servers
-
-#docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 -p 2380:2380 -p 2379:2379 \
-#  --name etcd quay.io/coreos/etcd \
-#  -name etcd0 \
-#  -advertise-client-urls http://${HOST_1}:2379,http://${HOST_1}:4001 \
-#  -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \
-#  -initial-advertise-peer-urls http://${HOST_1}:2380 \
-#  -listen-peer-urls http://0.0.0.0:2380 \
-#  -initial-cluster-token etcd-cluster-1 \
-#  -initial-cluster etcd0=http://${HOST_1}:2380,etcd1=http://${HOST_2}:2380,etcd2=http://${HOST_3}:2380 \
-#  -initial-cluster-state new
-
-etcd_client_port: 2379
-etcd_url_scheme: http
-etcd_peer_port: 2380
-etcd_interface: eth0
-etcd_client_interface: "{{ etcd_interface }}"
-etcd_peer_interface: "{{ etcd_interface }}"
-
-etcd_name: "{{ hostvars[inventory_hostname]['ansible_' + etcd_peer_interface]['ipv4']['address'] }}"
-etcd_advertise_client_urls: "{{ etcd_client_url_scheme }}://{{ hostvars[inventory_hostname]['ansible_' + etcd_client_interface]['ipv4']['address'] }}:{{ etcd_client_port }}"
-#etcd_listen_client_urls: "{{ etcd_client_url_scheme }}://{{ hostvars[inventory_hostname]['ansible_' + etcd_client_interface]['ipv4']['address'] }}:{{ etcd_client_port }}"
-etcd_listen_client_urls: "{{ etcd_client_url_scheme }}://0.0.0.0:{{ etcd_client_port }}"
-etcd_client_url_scheme: "{{ etcd_url_scheme }}"
-etcd_peer_url_scheme: "{{ etcd_url_scheme }}"
-etcd_initial_advertise_peer_urls: "{{ etcd_peer_url_scheme }}://{{ hostvars[inventory_hostname]['ansible_' + etcd_peer_interface]['ipv4']['address'] }}:{{ etcd_peer_port }}"
-#etcd_listen_peer_urls: "{{ etcd_peer_url_scheme }}://{{ hostvars[inventory_hostname]['ansible_' + etcd_peer_interface]['ipv4']['address'] }}:{{ etcd_peer_port }}"
-etcd_listen_peer_urls: "{{ etcd_peer_url_scheme }}://0.0.0.0:{{ etcd_peer_port }}"
-etcd_initial_cluster_token: etcd-cluster-1
-
-etcd_initial_cluster: "
-  {%- if etcd_peers is defined -%}
-    {% for host in etcd_peers %}{{ host }}={{ etcd_peer_url_scheme }}://{{ host }}:{{ etcd_peer_port }}{% if not loop.last %},{% endif %}{% endfor %}
-  {%- else -%}
-    {% for host in groups[etcd_peers_group] -%}{{ host }}={{ etcd_peer_url_scheme }}://{{ hostvars[host]['ansible_' + etcd_peer_interface]['ipv4']['address'] }}:{{ etcd_peer_port }}{% if not loop.last %},{% endif %}{% endfor %}
-  {%- endif -%}
-"
-
-#etcd_initial_cluster: "{% for host in groups[etcd_peers_group] -%}
-#  {% if loop.last -%}
-#{{ host }}={{ etcd_peer_url_scheme }}://{{ hostvars[host]['ansible_' + etcd_peer_interface]['ipv4']['address'] }}:{{ etcd_peer_port }}
-#  {%- else -%}
-#{{ host }}={{ etcd_peer_url_scheme }}://{{ hostvars[host]['ansible_' + etcd_peer_interface]['ipv4']['address'] }}:{{ etcd_peer_port }},
-#  {%- endif -%}
-#{% endfor -%}"
-
-etcd_initial_cluster_state: "new"
diff --git a/ansible_3par_docker_plugin/samples/pod_static_exmaple.yml b/ansible_3par_docker_plugin/samples/pod_static_exmaple.yml
new file mode 100644
index 00000000..85741c92
--- /dev/null
+++ b/ansible_3par_docker_plugin/samples/pod_static_exmaple.yml
@@ -0,0 +1,28 @@
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-first
+spec:
+  containers:
+    - name: minio
+      image: minio/minio:latest
+      args:
+        - server
+        - /export
+      env:
+        - name: MINIO_ACCESS_KEY
+          value: minio
+        - name: MINIO_SECRET_KEY
+          value: doryspeakswhale
+      ports:
+        - containerPort: 9000
+      volumeMounts:
+        - name: export
+          mountPath: /export
+  volumes:
+    - name: export
+      persistentVolumeClaim:
+        claimName: pvc-first
+  nodeSelector:
+    node: master
diff --git a/ansible_3par_docker_plugin/samples/pv_static_example.yml b/ansible_3par_docker_plugin/samples/pv_static_example.yml
new file mode 100644
index 00000000..b0521a8a
--- /dev/null
+++ b/ansible_3par_docker_plugin/samples/pv_static_example.yml
@@ -0,0 +1,14 @@
+---
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: pv-first
+spec:
+  capacity:
+    storage: 10Gi
+  accessModes:
+    - ReadWriteOnce
+  flexVolume:
+    driver: hpe.com/hpe
+    options:
+      size: "10"
diff --git a/ansible_3par_docker_plugin/samples/pvc_static_example.yml b/ansible_3par_docker_plugin/samples/pvc_static_example.yml
new file mode 100644
index 00000000..107a0b66
--- /dev/null
+++ b/ansible_3par_docker_plugin/samples/pvc_static_example.yml
@@ -0,0 +1,11 @@
+---
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+  name: pvc-first
+spec:
+  accessModes:
+    - ReadWriteOnce
+  resources:
+    requests:
+      storage: 10Gi
diff --git a/ansible_3par_docker_plugin/tasks/configure_docker_service.yml b/ansible_3par_docker_plugin/tasks/configure_docker_service.yml
index c1199957..a2f6b54f 100644
--- a/ansible_3par_docker_plugin/tasks/configure_docker_service.yml
+++ b/ansible_3par_docker_plugin/tasks/configure_docker_service.yml
@@ -1,10 +1,18 @@
 ---
 - name: Change MountFlags
-  ini_file: dest=/usr/lib/systemd/system/docker.service section=Service option=MountFlags value=shared no_extra_spaces=True backup=yes
+  ini_file:
+    dest: /usr/lib/systemd/system/docker.service
+    section: Service
+    option: MountFlags
+    value: shared
+    no_extra_spaces: True
+    backup: yes
   tags: configuration
+  register: mount_flag_result

 - name: restart docker service, also issue daemon-reload to pick up config changes
   systemd:
     state: restarted
     daemon_reload: yes
     name: docker.service
+  when: mount_flag_result.changed
diff --git a/ansible_3par_docker_plugin/tasks/configure_multipath.yml b/ansible_3par_docker_plugin/tasks/configure_multipath.yml
index 1321c351..c02ce5d5 100644
--- a/ansible_3par_docker_plugin/tasks/configure_multipath.yml
+++ b/ansible_3par_docker_plugin/tasks/configure_multipath.yml
@@ -1,9 +1,8 @@
 ---
 - name: install multipath dependencies
-  yum: pkg={{ item }} state=installed
-  with_items:
-    - iscsi-initiator-utils
-    - device-mapper-multipath
+  yum:
+    pkg: iscsi-initiator-utils,device-mapper-multipath
+    state: installed

 - name: configure multipath.conf
   copy:
diff --git a/ansible_3par_docker_plugin/tasks/copy_doryd_config.yml b/ansible_3par_docker_plugin/tasks/copy_doryd_config.yml
new file mode 100644
index 00000000..f6729ede
--- /dev/null
+++ b/ansible_3par_docker_plugin/tasks/copy_doryd_config.yml
@@ -0,0 +1,52 @@
+---
+  - name: load etcd settings
+    include_vars: '../properties/etcd_cluster_properties.yml'
+
+  - name: Create FlexVolume driver directory
+    file:
+      path: "{{ item }}"
+      state: directory
+      mode: 0644
+      recurse: yes
+    with_items:
+      - "{{ driver_path }}"
+      - /etc/kubernetes/
+
+  - local_action: file path=/tmp/config state=absent
+
+  - name: Check that the config file exists
+    stat:
+      path: /root/.kube/config
+    register: config_stat_result
+
+  - name: Check that the admin.conf file exists
+    stat:
+      path: /etc/kubernetes/admin.conf
+    register: admin_stat_result
+    when: inventory_hostname in groups['masters'][0]
+
+  - fail:
+      msg: "The config file does not exist either at /root/.kube/config or /etc/kubernetes/admin.conf"
+    when: inventory_hostname in groups['masters'][0] and config_stat_result.stat.exists == False and admin_stat_result.stat.exists == False
+
+  - name: Copy over the kube config file into /tmp
+    fetch:
+      src: /root/.kube/config
+      dest: /tmp/config
+      flat: yes
+    when: inventory_hostname in groups['masters'][0] and config_stat_result.stat.exists
+
+  - name: Copy over the admin.conf file into /tmp
+    fetch:
+      src: /etc/kubernetes/admin.conf
+      dest: /tmp/config
+      flat: yes
+    when: inventory_hostname in groups['masters'][0] and admin_stat_result.stat.exists
+
+  - name: Verify /etc/kubernetes/admin.conf exists
+    copy:
+      src: /tmp/config
+      dest: /etc/kubernetes/admin.conf
+      owner: "root"
+      mode: 0755
+
diff --git a/ansible_3par_docker_plugin/tasks/create_3par_docker_volume_plugin.yml b/ansible_3par_docker_plugin/tasks/create_3par_docker_volume_plugin.yml
index 9a421ee7..063160b7 100644
--- a/ansible_3par_docker_plugin/tasks/create_3par_docker_volume_plugin.yml
+++ b/ansible_3par_docker_plugin/tasks/create_3par_docker_volume_plugin.yml
@@ -1,12 +1,17 @@
 ---
+  - name: Set the mount prefix
+    set_fact:
+      mount_prefix: "{{ INVENTORY['DEFAULT']['mount_prefix'] + ':' + INVENTORY['DEFAULT']['mount_prefix'] + ':rshared' if INVENTORY['DEFAULT']['mount_prefix'] is defined else '/opt/hpe/data:/opt/hpe/data:rshared' }}"
+
   - name: create hpedockerplugin container
     docker_container:
       name: plugin_container
-      image: hpestorage/legacyvolumeplugin:2.1
+      image: "{{ INVENTORY['DEFAULT']['volume_plugin'] }}"
       privileged: true
       network_mode: host
       state: started
       detach: true
+      pull: yes
       volumes:
         - /dev:/dev
         - /run/lock:/run/lock
@@ -20,5 +25,6 @@
         - /lib/modules:/lib/modules
         - /lib64:/lib64
         - /var/run/docker.sock:/var/run/docker.sock
-        - /opt/hpe/data:/opt/hpe/data:rshared
+        - "{{ mount_prefix }}"
     restart_policy: on-failure
+
diff --git a/ansible_3par_docker_plugin/tasks/create_conf_file.yml b/ansible_3par_docker_plugin/tasks/create_conf_file.yml
new file mode 100644
index 00000000..99e1d8c5
--- /dev/null
+++ b/ansible_3par_docker_plugin/tasks/create_conf_file.yml
@@ -0,0 +1,343 @@
+---
+  - name: Stop docker container
+    docker_container:
+      name: plugin_container
+      image: "{{ INVENTORY['DEFAULT']['volume_plugin'] }}"
+      state: stopped
+
+  - name: remove the existing configuration file
+    file:
+      path: /etc/hpedockerplugin/hpe.conf
+      state: absent
+
+  - name: Populate the etcd cluster IP in hpe.conf
+    ini_file:
+      path: /etc/hpedockerplugin/hpe.conf
+      section: "{{ item }}"
+      option: 'host_etcd_ip_address'
+      value: "{{ (':' + etcd_client_port_1 | string + ',').join(groups['etcd']) + ':' + etcd_client_port_1 | string }}"
+      no_extra_spaces: true
+    with_items:
+      - "{{ INVENTORY.keys()}}"
+    become: yes
+    when: groups['etcd'] is defined and groups['etcd'] | length > 1
+
+  - name: Populate the etcd IP in hpe.conf
+    ini_file:
+      path: /etc/hpedockerplugin/hpe.conf
+      section: "{{ item }}"
+      option: 'host_etcd_ip_address'
+      value: "{{ groups['etcd'][0] }}"
+      no_extra_spaces: true
+    with_items:
+      - "{{ INVENTORY.keys()}}"
+    become: yes
+    when: groups['etcd'] is defined and groups['etcd'] | length == 1
+
+  - name: Populate the etcd IP in hpe.conf
+    ini_file:
+      path: /etc/hpedockerplugin/hpe.conf
+      section: "{{ item }}"
+      option: 'host_etcd_ip_address'
+      value: "{{ hostvars[inventory_hostname]['ansible_default_ipv4']['address'] }}"
+      no_extra_spaces: true
+    with_items:
+      - "{{ INVENTORY.keys()}}"
+    become: yes
+    when: groups['etcd'] is not defined
+
+  - name: get the iscsi ports
+    local_action: >-
+      shell /usr/bin/sshpass -p {{ INVENTORY[item]['hpe3par_password'] }} ssh -oStrictHostKeyChecking=no {{ INVENTORY[item]['hpe3par_username'] }}@{{ INVENTORY[item]['hpe3par_ip'] }} "showport -iscsi" | grep ready | awk '{print $3}'
+    with_items: "{{ INVENTORY.keys() }}"
+    register: primary_iscsi_ports
+    when: >-
+      INVENTORY[item]['hpe3par_iscsi_ips'] is not defined and
+      INVENTORY[item]['hpedockerplugin_driver'] == 'hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver' and
+      not (INVENTORY[item]['vlan_tag'] is defined and INVENTORY[item]['vlan_tag'])
+    no_log: True
+
+  - name: get the iscsi ports (vlan tagged)
+    local_action: >-
+      shell /usr/bin/sshpass -p {{ INVENTORY[item]['hpe3par_password'] }} ssh -oStrictHostKeyChecking=no {{ INVENTORY[item]['hpe3par_username'] }}@{{ INVENTORY[item]['hpe3par_ip'] }} "showport -iscsivlans" | grep -v '-' | awk '{print $3}' | sed -n '1!p'
+    with_items: "{{ INVENTORY.keys() }}"
+    register: primary_iscsi_ports_vlan
+    when: >-
+      INVENTORY[item]['hpe3par_iscsi_ips'] is not defined and
+      INVENTORY[item]['hpedockerplugin_driver'] == 'hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver' and
+      INVENTORY[item]['vlan_tag'] is defined and INVENTORY[item]['vlan_tag']
+    no_log: True
+
+  - name: Creates HPE Docker plugin directory
+    file:
+      path: /etc/hpedockerplugin
+      state: directory
+      mode: 0644
+      recurse: yes
+    become: yes
+
+  - name: Populate mandatory parameters in hpe.conf
+    ini_file:
+      path: /etc/hpedockerplugin/hpe.conf
+      section: "{{ item[0] }}"
+      option: "{{ item[1]['option'] }}"
+      value: "{{ INVENTORY[item[0]][item[1].value]}}"
+      no_extra_spaces: true
+    with_nested:
+      - "{{ INVENTORY.keys()}}"
+      - [
+          { option: 'hpe3par_username', value: 'hpe3par_username' },
+          { option: 'hpe3par_password', value: 'hpe3par_password' },
+          { option: 'hpe3par_cpg', value: 'hpe3par_cpg' },
+          { option: 'hpedockerplugin_driver', value: 'hpedockerplugin_driver' }
+        ]
+    become: yes
+
+  - name: Populate etcd port parameter in hpe.conf
+    ini_file:
+      path: /etc/hpedockerplugin/hpe.conf
+      section: "{{ item }}"
+      option: host_etcd_port_number
+      value: "{{ etcd_client_port_1 }}"
+      no_extra_spaces: true
+    with_items:
+      - "{{ INVENTORY.keys()}}"
+    become: yes
+
+  - name: Populate SSH host key parameters in hpe.conf
+    ini_file:
+      path: /etc/hpedockerplugin/hpe.conf
+      section: "{{ item }}"
+      option: ssh_hosts_key_file
+      value: "{{ INVENTORY[item]['ssh_hosts_key_file'] | default('/root/.ssh/known_hosts')}}"
+      no_extra_spaces: true
+    with_items:
+      - "{{ INVENTORY.keys()}}"
+    become: yes
+
+  - name: Populate optional parameters in hpe.conf
+    ini_file:
+      path: /etc/hpedockerplugin/hpe.conf
+      section: "{{ item[0] }}"
+      option: "{{ item[1]['option'] }}"
+      value: "{{ INVENTORY[item[0]][item[1].value] | default(None)}}"
+      no_extra_spaces: true
+    with_nested:
+      - "{{ INVENTORY.keys()}}"
+      - [{ option: 'logging', value: 'logging' },
+         { option: 'hpe3par_debug', value: 'hpe3par_debug' },
+         { option: 'suppress_requests_ssl_warning', value: 'suppress_requests_ssl_warning' },
+         { option: 'hpe3par_snapcpg', value: 'hpe3par_snapcpg' },
+         { option: 'hpe3par_iscsi_chap_enabled', value: 'hpe3par_iscsi_chap_enabled' },
+         { option: 'use_multipath', value: 'use_multipath' },
+         { option: 'enforce_multipath', value: 'enforce_multipath' },
+         { option: 'san_ip', value: 'hpe3par_ip' },
+         { option: 'san_login', value: 'hpe3par_username' },
+         { option: 'san_password', value: 'hpe3par_password' },
+         { option: 'backend_id', value: 'backend_id' }
+        ]
+    become: yes
+    when: item.1.value in INVENTORY[item.0].keys()
+
+  - name: Populate optional parameters in hpe.conf
+    ini_file:
+      path: /etc/hpedockerplugin/hpe.conf
+      section: DEFAULT
+      option: mount_prefix
+      value: "{{ INVENTORY['DEFAULT']['mount_prefix'] }}"
+      no_extra_spaces: true
+    when: INVENTORY['DEFAULT']['mount_prefix'] is defined
+
+  - name: Populate WSAPI URL in hpe.conf
+    ini_file:
+      path: /etc/hpedockerplugin/hpe.conf
+      section: "{{ item[0] }}"
+      option: "{{ item[1]['option'] }}"
+      value: "{{ 'https://' + INVENTORY[item[0]][item[1].value] + ':8080/api/v1' }}"
+      no_extra_spaces: true
+    with_nested:
+      - "{{ INVENTORY.keys()}}"
+      - [{ option: 'hpe3par_api_url', value: 'hpe3par_ip' }]
+    become: yes
+
+  - name: Populate iscsi IPs in hpe.conf
+    ini_file:
+      path: /etc/hpedockerplugin/hpe.conf
+      section: "{{ item['item'] }}"
+      option: 'hpe3par_iscsi_ips'
+      value: "{{ ','.join(item['stdout_lines']) }}"
+      no_extra_spaces: true
+    with_items:
+      - "{{ primary_iscsi_ports.results }}"
+    become: yes
+    when: >-
+      INVENTORY[item['item']]['hpe3par_iscsi_ips'] is not defined and
+      INVENTORY[item['item']]['hpedockerplugin_driver'] == 'hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver' and
+      not (INVENTORY[item['item']]['vlan_tag'] is defined and INVENTORY[item['item']]['vlan_tag'])
+    no_log: True
+
+  - name: Populate iscsi IPs in hpe.conf (vlan tagged)
+    ini_file:
+      path: /etc/hpedockerplugin/hpe.conf
+      section: "{{ item['item'] }}"
+      option: 'hpe3par_iscsi_ips'
+      value: "{{ ','.join(item['stdout_lines']) }}"
+      no_extra_spaces: true
+    with_items:
+      - "{{ primary_iscsi_ports_vlan.results }}"
+    become: yes
+    when: >-
+      INVENTORY[item['item']]['hpe3par_iscsi_ips'] is not defined and
+      INVENTORY[item['item']]['hpedockerplugin_driver'] == 'hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver' and
+      INVENTORY[item['item']]['vlan_tag'] is defined and INVENTORY[item['item']]['vlan_tag']
+    no_log: True
+
+  - name: Populate iscsi IPs in hpe.conf
+    ini_file:
+      path: /etc/hpedockerplugin/hpe.conf
+      section: "{{ item }}"
+      option: 'hpe3par_iscsi_ips'
+      value: "{{ INVENTORY[item]['hpe3par_iscsi_ips'] }}"
+      no_extra_spaces: true
+    with_items:
+      - "{{ INVENTORY.keys()}}"
+    when: INVENTORY[item]['hpe3par_iscsi_ips'] is defined
+
+  - name: Get the encrypted passwords
+    shell: >-
+      hpe3parencryptor --backend {{ item[0] }} -a {{INVENTORY[item[0]]['encryptor_key']}} "{{ INVENTORY[item[0]][(item[1].value)] }}"
+    register: passwords
+    with_nested:
+      - "{{ INVENTORY.keys()}}"
+      - [
+          { option: 'hpe3par_password', value: 'hpe3par_password' },
+          { option: 'san_password', value: 'hpe3par_password' }
+        ]
+    when: INVENTORY[item[0]]['encryptor_key'] is defined
+
+  - name: Populate the encrypted passwords in hpe.conf
+    ini_file:
+      path: /etc/hpedockerplugin/hpe.conf
+      section: "{{ passwords['results'][item | int]['item'][0] }}"
+      option: "{{ passwords['results'][item | int]['item'][1]['option'] }}"
+      value: "{{ passwords['results'][item | int]['stdout'].split('password: ')[1] }} "
+      no_extra_spaces: true
+    with_items:
+      - "{{ lookup('sequence','start=0 end='+((INVENTORY.keys() | count)*2 -1) |string,wantlist=True) }}"
+    become: yes
+    when: passwords['results'][item | int]['stdout'] is defined
+
+  - name: get the iscsi ports of replication device
+    local_action: >-
+      shell /usr/bin/sshpass -p {{ INVENTORY[item]['replication_device']['hpe3par_password'] }} ssh -oStrictHostKeyChecking=no {{ INVENTORY[item]['replication_device']['hpe3par_username'] }}@{{ INVENTORY[item]['replication_device']['hpe3par_ip'] }} "showport -iscsi" | grep ready | awk '{print $3}'
+    with_items: "{{ INVENTORY.keys() }}"
+    register: secondary_iscsi_ports
+    when: >-
+      INVENTORY[item]['hpedockerplugin_driver'] == 'hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver' and
+      INVENTORY[item]['replication_device'] is defined and
+      INVENTORY[item]['replication_device']['hpe3par_iscsi_ips'] is not defined and
+      not (INVENTORY[item]['replication_device']['vlan_tag'] is defined and
+      INVENTORY[item]['replication_device']['vlan_tag'])
+    no_log: True
+
+  - name: get the iscsi ports of replication device (vlan tagged)
+    local_action: >-
+      shell /usr/bin/sshpass -p {{ INVENTORY[item]['replication_device']['hpe3par_password'] }} ssh -oStrictHostKeyChecking=no {{ INVENTORY[item]['replication_device']['hpe3par_username'] }}@{{ INVENTORY[item]['replication_device']['hpe3par_ip'] }} "showport -iscsivlans" | grep -v '-' | awk '{print $3}' | sed -n '1!p'
+    with_items: "{{ INVENTORY.keys() }}"
+    register: secondary_iscsi_ports_vlan
+    when: >-
+      INVENTORY[item]['hpedockerplugin_driver'] == 'hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver' and
+      INVENTORY[item]['replication_device'] is defined and
+      INVENTORY[item]['replication_device']['hpe3par_iscsi_ips'] is not defined and
+      INVENTORY[item]['replication_device']['vlan_tag'] is defined and
+      INVENTORY[item]['replication_device']['vlan_tag']
+    no_log: True
+
+  - name: Set the replication device information
+    set_fact:
+      backend_id: "backend_id:{{ INVENTORY[item]['replication_device']['backend_id'] }}"
+      replication_mode: "replication_mode:{{ INVENTORY[item]['replication_device']['replication_mode'] }}"
+      cpg_map: "cpg_map:{{ INVENTORY[item]['replication_device']['cpg_map'] }}"
+      snap_cpg_map: "snap_cpg_map:{{ INVENTORY[item]['replication_device']['snap_cpg_map'] }}"
+      hpe3par_username: "hpe3par_username:{{ INVENTORY[item]['replication_device']['hpe3par_username'] }}"
+      hpe3par_api_url: "hpe3par_api_url: https://{{ INVENTORY[item]['replication_device']['hpe3par_ip'] }}:8080/api/v1"
+      san_ip: "san_ip: {{ INVENTORY[item]['replication_device']['hpe3par_ip'] }}"
+      san_login: "san_login: {{ INVENTORY[item]['replication_device']['hpe3par_username'] }}"
+    with_items: "{{ INVENTORY.keys() }}"
+    register: replication_device
+    when: INVENTORY[item]['replication_device'] is defined
+
+  - name: Get the encrypted passwords for replication device
+    shell: >-
+      hpe3parencryptor --backend {{ item }} -a {{INVENTORY[item]['encryptor_key']}} "{{ INVENTORY[item]['replication_device']['hpe3par_password'] }}"
+    register: replication_device_passwords
+    with_items:
+      - "{{ INVENTORY.keys()}}"
+    when: >-
+      INVENTORY[item]['encryptor_key'] is defined and
+      INVENTORY[item]['replication_device'] is defined
+
+  - name: Set the mandatory replication device values
+    set_fact: >-
+      {{ replication_device['results'][item | int]['ansible_facts']['backend_id'] }},
+      {{ replication_device['results'][item | int]['ansible_facts']['replication_mode'] }},
+      {{ replication_device['results'][item | int]['ansible_facts']['cpg_map'] }},
+      {{ replication_device['results'][item | int]['ansible_facts']['snap_cpg_map'] }},
+      {{ replication_device['results'][item | int]['ansible_facts']['hpe3par_username'] }},
+      hpe3par_password:
+      {{ replication_device_passwords['results'][item | int]['stdout'].split('password: ')[1]
+      if
+      INVENTORY[replication_device['results'][item | int]['item']]['encryptor_key'] is defined
+      else
+      INVENTORY[replication_device['results'][item | int]['item']]['replication_device']['hpe3par_password'] }},
+      {{ replication_device['results'][item | int]['ansible_facts']['hpe3par_api_url'] }},
+      {{ replication_device['results'][item | int]['ansible_facts']['san_ip'] }},
+      {{ replication_device['results'][item | int]['ansible_facts']['san_login'] }},
+      san_password: {{ replication_device_passwords['results'][item | int]['stdout'].split('password: ')[1]
+      if
+      INVENTORY[replication_device['results'][item | int]['item']]['encryptor_key'] is defined
+      else
+      INVENTORY[replication_device['results'][item | int]['item']]['replication_device']['hpe3par_password'] }}
+    with_items: "{{ lookup('sequence', 'start=0 end='+((replication_device['results'] | length)-1) |string, wantlist=True) }}"
+    register: mandatory_replication_device_values
+    when: INVENTORY[replication_device['results'][item | int]['item']]['replication_device'] is defined
+
+  - name: Populate the replication device information (ISCSI Driver)
+    ini_file:
+      path: /etc/hpedockerplugin/hpe.conf
+      section: "{{ replication_device['results'][item | int]['item'] }}"
+      option: replication_device
+      value: >-
+        {{ mandatory_replication_device_values['results'][item | int]['ansible_facts']['_raw_params'] }},
+        hpe3par_iscsi_ips:
+        {{ ';'.join(INVENTORY[replication_device['results'][item | int]['item']]['replication_device']['hpe3par_iscsi_ips'].split(','))
+        if
+        INVENTORY[replication_device['results'][item | int]['item']]['replication_device']['hpe3par_iscsi_ips'] is defined
+        else
+        (
+        ';'.join(secondary_iscsi_ports_vlan['results'][item | int]['stdout_lines'])
+        if
+        INVENTORY[replication_device['results'][item | int]['item']]['replication_device']['vlan_tag'] is defined and
+        INVENTORY[replication_device['results'][item | int]['item']]['replication_device']['vlan_tag']
+        else
+        ';'.join(secondary_iscsi_ports['results'][item | int]['stdout_lines'])
+        )
+        }}{{', quorum_witness_ip: ' + INVENTORY[replication_device['results'][item | int]['item']]['replication_device']['quorum_witness_ip'] if INVENTORY[replication_device['results'][item | int]['item']]['replication_device']['quorum_witness_ip'] is defined else ''}}
+      no_extra_spaces: true
+    with_items: "{{ lookup('sequence', 'start=0 end='+((replication_device['results'] | length)-1) |string, wantlist=True) }}"
+    when: >-
+      INVENTORY[replication_device['results'][item | int]['item']]['replication_device'] is defined and
+      INVENTORY[replication_device['results'][item | int]['item']]['hpedockerplugin_driver'] == 'hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver'
+
+  - name: Populate the replication device information (FC Driver)
+    ini_file:
+      path: /etc/hpedockerplugin/hpe.conf
+      section: "{{ replication_device['results'][item | int]['item'] }}"
+      option: replication_device
+      value: >-
+        {{ mandatory_replication_device_values['results'][item | int]['ansible_facts']['_raw_params'] }}{{', quorum_witness_ip: ' + INVENTORY[replication_device['results'][item | int]['item']]['replication_device']['quorum_witness_ip'] if INVENTORY[replication_device['results'][item | int]['item']]['replication_device']['quorum_witness_ip'] is defined else ''}}
+      no_extra_spaces: true
+    with_items: "{{ lookup('sequence', 'start=0 end='+((replication_device['results'] | length)-1) |string, wantlist=True) }}"
+    when: >-
+      INVENTORY[replication_device['results'][item | int]['item']]['replication_device'] is defined and
+      INVENTORY[replication_device['results'][item | int]['item']]['hpedockerplugin_driver'] == 'hpedockerplugin.hpe.hpe_3par_fc.HPE3PARFCDriver'
+
diff --git a/ansible_3par_docker_plugin/tasks/create_etcd_container.yml b/ansible_3par_docker_plugin/tasks/create_etcd_container.yml
index 8ddf7822..3b5b18d7 100644
--- a/ansible_3par_docker_plugin/tasks/create_etcd_container.yml
+++ b/ansible_3par_docker_plugin/tasks/create_etcd_container.yml
@@ -2,23 +2,33 @@
   - name: load etcd settings
     include_vars: '../properties/etcd_cluster_properties.yml'

+  - name: Initialize the etcd cluster var
+    set_fact:
+      etcd_initial_cluster: ''
+
+  - name: Create the etcd cluster var
+    set_fact:
+      etcd_initial_cluster: "{{ etcd_initial_cluster }},{{ hostvars[groups['etcd'][item | int]]['ansible_default_ipv4']['address'] }}={{ etcd_url_scheme }}://{{ hostvars[groups['etcd'][item | int]]['ansible_default_ipv4']['address'] }}:{{ etcd_peer_port }}"
+    with_items: "{{ lookup('sequence','start=0 end='+((groups['etcd'] | count) -1) |string,wantlist=True) }}"
+
   - name: run etcd container
     docker_container:
      name: etcd_hpe
       image: "{{ etcd_image }}"
       state: started
       detach: true
+      pull: yes
       ports:
-        - "23800:23800"
-        - "23790:23790"
-        - "40010:40010"
+        - "{{ etcd_peer_port }}:{{ etcd_peer_port }}"
+        - "{{ etcd_client_port_1 }}:{{ etcd_client_port_1 }}"
+        - "{{ etcd_client_port_2 }}:{{ etcd_client_port_2 }}"
       env:
         ETCD_NAME: "{{ etcd_name }}"
         ETCD_ADVERTISE_CLIENT_URLS: "{{ etcd_advertise_client_url_1 }},{{ etcd_advertise_client_url_2 }}"
         ETCD_LISTEN_CLIENT_URLS: "{{ etcd_listen_client_url_1 }},{{ etcd_listen_client_url_2 }}"
         ETCD_INITIAL_ADVERTISE_PEER_URLS: "{{ etcd_initial_advertise_peer_urls }}"
         ETCD_LISTEN_PEER_URLS: "{{ etcd_listen_peer_urls }}"
-        ETCD_INITIAL_CLUSTER: "{{ etcd_initial_cluster }}"
+        ETCD_INITIAL_CLUSTER: "{{ etcd_initial_cluster[1:] }}"
         ETCD_INITIAL_CLUSTER_TOKEN: "{{ etcd_initial_cluster_token }}"
         ETCD_INITIAL_CLUSTER_STATE: "{{ etcd_initial_cluster_state }}"
       restart_policy: always
diff --git a/ansible_3par_docker_plugin/tasks/deploy_FlexVolume_driver.yml b/ansible_3par_docker_plugin/tasks/deploy_FlexVolume_driver.yml
index 73e80c44..772e4f76 100644
--- a/ansible_3par_docker_plugin/tasks/deploy_FlexVolume_driver.yml
+++ b/ansible_3par_docker_plugin/tasks/deploy_FlexVolume_driver.yml
@@ -1,31 +1,16 @@
 ---
-  - name: load etcd settings
-    include_vars: '../properties/etcd_properties.yml'
+  - name: Download dory installer
+    get_url:
+      url: https://github.com/hpe-storage/python-hpedockerplugin/raw/master/dory_installer_v31
+      dest: /tmp
+      mode: 0755

-  - name: Create FlexVolume driver directory
-    file:
-      path: "{{ item }}"
-      state: directory
-      mode: 0644
-      recurse: yes
-    with_items:
-      - "{{ driver_path }}"
-      - /etc/kubernetes/
+  - name: Install dory, doryd
+    shell: >-
+      yes yes | /tmp/dory_installer_v31

-  - name: Verify /etc/kubernetes/admin.conf exists
-    copy:
-      src: /root/.kube/config
-      dest: /etc/kubernetes/admin.conf
-      owner: "root"
-      mode: 0755
-    ignore_errors: yes
+  - name: Remove the dory installer
+    file:
+      path: /tmp/dory_installer_v31
+      state: absent

-  - name: Install FlexVolume drivers
-    copy:
-      src: "{{ item }}"
-      dest: "{{ driver_path }}"
-      owner: "root"
-      mode: 0755
-      remote_src: yes
-    with_fileglob:
-      - "../files/dory/*"
diff --git a/ansible_3par_docker_plugin/tasks/install_prerequisites.yml b/ansible_3par_docker_plugin/tasks/install_prerequisites.yml
new file mode 100644
index 00000000..2c9d6a5a
--- /dev/null
+++ b/ansible_3par_docker_plugin/tasks/install_prerequisites.yml
@@ -0,0 +1,49 @@
+---
+  - name: Get all the backend array keys from inventory
+    set_fact:
+      array_keys: "{{ INVENTORY.keys() }}"
+      install_encryptor_packages: false
+
+  - name: Check if the encryptor specific packages must be installed
+    set_fact:
+      install_encryptor_packages: true
+    with_items: "{{ array_keys }}"
+    when: INVENTORY[item]['encryptor_key'] is defined
+
+  - name: install gcc
+    package:
+      name: gcc
+      state: present
+    become: yes
+    when: install_encryptor_packages
+
+  - name: install python-devel
+    package:
+      name: python-devel
+      state: present
+    become: yes
+    when: install_encryptor_packages and (ansible_distribution == 'CentOS' or ansible_distribution == 'RedHat')
+
+  - name: install python-dev
+    package:
+      name: python-dev
+      state: present
+    become: yes
+    when: install_encryptor_packages and ansible_distribution == 'Ubuntu'
+
+  - name: update setuptools
+    pip:
+      name: setuptools
+      state: latest
+      extra_args: --upgrade
+    become: yes
+    when: install_encryptor_packages
+
+  - name: install py-3parencryptor
+    pip:
+      name: py-3parencryptor
+      state: present
+    become: yes
+    when: install_encryptor_packages
+
+
diff --git a/ansible_3par_docker_plugin/tasks/install_prerequisites_on_all.yml b/ansible_3par_docker_plugin/tasks/install_prerequisites_on_all.yml
new file mode 100644
index 00000000..bb3aae2a
--- /dev/null
+++ b/ansible_3par_docker_plugin/tasks/install_prerequisites_on_all.yml
@@ -0,0 +1,60 @@
+---
+  - name: Force fail if DEFAULT backend is not present
+    fail:
+      msg: "The plugin properties file does not have the [DEFAULT] back end"
+    when: INVENTORY['DEFAULT'] is not defined
+
+  - name: Check if pip exists
+    command: which pip
+    register: pip_result
+    failed_when: pip_result.rc != 0 and pip_result.rc != 1
+
+  - name: download get-pip.py
+    get_url:
+      url: https://bootstrap.pypa.io/get-pip.py
+      dest: /tmp
+    when: pip_result.rc != 0
+
+  - name: install pip
+    command: "python /tmp/get-pip.py"
+    become: yes
+    when: pip_result.rc != 0
+
+  - name: delete get-pip.py
+    file:
+      state: absent
+      path: /tmp/get-pip.py
+    when: pip_result.rc != 0
+
+  - name: delete requests library
+    shell: rm -rf /usr/lib/python2.7/site-packages/requests*
+    become: yes
+
+  - name: uninstall docker-py if ansible version >= 2.6
+    pip:
+      name: docker-py
+      state: absent
+    become: yes
+    when: (ansible_version.major == 2 and ansible_version.minor >= 6) or ansible_version.major > 2
+
+  - name: install docker if ansible version >= 2.6
+    pip:
+      name: docker
+      state: present
+    become: yes
+    when: (ansible_version.major == 2 and ansible_version.minor >= 6) or ansible_version.major > 2
+
+  - name: uninstall docker if ansible version < 2.6
+    pip:
+      name: docker
+      state: absent
+    become: yes
+    when: ansible_version.major == 2 and ansible_version.minor < 6
+
+  - name: install docker-py if ansible version < 2.6
+    pip:
+      name: docker-py
+      state: present
+    become: yes
+    when: ansible_version.major == 2 and ansible_version.minor < 6
+
diff --git a/ansible_3par_docker_plugin/uninstall/remove_3par_docker_container.yml b/ansible_3par_docker_plugin/uninstall/remove_3par_docker_container.yml
index 71f808b5..b485c172 100644
--- a/ansible_3par_docker_plugin/uninstall/remove_3par_docker_container.yml
+++ b/ansible_3par_docker_plugin/uninstall/remove_3par_docker_container.yml
@@ -2,29 +2,25 @@
   - name: load etcd settings
     include_vars: '../properties/etcd_cluster_properties.yml'

-  - name: Stop & remove etcd container
-    docker_container:
-      name: etcd_hpe
-      image: "{{ etcd_image }}"
-      state: stopped
-
   - name: Stop & remove hpedockerplugin container
     docker_container:
       name: plugin_container
-      image: hpestorage/legacyvolumeplugin:2.1
+      image: "{{ 
INVENTORY['DEFAULT']['volume_plugin'] }}" state: stopped - pause: seconds: 15 - - name: Remove etcd container - docker_container: - name: etcd_hpe - image: "{{ etcd_image }}" - state: absent - - name: Remove hpedockerplugin container docker_container: name: plugin_container - image: hpestorage/legacyvolumeplugin:2.1 + image: "{{ INVENTORY['DEFAULT']['volume_plugin'] }}" + state: absent + + - name: Remove the HPE docker volume plugin image + docker_image: state: absent + name: "{{ INVENTORY['DEFAULT']['volume_plugin'].split(':')[0] }}" + tag: "{{ INVENTORY['DEFAULT']['volume_plugin'].split(':')[1] }}" + force: yes + diff --git a/ansible_3par_docker_plugin/uninstall/remove_doryd_files.yml b/ansible_3par_docker_plugin/uninstall/remove_doryd_files.yml index 056ca7d6..35debdad 100644 --- a/ansible_3par_docker_plugin/uninstall/remove_doryd_files.yml +++ b/ansible_3par_docker_plugin/uninstall/remove_doryd_files.yml @@ -1,14 +1,27 @@ --- + - name: Get all doryd files + find: + paths: /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe + register: doryd_files_to_delete + - name: remove doryd files file: state: absent - path: '/usr/libexec/kubernetes/kubelet-plugins/volume/exec/{{ item }}' + path: "{{ doryd_files_to_delete.files[(item | int)-1].path }}" with_items: - - hpe.com~hpe/* + - "{{ lookup('sequence','start=0 end='+((doryd_files_to_delete.matched | int ) |string),wantlist=True) }}" + when: doryd_files_to_delete.matched | int > 0 + + - name: Get all sock files + find: + paths: /run/docker/plugins/hpe + file_type: link + register: sock_files_to_delete - name: Delete any existing sock files file: state: absent - path: /run/docker/plugins/{{ item }} + path: "{{ sock_files_to_delete.files[(item | int)-1].path }}" with_items: - - hpe/* + - "{{ lookup('sequence','start=0 end='+((sock_files_to_delete.matched | int ) |string),wantlist=True) }}" + when: sock_files_to_delete.matched | int > 0 diff --git a/ansible_3par_docker_plugin/uninstall/remove_doryd_service.yml 
b/ansible_3par_docker_plugin/uninstall/remove_doryd_service.yml index 03f833b6..2ca6c552 100644 --- a/ansible_3par_docker_plugin/uninstall/remove_doryd_service.yml +++ b/ansible_3par_docker_plugin/uninstall/remove_doryd_service.yml @@ -1,9 +1,15 @@ --- + - name: Check that the doryd.service file exists + stat: + path: /etc/systemd/system/doryd.service + register: stat_result + - name: stop doryd service, also issue daemon-reload to pick up config changes service: state: stopped enabled: no name: doryd.service + when: stat_result.stat.exists - pause: seconds: 5 diff --git a/ansible_3par_docker_plugin/uninstall/remove_etcd_container.yml b/ansible_3par_docker_plugin/uninstall/remove_etcd_container.yml new file mode 100644 index 00000000..49ee0e98 --- /dev/null +++ b/ansible_3par_docker_plugin/uninstall/remove_etcd_container.yml @@ -0,0 +1,26 @@ +--- + - name: load etcd settings + include_vars: '../properties/etcd_cluster_properties.yml' + + - name: Stop & remove etcd container + docker_container: + name: etcd_hpe + image: "{{ etcd_image }}" + state: stopped + + - pause: + seconds: 15 + + - name: Remove etcd container + docker_container: + name: etcd_hpe + image: "{{ etcd_image }}" + state: absent + + - name: Remove the etcd image + docker_image: + state: absent + name: "{{ etcd_image.split(':')[0] }}" + tag: "{{ etcd_version }}" + force: yes + diff --git a/ansible_3par_docker_plugin/uninstall/uninstall_hpe_3par_volume_driver.yml b/ansible_3par_docker_plugin/uninstall/uninstall_hpe_3par_volume_driver.yml index 17f08eb0..56a4ee84 100644 --- a/ansible_3par_docker_plugin/uninstall/uninstall_hpe_3par_volume_driver.yml +++ b/ansible_3par_docker_plugin/uninstall/uninstall_hpe_3par_volume_driver.yml @@ -1,11 +1,15 @@ --- - name: Uninstall HPE 3PAR Volume Driver for Kubernetes/OpenShift - hosts: all + hosts: masters,workers,etcd become: root tasks: - - name: Remove etcd and 3PAR Volume Plugin containers from Docker + - name: load plugin settings + include_vars: 
'../properties/plugin_configuration_properties.yml'
+
+      - name: Remove 3PAR Volume Plugin container from Docker
         include: remove_3par_docker_container.yml
+        ignore_errors: yes
 
       - name: Reset multipath
         include: remove_multipath.yml
@@ -19,7 +23,7 @@
         include: remove_doryd_service.yml
 
 - name: Uninstall HPE 3PAR Volume Driver for Kubernetes/OpenShift
-  hosts: all
+  hosts: masters,workers,etcd
   become: root
 
   tasks:
diff --git a/ansible_3par_docker_plugin/uninstall/uninstall_hpe_3par_volume_driver_etcd.yml b/ansible_3par_docker_plugin/uninstall/uninstall_hpe_3par_volume_driver_etcd.yml
new file mode 100644
index 00000000..e85de145
--- /dev/null
+++ b/ansible_3par_docker_plugin/uninstall/uninstall_hpe_3par_volume_driver_etcd.yml
@@ -0,0 +1,35 @@
+---
+- name: Uninstall HPE 3PAR Volume Driver for Kubernetes/OpenShift
+  hosts: masters,workers,etcd
+  become: root
+
+  tasks:
+    - name: load plugin settings
+      include_vars: '../properties/plugin_configuration_properties.yml'
+
+    - name: Remove etcd container from Docker
+      include: remove_etcd_container.yml
+      ignore_errors: yes
+
+    - name: Remove 3PAR Volume Plugin container from Docker
+      include: remove_3par_docker_container.yml
+      ignore_errors: yes
+
+    - name: Reset multipath
+      include: remove_multipath.yml
+
+- name: Remove HPE 3PAR Volume Driver for Kubernetes/OpenShift
+  hosts: masters
+  become: root
+
+  tasks:
+    - name: Remove Dynamic Provisioner (doryd) service from Master node
+      include: remove_doryd_service.yml
+
+- name: Uninstall HPE 3PAR Volume Driver for Kubernetes/OpenShift
+  hosts: masters,workers,etcd
+  become: root
+
+  tasks:
+    - name: Remove FlexVolume drivers and config files
+      include: remove_doryd_files.yml
diff --git a/docs/FileServiceDesign.md b/docs/FileServiceDesign.md
new file mode 100644
index 00000000..d50b4764
--- /dev/null
+++ b/docs/FileServiceDesign.md
@@ -0,0 +1,187 @@
+## File Service Design for HPE 3PAR Docker Volume Plugin
+The design decisions for implementing File Services for the HPE 3PAR Docker Volume plugin are
captured in this wiki.
+
+### Objectives
+1. Provide a way to present a file share (via NFS/CIFS -- the protocols currently supported by File Persona on HPE 3PAR)
+to containerized applications.
+Our plugin currently supports the NFS protocol.
+    - To support share replication, we have to model a VFS as a Docker volume object.
+    - To support snapshot/quota, we have to model a File Store as a Docker volume object. We are currently moving to this model, since
+      it gives an ideal balance between granularity in the number of shares vs. the capabilities we need to support.
+
+2. Creation of the FPG and VFS is to be transparent to the user. An FPG (File Provisioning Group) requires a CPG as input, which will
+be supplied via the config file `/etc/hpedockerplugin/hpe_file.conf`. Creation of a VFS requires a virtual IP address/subnet mask. Each VFS can have multiple IPs associated with it, and this pool of IP addresses will be supplied via the config file.
+
+3. Provide a way to update the whitelisted IPs as part of the ACL definition of a File Store/Share
+(implicitly, when a mount happens on a Docker host).
+4. Allow a share to be mounted by multiple containers on one or more hosts. This is to support the `accessModes: ReadWriteMany`
+option of Kubernetes PVCs.
+
+5. Document the limitations.
+
+### Mapping of a Docker Volume to a file persona object
+- Because quota/ACL (Access Control List) setup applies only to a File Store, and because
+features like replication/snapshot are available only at the File Store or VFS (Virtual File Server) level, we have to map the Docker volume
+at either the File Store or the VFS level.
+
+- But since there is an inherent limitation on the number of VFSs that can be created on a 3PAR system (currently only 16),
+we can't define the granularity of the Docker volume object at the VFS level; instead we come down the hierarchy of
+file persona objects to the File Store.
+
+- Our current design approach is therefore to map a Docker volume object to a File Store only. This may be extended to map
+the Docker volume object to a VFS in later phases of development.
+
+
+### Limitations
+- Updating a quota via `docker volume create` is not currently supported because:
+  1. The Docker volume plugin v2 specification doesn't directly provide an update primitive, so updating the quota on the File Store is not feasible.
+  2. The alternative, a separate binary/utility, would be an out-of-band operation on the file share, and
+     the updates done by such a utility/tool would not be automatically reflected on the corresponding Kubernetes object (such as the PVC).
+
+
+## The diagram below shows how shares are mapped to a CPG.
+![3PAR persona hierarchy diagram](/docs/img/3PAR_FIlePersona_Share_Hierarchy.png)
+
+---
+
+## Default use cases and default behaviour with the provided options
+
+1. Create a file share when only the share name is given
+```
+docker volume create -d hpe --name share_name
+```
+- The share is created under the default CPG.
+- The default CPG is specified in the hpe_file.conf file.
+- A default FPG of 1 TiB and a default File Store of 64 GiB are used.
+- To create a share under a specific FPG (an FPG created via Docker), specify -o size=x -o fpg= where x is in GiB.
+  If size is not specified, a 64 GiB File Store is always created.
+- The CPG, IP and subnet mask are picked from the conf file.
+- If an IP and mask are not available there, the user needs to supply them with -o,
+  e.g. -o ipSubnet="192.168.68.38:255.255.192.0"
+- If the user wants to select multiple IPs from the pool to assign to a VFS, this is specified
+  with e.g. -o numOfInvolvedIps=2.
+- If that many IPs are not available, an error is thrown.
+
+ 2. 
Create a file share on a particular CPG
+ ```
+ docker volume create -d hpe --name share_name -o cpg=CPG_name
+ ```
+ - The share is created on a non-default CPG.
+ - The given CPG is used to create an FPG; if a usable FPG already exists, it is used to create the File Store and share.
+ - If the FPG doesn't exist, it is created with the default FPG size (1 TiB).
+
+ 3. Create a file share under a particular FPG
+ ```
+ docker volume create -d hpe --name share_name -o fpg_name=FPG1
+ ```
+ - The user wants to use an existing FPG created via Docker.
+ - Here fpg_size, store_name and size (the File Store quota) take default values unless specified.
+ - If the given fpg_name exists on 3PAR but is not yet known to Docker, we proceed with creating the share under this FPG with
+   default values unless options are provided with -o.
+ - If the FPG was created via the plugin and fpg_size is provided, an exception is thrown.
+
+
+## Changes required to the configuration file
+The following configuration parameters are required to support the above requirements:
+1. **hpe3par_server_ip_pool**: List of IP addresses and corresponding subnet masks in the format:
+*IP1:SubnetMask1,IP2:SubnetMask2,IP5-IP10:SubnetMask3...*
+2. **hpe3par_default_fpg_size**: Default size to be used for FPG creation. If not specified in
+the configuration file, this value defaults to 64 TB.
+
+
+## Share Metadata
+Efficient information lookup is required for the following two cases:
+1. Share lookup by name, and
+2. Available FPG lookup for new share creation.
+
+### Share lookup by name
+This is required for retrieval, update, deletion, mount and unmount of a share.
+To satisfy this requirement, we can continue to use the *“/volumes/{id}”* ETCD key, or introduce
+a new key *“/shares/{id}”*, under which the share metadata below is kept.
+``` +share_metadata = { + # Backend name + 'backend': backend, + + # UUID of the share + 'id': , + + # FPG name + 'fpg': , + + # VFS name + 'vfs': , + + # Share name supplied by the user + 'name': , + + # Default is True + 'readonly': , + + # Share size applied as quota on file store. Default value is 64GB. + 'size': , + + # NFS protocol options. If not supplied, 3PAR defaults will apply + 'nfsOptions': , + + # List of host IPs that can access this share + 'clientIPs': [], + + # Share description + 'comment': comment, +} +``` + +### Available FPG lookup for new share creation +Available default FPG needs to be located when a new share is created with default parameters i.e. FPG +name is not specified on the Docker CLI. + +To satisfy this requirement, the additional information needs to be maintained in ETCD under +a new key called *“/file-persona/{backend}.metadata”*. + +E.g. Below is a sample meta-data for a backend called *DEFAULT*, having two CPGs – CPG1 and CPG2, +to be stored under ETCD key *“/file-persona/DEFAULT.metadata”*: + +``` +{ + # Counter used to generate FPG name + 'counter': 3, + + # List of IPs currently in use by VFS on this backend + 'ips_in_use': ['ip1', 'ip2', 'ip3'], + + # List of IPs from IP pool that are blocked for use to avoid others from using it + # This is a temporary list. 
Once an IP is successfully assigned to VFS, it is moved + # to 'ips_in_use' list + 'ips_locked_for_use': ['ip4'] + + # Dictionary of current default FPGs used to efficiently locate default FPG on a + # given CPG when a share is created with default parameters + 'default_fpgs': {: } +} +``` + +### FPG metadata +FPG metadata is stored under the key *"/file-persona/{backend}.metadata/{cpg_name}/{fpg_name}"* +Following information is stored for each FPG at the above key: +``` +{ + # FPG name + 'fpg': 'DockerFPG_01', + + # FPG size + 'fpg_size': 64, + + # Flag indicating if FPG reached its full capacity + 'reached_full_capacity': False + + # Current share count on this FPG + 'share_cnt': 3, + + # VFS name + 'vfs': 'DockerVFS_01', + + # IP address used by 'vfs' + 'ips': {'255.255.255.0': [192.168.121.10]} +} +``` diff --git a/docs/QoS_Rule.md b/docs/QoS_Rule.md new file mode 100644 index 00000000..fb330a13 --- /dev/null +++ b/docs/QoS_Rule.md @@ -0,0 +1,39 @@ +HPE 3PAR Priority Optimization software provides quality-of-service rules to manage and control the I/O capacity of an HPE 3PAR +StoreServ Storage system across multiple workloads. +The use of QoS rules stabilizes performance in a multi-tenant environment. +HPE 3PAR Priority Optimization operates by applying upper-limit control on I/O traffic to and from hosts connected to an HPE +3PAR StoreServ Storage system. These limits, or QoS rules, are defined for front-end input/output operations per second (IOPS) and +for bandwidth. +QoS rules are applied using autonomic groups. Every QoS rule is associated with one (and only one) target object. +The smallest target object to which a QoS rule can be applied is a virtual volume set (VVset) or a virtual domain. +Because a VVset can consist of a single VV, a QoS rule can target a single VV. +Every QoS rule has six attributes: + +1. Name: The name of the QoS rule is the same as the name of the VVset. +2. State: The QoS rule can be active or disabled. +3. 
I/O: Sets the Min Goal and the Max Limit on IOPS for the target object.
+4. Bandwidth: Sets the Min Goal and the Max Limit on the bytes-per-second transfer rate for the target object.
+5. Priority: The priority of the target object can be set to low, normal, or high.
+6. Latency Goal: The goal for the target object is specified in milliseconds.
+
+
+HPE 3PAR Priority Optimization sets the values for IOPS and bandwidth in QoS rules as absolute numbers, not as percentages.
+The IOPS number is stated as an integer between 0 and 2^31-1, although a more realistic upper limit is the number of IOPS that the
+particular array in question is capable of providing, given its configuration. The value for bandwidth is stated as an integer between
+0 and 2^63-1, expressed in KB/second, although a more realistic upper limit is the throughput in KB/second that the particular array in
+question is capable of providing, given its configuration.
+
+```
+Note: We recommend that users/administrators set QoS rules based on Priority and Latency Goal, so that I/O and bandwidth can be adjusted
+automatically. If there are multiple volumes in a VVset and QoS rules are applied based on I/O or bandwidth values, this does not guarantee
+that each volume in the VVset gets the minimum/maximum I/O or bandwidth of the set limit (these limits apply at the VVset level).
+```
+
+Example: Suppose the QoS rule for VVset ‘volumeset1’ is set to a minimum bandwidth of X KB/s and a maximum bandwidth of Y KB/s.
+If this VVset has two volumes, volume1 and volume2, this does not guarantee that volume1 and volume2 will each have a minimum and maximum
+bandwidth of X and Y KB/s respectively.
+
+The current implementation of the Docker volume plugin only associates the Docker-created volume with the VVset specified via -o qos-name "vvsetname";
+we recommend defining the QoS rule on that VVset on 3PAR itself rather than tweaking it at the Docker volume plugin level.
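The "limits apply at the VVset level" point can be illustrated with a toy calculation (plain Python, not plugin or 3PAR code; the proportional-sharing behaviour is a simplifying assumption for illustration only): a VVset-level cap bounds the *aggregate* bandwidth, so no individual volume in the set is guaranteed a fixed share.

```python
# Toy model (NOT 3PAR or plugin code): a VVset-level QoS cap limits aggregate
# bandwidth, so a busy volume can crowd out its siblings in the same VVset.

def share_bandwidth(vvset_max_kbs, demands_kbs):
    """Split an aggregate VVset cap across volumes in proportion to demand."""
    total_demand = sum(demands_kbs.values())
    if total_demand <= vvset_max_kbs:
        # Cap not reached; every volume gets exactly what it asked for.
        return dict(demands_kbs)
    scale = vvset_max_kbs / total_demand
    return {vol: demand * scale for vol, demand in demands_kbs.items()}

# volume1 is busy, volume2 is not: under a 1000 KB/s VVset cap, neither
# volume individually is held to any fixed minimum or maximum.
alloc = share_bandwidth(1000, {"volume1": 1800, "volume2": 200})
print(alloc)  # {'volume1': 900.0, 'volume2': 100.0} -- no per-volume guarantee
```

This is why the note above recommends Priority/Latency Goal based rules on 3PAR itself when per-volume behaviour matters.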
+
+Please refer to the HPE 3PAR StoreServ QoS best practices whitepaper: https://h20195.www2.hpe.com/v2/GetPDF.aspx/4AA4-4524ENW.pdf
diff --git a/docs/active-passive-based-replication.md b/docs/active-passive-based-replication.md
new file mode 100644
index 00000000..6e39454f
--- /dev/null
+++ b/docs/active-passive-based-replication.md
@@ -0,0 +1,174 @@
+# Active/Passive Based Replication #
+
+In Active/Passive based replication, only one array is active
+at any point in time, serving the VLUNs of a given replicated volume.
+
+When a remote copy group (RCG) is failed over manually via the 3PAR CLI to the
+secondary array, the secondary array becomes active. However, the VLUNs
+of the failed-over volumes are still not exported by the secondary array
+to the host. To trigger that, the container/POD running on the
+host needs to be restarted.
+
+## Configuring a replication enabled backend
+**For FC Host**
+```sh
+host_etcd_port_number=
+hpe3par_username=
+hpe3par_password=
+hpe3par_cpg=
+hpedockerplugin_driver=hpedockerplugin.hpe.hpe_3par_fc.HPE3PARFCDriver
+logging=DEBUG
+san_ip=
+san_login=
+san_password=
+host_etcd_ip_address=[:PORT1[,IP2[:PORT2]][,IP3[:PORT3]]...]
+hpe3par_api_url=https://:8080/api/v1
+replication_device = backend_id:,
+                     replication_mode:,
+                     cpg_map::,
+                     snap_cpg_map::,
+                     hpe3par_api_url:https://:8080/api/v1,
+                     hpe3par_username:<3PAR-Username>,
+                     hpe3par_password:<3PAR-Password>,
+                     san_ip:<3PAR-SAN-IP>,
+                     san_login:<3PAR-SAN-Username>,
+                     san_password:<3PAR-SAN-Password>
+```
+
+*Note*:
+
+1. In asynchronous replication mode, the *sync_period* field can optionally be
+defined as part of the *replication_device* entry; it must be in the range of 300
+to 31622400 seconds. If not defined, it defaults to 900 seconds.
+2. Both *cpg_map* and *snap_cpg_map* in the *replication_device* section are mandatory.
+3. 
If the password is encrypted for the primary array, it must be encrypted for the secondary array
+as well, using the same *pass-phrase*.
+
+
+**For ISCSI Host**
+```sh
+host_etcd_port_number=
+hpe3par_username=
+hpe3par_password=
+hpe3par_cpg=
+hpedockerplugin_driver=hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver
+logging=DEBUG
+san_ip=
+san_login=
+san_password=
+host_etcd_ip_address=[:PORT1[,IP2[:PORT2]][,IP3[:PORT3]]...]
+hpe3par_api_url=https://:8080/api/v1
+hpe3par_iscsi_ips=[,ISCSI_IP2,ISCSI_IP3...]
+replication_device = backend_id:,
+                     replication_mode:,
+                     cpg_map::,
+                     snap_cpg_map::,
+                     hpe3par_api_url:https://:8080/api/v1,
+                     hpe3par_username:<3PAR-Username>,
+                     hpe3par_password:<3PAR-Password>,
+                     san_ip:<3PAR-SAN-IP>,
+                     san_login:<3PAR-SAN-Username>,
+                     san_password:<3PAR-SAN-Password>,
+                     hpe3par_iscsi_ips=[;ISCSI_IP2;ISCSI_IP3...]
+```
+*Note*:
+
+1. Both *cpg_map* and *snap_cpg_map* in the *replication_device* section are mandatory.
+2. *hpe3par_iscsi_ips* MUST be defined upfront for both the source and target arrays.
+3. *hpe3par_iscsi_ips* can be a single iSCSI IP or a list of iSCSI IPs delimited by
+semicolons. The semicolon delimiter for this field applies to the *replication_device* section ONLY.
+4. If the password is encrypted for the primary array, it MUST be encrypted for the secondary array
+as well, using the same *pass-phrase*.
+5. In asynchronous replication mode, the *sync_period* field can optionally be
+defined as part of the *replication_device* entry; it must be in the range of 300
+to 31622400 seconds. If not defined, it defaults to 900 seconds.
+
+
+## Managing Replicated Volumes ##
+### Create replicated volume ###
+This command creates a replicated volume and, if the RCG does not yet
+exist on the array, creates the RCG as well. The newly created volume is then added to the RCG.
+An existing RCG name can be used to add multiple newly created volumes to it.
+```sh
+$ docker volume create -d hpe --name -o replicationGroup=<3PAR_RCG_Name> [Options...]
+```
+where,
+- *replicationGroup*: Name of a new or existing remote copy group on the 3PAR array
+
+One or more of the following *Options* can additionally be specified:
+1. *size:* Size of the volume in GB.
+2. *provisioning:* Provisioning type of the volume to be created.
+Valid values are thin, dedup and full, with thin as the default.
+3. *backend:* Name of the backend to be used for creating the volume. If not
+specified, "DEFAULT" is used, provided it initialized successfully.
+4. *mountConflictDelay:* Waiting period in seconds used during a mount operation
+on the volume being created. This applies when the volume is mounted on, say, Node1 and
+Node2 wants to mount it. In that case, Node2 waits for *mountConflictDelay*
+seconds for Node1 to unmount the volume. If Node1 still hasn't unmounted
+the volume after this wait, Node2 forcefully removes the VLUNs exported to Node1 and then goes ahead
+with the mount process.
+5. *compression:* This flag specifies whether the volume is a compressed volume. Allowed
+values are *True* and *False*.
+
+#### Example ####
+
+**Create a replicated volume of size 1GB with a non-existing RCG using the backend "ActivePassiveRepBackend"**
+```sh
+$ docker volume create -d hpe --name Test_RCG_Vol -o replicationGroup=Test_RCG -o size=1 -o backend=ActivePassiveRepBackend
+```
+This creates the volume Test_RCG_Vol along with the Test_RCG remote copy group. The volume
+is then added to Test_RCG.
+Please note that in case of failure at any stage of the operation, the previous actions
+are rolled back.
+E.g. if for some reason the volume Test_RCG_Vol could not be added to Test_RCG, the volume
+is removed from the array.
+
+
+### Failover a remote copy group ###
+
+There is no single Docker command or option to fail over an RCG. Instead, the following
+steps must be carried out:
+1. 
On the host, the container using the replicated volume must be stopped or exited if it is running.
+This triggers unmount of the volume(s) from the primary array.
+
+2. On the primary array, stop the remote copy group manually:
+```sh
+$ stoprcopygroup
+```
+
+3. On the secondary array, execute the *failover* command:
+```sh
+$ setrcopygroup failover
+```
+
+4. Restart the container. This time the VLUNs are served by the failed-over (secondary) array.
+
+### Failback workflow for Active/Passive based replication ###
+There is no single Docker command or option to fail back an RCG. Instead,
+the following steps must be carried out:
+1. On the host, the container using the replicated volume must be stopped or exited if it is running.
+This triggers unmount of the volume(s) from the failed-over (secondary) array.
+
+2. On the secondary array, execute the *recover* and *restore* commands:
+```sh
+$ setrcopygroup recover
+$ setrcopygroup restore
+```
+
+3. Restart the container so that the primary array exports the VLUNs to the host this time.
+
+
+### Delete replicated volume ###
+```sh
+$ docker volume rm
+```
+This command allows the user to delete a replicated volume. If this was the last
+volume in the RCG, the RCG is also removed from the backend.
+
+
+**See also:**
+[Peer Persistence Based Replication](peer-persistence-based-replication.md)
+
+
+[<< Back to Replication: HPE 3PAR Docker Storage Plugin](replication.md)
diff --git a/docs/create_snapshot_schedule.md b/docs/create_snapshot_schedule.md
new file mode 100644
index 00000000..2b922ff7
--- /dev/null
+++ b/docs/create_snapshot_schedule.md
@@ -0,0 +1,141 @@
+# Creating snapshot schedule #
+Following is an example of creating a snapshot schedule for a volume named volume1.
+Below are the options that can be passed while creating a snapshot schedule.
+
+- -o virtualCopyOf=x This option is mandatory. x is the name of the volume for which the snapshot schedule is to be created.
+- -o scheduleFrequency=x This option is mandatory. x is a string that specifies the snapshot schedule frequency.
+  The string contains 5 fields separated by spaces; for example, x can be "5 * * * *".
+  The first field is the number of minutes past the scheduled hour at which the
+  scheduled task runs. The second field is the hour at which the task runs. The third field is
+  the day of the month on which the task runs. The fourth field is the month in which
+  the task runs. The fifth field is the day of the week on
+  which the task runs. x has to be specified in double quotes. Valid values for these fields are:
+
+        Field           Allowed Values
+        -----           --------------
+        minute          0-59
+        hour            * or 0-23
+        day-of-month    * or 1-31
+        month           * or 1-12
+        day-of-week     * or 0-6 (0 is Sunday)
+
+- -o scheduleName=x This option is mandatory. x is a string naming the schedule on 3PAR.
+  If *scheduleName=auto* is passed to docker volume create, the schedule name is
+  generated automatically based on a timestamp. This is relevant for using the scheduleName
+  in a StorageClass in a Kubernetes environment.
+- -o snapshotPrefix=x This option is mandatory. x is a prefix string for the scheduled snapshots created on 3PAR.
+- -o expHrs=x This option is optional. x is an integer specifying the number of hours after which a snapshot created
+  by the snapshot schedule is deleted from 3PAR.
+- -o retHrs=x This option is optional. x is an integer specifying the number of hours such a snapshot is retained.
+
+Docker command to create a snapshot schedule:
+```
+$ docker volume create -d hpe --name -o virtualCopyOf=volume1
+-o scheduleFrequency="10 2 * * *" -o scheduleName=dailyOnceSchedule
+-o snapshotPrefix=pqr -o expHrs=5 -o retHrs=3
+```
+
+#### Note:
+1. 
The above command creates a Docker snapshot with the name snapshot_name.
+2. It creates a snapshot schedule on 3PAR named dailyOnceSchedule.
+3. The scheduleFrequency string specifies that the task runs daily, every month and every day, at 10 minutes past 2 o'clock.
+4. Snapshots created via the schedule will have the prefix 'pqr' in their names; these snapshots have a retention period of 3 hours
+and an expiration period of 5 hours.
+
+### Inspecting a volume and a snapshot that have a schedule associated with them
+Consider volume1, a volume for which a snapshot schedule named "ThisNewSnapSchedule" was created; creating this schedule
+also created a Docker snapshot named snapshot1.
+
+```
+$ docker volume inspect volume1
+```
+Output:
+```json
+[
+    {
+        "CreatedAt": "0001-01-01T00:00:00Z",
+        "Driver": "hpe:latest",
+        "Labels": {},
+        "Mountpoint": "/var/lib/docker/plugins/b31d0cf162f23852b2733671de48a81aacf078ec6e529d936ae99f2aec0a57d6/rootfs",
+        "Name": "volume1",
+        "Options": {
+            "size": "9"
+        },
+        "Scope": "global",
+        "Status": {
+            "Snapshots": [
+                {
+                    "Name": "snapshot1",
+                    "ParentName": "volume1",
+                    "snap_schedule": {
+                        "schedule_name": "ThisNewSnapSchedule",
+                        "sched_frequency": "5 2 * * *",
+                        "snap_name_prefix": "pqr",
+                        "sched_snap_exp_hrs": null
+                    }
+                }
+            ],
+            "volume_detail": {
+                "compression": null,
+                "flash_cache": null,
+                "mountConflictDelay": 30,
+                "provisioning": "thin",
+                "size": 9
+            }
+        }
+    }
+]
+```
+```
+$ docker volume inspect snapshot1
+```
+Output:
+```json
+[
+    {
+        "CreatedAt": "0001-01-01T00:00:00Z",
+        "Driver": "hpe:latest",
+        "Labels": {},
+        "Mountpoint": "/var/lib/docker/plugins/b31d0cf162f23852b2733671de48a81aacf078ec6e529d936ae99f2aec0a57d6/rootfs",
+        "Name": "snapshot1",
+        "Options": {
+            "virtualCopyOf": "volume1"
+        },
+        "Scope": "global",
+        "Status": {
+            "snap_detail": {
+                "compression": null,
+                "expiration_hours": null,
+                "is_snap": true,
+                "mountConflictDelay": 30,
+                "parent_id": 
"36084710-851b-49db-93f2-5d9a71e49423", + "parent_volume": "volume1", + "provisioning": "thin", + "retention_hours": null, + "has_schedule": true, + "size": 5, + "snap_schedule": { + "schedule_name": "ThisNewSnapSchedule", + "sched_frequency": "5 2 * * *", + "snap_name_prefix": "pqr", + "sched_snap_exp_hrs": null, + "sched_snap_exp_hrs": null + } + } + } + } +] +``` + +#### Note: + +If the above snapshot snapshot1 is removed, associated schedule will also be removed from 3PAR. In other words, to remove the schedule +"ThisNewSnapSchedule" use snapshot name associated with this schedule. + +Removing a snapshot and associated schedule: + +``` +$ docker volume rm snapshot1 +``` + +[<< Back to Usage](usage.md#snapshot_schedule) diff --git a/docs/docker EE 2.0 UCP 3.0.5 installation.md b/docs/docker EE 2.0 UCP 3.0.5 installation.md new file mode 100644 index 00000000..1d2b2469 --- /dev/null +++ b/docs/docker EE 2.0 UCP 3.0.5 installation.md @@ -0,0 +1,293 @@ +## Configuring HPE 3PAR Docker Volume Plugin for Docker EE 2.0 UCP 3.0.5 + +### **Prerequisite packages to be installed on host OS:** + +#### Install OS (RHEL, CentOs or Ubuntu) on all the nodes. + + +#### Ubuntu 16.04 or later: + + +1. Install the iSCSI (optional if you aren't using iSCSI) and Multipath packages +``` +$ sudo apt-get install -y open-iscsi multipath-tools +``` + +2. Enable the **iscsid** and **multipathd** services +``` +$ sudo systemctl daemon-reload +$ sudo systemctl restart open-iscsi multipath-tools docker +``` + + + +#### RHEL/CentOS 7.3 or later: + +1. Install the iSCSI (optional if you aren't using iSCSI) and Multipath packages + +``` +$ sudo yum install -y iscsi-initiator-utils device-mapper-multipath +``` + +2. 
Configure `/etc/multipath.conf`

```
$ vi /etc/multipath.conf
```

>Copy the following into `/etc/multipath.conf`

```
defaults
{
    polling_interval 10
    max_fds 8192
}

devices
{
    device
    {
        vendor                  "3PARdata"
        product                 "VV"
        no_path_retry           18
        features                "0"
        hardware_handler        "0"
        path_grouping_policy    multibus
        #getuid_callout         "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
        path_selector           "round-robin 0"
        rr_weight               uniform
        rr_min_io_rq            1
        path_checker            tur
        failback                immediate
    }
}
```

3. Enable the iscsid and multipathd services

```
$ sudo systemctl enable iscsid multipathd
$ sudo systemctl start iscsid multipathd
$ sudo systemctl daemon-reload
```

4. Docker EE installation on all hosts

```
$ export DOCKERURL=""
e.g. export DOCKERURL="https://storebits.docker.com/ee/m/sub-3352ca9f-2d4d-4859-957c-77838c9ecaf0/rhel"

$ sudo -E sh -c 'echo "$DOCKERURL/rhel" > /etc/yum/vars/dockerurl'
$ sudo sh -c 'echo "7" > /etc/yum/vars/dockerosversion'
$ sudo yum install -y yum-utils \
    device-mapper-persistent-data \
    lvm2

$ sudo yum-config-manager --enable rhel-7-server-extras-rpms
$ sudo -E yum-config-manager \
    --add-repo \
    "$DOCKERURL/7.6/x86_64/stable-17.06"

$ sudo yum -y install docker-ee docker-ee-cli containerd.io
```
**Note: If you get an error related to the public key, update the /etc/yum.repos.d/storebits.docker.com file with "gpgcheck=0".**

```
$ sudo systemctl start docker
$ sudo docker run hello-world
```

5.
Etcd installation on all hosts
```
$ export HostIP="Host_IP"

$ sudo docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 40010:40010 \
    -p 23800:23800 -p 23790:23790 \
    --name etcd_hpe quay.io/coreos/etcd:v2.2.0 \
    -name etcd0 \
    -advertise-client-urls http://${HostIP}:23790,http://${HostIP}:40010 \
    -listen-client-urls http://0.0.0.0:23790,http://0.0.0.0:40010 \
    -initial-advertise-peer-urls http://${HostIP}:23800 \
    -listen-peer-urls http://0.0.0.0:23800 \
    -initial-cluster-token etcd-cluster-1 \
    -initial-cluster etcd0=http://${HostIP}:23800 \
    -initial-cluster-state new
```

6. Configure hpe.conf on all hosts
```
$ sudo mkdir /etc/hpedockerplugin

$ sudo vi /etc/hpedockerplugin/hpe.conf
[DEFAULT]
ssh_hosts_key_file = /root/.ssh/known_hosts
logging = DEBUG
hpe3par_debug = True
suppress_requests_ssl_warnings = False
host_etcd_ip_address = 192.168.68.37
host_etcd_port_number = 23790
hpedockerplugin_driver = hpedockerplugin.hpe.hpe_3par_fc.HPE3PARFCDriver
hpe3par_api_url = https://192.168.67.7:8080/api/v1
hpe3par_username = 3paradm
hpe3par_password = 3pardata
san_ip = 192.168.67.7
san_login = 3paradm
san_password = 3pardata
hpe3par_cpg = virendra
hpe3par_snapcpg = virendra-snap
#hpe3par_iscsi_ips = 192.168.68.201, 192.168.68.203
mount_prefix = /var/lib/kubelet/plugins/hpe.com/3par/mounts/
#hpe3par_iscsi_chap_enabled = True
#use_multipath = True
#enforce_multipath = True
mount_conflict_delay = 30
```
**Note: Update *"host_etcd_ip_address"* and *"host_etcd_port_number"* to match the cluster you want to create; if etcd is installed on more than one host, list the IPs of all those hosts in hpe.conf, and keep hpe.conf identical on every host across the cluster.**

7. Configure and create the containerized plugin on all hosts.
```
$ vi docker-compose.yml

hpedockerplugin:
  container_name: legacy_plugin
  image: hpestorage/legacyvolumeplugin:3.1
  net: host
  privileged: true
  volumes:
     - /dev:/dev
     - /run/lock:/run/lock
     - /var/lib:/var/lib
     - /var/run/docker/plugins:/var/run/docker/plugins:rw
     - /etc:/etc
     - /root/.ssh:/root/.ssh
     - /sys:/sys
     - /root/plugin/certs:/root/plugin/certs
     - /sbin/iscsiadm:/sbin/ia
     - /lib/modules:/lib/modules
     - /lib64:/lib64
     - /var/run/docker.sock:/var/run/docker.sock
     - /var/lib/kubelet/plugins/hpe.com/3par/mounts/:/var/lib/kubelet/plugins/hpe.com/3par/mounts:rshared
```

8. Run the HPE volume plugin
```
$ docker-compose -f <docker-compose-file> up -d
e.g. docker-compose -f /root/docker-compose.yml up -d

Note: If docker-compose is not installed, install it as per the steps below.

$ curl -L https://github.com/docker/compose/releases/download/1.21.2/docker-compose-$(uname -s)-$(uname -m) --insecure -o /usr/local/bin/docker-compose
$ chmod +x /usr/local/bin/docker-compose
$ docker-compose --version
```
**Note: Make sure etcd is running before starting the volume plugin.**

9. Install Docker UCP 3.0.5
```
$ sudo docker image pull docker/ucp:3.0.5
$ sudo docker container run --rm -it --name ucp \
    -v /var/run/docker.sock:/var/run/docker.sock \
    docker/ucp:3.0.5 install \
    --host-address 192.168.68.37 --pod-cidr 192.167.0.0/16 \
    --interactive

In case of any error related to an IP or port, refer to the Error & Solution section at the bottom of this page.
```
**Note: Provide all details such as the username and password for UCP browser access, and note the login URL for UCP.**

10. Configuration for Kubernetes
```
$ sudo mkdir -p /etc/kubernetes
$ cp /var/lib/docker/volumes/ucp-node-certs/_data/kubelet.conf /etc/kubernetes/admin.conf
Modify /etc/kubernetes/admin.conf with the correct certificate-authority, server, client-certificate and client-key

(OPTIONAL if the kubectl client is required).
  # Set the Kubernetes version as found in the UCP Dashboard or API
  export k8sversion=v1.8.11
  # Get the kubectl binary.
  curl -LO https://storage.googleapis.com/kubernetes-release/release/$k8sversion/bin/linux/amd64/kubectl
  # Make the kubectl binary executable.
  chmod +x ./kubectl
  # Move the kubectl executable to /usr/local/bin.
  sudo mv ./kubectl /usr/local/bin/kubectl

$ export KUBERNETES_SERVICE_HOST=
$ export KUBERNETES_SERVICE_PORT=443
```

11. Dory installation on all hosts
```
$ sudo yum install wget
$ sudo yum install git
$ wget https://github.com/hpe-storage/python-hpedockerplugin/raw/master/dory_installer_v31
$ chmod u+x ./dory_installer_v31
$ sudo ./dory_installer_v31

Execute this command on the master host to run doryd:
$ /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/doryd /etc/kubernetes/admin.conf hpe.com

To verify dory and doryd, the commands below can be used:
ls -l /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/
/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/hpe init
/usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/hpe config
```
**Note: Once the setup is complete, use the login URL and credentials received after the UCP installation to access UCP from a browser.**

### Error & Solution

**Error-1:** FATA[0036] the following required ports are blocked on your host: 6443, 6444, 10250, 12376, 12378 - 12386. Check your firewall settings

**Solution:**
```
$ for i in 179 443 2376 6443 6444 10250 12376 12378 12379 12380 12381 12382 12383 12384 12385 12386 12387 ; do
    echo adding $i to the firewall
    firewall-cmd --add-port=$i/tcp --permanent
done

$ firewall-cmd --reload
$ systemctl restart docker
```

**Error-2:** An error about a "*.pem" file not being found while executing `export KUBECONFIG=/etc/kubernetes/admin.conf`.
**Solution:**
```
Find the required "*.pem" file referenced in admin.conf and update the correct path, or move the file to the path mentioned in admin.conf.
e.g.

mkdir -p /var/lib/docker/ucp/ucp-node-certs/
cp /var/lib/docker/volumes/ucp-node-certs/_data/ca.pem /var/lib/docker/ucp/ucp-node-certs/ca.pem
cp /var/lib/docker/volumes/ucp-node-certs/_data/cert.pem /var/lib/docker/ucp/ucp-node-certs/cert.pem
cp /var/lib/docker/volumes/ucp-node-certs/_data/key.pem /var/lib/docker/ucp/ucp-node-certs/key.pem
```

**Error-3:** Unable to see the worker node in a running state after running the join command on the worker.

**Solution:**
```
Open port on worker:
$ firewall-cmd --add-port=10250/tcp --permanent
$ firewall-cmd --reload

Open ports on master:
$ firewall-cmd --add-port=10250/tcp --permanent
$ firewall-cmd --add-port=10251/tcp --permanent
$ firewall-cmd --add-port=10252/tcp --permanent
$ firewall-cmd --reload

$ systemctl stop firewalld
$ systemctl start firewalld
```

**Error-4:** Pod stuck in the "ContainerCreating" state, with a describe message of "MountVolume.SetUp succeeded for volume "default-token-phdh6"" that later times out.

**Solution:**
```
Other causes are possible, but the primary thing to check is the kubelet service on the worker and master nodes.
Since UCP runs all of its services in container form, find and restart the kubelet container on all nodes.
```

diff --git a/docs/file-permission-owner.md b/docs/file-permission-owner.md
new file mode 100644
index 00000000..324be627
--- /dev/null
+++ b/docs/file-permission-owner.md
@@ -0,0 +1,160 @@
# Enabling File Permissions and Ownership #

This section describes the -o fsMode and -o fsOwner options used with volume creation in detail.

### fsOwner option

To change the ownership of the root directory of the filesystem, the user needs to pass a userId and groupId
with the fsOwner option of the docker volume create command.
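The fsOwner value is a plain `userId:groupId` pair. A minimal sketch of how such a pair can be validated before being passed to docker volume create (the helper name is illustrative, not part of the plugin):

```python
def parse_fs_owner(value):
    """Split a 'userId:groupId' string into an integer (uid, gid) tuple."""
    parts = value.split(":")
    if len(parts) != 2 or not all(p.isdigit() for p in parts):
        raise ValueError("fsOwner must be of the form userId:groupId, e.g. 1001:1001")
    return int(parts[0]), int(parts[1])

print(parse_fs_owner("1001:1001"))  # (1001, 1001)
```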
#### Usage
-o fsOwner=X X is the user id and group id that should own the root directory of the filesystem, in the form of [userId:groupId]

```
Example

$ docker volume create -d hpe --name VOLUME -o size=1 -o fsOwner=1001:1001
VOLUME

$ docker volume ls
DRIVER                  VOLUME NAME
hpe:latest              VOLUME

$ docker volume inspect VOLUME
[
    {
        "Driver": "hpe:latest",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/plugins/d669f6a28ed316f2cac5ef8c876fca66e7dafd63d5273366c7b5ab3638cd1a31/rootfs",
        "Name": "VOLUME",
        "Options": {
            "fsOwner": "1001:1001",
            "size": "1"
        },
        "Scope": "global",
        "Status": {
            "volume_detail": {
                "compression": null,
                "flash_cache": null,
                "fsMode": null,
                "fsOwner": "1001:1001",
                "mountConflictDelay": 30,
                "provisioning": "thin",
                "size": 1
            }
        }
    }
]

```
### fsMode Option

To change the mode of the root directory of the filesystem, the user needs to pass the file mode in octal format with the fsMode option of the docker volume create command.

#### Usage
-o fsMode=X X is 1 to 4 octal digits that represent the file mode to be applied to the root directory of the filesystem.

```
Example

$ docker volume create -d hpe --name VOLUME -o size=1 -o fsMode=0755
VOLUME

$ docker volume ls
DRIVER                  VOLUME NAME
hpe:latest              VOLUME

$ docker volume inspect VOLUME
[
    {
        "Driver": "hpe:latest",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/plugins/d669f6a28ed316f2cac5ef8c876fca66e7dafd63d5273366c7b5ab3638cd1a31/rootfs",
        "Name": "VOLUME",
        "Options": {
            "fsMode": "0755",
            "size": "1"
        },
        "Scope": "global",
        "Status": {
            "volume_detail": {
                "compression": null,
                "flash_cache": null,
                "fsMode": "0755",
                "fsOwner": null,
                "mountConflictDelay": 30,
                "provisioning": "thin",
                "size": 1
            }
        }
    }
]

```
### Mounting a volume having ownership and permission

In order to properly utilize the fsMode and fsOwner options, the user needs to mount the volume to a container using the --user option.
By default, a container runs as the root user; --user provides the ability to run a container as a non-root user.

#### Example
- Creating a volume with fsMode and fsOwner
```
$ docker volume create -d hpe --name VOLUME -o size=1 -o fsMode=0444 -o fsOwner=1001:1001
VOLUME
```
- Inspecting the volume to verify fsMode and fsOwner
```
$ docker volume inspect VOLUME
[
    {
        "Driver": "hpe:latest",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/plugins/d669f6a28ed316f2cac5ef8c876fca66e7dafd63d5273366c7b5ab3638cd1a31/rootfs",
        "Name": "VOLUME",
        "Options": {
            "fsMode": "0444",
            "fsOwner": "1001:1001",
            "size": "1"
        },
        "Scope": "global",
        "Status": {
            "volume_detail": {
                "compression": null,
                "flash_cache": null,
                "fsMode": "0444",
                "fsOwner": "1001:1001",
                "mountConflictDelay": 30,
                "provisioning": "thin",
                "size": 1
            }
        }
    }
]
```
- Mounting the volume to a container with uid:gid 1002:1002 using the --user option
```
$ docker run -it -v VOLUME:/data1 --rm --user 1002:1002 --volume-driver hpe busybox /bin/sh
/ $ ls -lrth
total 40
drwxr-xr-x    2 root     root       12.0K May 22 17:00 bin
drwxr-xr-x    4 root     root        4.0K May 22 17:00 var
drwxr-xr-x    3 root     root        4.0K May 22 17:00 usr
drwxrwxrwt    2 root     root        4.0K May 22 17:00 tmp
drwx------    2 root     root        4.0K May 22 17:00 root
drwxr-xr-x    2 nobody   nogroup     4.0K May 22 17:00 home
dr--r--r--    2 1001     1001        4.0K Jul 30 09:59 data1
drwxr-xr-x    3 root     root        4.0K Jul 30 10:00 etc
dr-xr-xr-x  445 root     root           0 Jul 30 10:00 proc
dr-xr-xr-x   13 root     root           0 Jul 30 10:00 sys
drwxr-xr-x    5 root     root         360 Jul 30 10:00 dev
/ $ id
uid=1002 gid=1002
/ $
```
Here data1 is the mountpoint. Its permissions are exactly what was provided while creating the volume, and the uid and gid of the container are 1002:1002, as provided in the mount command.

**NOTE:** Snapshots and clones retain the same permissions as provided to the parent volume.
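The dr--r--r-- shown for data1 above follows directly from fsMode=0444. A quick, illustrative way to preview how a given fsMode value will appear in ls -l for the mounted directory:

```python
import stat

def mode_string(fs_mode):
    """Render an fsMode value (octal digits) as ls -l shows it for a directory."""
    return stat.filemode(stat.S_IFDIR | int(fs_mode, 8))

print(mode_string("0444"))  # dr--r--r--
print(mode_string("0755"))  # drwxr-xr-x
```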
[<< Back to Usage](usage.md#file-permission-owner)
diff --git a/docs/img/3PAR_FIlePersona_Share_Hierarchy.png b/docs/img/3PAR_FIlePersona_Share_Hierarchy.png
new file mode 100644
index 00000000..1d9803c0
Binary files /dev/null and b/docs/img/3PAR_FIlePersona_Share_Hierarchy.png differ
diff --git a/docs/manual_install_guide_hpe_3par_plugin_with_openshift_kubernetes.md b/docs/manual_install_guide_hpe_3par_plugin_with_openshift_kubernetes.md
index cc994199..472d1c55 100644
--- a/docs/manual_install_guide_hpe_3par_plugin_with_openshift_kubernetes.md
+++ b/docs/manual_install_guide_hpe_3par_plugin_with_openshift_kubernetes.md
@@ -1,4 +1,4 @@
-## Manual Install Guide for Integration of HPE 3PAR Containerized Plugin with RedHat OpenShift / Kubernetes (ADVANCED)
+## Install Guide for Integration of HPE 3PAR Containerized Plugin with RedHat OpenShift / Kubernetes (ADVANCED)

* [Introduction](#introduction)
* [Before you begin](#before)
@@ -10,7 +10,7 @@
---
### Introduction
-This document details the installation steps in order to get up and running quickly with the HPE 3PAR Volume Plug-in for Docker within a Kubernetes 1.7/Openshift 3.7 environment.
+This document details the installation steps in order to get up and running quickly with the HPE 3PAR Volume Plug-in for Docker within a Kubernetes/OpenShift environment.
**We highly recommend using the Ansible playbooks that simplify and automate the install process before using the manual install process.**
[/ansible_3par_docker_plugin/README.md](/ansible_3par_docker_plugin/README.md)

@@ -24,24 +24,18 @@ This document details the installation steps in order to get up and running quic
* OpenShift https://docs.openshift.org/3.7/install_config/install/planning.html

-## Support Matrix for Kubernetes and Openshift 3.7
+## SPOCK for HPE 3PAR Volume Plugin for Docker

-| Platforms                                               | Support for Containerized Plugin | Docker Engine Version | HPE 3PAR OS version              |
-|---------------------------------------------------------|----------------------------------|-----------------------|----------------------------------|
-| Kubernetes 1.6.13                                       | Yes                              | 1.12.6                | 3.2.2 MU6+ P107 & 3.3.1 MU1, MU2 |
-| Kubernetes 1.7.6                                        | Yes                              | 1.12.6                | 3.2.2 MU6+ P107 & 3.3.1 MU1, MU2 |
-| Kubernetes 1.8.9                                        | Yes                              | 17.06                 | 3.2.2 MU6+ P107 & 3.3.1 MU1, MU2 |
-| Kubernetes 1.10.3                                       | Yes                              | 17.03                 | 3.2.2 MU6+ P107 & 3.3.1 MU1, MU2 |
-| OpenShift 3.7 RPM based installation (Kubernetes 1.7.6) | Yes                              | 1.12.6                | 3.2.2 MU6+ P107 & 3.3.1 MU1, MU2 |
+* [Support Matrix for Kubernetes and Openshift](https://spock.corp.int.hpe.com/spock/utility/document.aspx?docurl=Shared%20Documents/hw/3par/3par_volume_plugin_for_docker.pdf)

**NOTE**
- * Managed Plugin is not supported for Kubernetes or Openshift 3.7
+ * Managed Plugin is not supported for Kubernetes or Openshift

- * The install of OpenShift for this paper was done on RHEL 7.4. Other versions of Linux may not be supported.
+ * The install of OpenShift for this paper was done on RHEL 7.x. Other versions of Linux may not be supported.

## Deploying the HPE 3PAR Volume Plug-in in Kubernetes/OpenShift
-Below is the order and steps that will be followed to deploy the **HPE 3PAR Volume Plug-in for Docker (Containerized Plug-in) within a Kubernetes 1.7/OpenShift 3.7** environment.
+Below is the order and steps that will be followed to deploy the **HPE 3PAR Volume Plug-in for Docker (Containerized Plug-in) within a Kubernetes/OpenShift** environment.

Let's get started.

@@ -171,7 +165,7 @@ $ vi hpe.conf
>
>[/docs/config_examples/hpe.conf.sample.3parFC](/docs/config_examples/hpe.conf.sample.3parFC)

-7. Use Docker Compose to deploy the HPE 3PAR Volume Plug-In for Docker (Containerized Plug-in) from the pre-built image available on Docker Hub:
+7. Use Docker Compose to deploy the HPE 3PAR Volume Plug-In for Docker `Containerized Plug-in` from the pre-built image available on Docker Hub:

```
$ cd ~
diff --git a/docs/manual_install_guide_hpe_3par_plugin_with_rancher_kubernetes.md b/docs/manual_install_guide_hpe_3par_plugin_with_rancher_kubernetes.md
new file mode 100644
index 00000000..31a92e96
--- /dev/null
+++ b/docs/manual_install_guide_hpe_3par_plugin_with_rancher_kubernetes.md
@@ -0,0 +1,282 @@
## Manual Install Guide for Integration of HPE 3PAR Containerized Plugin with Rancher Kubernetes (ADVANCED)

* [Introduction](#introduction)
* [Before you begin](#before)
* [Deploying the HPE 3PAR Volume Plug-in in Kubernetes](#deploying)
  * [Configuring etcd](#etcd)
  * [Installing the HPE 3PAR Volume Plug-in](#installing)
* [Usage](#usage)
---

### Introduction
This document details the installation steps in order to get up and running quickly with the HPE 3PAR Volume Plug-in for Docker within a Rancher Kubernetes environment on SLES.

## Before you begin
* You need to have a basic knowledge of containers

**NOTE**
 * Managed Plugin is not supported for Kubernetes

## Deploying the HPE 3PAR Volume Plug-in in Kubernetes

Below is the order and steps that will be followed to deploy the **HPE 3PAR Volume Plug-in for Docker (Containerized Plug-in) within a Kubernetes** environment.

Let's get started.
For this installation process, log in as **root**:

```bash
$ sudo su -
```

### Configuring etcd

**Note:** For this quick start guide, we will be creating a single-node **etcd** deployment, as shown in the example below, but for production it is **recommended** to deploy a High Availability **etcd** cluster.

For more information on etcd and how to set up an **etcd** cluster for High Availability, please refer to:
[/docs/advanced/etcd_cluster_setup.md](/docs/advanced/etcd_cluster_setup.md)

1. Export the Kubernetes/OpenShift `Master` node IP address
```
$ export HostIP=""
```

2. Run the following to create the `etcd` container.

>**NOTE:** This etcd instance is separate from the **etcd** deployed by Kubernetes/OpenShift and is **required** for managing the **HPE 3PAR Volume Plug-in for Docker**. We need to modify the default ports (**2379, 4001, 2380**) of the **new etcd** instance to prevent conflicts. This allows two instances of **etcd** to safely run in the environment.

```
sudo docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 40010:40010 \
-p 23800:23800 -p 23790:23790 \
--name etcd_hpe quay.io/coreos/etcd:v2.2.0 \
-name etcd0 \
-advertise-client-urls http://${HostIP}:23790,http://${HostIP}:40010 \
-listen-client-urls http://0.0.0.0:23790,http://0.0.0.0:40010 \
-initial-advertise-peer-urls http://${HostIP}:23800 \
-listen-peer-urls http://0.0.0.0:23800 \
-initial-cluster-token etcd-cluster-1 \
-initial-cluster etcd0=http://${HostIP}:23800 \
-initial-cluster-state new
```

### Installing the HPE 3PAR Volume Plug-in

1. Rebuild the initrd, otherwise the system may not boot anymore

```
$ dracut --force --add multipath
```

2. Configure /etc/multipath.conf

```
$ multipath -t > /etc/multipath.conf
```

3.
Enable the multipathd services

```
$ systemctl enable multipathd
$ systemctl start multipathd
```
>Note: To read more about the multipathd service config on SLES12, refer to https://www.suse.com/documentation/sles-12/stor_admin/data/sec_multipath_config.html#sec_multipath_configuration_start and https://www.suse.com/documentation/sles-12/stor_admin/data/sec_multipath_conf_file.html

4. Set up the Docker plugin configuration file

```
$ mkdir -p /etc/hpedockerplugin/
$ cd /etc/hpedockerplugin
$ vi hpe.conf
```

>Copy the contents from the sample hpe.conf file, based on your storage configuration for either iSCSI or Fibre Channel:

>##### HPE 3PAR iSCSI:
>
>[/docs/config_examples/hpe.conf.sample.3parISCSI](/docs/config_examples/hpe.conf.sample.3parISCSI)


>##### HPE 3PAR Fibre Channel:
>
>[/docs/config_examples/hpe.conf.sample.3parFC](/docs/config_examples/hpe.conf.sample.3parFC)

> Note: Also set mount_prefix in hpe.conf to /var/lib/rancher/
```
mount_prefix = /var/lib/rancher/
```

5. Use Docker Compose to deploy the HPE 3PAR Volume Plug-In for Docker (Containerized Plug-in) from the pre-built image available on Docker Hub:

```
$ cd ~
$ vi docker-compose.yml
```

> Copy the content below into the `docker-compose.yml` file

```
hpedockerplugin:
  image: hpestorage/legacyvolumeplugin:3.1
  container_name: plugin_container
  net: host
  restart: always
  privileged: true
  volumes:
     - /dev:/dev
     - /run/lock:/run/lock
     - /var/lib:/var/lib
     - /var/run/docker/plugins:/var/run/docker/plugins:rw
     - /etc:/etc
     - /root/.ssh:/root/.ssh
     - /sys:/sys
     - /root/plugin/certs:/root/plugin/certs
     - /sbin/iscsiadm:/sbin/ia
     - /lib/modules:/lib/modules
     - /lib64:/lib64
     - /var/run/docker.sock:/var/run/docker.sock
     - /var/lib/rancher:/var/lib/rancher:rshared
     - /usr/lib64:/usr/lib64
```

>Save and exit

> **NOTE:** Before we start the HPE 3PAR Volume Plug-in container, make sure etcd is running.
>
>Use the Docker command: `docker ps -a | grep -i etcd_hpe`

6. Start the HPE 3PAR Volume Plug-in for Docker (Containerized Plug-in)

>Make sure you are in the location of the `docker-compose.yml` file

```
$ docker-compose up -d
```

>**NOTE:** In case you are missing `docker-compose`, see https://docs.docker.com/compose/install/#install-compose
>
```
$ curl -x 16.85.88.10:8080 -L https://github.com/docker/compose/releases/download/1.21.0/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
```
>
>Visit https://docs.docker.com/compose/install/#install-compose for the latest curl details
>
>Test the installation:
```
$ docker-compose --version
docker-compose version 1.21.0, build 1719ceb
```
> Re-run step 6

7. Success, you should now be able to test docker volume operations like:

```
$ docker volume create -d hpe --name sample_vol -o size=1
```

8. Start the Rancher Server

```
$ docker run -d --restart=unless-stopped -p 8080:80 -p 8443:443 rancher/rancher:v2.1.6
```
> Launch a browser, open https://:8443/ and set the password
> Note: The Rancher server can be started on any host; make sure it has connectivity to the machines which will be part of the cluster.

9. Create a cluster with the option "From my own existing nodes"

> Wait for the cluster to become active

10. Create the file ~/.kube/config. Navigate to **Cluster -> Kubeconfig file** and copy the file content into ~/.kube/config

```
$ vi ~/.kube/config
```

11. Add the kubectl binary on the host to run kubectl commands

```
$ docker ps | grep rancher-agent
$ docker cp :/usr/bin/kubectl /tmp
$ cp /tmp/kubectl /usr/bin/
$ chmod +x /usr/bin/kubectl
```
> SLES doesn't have a kubectl binary by default to install/execute. To verify that kubectl is installed correctly, run
```
$ kubectl version
```
> This must show correct output with client and server versions.
Same can be verified from **Cluster -> Launch kubectl** -> `kubectl version`

12. Install the HPE 3PAR FlexVolume driver

```
$ wget https://github.com/hpe-storage/python-hpedockerplugin/raw/master/dory_installer_v31
$ chmod u+x ./dory_installer_v31
$ sudo ./dory_installer_v31
```

13. Confirm the HPE 3PAR FlexVolume driver installed correctly

```
$ ls -l /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/
-rwxr-xr-x. 1 docker docker 47046107 Apr 20 06:11 doryd
-rwxr-xr-x. 1 docker docker  6561963 Apr 20 06:11 hpe
-rw-r--r--. 1 docker docker      237 Apr 20 06:11 hpe.json
```

14. Copy the HPE 3PAR FlexVolume dynamic provisioner to the volume plugin directory used by the kubelet container in Rancher

```
$ cp -R /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe/ /var/lib/kubelet/volumeplugins/
```

>For more information on the HPE FlexVolume driver, please visit this link:
>
>https://github.com/hpe-storage/dory/

15. Repeat steps 1-14 on all worker nodes. **Steps 8, 9 and 11 only need to be run on the Master node.**

>**Upon successful completion of the above steps, you should have a working installation of Rancher Kubernetes integrated with the HPE 3PAR Volume Plug-in for Docker on SLES**

## Node addition to cluster
To add nodes to the cluster, edit the cluster on the Rancher Server and select the roles the node will have (a node can have the control-plane, etcd or worker role). This will generate a docker run command on the Rancher server. Copy the command and run it on the desired node.
The command looks like:
```
sudo docker run -d --privileged --restart=unless-stopped --net=host -v /etc/kubernetes:/etc/kubernetes -v /var/run:/var/run rancher/rancher-agent:v2.1.6 --server https://192.168.68.32:8443 --token j2vmt5b72cdz8zk4h88dd5s6px5h8jjq76j9675mfh4rvbmhmwmkd4 --ca-checksum dde5d7384baa0cf1dcfa3de1e99b5bff3c5317c7bda358807e308880cb60a999 --worker
```
> Note: Here, --worker means the node has only the worker role assigned.
Similarly, other roles can be assigned to a node. Refer to https://rancher.com/docs/rancher/v2.x/en/cluster-provisioning/production/ for more details.

## Containerized build
Building doryd in a container:
```
docker build -t hpe3par_doryd_sles:latest https://github.com/hpe-storage/python-hpedockerplugin/raw/master/examples/Dockerfile_SLES
```
> Note: The doryd container is already available on Docker Hub. One can use the image as-is and run doryd as a DaemonSet.

# Running
Doryd is available on Docker Hub, and an [example DaemonSet specification](../examples/ds-doryd-sles.yml) is available.

## Prerequisites
The `doryd` binary needs access to the cluster via a kubeconfig file. The location may vary between distributions. The DaemonSet spec will consider `/root/.kube/config` on SLES12. This file needs to exist on all nodes prior to deploying the DaemonSet.

The default provisioner name is prefixed with `hpe.com` and will listen for Persistent Volume Claims that ask for Storage Classes with `provisioner: hpe.com/hpe`. Hence it's important that the FlexVolume driver name matches what you name your provisioner.

A custom `doryd` command line could look like this:
```
doryd /root/.kube/config hpe.com
```

There should then be a Dory FlexVolume driver named `hpe.com/hpe`, and Storage Classes should use `provisioner: hpe.com/hpe`.

## kubectl
Deploying the default DaemonSet out-of-the-box can be accomplished with:
```
kubectl apply -f https://raw.githubusercontent.com/hpe-storage/python-hpedockerplugin/master/examples/ds-doryd-sles.yml
```
> Note: This will run doryd as a DaemonSet, which runs as a pod on each node.
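The match between the provisioner name and the FlexVolume driver described above follows the `vendor~driver` directory convention used for FlexVolume drivers; a small illustrative helper (not part of doryd) showing the mapping:

```python
def flexvolume_exec_dir(provisioner,
                        plugin_root="/usr/libexec/kubernetes/kubelet-plugins/volume/exec"):
    """Map a provisioner name like 'hpe.com/hpe' to its FlexVolume driver directory."""
    vendor, driver = provisioner.split("/", 1)
    return "%s/%s~%s" % (plugin_root, vendor, driver)

print(flexvolume_exec_dir("hpe.com/hpe"))
# /usr/libexec/kubernetes/kubelet-plugins/volume/exec/hpe.com~hpe
```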
## Usage

For usage go to:

[Usage](/docs/usage.md)
diff --git a/docs/multi-array-feature.md b/docs/multi-array-feature.md
new file mode 100644
index 00000000..512648d2
--- /dev/null
+++ b/docs/multi-array-feature.md
@@ -0,0 +1,86 @@
# Managing volumes using multiple backends #
Up to the 2.1 release of the Docker Volume Plugin, only one 3PAR array was supported. To support multiple arrays in parallel, we are introducing support for more than one 3PAR array via a concept called a "backend".

Each "backend" identifies a set of configuration details for a 3PAR array.
Until the 2.1 release, there was a single default backend named "DEFAULT".

With the 2.2 release, we can have more than one backend, as shown in the example below.

This example configuration has two backends: one is "DEFAULT", the other is "3par1". Each backend name is enclosed in square brackets.

/etc/hpedockerplugin/hpe.conf
```
[DEFAULT]
ssh_hosts_key_file = /root/.ssh/known_hosts


host_etcd_ip_address = 192.168.68.40
host_etcd_port_number = 2379


# OSLO based Logging level for the plugin.
logging = DEBUG

# Enable 3PAR client debug messages
hpe3par_debug = False

# Suppress Requests Library SSL warnings
suppress_requests_ssl_warnings = True

hpedockerplugin_driver = hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver

hpe3par_api_url = https://15.212.192.252:8080/api/v1
hpe3par_username = 3paradm
hpe3par_password = 3pardata
san_ip = 15.212.192.252
san_login = 3paradm
san_password = 3pardata
hpe3par_cpg = FC_r1
hpe3par_iscsi_ips = 15.212.192.112
hpe3par_iscsi_chap_enabled = False

# iscsi_ip_address = 15.213.64.237

# hpe3par_iscsi_chap_enabled = False
use_multipath = True
enforce_multipath = True

[3par1]

ssh_hosts_key_file = /root/.ssh/known_hosts
hpedockerplugin_driver = hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver

hpe3par_api_url = https://192.168.67.7:8080/api/v1
hpe3par_username = 3paradm
hpe3par_password = 3pardata
san_ip = 192.168.67.7
san_login = 3paradm
san_password = 3pardata
hpe3par_cpg = docker_cpg
hpe3par_iscsi_ips = 192.168.68.201, 192.168.68.203
use_multipath = True
enforce_multipath = True

```

A new option, `-o backend=|backend_name|`, is introduced in 2.2; it is appended to the regular docker volume create CLI command alongside options such as -o importVol=, etc.

E.g., to create a volume named "db1_3par1" on the backend identified by section "3par1":

`` docker volume create -d hpe --name db1_3par1 -o size=12 -o provisioning=thin -o backend=3par1 ``

Note: Omitting the `-o backend=` option results in the volume being created on the DEFAULT backend.

When snapshots/clones are created, the original volume's backend is looked up, and the snapshot/clone is created on that same backend.

Similarly, `-o importVol=X` combined with `-o backend=|backend_name|` imports the volume from a particular backend.
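Since each backend is simply an INI section in hpe.conf, the set of available backends can be enumerated with a standard INI parser; a minimal sketch (illustrative, not the plugin's actual parser; configparser treats [DEFAULT] specially, so it is re-added explicitly):

```python
import configparser

# Trimmed-down stand-in for the hpe.conf shown above.
SAMPLE_HPE_CONF = """\
[DEFAULT]
host_etcd_ip_address = 192.168.68.40
host_etcd_port_number = 2379

[3par1]
hpe3par_cpg = docker_cpg
"""

def list_backends(conf_text):
    """Return the backend names defined in an hpe.conf-style INI file."""
    parser = configparser.ConfigParser()
    parser.read_string(conf_text)
    # sections() omits [DEFAULT], which is always a valid backend.
    return ["DEFAULT"] + parser.sections()

print(list_backends(SAMPLE_HPE_CONF))  # ['DEFAULT', '3par1']
```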
+ +Only `docker volume ls` queries all the volumes created across all backends. + +`docker volume rm ` determines the backend automatically and removes the volume from the backend where it was created. + + +[<< Back to Usage](usage.md#multi-array-feature) diff --git a/docs/multipath.md b/docs/multipath.md new file mode 100644 index 00000000..9c9d22c1 --- /dev/null +++ b/docs/multipath.md @@ -0,0 +1,85 @@ +## Multipath Support + +This section describes the steps that should be performed to properly enable multipath support with the HPE 3PAR StoreServ and the HPE Docker Volume Plugin. + +### Multipath Settings + +When multipathing is required with the HPE 3PAR StoreServ, you must update the multipath.conf, iscsid.conf, docker-compose.yml, and hpe.conf files as outlined below. The procedures below are only examples; please review the appropriate HPE 3PAR StoreServ Implementation Guide and the [Single Point of Connectivity Knowledge (SPOCK)](https://www.hpe.com/storage/spock) website for updated support requirements. + +> ##### Note +> Although the procedure below requires multipath.conf and iscsid.conf files to be created, neither multipath-tools nor open-iscsi should be installed on the container host. If either exists, please uninstall it, as it will cause unexpected behavior with both volume mount and unmount operations. + +#### /etc/multipath.conf + +You can find details on how to properly configure multipath.conf in the [HPE 3PAR Red Hat Enterprise Linux and Oracle Linux Implementation Guide](http://h20565.www2.hpe.com/hpsc/doc/public/display?docId=c04448818). + +Below is an example multipath.conf file. Please review the HPE 3PAR Red Hat Enterprise Linux and Oracle Linux Implementation Guide for any required updates.
+ +``` +defaults { + polling_interval 10 + max_fds 8192 + } + + devices { + device { + vendor "3PARdata" + product "VV" + no_path_retry 18 + features "0" + hardware_handler "0" + path_grouping_policy multibus + #getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n" + path_selector "round-robin 0" + rr_weight uniform + rr_min_io_rq 1 + path_checker tur + failback immediate + } + } +``` + +#### /etc/iscsi/iscsid.conf + +You can find details on how to properly configure iscsid.conf in the [HPE 3PAR Red Hat Enterprise Linux and Oracle Linux Implementation Guide](http://h20565.www2.hpe.com/hpsc/doc/public/display?docId=c04448818). + +Change the following iSCSI parameters in /etc/iscsi/iscsid.conf. Please review the HPE 3PAR Red Hat Enterprise Linux and Oracle Linux Implementation Guide for any required updates. + +``` +node.startup = automatic +node.conn[0].startup = automatic +node.session.timeo.replacement_timeout = 10 +node.conn[0].timeo.noop_out_interval = 10 +``` + +#### docker-compose.yml + +The files modified above need to be made available to the HPE Docker Volume Plugin container. Below is an example docker-compose.yml file. Notice the addition of /etc/iscsi/iscsid.conf and /etc/multipath.conf. + +``` +hpedockerplugin: + image: hub.docker.hpecorp.net/hpe-storage/hpedockerplugin: + container_name: hpeplugin + net: host + privileged: true + volumes: + - /dev:/dev + - /run/docker/plugins:/run/docker/plugins + - /lib/modules:/lib/modules + - /var/lib/docker/:/var/lib/docker + - /etc/hpedockerplugin/data:/etc/hpedockerplugin/data:shared + - /etc/iscsi/initiatorname.iscsi:/etc/iscsi/initiatorname.iscsi + - /etc/hpedockerplugin:/etc/hpedockerplugin + - /var/run/docker.sock:/var/run/docker.sock + - /etc/iscsi/iscsid.conf:/etc/iscsi/iscsid.conf + - /etc/multipath.conf:/etc/multipath.conf +``` + +#### /etc/hpedockerplugin/hpe.conf + +Lastly, make the following additions to the /etc/hpedockerplugin/hpe.conf file to enable multipathing.
+ + ``` +use_multipath = True +enforce_multipath = True +``` diff --git a/docs/peer-persistence-based-replication.md b/docs/peer-persistence-based-replication.md new file mode 100644 index 00000000..2388d5fc --- /dev/null +++ b/docs/peer-persistence-based-replication.md @@ -0,0 +1,164 @@ +# Peer Persistence based replication # +The Peer Persistence feature of 3PAR provides a non-disruptive disaster recovery solution wherein, in +case of a disaster, the hosts automatically and seamlessly get connected to the secondary +array and start seeing the VLUNs which were earlier exported by the failed array. + +With Peer Persistence, when a Docker user mounts replicated volume(s), the HPE 3PAR Docker +Plugin creates VLUNs corresponding to the replicated volume(s) on BOTH +arrays. However, they are served only by the active array, with the other array in +standby mode. When the corresponding RCG is switched over or the primary array goes down, +the secondary array takes over and makes the VLUN(s) available. After switchover, the +previously active array goes into standby mode while the other array becomes active. + +**Pre-requisites** +1. Remote copy setup is up and running +2. Quorum Witness is running, with primary and secondary arrays registered with it +3. Multipath daemon is running, so that non-disruptive, seamless mounting of VLUN(s) +on the host is possible. + + +## Configuring replication enabled backend +Compared to the Active/Passive configuration, in Peer Persistence the ONLY discriminator +is the presence of the *quorum_witness_ip* sub-field under the *replication_device* field - +the rest of the fields are equally applicable. + +**For FC Host** + +```sh +host_etcd_port_number= +hpe3par_username= +hpe3par_password= +hpe3par_cpg= +hpedockerplugin_driver=hpedockerplugin.hpe.hpe_3par_fc.HPE3PARFCDriver +logging=DEBUG +san_ip= +san_login= +san_password= +host_etcd_ip_address=[:PORT1[,IP2[:PORT2]][,IP3[:PORT3]]...]
+hpe3par_api_url=https://:8080/api/v1 +replication_device = backend_id:, + quorum_witness_ip:, + replication_mode:synchronous, + cpg_map::, + snap_cpg_map:: + hpe3par_api_url:https://:8080/api/v1, + hpe3par_username:<3PAR-Username>, + hpe3par_password:<3PAR-Password>, + san_ip:<3PAR-SAN-IP>, + san_login:<3PAR-SAN-Username>, + san_password:<3PAR-SAN-Password> +``` + +**Note:** + +1. *replication_mode* MUST be set to *synchronous* as a pre-requisite for Peer +Persistence based replication. +2. Both *cpg_map* and *snap_cpg_map* in the *replication_device* section are mandatory. +3. If the password is encrypted for the primary array, it must be encrypted for the secondary array +as well, using the same *pass-phrase*. + +**For ISCSI Host** +```sh +host_etcd_port_number= +hpe3par_username= +hpe3par_password= +hpe3par_cpg= +hpedockerplugin_driver=hpedockerplugin.hpe.hpe_3par_iscsi.HPE3PARISCSIDriver +logging=DEBUG +san_ip= +san_login= +san_password= +host_etcd_ip_address=[:PORT1[,IP2[:PORT2]][,IP3[:PORT3]]...] +hpe3par_api_url=https://:8080/api/v1 +hpe3par_iscsi_ips=[,ISCSI_IP2,ISCSI_IP3...] +replication_device = backend_id:, + quorum_witness_ip:, + replication_mode:synchronous, + cpg_map::, + snap_cpg_map:: + hpe3par_api_url:https://:8080/api/v1, + hpe3par_username:<3PAR-Username>, + hpe3par_password:<3PAR-Password>, + san_ip:<3PAR-SAN-IP>, + san_login:<3PAR-SAN-Username>, + san_password:<3PAR-SAN-Password> + hpe3par_iscsi_ips=[;ISCSI_IP2;ISCSI_IP3...] +``` +*Note*: + +1. Both *cpg_map* and *snap_cpg_map* in the *replication_device* section are mandatory. +2. *hpe3par_iscsi_ips* MUST be defined upfront for both source and target arrays. +3. *hpe3par_iscsi_ips* can be a single iSCSI IP or a list of iSCSI IPs delimited by +semicolons. This delimiter applies to the *replication_device* section ONLY. +4. If the password is encrypted for the primary array, it must be encrypted for the secondary array +as well, using the same *pass-phrase*. +5.
*replication_mode* MUST be set to *synchronous* as a pre-requisite for Peer +Persistence based replication. + +## Managing Replicated Volumes ## + +### Create replicated volume ### +This command allows creation of a replicated volume, along with RCG creation if the RCG +does not exist on the array. The newly created volume is then added to the RCG. +An existing RCG name can be used to add multiple newly created volumes to it. +```sh +docker volume create -d hpe --name -o replicationGroup=<3PAR_RCG_Name> [Options...] +``` +where, +- *replicationGroup*: Name of a new or existing replication copy group on the 3PAR array + +One or more of the following *Options* can be specified additionally: +1. *size:* Size of the volume in GBs +2. *provisioning:* Provisioning type of the volume to be created. +Valid values are thin, dedup and full, with thin as the default. +3. *backend:* Name of the backend to be used for creation of the volume. If not +specified, "DEFAULT" is used, provided it is initialized successfully. +4. *mountConflictDelay:* Waiting period in seconds to be used during a mount operation +of the volume being created. This applies when this volume is mounted on, say, Node1 and +Node2 wants to mount it. In such a case, Node2 will wait for *mountConflictDelay* +seconds for Node1 to unmount the volume. If even after this wait Node1 doesn't unmount +the volume, then Node2 forcefully removes the VLUNs exported to Node1 and then goes ahead +with the mount process. +5. *compression:* This flag specifies if the volume is a compressed volume. Allowed +values are *True* and *False*. + +#### Example #### + +**Create a replicated volume of size 1GB with a non-existing RCG using backend "ActivePassiveRepBackend"** +```sh +$ docker volume create -d hpe --name Test_RCG_Vol -o replicationGroup=Test_RCG -o size=1 -o backend=ActivePassiveRepBackend +``` +This will create volume Test_RCG_Vol along with the Test_RCG remote copy group. The volume +will then be added to Test_RCG.
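A replication-enabled backend can also serve plain, non-replicated volumes: simply omit the *replicationGroup* option. The volume and backend names below are illustrative:

```sh
$ docker volume create -d hpe --name plain_vol -o size=1 -o backend=ActivePassiveRepBackend
```

Such a volume is created on the primary array of that backend but is not added to any remote copy group.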
+ +Please note that, in case of a failure at any stage of the operation, previous actions +are rolled back. +E.g. if, for some reason, volume Test_RCG_Vol could not be added to Test_RCG, the volume +is removed from the array. + + +### Switchover a remote copy group ### +There is no single Docker command or option to support switchover of an RCG from one +array to the other. Instead, the following 3PAR command must be executed. + +```sh +$ setrcopygroup switchover +``` +where: +- *RCG_Name* is the name of the remote copy group on the array where the above command is executed. + +Once the switchover is done, the multipath daemon takes care of seamlessly mounting the volume(s) from the +switched-over array. + +### Delete replicated volume ### +This command allows a user to delete a replicated volume. If this is the last volume +present in the RCG, then the RCG is also removed from the backend. +```sh +docker volume rm +``` + +**See also:** +[Active/Passive Based Replication](active-passive-based-replication.md) + + +[<< Back to Replication: HPE 3PAR Docker Storage Plugin](replication.md) diff --git a/docs/quick_start_guide.md b/docs/quick_start_guide.md index 30bc4120..efc266d4 100644 --- a/docs/quick_start_guide.md +++ b/docs/quick_start_guide.md @@ -1,8 +1,11 @@ -# Quick Start Guide to installing the HPE 3PAR Volume Plug-in for Docker +# Deployment methods for the HPE 3PAR Volume Plug-in for Docker +## The HPE 3PAR Docker Volume Plug-in can be deployed using the following methods: + +* [Ansible playbook to deploy the HPE 3PAR Volume Plug-in for Docker (RECOMMENDED)](/ansible_3par_docker_plugin) * [Quick Start Guide for Standalone Docker environments](#docker) * [Quick Start Guide for Kubernetes/OpenShift environments](#k8) -* [Usage](#usage) + --- @@ -16,22 +19,6 @@ Steps for Deploying the Managed Plugin (HPE 3PAR Volume Plug-in for Docker) in a ### **Prerequisite packages to be installed on host OS:** -#### Ubuntu 16.04 or later: - - -1.
Install the iSCSI (optional if you aren't using iSCSI) and Multipath packages -``` -$ sudo apt-get install -y open-iscsi multipath-tools -``` - -2. Enable the **iscsid** and **multipathd** services -``` -$ systemctl daemon-reload -$ systemctl restart open-iscsi multipath-tools docker -``` - - - #### RHEL/CentOS 7.3 or later: 1. Install the iSCSI (optional if you aren't using iSCSI) and Multipath packages @@ -99,6 +86,27 @@ $ systemctl daemon-reload $ systemctl restart docker.service ``` +#### SLES12 SP3 or later: + +1. Rebuild the initrd; otherwise the system may no longer boot + +``` +$ dracut --force --add multipath +``` + +2. Configure `/etc/multipath.conf` + +``` +$ multipath -t > /etc/multipath.conf +``` + +3. Enable the iscsid and multipathd services + +``` +$ systemctl enable multipathd +$ systemctl start multipathd +``` + Now the systems are ready to set up the HPE 3PAR Volume Plug-in for Docker. @@ -122,14 +130,14 @@ sudo docker run -d -v /usr/share/ca-certificates/:/etc/ssl/certs -p 4001:4001 \ -name etcd0 \ -advertise-client-urls http://${HostIP}:2379,http://${HostIP}:4001 \ -listen-client-urls http://0.0.0.0:2379,http://0.0.0.0:4001 \ --initial-advertise-peer-urls http://${HostIP}:23800 \ +-initial-advertise-peer-urls http://${HostIP}:2380 \ -listen-peer-urls http://0.0.0.0:2380 \ -initial-cluster-token etcd-cluster-1 \ -initial-cluster etcd0=http://${HostIP}:2380 \ -initial-cluster-state new ``` -### HPE 3PAR Volume Managed Plug-in config +### HPE 3PAR Volume `Managed Plug-in` config 1. Add HPE 3PAR into `~/.ssh/known_hosts` @@ -165,15 +173,6 @@ Before enabling the plugin, validate the following: 3.
Run the following commands to install the plugin: -**Ubuntu** - ->version=2.1 - -``` -$ docker plugin install store/hpestorage/hpedockervolumeplugin: --disable --alias hpe -$ docker plugin set hpe certs.source=/tmp -$ docker plugin enable hpe -``` **RHEL/CentOS** @@ -199,7 +198,7 @@ There are two methods for installing the HPE 3PAR Volume Plug-in for Docker for 1. [Ansible playbook to deploy the HPE 3PAR Volume Plug-in for Docker (**RECOMMENDED**)](/ansible_3par_docker_plugin/README.md) -2. [Manual install HPE 3PAR Volume Plug-in for Docker](/docs/manual_install_guide_hpe_3par_plugin_with_openshift_kubernetes.md) +2. [Install Guide for HPE 3PAR Volume Plug-in for Docker](/docs/manual_install_guide_hpe_3par_plugin_with_openshift_kubernetes.md) ## Usage diff --git a/docs/replication.md b/docs/replication.md new file mode 100644 index 00000000..8d66bf2b --- /dev/null +++ b/docs/replication.md @@ -0,0 +1,48 @@ +# Replication: HPE 3PAR Docker Storage Plugin # + +This feature allows Docker users to create replicated volume(s) using the +HPE 3PAR Storage Plugin. The Docker CLI does not directly support +replication; the HPE 3PAR Storage Plugin extends Docker's "volume create" +command interface via an optional parameter to make this possible. + +The HPE 3PAR Storage Plugin assumes that an already working 3PAR Remote +Copy setup is present. The plugin has to be configured with the +details of this setup in a configuration file called hpe.conf. + +On the 3PAR front, core to the idea of replication is the concept of a +remote copy group (RCG), which aggregates all the volumes that need to +be replicated simultaneously to a remote site. + +The HPE 3PAR Storage Plugin extends Docker's "volume create" command via the +optional parameter 'replicationGroup'. This represents the name of an +RCG on 3PAR, which may or may not already exist. If it does not exist, it gets +created and the new volume is added to it. If it already exists, the +newly created volume is added to the existing RCG.
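As a hedged illustration of this extended interface (the volume and RCG names below are made up for the example):

```
$ docker volume create -d hpe --name repl_vol -o replicationGroup=SiteA_RCG -o size=10
```

If SiteA_RCG does not yet exist on the array, it is created and repl_vol is added to it; otherwise repl_vol is simply added to the existing group.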
+ +The 'replicationGroup' flag is effective only if the backend in +the configuration file hpe.conf has been configured as a +replication-enabled backend. Multiple backends, replication-enabled or not, can be +configured in any combination. + +**Note:** + +1. Users can create non-replicated/standard volume(s) using a +replication-enabled backend. In order to do so, 'replicationGroup' must +not be specified in the create volume command. +2. For a backend that is NOT replication-enabled, specifying 'replicationGroup' +is incorrect and results in an error. +3. For a given RCG, mixed transport protocols are not supported. E.g. if volumes v1, v2 and v3 + are part of an RCG called TestRCG and are exported from the primary array via + the FC protocol, then they CANNOT be exported from the secondary array via iSCSI (after failover), + and vice versa. +4. A cold remote site (e.g. iSCSI IPs on the remote array not configured) is not supported. +For an iSCSI based transport protocol, the iSCSI IPs on both primary and secondary arrays +MUST be defined upfront in hpe.conf. + +The HPE 3PAR Docker Storage Plugin supports two types of replication, the details of +which can be found at: +1. [Active/Passive Based Replication](active-passive-based-replication.md) and +2. [Peer Persistence Based Replication](peer-persistence-based-replication.md). + + +[<< Back to Usage](usage.md#replication) diff --git a/docs/secret-management.md b/docs/secret-management.md new file mode 100644 index 00000000..aa459687 --- /dev/null +++ b/docs/secret-management.md @@ -0,0 +1,82 @@ +# Secrets Management # + +This section describes the steps that need to be taken in order to use secrets in encrypted format rather than plain text. + +### Using Encryption utility + +To encrypt the passwords, users need to use a Python package, "py-3parencryptor".
+This package can be installed on a Linux machine using the command below: + +```` +$ pip install py-3parencryptor + +```` + +#### Pre-requisite + +- hpe.conf should be present under the /etc/hpedockerplugin/ path, with etcd details in it. +- etcd should be running +- the 3PAR plugin should be disabled + +#### About the package + +Once py-3parencryptor is installed on the machine, it can be used via the hpe3parencryptor command, as shown below. +You have to use the same passphrase to encrypt all the passwords for a backend. +There are four possible passwords: +1. hpe3par_password +2. san_password +3. hpe3par_password for the replication array +4. san_password for the replication array. + +After generating the encrypted password, replace the plain-text password with it. + +```` +#hpe3parencryptor -a + +Example: + +#hpe3parencryptor -a "@123#" "password" +SUCCESSFUL: Encrypted password: +CB1E8Je1j8= + +```` +#### Add the encrypted password in /etc/hpedockerplugin/hpe.conf + +Use the encrypted password generated by the utility as hpe3par_password in hpe.conf. + +Enable the plugin now. + +#### Running the utility with the -d option + If a user wants to remove the current encrypted password and replace it with plain text or a new encrypted password, +the current password must first be deleted using the -d option of the utility. + +```` +# hpe3parencryptor -d +Key Successfully deleted +```` +## For multiple backends + +### Encrypting a specific backend +- When multiple backends are present in the configuration file (hpe.conf), the utility can be used to encrypt passwords on a per-backend basis. +- With the --backend option, users can specify the backend whose passwords they want to encrypt. + +```` +#hpe3parencryptor -a --backend + +```` +### Removing the encrypted password from a specific backend + +Users can remove the encrypted password of a specific backend by using the utility with +the additional optional --backend argument to -d.
+ +```` +# hpe3parencryptor -d --backend + +```` + +#### Note : +```` +If --backend is not used, then in both cases (-a and -d) the package uses the default backend for performing the operations. +```` + + +[<< Back to Usage](usage.md#) diff --git a/docs/system-reqs.md b/docs/system-reqs.md index 9f83e651..5b917c66 100644 --- a/docs/system-reqs.md +++ b/docs/system-reqs.md @@ -6,6 +6,7 @@ Versions starting at v1.0 have been tested and are supported on the following Li * Ubuntu 16.04 * RHEL 7.x * CentOS 7.x +* SLES 12 SP3 >**NOTE:** Although the plugin software is supported on the listed Linux distributions, you should consult the Hewlett Packard Enterprise Single Point of Connectivity Knowledge (SPOCK) for HPE Storage Products for specific details about which operating systems are supported by HPE 3PAR StoreServ and StoreVirtual Storage products (https://www.hpe.com/storage/spock). > diff --git a/docs/troubleshooting.md b/docs/troubleshooting.md index 72a4c174..04d5e585 100644 --- a/docs/troubleshooting.md +++ b/docs/troubleshooting.md @@ -32,6 +32,11 @@ For setting up secured etcd cluster, refer this doc: Sometimes it is useful to get more verbose output from the plugin. In order to do this one must change the logging property to be one of the following values: INFO, WARN, ERROR, DEBUG. +To enable logging of REST calls made from the Volume Plugin to the 3PAR array, use the flag below: +``` +hpe3par_debug=True in /etc/hpedockerplugin/hpe.conf +``` + +#### Logs for the plugin The plugin logs provide useful information for further troubleshooting of issues/errors. On Ubuntu, grep for the `plugin id` in the logs, where the `plugin id` can be identified by: @@ -41,3 +46,52 @@ Logs of plugin provides useful information on troubleshooting issue/error furthe Plugin logs will be available in system logs (e.g. `/var/log/syslog` on Ubuntu). On RHEL and CentOS, issue `journalctl -f -u docker.service` to get the plugin logs.
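On RHEL/CentOS the plugin messages can be filtered out of the Docker service journal; the `hpe` match string below is an assumption based on the plugin alias used earlier in this guide, so adjust it to your own alias:

```
$ journalctl -f -u docker.service | grep -i hpe
```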
+ +#### Removing Dangling LUN + +If no volumes are in the mounted state and `lsscsi` still lists any 3PAR data volumes, it is recommended to run the following script to clean up the dangling LUNs. + +``` +for i in `lsscsi | grep 3PARdata | awk '{print $6}'| grep -v "-"| cut -d"/" -f3`; do echo $i; echo 1 > /sys/block/$i/device/delete; done +rescan-scsi-bus.sh -r -f -m +``` + +### Collecting necessary Logs + +If any issue is found, please collect the following logs from your Docker host. + +``` +v3.1 onwards +/etc/hpedockerplugin/3pardcv.log +``` +#### Managed Plugin +For any version older than v3.1: + +``` +/var/log/messages +``` +#### Containerized Plugin + +``` +# Get the container id of the plugin: docker ps -a | grep hpe +$ docker logs -f +``` + + ## Capturing Logs in Kubernetes/OpenShift environments + + Collect the above Containerized Plugin logs along with the following logs. + + ``` + /var/log/dory.log + ``` + + Note: Collect these from all the nodes in the Kubernetes/OpenShift Cluster. + + ### Dynamic Provisioner Hang + + If you observe a doryd hang on your system, run the following command to bring it back online. + + ``` + systemctl restart doryd.service + ``` + diff --git a/docs/usage.md b/docs/usage.md index 7e9c248b..55464bfa 100644 --- a/docs/usage.md +++ b/docs/usage.md @@ -3,16 +3,20 @@ The following guide covers many of the options used for provisioning volumes and volume management within standalone Docker environments as well as Kubernetes/OpenShift environments.
* ### [Using HPE 3PAR Volume Plug-in with Docker](#docker_usage) - * [Create a basic HPE 3PAR volume](#basic) + * [Creating a basic HPE 3PAR volume](#basic) * [Volume optional parameters](#options) - * [Deleting a Volume](#delete) - * [List Volumes](#list) - * [Inspect a Volume](#inspect) - * [Mounting a Volume](#mount) - * [Unmounting a Volume](#unmount) - * [Creating a Volume with QoS rules](#qos) - * [Cloning a Volume](#clone) - * [Enabling compression on Volume](#compression) + * [Creating replicated volume](#replication) + * [Deleting a volume](#delete) + * [Listing volumes](#list) + * [Inspecting a volume](#inspect) + * [Mounting a volume](#mount) + * [Unmounting a volume](#unmount) + * [Creating a volume with QoS rules](#qos) + * [Cloning a volume](#clone) + * [Enabling compression on volume](#compression) + * [Enabling file permissions and ownership](#file-permission-owner) + * [Managing volumes using multiple backends](#multi-array-feature) + * ### [Using HPE 3PAR Volume Plug-in with Kubernetes/OpenShift](#k8_usage) * [Kubernetes/OpenShift Terms](#terms) @@ -28,28 +32,36 @@ The following guide covers many of the options used for provisioning volumes and ## Within Docker The following section covers the supported actions for the **HPE 3PAR Volume Plug-in** within a **Docker** environment. 
-* [Create a basic HPE 3PAR volume](#basic) -* [Volume optional parameters](#options) -* [Deleting a Volume](#delete) -* [List Volumes](#list) -* [Inspect a Volume](#inspect) -* [Mounting a Volume](#mount) -* [Unmounting a Volume](#unmount) -* [Creating a Volume with QoS rules](#qos) -* [Cloning a Volume](#clone) -* [Enabling compression on Volume](#compression) + * [Creating a basic HPE 3PAR volume](#basic) + * [Volume optional parameters](#options) + * [Creating replicated volume](#replication) + * [Deleting a volume](#delete) + * [Listing volumes](#list) + * [Inspecting a volume](#inspect) + * [Mounting a volume](#mount) + * [Unmounting a volume](#unmount) + * [Creating a volume with QoS rules](#qos) + * [Cloning a volume](#clone) + * [Enabling compression on volume](#compression) + * [Enabling file permissions and ownership](#file-permission-owner) + * [Managing volumes using multiple backends](#multi-array-feature) + * [Creating a snapshot or virtual-copy of a volume](#snapshot) + * [Creating HPE 3PAR snapshot schedule](#snapshot_schedule) + * [Display help on usage](#usage-help) + * [Display available backends and their status](#backends-status) + * [Importing legacy volumes as docker volumes](#import-vol) If you are using **Kubernetes** or **OpenShift**, please go to the [Kubernetes/OpenShift Usage section](#k8_usage). ### Creating a basic HPE 3PAR volume ``` -sudo docker volume create -d hpe --name +$ sudo docker volume create -d hpe --name ``` ### HPE 3PAR Docker Volume parameters The **HPE 3PAR Docker Volume Plug-in** supports several optional parameters that can be used during volume creation: -- **size** -- specifies the desired size in GB of the volume. If size is not specified during volume creation , it defaults to 100 GB. +- **size** -- specifies the desired size in GB of the volume. If size is not specified during volume creation, it defaults to 100 GB. - **provisioning** -- specifies the type of provisioning to use (thin, full, dedup).
If provisioning is not specified during volume creation, it defaults to thin provisioning. For dedup provisioning, a CPG with SSD device type must be configured. @@ -57,44 +69,93 @@ The **HPE 3PAR Docker Volume Plug-in** supports several optional parameters that - **compression** -- enables or disables compression on the volume which is being created. It is only supported for thin/dedup volumes 16 GB in size or larger. * Valid values for compression are (true, false) or (True, False). - * Compression is only supported on 3par OS version 3.3.1 (**introduced in plugin version 2.1**) + * Compression is only supported on 3par OS version 3.3.1. (**introduced in plugin version 2.1**) + +- **mountConflictDelay** -- specifies period in seconds to wait for a mounted volume to gracefully unmount from a node before it can be mounted to another. If graceful unmount doesn't happen within the specified time then a forced cleanup of the VLUN is performed so that volume can be remounted to another node. (**introduced in plugin version 2.1**) + +- **qos-name** -- name of existing VVset on the HPE 3PAR where QoS rules are applied. (**introduced in plugin version 2.1**) + +- **cpg** -- name of user CPG to be used in the operation instead of the one specified in hpe.conf. (**introduced in plugin version 3.0**) + +- **snapcpg** -- name of snapshot CPG to be used in the operation instead of the one specified in hpe.conf. (**introduced in plugin version 3.0**) + + In case the *snapcpg* option is not explicitly specified: + * Snapshot CPG takes the value of *hpe3par_snapcpg* from 'hpe.conf' if specified. + * If *hpe3par_snapcpg* is not specified in 'hpe.conf', then snapshot CPG takes the + value of the optional parameter *cpg*. + * If both *hpe3par_snapcpg* and *cpg* are not specified, then snapshot CPG takes the value + of *hpe3par_cpg* specified in 'hpe.conf'.
-- **mountConflictDelay** -- specifies period in seconds to wait for a mounted volume to gracefully unmount from a node before it can be mounted to another. If graceful unmount doesn't happen within the specified time then a forced cleanup of the VLUN is performed so that volume can be remounted to another node.(**introduced in plugin version 2.1**) +- **replicationGroup** -- name of an existing remote copy group on the HPE 3PAR. (**introduced in plugin version 3.0**) -- **qos-name** -- name of existing VVset on the HPE 3PAR where QoS rules are applied.(**introduced in plugin version 2.1**) +- **fsOwner** -- the user ID and group ID that should own the root directory of the file system. (**introduced in plugin version 3.0**) ->Note: Setting flash-cache to True does not gurantee flash-cache will be used. The backend system +- **fsMode** -- mode of the root directory of the file system, specified as an octal number. (**introduced in plugin version 3.0**) + +- **backend** -- backend to be used for the volume creation. (**introduced in plugin version 3.0**) + +- **help** -- displays usage help and backend initialization status. (**introduced in plugin version 3.0**) + + +>Note: Setting flash-cache to True does not guarantee flash-cache will be used. The backend system must have the appropriate SSD setup configured too. The following is an example Docker command creating a fully provisioned, 50 GB volume: ``` -docker volume create -d hpe --name -o size=50 -o provisioning=full +$ docker volume create -d hpe --name -o size=50 -o provisioning=full ``` +### Creating replicated volume +``` +$ docker volume create -d hpe --name -o replicationGroup=<3PAR RCG name> +``` +For details, please see [Replication: HPE 3PAR Docker Storage Plugin](replication.md) + + +### Enabling file permissions and ownership +1. To set the permissions of the root directory of a file system: +``` +$ docker volume create -d hpe --name -o fsMode= +``` + +2.
To set the ownership of the root directory of a file system: +``` +$ docker volume create -d hpe --name -o fsOwner=: +``` +For details, please see [Enabling file permissions and ownership](file-permission-owner.md) + + +### Managing volumes using multiple backends +``` +$ docker volume create -d hpe --name -o backend= +``` +For details, please see [Managing volumes using multiple backends](multi-array-feature.md) + + ### Deleting a volume ``` -docker volume rm +$ docker volume rm ``` -### List volumes +### Listing volumes ``` -docker volume ls +$ docker volume ls ``` -### Inspect a volume +### Inspecting a volume ``` -docker volume inspect +$ docker volume inspect ``` ### Mounting a volume Use the following command to mount a volume and start a bash prompt: ``` -docker run -it -v :// --volume-driver hpe bash +$ docker run -it -v :// --volume-driver hpe bash ``` On Docker 17.06 or later, run the command below: ``` -docker run -it --mount type=volume,src=,dst=,volume-driver=,volume-opt==,volume-opt== --name mycontainer +$ docker run -it --mount type=volume,src=,dst=,volume-driver=,volume-opt==,volume-opt== --name mycontainer ``` >Note: If the volume does not exist it will be created. @@ -106,49 +167,49 @@ for more details. ### Unmounting a volume Exiting the bash prompt will cause the volume to unmount: ``` -exit +/ exit ```
Run the following command to get the container ID associated with the volume: ``` -sudo docker ps -a +$ sudo docker ps -a ``` Then stop the container: ``` -sudo docker stop +$ sudo docker stop ``` Next, delete the container: ``` -sudo docker rm +$ sudo docker rm ``` Finally, remove the volume: ``` -sudo docker volume rm +$ sudo docker volume rm ``` ### Creating a volume with an existing VVset and QoS rules (**introduced in plugin version 2.1**) ``` -docker volume create -d hpe --name -o qos-name= +$ docker volume create -d hpe --name -o qos-name= ``` >**Note:** The **VVset** defined in **vvset_name** MUST exist in the HPE 3PAR and have QoS rules applied. ### Creating a clone of a volume (**introduced in plugin version 2.1**) ``` -docker volume create -d hpe --name -o cloneOf= +$ docker volume create -d hpe --name -o cloneOf= ``` ### Creating compressed volume (**introduced in plugin version 2.1**) ``` -docker volume create -d hpe --name -o compression=true +$ docker volume create -d hpe --name -o compression=true ``` ### Creating a snapshot or virtualcopy of a volume (**introduced in plugin version 2.1**) ``` -docker volume create -d hpe --name -o virtualCopyOf= +$ docker volume create -d hpe --name -o virtualCopyOf= ``` **Snapshot optional parameters** - **expirationHours** -- specifies the expiration time for a snapshot in hours and will be automatically deleted from the 3PAR once the time defined in **expirationHours** expires. @@ -158,14 +219,39 @@ docker volume create -d hpe --name -o virtualCopyOf=**Note:** >* If **snapcpg** is not configured in `hpe.conf` then the **cpg** defined in `hpe.conf` will be used for snapshot creation. 
>
->* If both **expirationHours** and **retentionHours** are used while creating a snapshot then **retentionHours** should be *less* than **expirationHours**
+>* If both **expirationHours** and **retentionHours** are used while creating a snapshot, then **retentionHours** should be *less* than **expirationHours**.
+```
+$ docker volume create -d hpe --name <snapshot_name> -o virtualCopyOf=<source_vol_name> -o expirationHours=3
+```
+>**Note:** To mount a snapshot, you can use the same commands as [mounting a volume](#mount), as specified above.
+
+### Importing non-Docker-managed volumes (**introduced in plugin version 3.0**)
```
-docker volume create -d hpe --name <snapshot_name> -o virtualCopyOf=<source_vol_name> -o expirationHours=3
+$ docker volume create -d hpe --name <vol_name> -o importVol=<3par_volume_name>
```
+>**Note:** To import a snapshot, you must first import the parent volume of the snapshot as a Docker volume.
+
+### Displaying help on usage (**introduced in plugin version 3.0**)
+```
+$ docker volume create -d hpe -o help
+```
+
+### Displaying available backends and their status (**introduced in plugin version 3.0**)
+```
+$ docker volume create -d hpe -o help=backends
+```
+
+### Creating an HPE 3PAR snapshot schedule (**introduced in plugin version 3.0**)
+```
+$ docker volume create -d hpe --name <snapshot_name> -o virtualCopyOf=<source_vol_name> \
+  -o scheduleFrequency=<crontab_style_frequency> -o scheduleName=<schedule_name> \
+  -o snapshotPrefix=<snapshot_prefix> -o expHrs=<expiration_hours> -o retHrs=<retention_hours>
+```
+For details, please see [Creating HPE 3PAR snapshot schedule](create_snapshot_schedule.md)

->**Note:** To mount a snapshot, you can use the same commands as [mounting a volume](#mount) as specified above.
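The `fsOwner` option shown earlier in this section takes a `<userId>:<groupId>` pair. A minimal sketch, with an illustrative value not taken from the source, of splitting and sanity-checking such a pair before handing it to `docker volume create`:

```shell
# Split an fsOwner value of the form <userId>:<groupId> and check
# that both halves are numeric (the 1001:1001 value is illustrative).
fs_owner="1001:1001"
uid="${fs_owner%%:*}"   # text before the first colon
gid="${fs_owner#*:}"    # text after the first colon
case "${uid}${gid}" in
  ''|*[!0-9]*) echo "invalid fsOwner: $fs_owner" >&2 ;;
  *)           echo "uid=$uid gid=$gid" ;;
esac
```

Only values that pass the check would then be passed on as `-o fsOwner=$fs_owner`.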
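Since **retentionHours** must be strictly less than **expirationHours**, a pre-flight comparison can catch a bad pair before the create request reaches the array; a sketch with illustrative values:

```shell
# Verify retentionHours < expirationHours before requesting a
# snapshot with both options set (values are illustrative).
expirationHours=10
retentionHours=3
if [ "$retentionHours" -lt "$expirationHours" ]; then
  result="ok"
else
  result="error: retentionHours must be less than expirationHours"
fi
echo "$result"
```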
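The **scheduleFrequency** option of the snapshot-schedule command above takes a crontab-style string. A quick field-count sanity check, assuming the standard five-field crontab format (the hourly expression shown is illustrative, not from the source):

```shell
# Count the fields of a crontab-style scheduleFrequency string.
# A standard crontab expression has exactly five fields:
# minute hour day-of-month month day-of-week.
freq="5 * * * *"   # illustrative: every hour, at minute 5
set -f             # disable globbing so the '*' fields stay literal
set -- $freq
fields=$#
set +f
if [ "$fields" -eq 5 ]; then
  echo "scheduleFrequency looks valid: $freq"
else
  echo "expected 5 fields, got $fields" >&2
fi
```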
## Usage of the HPE 3PAR Volume Plug-in for Docker in Kubernetes/OpenShift
@@ -182,14 +268,15 @@ To learn more about Persistent Volume Storage and Kubernetes/OpenShift, go to:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-persistent-volume-storage/

### Key Kubernetes/OpenShift Terms:
-* **kubectl** – command line interface for running commands against Kubernetes clusters
-* **oc** – command line interface for running commands against OpenShift platform
+* **kubectl** – command line interface for running commands against Kubernetes clusters.
+* **oc** – command line interface for running commands against the OpenShift platform.
* **PV** – Persistent Volume is a piece of storage in the cluster that has been provisioned by an administrator.
* **PVC** – Persistent Volume Claim is a request for storage by a user.
* **SC** – Storage Class provides a way for administrators to describe the “classes” of storage they offer.
* **hostPath volume** – mounts a file or directory from the host node’s filesystem into your Pod.

-To get started, in an OpenShift environment, we need to relax the security of your cluster so pods are allowed to use the **hostPath** volume plugin without granting everyone access to the privileged **SCC**:
+To get started in an OpenShift environment, we need to relax the security of your cluster so that pods are allowed to
+use the **hostPath** volume plugin without granting everyone access to the privileged **SCC**:

1.
Edit the restricted SCC:
```
@@ -234,16 +321,17 @@ EOF

| StorageClass Options | Type    | Parameters              | Example                          |
|----------------------|---------|-------------------------|----------------------------------|
| size                 | integer | -                       | size: "10"                       |
-| provisioning         | String  | thin, thick             | provisioning: thin               |
-| flash-cache          | String  | enable, disable         | flash-cache: enable              |
-| compression          | boolean | true, false             | compression: true                |
+| provisioning         | String  | thin, thick             | provisioning: "thin"             |
+| flash-cache          | String  | true, false             | flash-cache: "true"              |
+| compression          | boolean | true, false             | compression: "true"              |
| MountConflictDelay   | integer | -                       | MountConflictDelay: "30"         |
-| qos_name             | String  | vvset name              | qos_name: "<vvset_name>"         |
+| qos-name             | String  | vvset name              | qos-name: "<vvset_name>"         |
| cloneOf              | String  | volume name             | cloneOf: "<volume_name>"         |
| virtualCopyOf        | String  | volume name             | virtualCopyOf: "<volume_name>"   |
| expirationHours      | integer | option of virtualCopyOf | expirationHours: "10"            |
| retentionHours       | integer | option of virtualCopyOf | retentionHours: "10"             |
| accessModes          | String  | ReadWriteOnce           | accessModes:<br>- ReadWriteOnce  |
+| replicationGroup     | String  | 3PAR RCG name           | replicationGroup: "Test-RCG"     |

### Persistent Volume Claim Example
@@ -271,7 +359,7 @@ At this point, after creating the **SC** and **PVC** definitions, the volume has

### Pod Example

-So let’s create a **pod "pod1"** using the **nginx** container along with some persistent storage:
+So, let’s create a **pod "pod1"** using the **nginx** container along with some persistent storage:

```yaml
$ sudo kubectl create -f - << EOF
@@ -302,9 +390,7 @@ $ docker volume ls
DRIVER              VOLUME NAME
hpe                 export
```
-
On the Kubernetes/OpenShift side, it should now look something like this:
-
```
$ kubectl get pv,pvc,pod -o wide
NAME       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS    CLAIM      STORAGECLASS   REASON    AGE
@@ -315,9 +401,94 @@ pvc/pvc1   Bound     pv100     20Gi      RWO       11m

NAME      READY     STATUS    RESTARTS   AGE       IP            NODE
po/pod1   1/1       Running   0          11m       10.128.1.53   cld6b16
+```
+
+**Static provisioning** is a feature that is native to Kubernetes and allows cluster admins to make existing storage devices available to a cluster. As a cluster admin, you must know the details of the storage device, its supported configurations, and mount options.
+To make existing storage available to a cluster user, you must manually create the storage device, a PV, a PVC, and a Pod.
+
+Below is an example YAML specification for creating Persistent Volumes using the HPE 3PAR FlexVolume driver.
+
+>**Note:** If you have OpenShift installed, `kubectl create` and `oc create` commands can be used interchangeably when creating PVs, PVCs, and Pods.
+
+### Persistent Volume Example
+The following creates a Persistent Volume **"pv-first"** with the help of the HPE 3PAR Docker Volume Plugin.
+
+```yaml
+$ sudo kubectl create -f - << EOF
+---
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: pv-first
+spec:
+  capacity:
+    storage: 10Gi
+  accessModes:
+  - ReadWriteOnce
+  flexVolume:
+    driver: hpe.com/hpe
+    options:
+      size: "10"
+EOF
+```
+
+### Persistent Volume Claim Example
+Now let’s create a PersistentVolumeClaim (PVC). Here we specify the PVC name **pvc-first**.
+
+```yaml
+$ sudo kubectl create -f - << EOF
+---
+kind: PersistentVolumeClaim
+apiVersion: v1
+metadata:
+  name: pvc-first
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 10Gi
+EOF
+```
+
+At this point, after creating the PV and PVC definitions, the volume hasn’t been created yet. The actual volume gets created on-the-fly during the pod deployment and volume mount phase.
+
+### Pod Example
+So, let’s create a pod **"pod-first"** using the **minio** container along with some persistent storage:
+
+```yaml
+$ sudo kubectl create -f - << EOF
+---
+apiVersion: v1
+kind: Pod
+metadata:
+  name: pod-first
+spec:
+  containers:
+  - name: minio
+    image: minio/minio:latest
+    args:
+    - server
+    - /export
+    env:
+    - name: MINIO_ACCESS_KEY
+      value: minio
+    - name: MINIO_SECRET_KEY
+      value: doryspeakswhale
+    ports:
+    - containerPort: 9000
+    volumeMounts:
+    - name: export
+      mountPath: /export
+  volumes:
+  - name: export
+    persistentVolumeClaim:
+      claimName: pvc-first
+EOF
+```

Now the **pod** can be deleted to unmount the Docker volume. Deleting a **Docker volume** does not require manual clean-up because the dynamic provisioner provides automatic clean-up. You can delete the **PersistentVolumeClaim** and see the **PersistentVolume** and **Docker volume** automatically deleted.
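In the manifests above, the FlexVolume `size` option (`"10"`) is the PVC capacity (`10Gi`) with the unit suffix removed. A one-line sketch of deriving one from the other:

```shell
# Derive the flexVolume options.size value (a plain GiB count)
# from a Kubernetes capacity string such as "10Gi".
capacity="10Gi"
size="${capacity%Gi}"   # strip the Gi suffix
echo "size: \"$size\""
```

Keeping the two values derived from a single variable avoids a PV whose reported capacity disagrees with the volume actually created on the array.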
diff --git a/dory_installer_v31 b/dory_installer_v31
new file mode 100755
index 00000000..1b826cc1
Binary files /dev/null and b/dory_installer_v31 differ
diff --git a/examples/Dockerfile_SLES b/examples/Dockerfile_SLES
new file mode 100644
index 00000000..8dd531da
--- /dev/null
+++ b/examples/Dockerfile_SLES
@@ -0,0 +1,4 @@
+FROM alpine:latest
+ADD [ "doryd", "/usr/local/bin/doryd" ]
+ENTRYPOINT [ "doryd" ]
+CMD [ "/root/.kube/config", "hpe.com" ]
diff --git a/examples/ds-doryd-sles.yml b/examples/ds-doryd-sles.yml
new file mode 100644
index 00000000..0413ef7b
--- /dev/null
+++ b/examples/ds-doryd-sles.yml
@@ -0,0 +1,47 @@
+---
+apiVersion: extensions/v1beta1
+kind: DaemonSet
+metadata:
+  name: doryd
+spec:
+  updateStrategy:
+    type: RollingUpdate
+  template:
+    metadata:
+      namespace: kube-system
+      labels:
+        daemon: dory-daemon
+        name: doryd
+    spec:
+      restartPolicy: Always
+      tolerations:
+      - effect: NoSchedule
+        operator: Exists
+      containers:
+      - image: hpestorage/hpe3par_doryd_sles:1.0
+        imagePullPolicy: Always
+        name: dory
+        volumeMounts:
+        - name: k8s
+          mountPath: /etc/kubernetes
+        - name: config
+          mountPath: /root/.kube/config
+        - name: flexvolumedriver
+          mountPath: /usr/libexec/kubernetes/kubelet-plugins/volume/exec
+        - name: dockersocket
+          mountPath: /run/docker/plugins/
+      volumes:
+      - name: k8s
+        hostPath:
+          path: /etc/kubernetes/
+      - name: config
+        hostPath:
+          path: /root/.kube/config
+      - name: flexvolumedriver
+        hostPath:
+          path: /var/lib/kubelet/volumeplugins
+      - name: dockersocket
+        hostPath:
+          path: /run/docker/plugins/
diff --git a/quick-start/README.md b/quick-start/README.md
index 2e9f0286..7cff45c2 100644
--- a/quick-start/README.md
+++ b/quick-start/README.md
@@ -9,16 +9,18 @@ This quick start guide is designed to allow you to get up and running quickly wi

## Supported Operating Systems:
* Ubuntu 16.04
-* RHEL 7.x
-* CentOS 7.x
+* RHEL 7.3, 7.4, 7.5
+* CentOS 7.3, 7.4, 7.5
+* SLES 12 SP3

## Supported HPE 3PAR storage arrays:
-* OS version support for 3PAR 3.2.1 MU2 and later
+* OS version support for 3PAR 3.2.1 MU2, 3.2.2 MU4, MU6, 3.3.1 MU2, MU3

## Supported Docker versions:
-Docker EE 17.03 or later.
+* Docker EE 17.03
+* Docker EE 17.06

```
docker --version
@@ -26,8 +28,8 @@

## Supported Kubernetes/OpenShift versions:
-* Kubernetes 1.7
-* Openshift 3.7
+* Kubernetes 1.7, 1.8, 1.9, 1.10, 1.12
+* OpenShift 3.7, 3.9, 3.10

## Installation Guides