This repository has been archived by the owner on Nov 20, 2023. It is now read-only.

Add RHV as a provider #347

Open · wants to merge 7 commits into base: master
92 changes: 92 additions & 0 deletions docs/PROVISIONING_RHV.adoc
= OpenShift on RHV using CASL

TODO: Configure and test OpenShift Container Storage portions

== Local Setup (one time, only)

NOTE: These steps are a canned set of steps serving as an example, and may be different in your environment.

Before getting started with this guide, you'll need the following:

* Access to the RHV Manager with the proper policies to create resources (see details below)
* Docker installed
** RHEL/CentOS: `yum install -y docker`
** Fedora: `dnf install -y docker`
** **NOTE:** If you plan to run docker as yourself (non-root), your username must be added to the `docker` user group.
* Ansible 2.5 or later installed
** link:https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html[See Installation Guide]
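
The prerequisites above can be sanity-checked with a short script. This is a sketch, not part of the CASL tooling; it assumes a Linux host where `sort -V` is available:

[source,bash]
----
# Succeeds if version $1 >= version $2 (compared as version strings)
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# Check that docker and a new-enough ansible are installed
check_prereqs() {
  command -v docker  >/dev/null 2>&1 || { echo "docker not found";  return 1; }
  command -v ansible >/dev/null 2>&1 || { echo "ansible not found"; return 1; }
  ver=$(ansible --version | head -n1 | awk '{print $2}')
  version_ge "$ver" "2.5" || { echo "ansible $ver is older than 2.5"; return 1; }
  echo "prerequisites OK (ansible $ver)"
}
----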

* Clone the `casl-ansible` repository:

[source,bash]
----
cd ~/src/
git clone https://github.com/redhat-cop/casl-ansible.git
----

* Run `ansible-galaxy` to pull in the necessary requirements for the CASL provisioning of OpenShift on RHV:

NOTE: The target directory (`galaxy`) is **important**, as the playbooks source roles and playbooks from that location.

[source,bash]
----
cd ~/src/casl-ansible
ansible-galaxy install -r casl-requirements.yml -p galaxy
----

== RHV Setup

The following needs to be set up in your RHV manager before provisioning.

* Access to the admin@internal or similarly privileged account.
* A RHEL template must be created
** Satellite CA certificate (if using Satellite) should be installed in the template
** A user with sudo access and a known password
* Storage domain must be available
* DNS entries for all the OCP nodes, the public and private console URLs, and the wildcard application URL must be in place PRIOR to running this playbook
** During the run, a list of all expected DNS entries will be provided
** If any of the required DNS entries are not in place, the role will fail
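
Since missing DNS entries cause the role to fail, a rough pre-flight check can save a wasted run. A minimal sketch, assuming a Linux host with `getent`; the hostnames you pass in are whatever the provisioning run reports as expected entries:

[source,bash]
----
# Report which of the given FQDNs resolve; returns non-zero if any are missing
check_dns() {
  failed=0
  for fqdn in "$@"; do
    if getent hosts "$fqdn" >/dev/null 2>&1; then
      echo "OK      $fqdn"
    else
      echo "MISSING $fqdn"
      failed=1
    fi
  done
  return $failed
}

# Example (hypothetical names for env_id=sample, dns_domain=rhv.example.com):
# check_dns console.sample.rhv.example.com console.internal.sample.rhv.example.com
----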

Cool! Now you're ready to provision OpenShift clusters on RHV.

== Provision an OpenShift Cluster

As an example, we'll provision the `sample.rhv.example.com` cluster defined in the `~/src/casl-ansible/inventory` directory.

NOTE: Unless you already have a working inventory, it is recommended that you make a copy of the above-mentioned sample inventory and keep it somewhere outside of the casl-ansible directory. This allows you to update, remove, or change your casl-ansible source directory without losing your inventory. It may take some effort to get the inventory just right, so keeping it around for future use saves you from redoing everything.

The following is an example of how the `sample.rhv.example.com` inventory can be used:

1. Update the variable settings in `group_vars/all.yml` with your environmental settings.

2. Run the `provision.yml` playbook using `docker run`:

[source,bash]
----
docker run -u `id -u` \
    -v $HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \
    -v <REPLACE WITH PATH TO PARENT CASL GIT DIRECTORY>:/tmp/src:Z \
    -e INVENTORY_DIR=<REPLACE WITH PATH TO PARENT CASL GIT DIRECTORY>/casl-ansible/inventory/sample.rhv.example.com.d/inventory \
    -e PLAYBOOK_FILE=<REPLACE WITH PATH TO PARENT CASL GIT DIRECTORY>/casl-ansible/playbooks/openshift/rhv/provision.yml \
    -e OVIRT_URL='https://rhvm.example.com/ovirt-engine/api/v4' \
    -e OVIRT_USERNAME='admin@internal' \
    -e OVIRT_PASSWORD='rhvm_password' \
    -e OVIRT_CA='<REPLACE WITH PATH TO PARENT CASL GIT DIRECTORY>/casl-ansible/ca.crt' \
    -e ANSIBLE_USER='cloud-user' \
    -e ANSIBLE_PASS='template_password' \
    -i redhat-cop/casl-ansible
----
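
To avoid repeating the path placeholder, the command above can be generated from a single variable. This wrapper is a hypothetical convenience, not part of casl-ansible; it prints the command for review rather than running it, and the credential values shown are the same placeholders as above:

[source,bash]
----
# Parent directory of your casl-ansible checkout (assumption: ~/src as in this guide)
CASL_DIR="${CASL_DIR:-$HOME/src}"

# Print the docker run command with the path substituted everywhere
build_provision_cmd() {
  cat <<EOF
docker run -u \$(id -u) \\
    -v \$HOME/.ssh/id_rsa:/opt/app-root/src/.ssh/id_rsa:Z \\
    -v $CASL_DIR:/tmp/src:Z \\
    -e INVENTORY_DIR=$CASL_DIR/casl-ansible/inventory/sample.rhv.example.com.d/inventory \\
    -e PLAYBOOK_FILE=$CASL_DIR/casl-ansible/playbooks/openshift/rhv/provision.yml \\
    -e OVIRT_URL='https://rhvm.example.com/ovirt-engine/api/v4' \\
    -e OVIRT_USERNAME='admin@internal' \\
    -e OVIRT_PASSWORD='rhvm_password' \\
    -e OVIRT_CA=$CASL_DIR/casl-ansible/ca.crt \\
    -e ANSIBLE_USER='cloud-user' \\
    -e ANSIBLE_PASS='template_password' \\
    -i redhat-cop/casl-ansible
EOF
}

build_provision_cmd   # review the output, then paste or eval it to run
----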

== Updating a Cluster

Once provisioned, a cluster may be adjusted/reconfigured as needed by updating the inventory and re-running the `end-to-end.yml` playbook.

== Scaling Up and Down

A cluster's Infra and App nodes may be scaled up and down by editing the following parameters in the `all.yml` file and then re-running the `end-to-end.yml` playbook as shown above.

[source,yaml]
----
appnodes:
  count: <REPLACE WITH NUMBER OF INSTANCES TO CREATE>
infranodes:
  count: <REPLACE WITH NUMBER OF INSTANCES TO CREATE>
----
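
Before re-running the playbook, it can be handy to double-check the counts currently set in the file. A small sketch, assuming `all.yml` uses the two-line structure shown above (the path passed in is up to you):

[source,bash]
----
# Print the appnodes/infranodes entries and the line following each
show_node_counts() {
  grep -A1 -E '^(appnodes|infranodes):' "$1"
}

# Example: show_node_counts group_vars/all.yml
----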
111 changes: 111 additions & 0 deletions inventory/sample.rhv.example.com.d/inventory/group_vars/OSEv3.yml
---

# The username Ansible should use to access the instances with
ansible_user: "{{ lookup('env','ANSIBLE_USER') }}"
ansible_password: "{{ lookup('env','ANSIBLE_PASS') }}"

# Should Ansible use "become" to gain elevated privileges (i.e.: root)
ansible_become: true

# CNS-related vars - Uncomment to automatically deploy CNS - 'cns_deploy' in all.yml must be 'true' in that case
# openshift_storage_glusterfs_namespace: glusterfs
# openshift_storage_glusterfs_name: cns

openshift_clusterid: "{{ env_id }}"


# OpenShift Specific Configuration Options
# - Check the official OpenShift documentation for more details
deployment_type: openshift-enterprise
openshift_deployment_type: openshift-enterprise
containerized: false

### OCP version to install
openshift_release: v3.11

osm_default_node_selector: 'node-role.kubernetes.io/compute=true'
osm_use_cockpit: true
osm_cockpit_plugins:
- 'cockpit-kubernetes'

# Enable the Multi-Tenant plugin
os_sdn_network_plugin_name: 'redhat/openshift-ovs-multitenant'

# OpenShift FQDNs, DNS, App domain specific configurations
openshift_master_cluster_method: native
openshift_master_default_subdomain: "apps.{{ env_id }}.{{ dns_domain }}"
openshift_master_cluster_hostname: "console.internal.{{ env_id }}.{{ dns_domain }}"
openshift_master_cluster_public_hostname: "console.{{ env_id }}.{{ dns_domain }}"

# Registry URL & Credentials
# For more info: https://access.redhat.com/terms-based-registry/
oreg_url: 'registry.redhat.io/openshift3/ose-${component}:${version}'
#oreg_auth_user: "{{ lookup('env', 'OREG_AUTH_USER' )}}"
#oreg_auth_password: "{{ lookup('env', 'OREG_AUTH_PASSWORD' )}}"

# Deploy Logging with dynamic storage
#openshift_logging_install_logging: false
#openshift_logging_es_pvc_dynamic: true
#openshift_logging_es_pvc_size: 40G
#openshift_logging_curator_default_days: 1

# Deploy Metrics with dynamic storage
#openshift_metrics_install_metrics: false
#openshift_metrics_cassandra_storage_type: dynamic
#openshift_metrics_cassandra_pvc_size: 40G
#openshift_metrics_duration: 2

# HTPASSWD Identity Provider
# - update to other types of auth providers if necessary (i.e: LDAP, OAuth, ...)
openshift_master_identity_providers:
  - 'name': 'htpasswd_auth'
    'login': 'true'
    'challenge': 'true'
    'kind': 'HTPasswdPasswordIdentityProvider'

# Uncomment to automatically create a set of test users with the above
# HTPASSWD Identity Provider
#create_users:
# num_users: 5
# prefix: 'rdu-user'
# passwd_file: '/etc/origin/master/htpasswd'
# password: 'rdu-sample'

# OpenShift Node specific parameters
openshift_node_groups:
  - name: node-config-master
    labels:
      - 'node-role.kubernetes.io/master=true'
    edits:
      - key: kubeletArguments.kube-reserved
        value:
          - 'cpu={{ ansible_processor_vcpus * 50 }}m'
          - 'memory={{ ansible_processor_vcpus * 50 }}M'
      - key: kubeletArguments.system-reserved
        value:
          - 'cpu={{ ansible_processor_vcpus * 50 }}m'
          - 'memory={{ ansible_processor_vcpus * 100 }}M'
  - name: node-config-infra
    labels:
      - 'node-role.kubernetes.io/infra=true'
    edits:
      - key: kubeletArguments.kube-reserved
        value:
          - 'cpu={{ ansible_processor_vcpus * 50 }}m'
          - 'memory={{ ansible_processor_vcpus * 50 }}M'
      - key: kubeletArguments.system-reserved
        value:
          - 'cpu={{ ansible_processor_vcpus * 50 }}m'
          - 'memory={{ ansible_processor_vcpus * 100 }}M'
  - name: node-config-compute
    labels:
      - 'node-role.kubernetes.io/compute=true'
    edits:
      - key: kubeletArguments.kube-reserved
        value:
          - 'cpu={{ ansible_processor_vcpus * 50 }}m'
          - 'memory={{ ansible_processor_vcpus * 50 }}M'
      - key: kubeletArguments.system-reserved
        value:
          - 'cpu={{ ansible_processor_vcpus * 50 }}m'
          - 'memory={{ ansible_processor_vcpus * 100 }}M'
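
The kubelet reservation expressions above scale with the node's vCPU count: kube-reserved gets 50m CPU and 50M memory per vCPU, system-reserved gets 50m CPU and 100M memory per vCPU. A quick sketch of that arithmetic (illustrative values, not part of the inventory):

[source,bash]
----
# Compute the reservations a node with $1 vCPUs would get
reservations() {
  vcpus=$1
  echo "kube-reserved:   cpu=$((vcpus * 50))m memory=$((vcpus * 50))M"
  echo "system-reserved: cpu=$((vcpus * 50))m memory=$((vcpus * 100))M"
}

reservations 4
# kube-reserved:   cpu=200m memory=200M
# system-reserved: cpu=200m memory=400M
----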