OpenShift UPI Install Automation for O^3 (OpenShift On OpenShift)

Before getting started, I recommend reading this blog post to familiarize yourself with the concept and architecture of O^3.

Note: This is very much a work in progress and there is lots of room for improvement. I will be updating this repo pretty often.

Functionality

This playbook will do the following tasks:

  • Download the oc and openshift-install binaries
  • Download the appropriate RHCOS image for the platform
  • Deploy an OpenShift cluster

Assumptions

The O^3 playbook assumes you have an Infra Cluster already deployed with a speedy storage class that can do dynamic provisioning. OpenShift Container Storage (OCS) was used for testing this deployment.
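
A quick way to check this (a minimal sketch, assuming your kubeconfig currently points at the infra cluster) is to list the storage classes and confirm a dynamic provisioner is present:

# The PROVISIONER column should show a dynamic provisioner,
# e.g. openshift-storage.rbd.csi.ceph.com when using OCS/Ceph RBD
oc get storageclass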

Pre-req

  1. Install Ansible, Unzip, and Tar
dnf install ansible unzip tar -y
  2. Run the following command to install the required collections from Ansible Galaxy
cd src/ansible
ansible-galaxy collection install -r requirements.yml
  3. Run the following pip command to install the OpenShift library for Python
pip3 install openshift
  4. Install a web server that will serve files over HTTP (in future updates I will add functionality to upload the ignition files to a remote HTTP server)
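
Any web server works; a minimal sketch using Apache httpd on a RHEL/Fedora host (the package, service, and firewall commands are assumptions about your environment) could be:

# httpd serves files out of /var/www/html by default,
# which then becomes your http_file_server_base_path
dnf install httpd -y
systemctl enable --now httpd
# Open HTTP in the firewall if firewalld is running
firewall-cmd --add-service=http --permanent
firewall-cmd --reload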

  5. Install OpenShift Virtualization on your infrastructure cluster from OperatorHub
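
If you prefer to install the operator declaratively rather than through the console, a sketch of the usual namespace, OperatorGroup, and Subscription follows (the channel and resource names here are typical values, not taken from this repo):

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
    - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  name: kubevirt-hyperconverged
  channel: stable

After the operator installs, create a HyperConverged custom resource to finish the deployment; the console's guided install performs this step for you.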

  6. Download virtctl and add it to a folder referenced by your PATH variable, typically /usr/bin
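
One way to get it (the download URL below is a placeholder; OpenShift Virtualization publishes the real links for your cluster as a ConsoleCLIDownload resource):

# List the published virtctl download links for your cluster
oc get consoleclidownloads
# Substitute the link for your platform for the placeholder
curl -L -o virtctl.tar.gz <virtctl-download-url>
tar -xzf virtctl.tar.gz
chmod +x virtctl
mv virtctl /usr/bin/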

  7. Set up the kubeconfig file for the infrastructure cluster

  • Locate a kubeconfig file for the infra cluster. This can be the one for kubeadmin or one generated by a user with cluster-admin privileges.
  • Create the directory /kubeconfig
  • Copy the infra cluster kubeconfig file to the directory above, renaming it to the name of your infra cluster (e.g. ocp-infra1)
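
Putting those bullets together (the source path and the ocp-infra1 name are placeholders):

# Directory the playbook reads infra cluster kubeconfigs from
mkdir -p /kubeconfig
# The destination file name must match your infra cluster name
cp /path/to/infra-kubeconfig /kubeconfig/ocp-infra1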
  8. Create a bridge interface on the infra cluster nodes using NNCP. Make sure to update the interface name in the YAML below to match the interface name on your infra cluster OCP nodes.
apiVersion: nmstate.io/v1alpha1
kind: NodeNetworkConfigurationPolicy
metadata:
  name: br1
spec:
  nodeSelector: 
    node-role.kubernetes.io/worker: ""
  desiredState:
    interfaces:
      - name: br1
        description: Linux bridge using enp1s0 device
        type: linux-bridge
        state: up
        ipv4:
          dhcp: true
          enabled: true
        bridge:
          options:
            stp:
              enabled: false
          port:
            - name: enp1s0 # Interface name on infra cluster nodes

Alternatively, you can do this via a MachineConfig. Take note of the bridge name, because you will need to define it in the vars.
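
Either way, you can verify the bridge configuration was applied before moving on (these commands assume the kubernetes-nmstate operator backing NNCP is present):

# The policy status should show it was successfully configured
oc get nncp br1
# Per-node enactments show which nodes applied the configuration
oc get nnce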

Update vars

Update the following variables

src/ansible/vars/common.yaml

rhcos_ssh_pub_key - Your public key that you can use to ssh into your RHCOS VMs

additional_trust_bundle - Certificate of your private registry (optional). If not required, set the value to an empty string ("")

pull_secret - Your pull secret JSON

http_file_server_base_path - The base path of the web server that will serve the bootstrap ignition file for your cluster. For now, this has to be local to the machine that will run this playbook

http_base_url - The base URL of your web server. Make sure there is no trailing / at the end
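
For illustration, a filled-in common.yaml might look like this (every value below is a placeholder, not a working credential):

rhcos_ssh_pub_key: "ssh-ed25519 AAAA... user@example.com"
additional_trust_bundle: ""   # or the PEM certificate of your private registry
pull_secret: '{"auths": {"cloud.openshift.com": {"auth": "..."}}}'
http_file_server_base_path: /var/www/html
http_base_url: http://192.168.0.10   # no trailing slash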

src/ansible/vars/O^3/vars.yaml

base_domain - The base domain of your cluster.

ocp_cluster_name - The name of your cluster

infra_cluster_name - The name of your infrastructure cluster. Make sure the kubeconfig for this cluster is named the same and is located under /kubeconfig as described in the Pre-req section

staging_dir - Changing the staging location is optional. Make sure to have at least 4GB of free space on that drive

infra_cluster_bridge_name - Name of the bridge defined on the infra cluster nodes

infra_cluster_storage_class_name - Name of the storage class to be used for Persistent Volumes for the VMs. Note: I have only tested this with Ceph RBD via OCS, but I believe other speedy storage providers should work

infra_cluster_storage_volume_mode - Volume mode for the VM storage. Block mode is highly recommended due to the latency-sensitive nature of etcd.

infra_cluster_storage_access_mode - Access mode of the VM storage. If supported, ReadWriteMany is recommended.
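
As a sketch, a vars.yaml with placeholder values (the storage class name is the usual OCS/Ceph RBD one; verify yours with oc get storageclass):

base_domain: example.com
ocp_cluster_name: ocp4
infra_cluster_name: ocp-infra1   # must match the kubeconfig file name under /kubeconfig
staging_dir: /tmp/staging        # needs at least 4GB of free space
infra_cluster_bridge_name: br1   # the bridge created via NNCP above
infra_cluster_storage_class_name: ocs-storagecluster-ceph-rbd
infra_cluster_storage_volume_mode: Block
infra_cluster_storage_access_mode: ReadWriteMany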

Update inventory file

You can use the existing MAC addresses in the inventory file, generate your own using the generate-mac script located in the scripts folder, or use a method of your own choosing.
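
If you want to generate one inline instead, a quick sketch (52:54:00 is the conventional locally administered QEMU/KVM prefix):

# Print a random MAC address in the 52:54:00 prefix
printf '52:54:00:%02x:%02x:%02x\n' $((RANDOM % 256)) $((RANDOM % 256)) $((RANDOM % 256))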

Update the domain name of the nodes to match your domain

Update infrastructure services

  • Update your DHCP service with the MAC addresses used in your inventory file.
  • Add the required DNS entries for your cluster (api, api-int, *.apps, VM hostnames); see the example below.
  • I will update the automation to work with remote web servers, but at the moment you need to run the web server on the same machine that will run this playbook. Ensure that the web server can serve files to the OpenShift host network.
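
As a sketch, BIND-style zone records for a cluster named ocp4 under example.com might look like the following (all names and addresses are placeholders; api and api-int typically point at your load balancer):

api.ocp4.example.com.        IN A 192.168.0.50
api-int.ocp4.example.com.    IN A 192.168.0.50
*.apps.ocp4.example.com.     IN A 192.168.0.51
bootstrap.ocp4.example.com.  IN A 192.168.0.59
master-0.ocp4.example.com.   IN A 192.168.0.60
master-1.ocp4.example.com.   IN A 192.168.0.61
master-2.ocp4.example.com.   IN A 192.168.0.62
worker-0.ocp4.example.com.   IN A 192.168.0.63
worker-1.ocp4.example.com.   IN A 192.168.0.64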

Run the playbook

cd src/ansible
ansible-playbook playbooks/openshift-O^3-cluster-deploy.yaml -i `pwd`/vars/O^3/inventory -e config=`pwd`/vars/O^3/main.yaml