Bump default OCP to 4.13 (#399)
akrzos authored Oct 30, 2023
1 parent f1a6f9f commit 2e96b3a
Showing 11 changed files with 44 additions and 45 deletions.
19 changes: 9 additions & 10 deletions README.md
@@ -51,6 +51,7 @@ The listed hardware has been used for cluster deployments successfully. Potentia
| Supermicro E5-2620 | Yes | Yes |
| Lenovo ThinkSystem SR630 | Yes | Yes |

+For guidance on how to order hardware on IBMcloud, see [order-hardware-ibmcloud.md](docs/order-hardware-ibmcloud.md) in [docs](docs) directory.

## Prerequisites

@@ -85,8 +86,6 @@ Installing Ansible via bootstrap (requires python3-pip)
(.ansible) [root@xxx-xxx-xxx-r640 jetlag]#
```

-For guidance on how to order hardware on IBMcloud, see [order-hardware-ibmcloud.md](docs/order-hardware-ibmcloud.md) in [docs](docs) directory.

Pre-reqs for Supermicro hardware:

* [SMCIPMITool](https://www.supermicro.com/SwDownload/SwSelect_Free.aspx?cat=IPMI) downloaded to jetlag repo, renamed to `smcipmitool.tar.gz`, and placed under `ansible/`
@@ -102,8 +101,8 @@ There are three main files to configure. The inventory file is generated but mig
Start by editing the vars

```console
-[root@xxx-xxx-xxx-r640 jetlag]# cp ansible/vars/all.sample.yml ansible/vars/all.yml
-[root@xxx-xxx-xxx-r640 jetlag]# vi ansible/vars/all.yml
+(.ansible) [root@xxx-xxx-xxx-r640 jetlag]# cp ansible/vars/all.sample.yml ansible/vars/all.yml
+(.ansible) [root@xxx-xxx-xxx-r640 jetlag]# vi ansible/vars/all.yml
```

Make sure to set/review the following vars:
@@ -121,7 +120,7 @@ Make sure to set/review the following vars:
Set your pull-secret in `pull_secret.txt` in repo base directory. Example:

```console
-[root@xxx-xxx-xxx-r640 jetlag]# cat pull_secret.txt
+(.ansible) [root@xxx-xxx-xxx-r640 jetlag]# cat pull_secret.txt
{
"auths": {
...
@@ -130,34 +129,34 @@ Set your pull-secret in `pull_secret.txt` in repo base directory. Example:
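Before running the playbooks it can be worth confirming the pull secret parses as JSON, since a malformed `pull_secret.txt` only fails later in less obvious ways. A small sketch, not part of this commit; it assumes `python3` (already required by the Ansible bootstrap step) and uses `/tmp/pull_secret_check.json` as a stand-in for the real `pull_secret.txt`:

```shell
# Stand-in pull secret; on a real bastion, point json.tool at pull_secret.txt
printf '%s' '{"auths": {"quay.io": {"auth": "abc123"}}}' > /tmp/pull_secret_check.json

# json.tool exits non-zero on invalid JSON, so it doubles as a validity check
if python3 -m json.tool /tmp/pull_secret_check.json > /dev/null 2>&1; then
  echo "pull secret parses as JSON"
else
  echo "pull secret is NOT valid JSON"
fi
```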
Run create-inventory playbook

```console
-[root@xxx-xxx-xxx-r640 jetlag]# ansible-playbook ansible/create-inventory.yml
+(.ansible) [root@xxx-xxx-xxx-r640 jetlag]# ansible-playbook ansible/create-inventory.yml
```

Run setup-bastion playbook

```console
-[root@xxx-xxx-xxx-r640 jetlag]# ansible-playbook -i ansible/inventory/cloud99.local ansible/setup-bastion.yml
+(.ansible) [root@xxx-xxx-xxx-r640 jetlag]# ansible-playbook -i ansible/inventory/cloud99.local ansible/setup-bastion.yml
```

Run deploy for either bm/rwn/sno playbook with inventory created by create-inventory playbook

Bare Metal Cluster:

```console
-[root@xxx-xxx-xxx-r640 jetlag]# ansible-playbook -i ansible/inventory/cloud99.local ansible/bm-deploy.yml
+(.ansible) [root@xxx-xxx-xxx-r640 jetlag]# ansible-playbook -i ansible/inventory/cloud99.local ansible/bm-deploy.yml
```
See [troubleshooting.md](https://github.com/redhat-performance/jetlag/blob/main/docs/troubleshooting.md) in [docs](https://github.com/redhat-performance/jetlag/tree/main/docs) directory for BM install related issues

Remote Worker Node Cluster:

```console
-[root@xxx-xxx-xxx-r640 jetlag]# ansible-playbook -i ansible/inventory/cloud99.local ansible/rwn-deploy.yml
+(.ansible) [root@xxx-xxx-xxx-r640 jetlag]# ansible-playbook -i ansible/inventory/cloud99.local ansible/rwn-deploy.yml
```

Single Node OpenShift:

```console
-[root@xxx-xxx-xxx-r640 jetlag]# ansible-playbook -i ansible/inventory/cloud99.local ansible/sno-deploy.yml
+(.ansible) [root@xxx-xxx-xxx-r640 jetlag]# ansible-playbook -i ansible/inventory/cloud99.local ansible/sno-deploy.yml
```

## Quickstart guides
8 changes: 4 additions & 4 deletions ansible/roles/sync-operator-index/defaults/main.yml
@@ -15,12 +15,12 @@ registry_path: /opt/registry
operator_index_name: redhat-operator-index

# operator_index_tag represents the destination operator tag on the installed cluster (actual image tags can be found in the operator_index_container_image value under operators_to_sync)
-operator_index_tag: v4.12
+operator_index_tag: v4.13

-# These defaults match ACM/ZTP testing (4.12 SNO with 4.12 Operators). For SNO DU profile defaults, refer ansible/vars/sync-operator-index.sample.yml
+# These defaults match ACM/ZTP testing (4.13 SNO with 4.13 Operators). For SNO DU profile defaults, refer ansible/vars/sync-operator-index.sample.yml
operators_to_sync:
- name: redhat-operators
-operator_index_container_image: registry.redhat.io/redhat/redhat-operator-index:v4.12
+operator_index_container_image: registry.redhat.io/redhat/redhat-operator-index:v4.13
operators:
- cluster-logging
- local-storage-operator
@@ -33,6 +33,6 @@ operators_to_sync:
- topology-aware-lifecycle-manager
- volsync-product
#- name: certified-operators
-# operator_index_container_image: registry.redhat.io/redhat/certified-operator-index:v4.12
+# operator_index_container_image: registry.redhat.io/redhat/certified-operator-index:v4.13
# operators:
# - sriov-fec
6 changes: 3 additions & 3 deletions ansible/vars/all.sample.yml
@@ -27,10 +27,10 @@ public_vlan: false
# you must stop and rm all assisted-installer containers on the bastion and rerun
# the setup-bastion step in order to setup your bastion's assisted-installer to
# the version you specified
-ocp_release_image: quay.io/openshift-release-dev/ocp-release:4.12.16-x86_64
+ocp_release_image: quay.io/openshift-release-dev/ocp-release:4.13.17-x86_64

-# This should just match the above release image version (Ex: 4.12)
-openshift_version: "4.12"
+# This should just match the above release image version (Ex: 4.13)
+openshift_version: "4.13"

# Either "OVNKubernetes" or "OpenShiftSDN" (Only for BM/RWN cluster types)
networktype: OVNKubernetes
Expand Down
2 changes: 1 addition & 1 deletion ansible/vars/hv.sample.yml
@@ -39,7 +39,7 @@ standard_cluster_count: 0
standard_cluster_node_count: 5

# The cluster imageset used in the manifests
-cluster_image_set: openshift-4.12.16
+cluster_image_set: openshift-4.13.17

# Include ACM CRs in the manifests
hv_vm_manifest_acm_cr: false
6 changes: 3 additions & 3 deletions ansible/vars/ibmcloud.sample.yml
@@ -18,10 +18,10 @@ sno_node_count:
# you must stop and rm all assisted-installer containers on the bastion and rerun
# the setup-bastion step in order to setup your bastion's assisted-installer to
# the version you specified
-ocp_release_image: quay.io/openshift-release-dev/ocp-release:4.12.16-x86_64
+ocp_release_image: quay.io/openshift-release-dev/ocp-release:4.13.17-x86_64

-# This should just match the above release image version (Ex: 4.12)
-openshift_version: "4.12"
+# This should just match the above release image version (Ex: 4.13)
+openshift_version: "4.13"

# Either "OVNKubernetes" or "OpenShiftSDN" (Only for BM/RWN cluster types)
networktype: OVNKubernetes
2 changes: 1 addition & 1 deletion ansible/vars/sync-ocp-release.sample.yml
@@ -1,6 +1,6 @@
---
# Sample sync-ocp-release vars file

-ocp_release_image: quay.io/openshift-release-dev/ocp-release:4.12.16-x86_64
+ocp_release_image: quay.io/openshift-release-dev/ocp-release:4.13.17-x86_64

pull_secret: "{{ lookup('file', '../pull_secret.txt') }}"
6 changes: 3 additions & 3 deletions ansible/vars/sync-operator-index.sample.yml
@@ -11,12 +11,12 @@ bastion_cluster_config_dir: /root/{{ cluster_type }}
operator_index_name: redhat-operator-index

# operator_index_tag represents the destination operator tag on the installed cluster (actual image tags can be found in the operator_index_container_image value under operators_to_sync)
-operator_index_tag: v4.12
+operator_index_tag: v4.13

# All of the following operators apply as part of DU profile, sync-operator-index makes all the required DU operators available through a single catalog source
operators_to_sync:
- name: redhat-operators
-operator_index_container_image: registry.redhat.io/redhat/redhat-operator-index:v4.12
+operator_index_container_image: registry.redhat.io/redhat/redhat-operator-index:v4.13
operators:
- cluster-logging
- local-storage-operator
@@ -29,7 +29,7 @@ operators_to_sync:
# - topology-aware-lifecycle-manager
# - volsync-product
- name: certified-operators
-operator_index_container_image: registry.redhat.io/redhat/certified-operator-index:v4.12
+operator_index_container_image: registry.redhat.io/redhat/certified-operator-index:v4.13
operators:
- sriov-fec

14 changes: 7 additions & 7 deletions docs/bastion-deploy-bm.md
@@ -161,8 +161,8 @@ Change `cluster_type` to `cluster_type: bm`

Set `worker_node_count` if you desire to limit the number of worker nodes from your scale lab allocation. Set it to `0` if you want a 3 node compact cluster.

-Change `ocp_release_image` to the desired image if the default (4.12.16) is not the desired version.
-If you change `ocp_release_image` to a different major version (Ex `4.12`), then change `openshift_version` accordingly.
+Change `ocp_release_image` to the desired image if the default (4.13.17) is not the desired version.
+If you change `ocp_release_image` to a different major version (Ex `4.13`), then change `openshift_version` accordingly.
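The pairing described above (release image tag vs. `openshift_version`) can be derived mechanically. A sketch, not part of this commit, using the new default image from this bump:

```shell
# Extract the major.minor component of the release image tag; this is the
# value openshift_version must carry per the comment in ansible/vars/all.yml
image="quay.io/openshift-release-dev/ocp-release:4.13.17-x86_64"
version="$(echo "$image" | sed -E 's/.*:([0-9]+\.[0-9]+)\.[0-9]+.*/\1/')"
echo "openshift_version: \"$version\""   # prints: openshift_version: "4.13"
```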

Only change `networktype` if you need to test something other than `OVNKubernetes`

@@ -266,10 +266,10 @@ public_vlan: false
# you must stop and rm all assisted-installer containers on the bastion and rerun
# the setup-bastion step in order to setup your bastion's assisted-installer to
# the version you specified
-ocp_release_image: quay.io/openshift-release-dev/ocp-release:4.12.16-x86_64
+ocp_release_image: quay.io/openshift-release-dev/ocp-release:4.13.17-x86_64
-# This should just match the above release image version (Ex: 4.12)
-openshift_version: "4.12"
+# This should just match the above release image version (Ex: 4.13)
+openshift_version: "4.13"
# Either "OVNKubernetes" or "OpenShiftSDN" (Only for BM/RWN cluster types)
networktype: OVNKubernetes
@@ -423,8 +423,8 @@ It is suggested to monitor your first deployment to see if anything hangs on boo
If everything goes well you should have a cluster in about 60-70 minutes. You can interact with the cluster from the bastion.

```console
-[root@xxx-h01-000-r650 ~]# export KUBECONFIG=/root/bm/kubeconfig
-[root@xxx-h01-000-r650 ~]# oc get no
+(.ansible) [root@xxx-h01-000-r650 ~]# export KUBECONFIG=/root/bm/kubeconfig
+(.ansible) [root@xxx-h01-000-r650 ~]# oc get no
NAME STATUS ROLES AGE VERSION
xxx-h02-000-r650 Ready control-plane,master,worker 73m v1.25.7+eab9cc9
xxx-h03-000-r650 Ready control-plane,master,worker 103m v1.25.7+eab9cc9
10 changes: 5 additions & 5 deletions docs/deploy-bm-ibmcloud.md
@@ -80,8 +80,8 @@ Change `cluster_type` to `cluster_type: bm`

Set `worker_node_count` if you need to limit the number of worker nodes from available hardware.

-Change `ocp_release_image` to the required image if the default (4.12.16) is not the desired version.
-If you change `ocp_release_image` to a different major version (Ex `4.12`), then change `openshift_version` accordingly.
+Change `ocp_release_image` to the required image if the default (4.13.17) is not the desired version.
+If you change `ocp_release_image` to a different major version (Ex `4.13`), then change `openshift_version` accordingly.

Only change `networktype` if you need to test something other than `OVNKubernetes`

@@ -138,10 +138,10 @@ sno_node_count:
# you must stop and rm all assisted-installer containers on the bastion and rerun
# the setup-bastion step in order to setup your bastion's assisted-installer to
# the version you specified
-ocp_release_image: quay.io/openshift-release-dev/ocp-release:4.12.16-x86_64
+ocp_release_image: quay.io/openshift-release-dev/ocp-release:4.13.17-x86_64
-# This should just match the above release image version (Ex: 4.12)
-openshift_version: "4.12"
+# This should just match the above release image version (Ex: 4.13)
+openshift_version: "4.13"
# Either "OVNKubernetes" or "OpenShiftSDN" (Only for BM/RWN cluster types)
networktype: OVNKubernetes
6 changes: 3 additions & 3 deletions docs/deploy-sno-ibmcloud.md
@@ -39,10 +39,10 @@ sno_node_count: 2
# you must stop and rm all assisted-installer containers on the bastion and rerun
# the setup-bastion step in order to setup your bastion's assisted-installer to
# the version you specified
-ocp_release_image: quay.io/openshift-release-dev/ocp-release:4.12.16-x86_64
+ocp_release_image: quay.io/openshift-release-dev/ocp-release:4.13.17-x86_64

-# This should just match the above release image version (Ex: 4.12)
-openshift_version: "4.12"
+# This should just match the above release image version (Ex: 4.13)
+openshift_version: "4.13"

# Either "OVNKubernetes" or "OpenShiftSDN" (Only for BM/RWN cluster types)
networktype: OVNKubernetes
10 changes: 5 additions & 5 deletions docs/deploy-sno-quickstart.md
@@ -73,8 +73,8 @@ Change `cluster_type` to `cluster_type: sno`

Change `sno_node_count` to the number of SNOs that should be provisioned. For example `sno_node_count: 1`

-Change `ocp_release_image` to the desired image if the default (4.12.16) is not the desired version.
-If you change `ocp_release_image` to a different major version (Ex `4.12`), then change `openshift_version` accordingly.
+Change `ocp_release_image` to the desired image if the default (4.13.17) is not the desired version.
+If you change `ocp_release_image` to a different major version (Ex `4.13`), then change `openshift_version` accordingly.

For the ssh keys we have a chicken before the egg problem in that our bastion machine won't be defined or ensure that keys are created until after we run `create-inventory.yml` and `setup-bastion.yml` playbooks. We will revisit that a little bit later.

@@ -240,10 +240,10 @@ public_vlan: false
# you must stop and rm all assisted-installer containers on the bastion and rerun
# the setup-bastion step in order to setup your bastion's assisted-installer to
# the version you specified
-ocp_release_image: quay.io/openshift-release-dev/ocp-release:4.12.16-x86_64
+ocp_release_image: quay.io/openshift-release-dev/ocp-release:4.13.17-x86_64
-# This should just match the above release image version (Ex: 4.12)
-openshift_version: "4.12"
+# This should just match the above release image version (Ex: 4.13)
+openshift_version: "4.13"
# Either "OVNKubernetes" or "OpenShiftSDN" (Only for BM/RWN cluster types)
networktype: OVNKubernetes
