
New KW Wait For Deployment Replica To Be Ready for oc_install.robot #1867

Merged
2 commits merged into red-hat-data-services:master on Oct 1, 2024

Conversation

@manosnoam (Contributor) commented Sep 29, 2024

This is to verify that the number of running pods matches what the replica-set desires, instead of failing outright when the pod count does not match the hard-coded value defined in ODS-CI.

For example, ODS-CI currently expects 3 pods for odh-model-controller, but in the latest ODH build the replica-set's desired number is 1, not 3. This should fix that false failure.
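In short, the check compares a Deployment's desired replica count with its ready replica count instead of asserting a hard-coded pod number. Below is a minimal, self-contained sketch of that idea; the keyword name, jq filter, and library choice here are my own illustration, not the exact keyword added by this PR (that one appears further down in the diff):

*** Settings ***
Library    OperatingSystem

*** Keywords ***
Deployment Replicas Are Ready    # illustrative sketch, not the exact keyword added by this PR
    [Documentation]    Pass only when every Deployment matching the label has all desired replicas ready
    [Arguments]    ${namespace}    ${label_selector}
    # jq -e exits non-zero unless the expression is true; '\=' escapes '=' so Robot does not read it as a named argument
    ${rc} =    Run And Return Rc
    ...    oc get deployment -l ${label_selector} -n ${namespace} -o json | jq -e '[.items[] | .status.replicas \=\= .status.readyReplicas] | all'
    Should Be Equal As Integers    ${rc}    0    msg=Deployment replicas are not ready yet

Wrapped in Wait Until Keyword Succeeds, as the PR does, this retries until the desired and ready counts match or the timeout expires.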


Here is the tracking for the change of the number of replicas for the odh-model-controller:


Robot Results

✅ Passed: 545    ❌ Failed: 0    ⏭️ Skipped: 0    Total: 545    Pass %: 100

@manosnoam (Contributor, Author) commented Sep 29, 2024

Output example - see these lines (shown in context in the full log below):
18:38:07 [WARN] Timeout exceeded for all 3 pods to be ready. Verifying if desired ReplicaSet created
18:38:07 true

18:35:02  ==============================================================================
18:35:02  Rhods Olm :: Perform and verify RHODS OLM tasks                               
18:35:02  ==============================================================================
18:35:02  Can Install RHODS Operator                                            Cluster type: selfmanaged
18:35:02  Checking if RHODS is installed with "selfmanaged" "odh-nightlies" "CLi
18:35:02  Getting CSV from subscription rhoai-operator-dev namespace opendatahub-operators
18:35:02  Error from server (NotFound): subscriptions.operators.coreos.com "rhoai-operator-dev" not found
18:35:02  Got CSV  from subscription rhoai-operator-dev, result: 0
18:35:02  Operator with sub rhoai-operator-dev is installed result: False
18:35:02  RHODS is installed: False
18:35:02  Installing RHODS operator in selfmanaged
18:35:02  Authorino Operator is already installed
18:35:02  ServiceMesh Operator is already installed
18:35:03  Serverless Operator is already installed
18:35:04  Cloning into '/home/cloud-user/jenkins/workspace/cypress/dashboard-tests/ods-ci/ods_ci/rhodsolm'...
18:35:04  Watching command output: cd /home/cloud-user/jenkins/workspace/cypress/dashboard-tests/ods-ci/ods_ci/rhodsolm && ./setup.sh -t operator -u odh-nightlies -i brew.registry.redhat.io/rh-osbs/iib:827557 -n rhods-operator -p opendatahub-operators
18:35:04  Shell process started in the background
18:35:16  ..Error from server (AlreadyExists): namespaces "opendatahub-operators" already exists

18:35:16  catalogsource.operators.coreos.com/rhoai-catalog-dev configured
18:35:16  operatorgroup.operators.coreos.com/rhoai-operator-dev created
18:35:16  subscription.operators.coreos.com/rhoai-operator-dev created
18:35:16  Waiting 10m for Operator CSV 'Open Data Hub Operator' in opendatahub-operators to have status phase 'Succeeded'
18:35:48  Verifying RHODS installation
18:35:48  Waiting for all RHODS resources to be up and running
18:35:48  pod/opendatahub-operator-controller-manager-57b68c45c8-n6jjr condition met

18:35:48  1/1 pods created with label name=opendatahub-operator in opendatahub-operators namespace
18:35:48  Verified opendatahub-operators
18:35:48  output : 0, return_code : 0
18:35:48  Requested Configuration:
18:35:48  Applying DSCI yaml
18:35:48  apiVersion: dscinitialization.opendatahub.io/v1
18:35:48  kind: DSCInitialization
18:35:48  metadata:
18:35:48    name: default-dsci
18:35:48  spec:
18:35:48      applicationsNamespace: opendatahub
18:35:48      monitoring:
18:35:48          managementState: Managed
18:35:48          namespace: opendatahub
18:35:48      serviceMesh:
18:35:48          controlPlane:
18:35:48              metricsCollection: Istio
18:35:48              name: data-science-smcp
18:35:48              namespace: istio-system
18:35:48          managementState: Managed
18:35:48      trustedCABundle:
18:35:48          customCABundle: ''
18:35:48          managementState: Managed

18:35:48  dscinitialization.dscinitialization.opendatahub.io/default-dsci created
18:35:48  Waiting 30 seconds for DSCInitialization CustomResource To Be Ready
18:36:16  Requested Configuration:
18:36:16  codeflare - Managed
18:36:16  dashboard - Managed
18:36:16  datasciencepipelines - Managed
18:36:16  kserve - Managed
18:36:16  kueue - Managed
18:36:16  modelmeshserving - Managed
18:36:16  ray - Managed
18:36:16  trainingoperator - Removed
18:36:16  trustyai - Removed
18:36:16  workbenches - Managed
18:36:16  modelregistry - Managed
18:36:16  Creating DataScience Cluster using yml template
18:36:16  Applying Custom Manifests
18:36:16  Applying DSC yaml
18:36:16  apiVersion: datasciencecluster.opendatahub.io/v1
18:36:16  kind: DataScienceCluster
18:36:16  metadata:
18:36:16    name: default-dsc
18:36:16  spec:
18:36:16    components:
18:36:16      codeflare:
18:36:16        devFlags: 
18:36:16        managementState: Managed
18:36:16      dashboard:
18:36:16        devFlags: 
18:36:16        managementState: Managed
18:36:16      datasciencepipelines:
18:36:16        devFlags: 
18:36:16        managementState: Managed
18:36:16      kserve:
18:36:16        devFlags: 
18:36:16        defaultDeploymentMode: Serverless
18:36:16        managementState: Managed
18:36:16      kueue:
18:36:16        devFlags: 
18:36:16        managementState: Managed
18:36:16      modelmeshserving:
18:36:16        devFlags: 
18:36:16        managementState: Managed
18:36:16      ray:
18:36:16        devFlags: 
18:36:16        managementState: Managed
18:36:16      trainingoperator:
18:36:16        devFlags: 
18:36:16        managementState: Removed
18:36:16      trustyai:
18:36:16        devFlags: 
18:36:16        managementState: Removed
18:36:16      workbenches:
18:36:16        devFlags: 
18:36:16        managementState: Managed
18:36:16      modelregistry:
18:36:16        devFlags: 
18:36:16        managementState: Managed
18:36:16        registriesNamespace: odh-model-registries

18:36:17  datasciencecluster.datasciencecluster.opendatahub.io/default-dsc created
18:36:19  Waiting for 2 pods in opendatahub, label_selector=app=odh-dashboard

18:37:28  pod/odh-dashboard-76f5cb59ff-nmw2k condition met
18:37:28  pod/odh-dashboard-76f5cb59ff-q64gj condition met

18:37:28  2/2 pods created with label app=odh-dashboard in opendatahub namespace
18:37:28  Waiting for 1 pod in opendatahub, label_selector=app=notebook-controller

18:37:51  pod/notebook-controller-deployment-69dff8db8c-hq92f condition met

18:37:52  1/1 pods created with label app=notebook-controller in opendatahub namespace
18:37:52  Waiting for 1 pod in opendatahub, label_selector=app=odh-notebook-controller
18:37:52  pod/odh-notebook-controller-manager-6f4b96478c-qnjdz condition met

18:37:52  1/1 pods created with label app=odh-notebook-controller in opendatahub namespace
18:37:52  Waiting for 3 pods in opendatahub, label_selector=app=odh-model-controller

18:38:07  pod/odh-model-controller-698c9fb479-x6t2j condition met

18:38:07  1/3 pods created with label app=odh-model-controller in opendatahub namespace
18:38:07  [WARN] Timeout exceeded for all 3 pods to be ready. Verifying if desired ReplicaSet created
18:38:07  true

18:38:07  Waiting for 1 pod in opendatahub, label_selector=component=model-mesh-etcd
18:39:59  pod/etcd-68f96d7c55-dj8rm condition met

18:39:59  1/1 pods created with label component=model-mesh-etcd in opendatahub namespace
18:39:59  Waiting for 3 pods in opendatahub, label_selector=app.kubernetes.io/name=modelmesh-controller
18:39:59  pod/modelmesh-controller-c6454476c-qc888 condition met
18:39:59  pod/modelmesh-controller-c6454476c-t7bhn condition met
18:39:59  pod/modelmesh-controller-c6454476c-ttbgr condition met

18:39:59  3/3 pods created with label app.kubernetes.io/name=modelmesh-controller in opendatahub namespace
18:39:59  Waiting for 1 pod in opendatahub, label_selector=app.kubernetes.io/name=data-science-pipelines-operator

18:40:07  pod/data-science-pipelines-operator-controller-manager-7dd8c8bv59qt condition met

18:40:07  1/1 pods created with label app.kubernetes.io/name=data-science-pipelines-operator in opendatahub namespace
18:40:07  Waiting for 1 pod in opendatahub, label_selector=app.kubernetes.io/part-of=model-registry-operator

18:41:45  pod/model-registry-operator-controller-manager-8459676b8b-p9mxv condition met

18:41:46  1/1 pods created with label app.kubernetes.io/part-of=model-registry-operator in opendatahub namespace
18:41:46  Waiting for 3 pods in opendatahub, label_selector=app=odh-model-controller
18:41:46  pod/odh-model-controller-698c9fb479-x6t2j condition met

18:41:46  1/3 pods created with label app=odh-model-controller in opendatahub namespace
18:41:46  [WARN] Timeout exceeded for all 3 pods to be ready. Verifying if desired ReplicaSet created
18:41:46  true

18:41:46  Waiting for 1 pods in opendatahub, label_selector=control-plane=kserve-controller-manager
18:41:46  pod/kserve-controller-manager-5766998974-z6j9g condition met

18:41:47  1/1 pods created with label control-plane=kserve-controller-manager in opendatahub namespace
18:41:47  Waiting for pod status in opendatahub
18:41:50  Verified Applications NS: opendatahub
18:41:50  RHODS has been installed
18:41:50  | PASS |
18:41:50  ------------------------------------------------------------------------------
18:41:50  Rhods Olm :: Perform and verify RHODS OLM tasks                       | PASS |

@manosnoam added labels on Sep 30, 2024: verified (This PR has been tested with Jenkins), enhancements (Bugfixes, enhancements, refactoring, ... in tests or libraries; PR will be listed in release-notes)
@jstourac (Member) left a comment


I'm not sure whether having two subsequent conditions here (first the hardcoded pod number, then a check of the ready-replicas value) won't make this a bit unnecessarily complex 🤔

Personally, I would go with only one approach - either keeping this hardcoded or checking that the expected replicas are ready (I prefer the latter, though I'm not sure whether it has some drawbacks). I don't think it makes much sense to have both checks here, since we pass if either of them passes anyway.

@CFSNM (Contributor) commented Sep 30, 2024

I agree with @jstourac. Having 2 conditions adds additional complexity. I would go with the new approach of checking the number of pods in the replicaset and removing the hardcoded part.

@manosnoam (Contributor, Author) commented Sep 30, 2024

I agree with @jstourac . Having 2 conditions adds additional complexity. I would go with the new approach of checking the number of pods in the replicaset and removing the hardcoded part

I did not want to break backward compatibility, since the KW name is Wait For Pods Numbers and it is widely used in oc_install.robot.
Look at https://github.com/red-hat-data-services/ods-ci/pull/1867/files/f82ddcd95bf03478738e9d0f6fbeec3fefbfc41e#diff-11c2f02f8419b8b4ee8bf5f117645cfe5da320a349f21813d8da399c47a7ffb3R234-R235 :
I made the KW only log a [WARN] if the expected number of pods differs from the number desired by the replica-set, without FAILing the run. The run only fails if any pods are not running or if there are fewer pods than desired.
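For illustration, the behaviour described above boils down to something like the sketch below. The wrapper name and structure are hypothetical; Wait For Pods Numbers and Wait For Deployment Replica To Be Ready are the keywords discussed in this PR, using the argument names visible elsewhere in this thread:

Wait For Pods Numbers With Fallback    # hypothetical wrapper, mirroring the behaviour described above
    [Arguments]    ${count}    ${namespace}    ${label_selector}    ${timeout}=600
    # Try the old hard-coded count first, but only warn (not fail) when it is not reached
    ${all_ready} =    Run Keyword And Return Status    Wait For Pods Numbers    ${count}
    ...    namespace=${namespace}    label_selector=${label_selector}    timeout=${timeout}
    IF    not ${all_ready}
        Log    Timeout exceeded for all ${count} pods to be ready. Verifying if desired ReplicaSet created    level=WARN
        # Fail only if the Deployment's desired replicas never become ready
        Wait For Deployment Replica To Be Ready    namespace=${namespace}    label_selector=${label_selector}
    END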

@jstourac (Member) commented Sep 30, 2024

I did not want to break backward compatibility, since the KW name is Wait For Pods Numbers and it is widely used in oc_install.robot. Look at https://github.com/red-hat-data-services/ods-ci/pull/1867/files/f82ddcd95bf03478738e9d0f6fbeec3fefbfc41e#diff-11c2f02f8419b8b4ee8bf5f117645cfe5da320a349f21813d8da399c47a7ffb3R234-R235 : I made the KW only log a [WARN] if the expected number of pods differs from the number desired by the replica-set, without FAILing the run. The run only fails if any pods are not running or if there are fewer pods than desired.

Yeah, I saw that log. Still, I think that this may be confusing, so if we are changing this, we should do it the proper way 🙂 We can have separate keywords for this kind of check and use the one that is preferable for different places.

Anyway, this is mergeable; I just don't like it as a final solution. So we can merge it provided there is a follow-up PR to narrow this down. Or we can wait until this is updated. Or, if I'm the only one, I will simply be sad and this becomes the final solution; that is also an option 😀

@manosnoam (Contributor, Author) commented Sep 30, 2024

We can have separate keywords for this kind of check and use the one that is preferable for different places.

@jstourac there are 13 calls to Wait For Pods Numbers {NUMBER} in oc_install.robot.
I can create a new Keyword or use the existing Wait For Pods To Be Ready, and replace all 13 of those calls.

For example, instead of calling:

Wait For Pods Numbers  1
  ...                   namespace=${OPERATOR_NAMESPACE}
  ...                   label_selector=name=${OPERATOR_NAME_LABEL}
  ...                   timeout=2000

Call:
Wait For Pods To Be Ready    label_selector=name=${OPERATOR_NAME_LABEL}    namespace=${OPERATOR_NAMESPACE}    timeout=2000s

However, we would then lose the warning that the ODS-CI-expected pod count does not match the replica-set's desired number (a warning that is emitted with my original PR).
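If keeping that warning were still wanted alongside Wait For Pods To Be Ready, one option (purely a sketch with a hypothetical helper, not something this PR adds) would be a non-fatal count check such as:

Warn If Pod Count Differs    # hypothetical helper, not part of this PR; assumes the OperatingSystem library is imported
    [Arguments]    ${expected_count}    ${namespace}    ${label_selector}
    ${rc}    ${actual} =    Run And Return Rc And Output
    ...    oc get pods -n ${namespace} -l ${label_selector} -o name | wc -l
    # Turn a mismatch into a WARN in the log instead of a test failure
    Run Keyword And Warn On Failure    Should Be Equal As Integers    ${actual}    ${expected_count}

Run Keyword And Warn On Failure downgrades the assertion failure to a warning, which is roughly the behaviour the original version of this PR had inside Wait For Pods Numbers.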

Code scanning / Robocop alerts on ods_ci/tests/Resources/OCP.resource were marked as fixed.
    [Arguments]    ${namespace}    ${label_selector}    ${timeout}=600s
    Log To Console    Waiting for Namespace ${namespace} Deployment with label "${label_selector}" to have desired Replica-Set
    ${output} =    Wait Until Keyword Succeeds    ${timeout}    3s    Run And Verify Command
    ...    oc get deployment -l ${label_selector} -n ${namespace} -o json | jq -e '.status | .replicas \=\= .readyReplicas'

Check warning (Code scanning / Robocop): Line is too long (121/120)
@manosnoam (Contributor, Author) commented:
@jstourac, @CFSNM: as you requested, I created a new Keyword Wait For Deployment Replica To Be Ready (in OCP.resource). It replaces the 13 calls to Wait For Pods Numbers in oc_install.robot.
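For example, one of the former Wait For Pods Numbers calls shown earlier can now be written roughly as follows (the argument values reuse the earlier example; the signature is the one visible in the diff below):

Wait For Deployment Replica To Be Ready    namespace=${OPERATOR_NAMESPACE}
...    label_selector=name=${OPERATOR_NAME_LABEL}    timeout=2000s

Since the keyword's timeout defaults to 600s, the timeout argument is only needed where a longer wait is required.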

@manosnoam manosnoam changed the title Refactor KW Wait For Pods Numbers to first wait for all pods readiness New KW Wait For Deployment Replica To Be Ready for oc_install.robot Sep 30, 2024
CFSNM previously approved these changes Sep 30, 2024
@CFSNM (Contributor) left a comment


Thanks @manosnoam ! LGTM

jstourac previously approved these changes Sep 30, 2024
@jstourac (Member) left a comment


Thanks, Noam!

@manosnoam manosnoam dismissed stale reviews from jstourac and CFSNM via 78b3d0d September 30, 2024 15:24
Also verify if the number of running pods is as desired by replica-set,
instead of directly failing if the hard-coded pod number
is not up to date.

For example, 3 pods are currently expected for odh-model-controller,
but in the latest ODH build the replica-set desired number is 1.

Signed-off-by: manosnoam <[email protected]>
This Keyword waits for all pods with a specified label in a specified
deployment to have the Replica-Set ready (desired pods running)

Signed-off-by: manosnoam <[email protected]>

Wait For Deployment Replica To Be Ready
    [Documentation]    Wait for Deployment of ${label_selector} in ${namespace} to have the Replica-Set Ready
    [Arguments]    ${namespace}    ${label_selector}    ${timeout}=600s
    Log To Console    Waiting for Deployment with label "${label_selector}" in Namespace "${namespace}", to have desired Replica-Set

Check warning (Code scanning / Robocop): Line is too long (130/120)
    ${output} =    Wait Until Keyword Succeeds    ${timeout}    3s    Run And Verify Command

Check warning (Code scanning / Robocop): The assignment sign is not consistent within the file. Expected '=' but got ' =' instead
Check notice (Code scanning / Robocop): Variable '${output}' is assigned but not used
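For reference, a cleanup sketch that drops the unused ${output} assignment and splits the long console message (illustrative only; the merged code may resolve the Robocop findings differently):

Wait For Deployment Replica To Be Ready
    [Documentation]    Wait for Deployment of ${label_selector} in ${namespace} to have the Replica-Set Ready
    [Arguments]    ${namespace}    ${label_selector}    ${timeout}=600s
    Log To Console
    ...    Waiting for Deployment with label "${label_selector}" in Namespace "${namespace}", to have desired Replica-Set
    Wait Until Keyword Succeeds    ${timeout}    3s    Run And Verify Command
    ...    oc get deployment -l ${label_selector} -n ${namespace} -o json | jq -e '.status | .replicas \=\= .readyReplicas'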
@manosnoam (Contributor, Author) commented Sep 30, 2024

Re-ran the ODH and RHOAI installs to verify the new commit - verified:

(image attachment)

@manosnoam manosnoam added the do not merge Do not merge this yet please label Sep 30, 2024
@manosnoam manosnoam removed the do not merge Do not merge this yet please label Sep 30, 2024
@manosnoam manosnoam enabled auto-merge (squash) September 30, 2024 18:32
@manosnoam manosnoam requested review from jstourac and CFSNM September 30, 2024 18:33
@tarukumar (Contributor) commented:
Are we moving away from validating the number of replicas for each component for a specific reason? The original code was designed to ensure that the default number of pods for each component is present. If the goal of this PR is simply to ensure that component resources are ready, I don't think that's a good approach. This change seems to bypass the critical testing of the actual default pod validation. Without testing to confirm that the default number of pods exists, I believe this PR should be put on hold. In simple terms: if you want this PR to be merged, first create the test case that actually validates the default number of pods.

@tarukumar tarukumar added the do not merge Do not merge this yet please label Oct 1, 2024
@jstourac (Member) commented Oct 1, 2024

Are we moving away from validating the number of replicas for each component for a specific reason? The original code was designed to ensure that the default number of pods for each component is present.

Yes and no - now, instead of depending on the hardcoded expected value, we check that the number of replicas matches what is expected for each deployment. So at least we check that what is set in the deployment is matched.

If the goal of this PR is simply to ensure that component resources are ready, I don't think that's a good approach. This change seems to bypass the critical testing of the actual default pod validation. Without testing to confirm that the default number of pods exists, I believe this PR should be put on hold. In simple terms: if you want this PR to be merged, first create the test case that actually validates the default number of pods.

Yes, if we want to check that the default number of replicas for each deployment is set to some expected number, we should have a proper test for that, and not in this code, which should handle just and only the installation of the operator and should not block further steps unless something very serious happens.

@tarukumar (Contributor) commented:
The issue isn't whether we should check; IMO we should definitely check that. The real question is that we previously had validation of the number of pods created by default, but now, with the recent changes, that validation is absent. It feels like we're operating without a necessary test until we have an actual test in place. So you're asking if we should proceed without it. We could, since we know there are no significant changes in version 2.14, but that wouldn't be an ideal approach.

@manosnoam (Contributor, Author) commented:
Yes, if we want to check that the default number of replicas for each deployment is set to some expected number, we should have a proper test for that, and not in this code, which should handle just and only the installation of the operator and should not block further steps unless something very serious happens.

Exactly, and to do so we will need an additional test that gets the defined pod numbers from ODH configurations/docs, not from the ODS-CI test code itself. Until we implement such a test case, we should not block ODH deployment, but only verify the replica-set.

@tarukumar (Contributor) commented:
Yes, if we want to check that the default number of replicas for each deployment is set to some expected number, we should have a proper test for that, and not in this code, which should handle just and only the installation of the operator and should not block further steps unless something very serious happens.

Exactly, and to do so we will need an additional test that gets the defined pod numbers from ODH configurations/docs, not from the ODS-CI test code itself. Until we implement such a test case, we should not block ODH deployment, but only verify the replica-set.

Same reply as earlier:
The issue isn't whether we should check; IMO we should definitely check that. The real question is that we previously had validation of the number of pods created by default, but now, with the recent changes, that validation is absent. It feels like we're operating without a necessary test until we have an actual test in place. So you're asking if we should proceed without it. We could, since we know there are no significant changes in version 2.14, but that wouldn't be an ideal approach.

@manosnoam manosnoam merged commit fb48468 into red-hat-data-services:master Oct 1, 2024
8 checks passed
@manosnoam (Contributor, Author) commented Oct 1, 2024

The issue isn't whether we should check; IMO we should definitely check that. The real question is that we previously had validation of the number of pods created by default, but now, with the recent changes, that validation is absent. It feels like we're operating without a necessary test until we have an actual test in place. So you're asking if we should proceed without it. We could, since we know there are no significant changes in version 2.14, but that wouldn't be an ideal approach.

Thanks Tarun for your suggestion!
We should aim to keep the original test - "Is the number of pods correct?" - but currently it can block deployments.
In my original PR I changed the same KW Wait For Pods Numbers to only log a [WARN] if the number of pods is not as expected by ODS-CI, without failing the run. This was confusing for some of us, so I made a new KW Wait For Deployment Replica To Be Ready instead (which does not warn if the pod number is not as expected by ODS-CI).

FYI, we currently check the number of pods in dedicated tests, so the deployment task might not need to verify pod numbers:

  • 0107__kservewaw_rhoai_installation.robot
  • 0105__serverless_operator.robot
  • 1007__model_serving_llm_UI.robot

@tarukumar (Contributor) commented:
The issue isn't whether we should check; IMO we should definitely check that. The real question is that we previously had validation of the number of pods created by default, but now, with the recent changes, that validation is absent. It feels like we're operating without a necessary test until we have an actual test in place. So you're asking if we should proceed without it. We could, since we know there are no significant changes in version 2.14, but that wouldn't be an ideal approach.

Thanks Tarun for your suggestion! We should aim to keep the original test - "Is the number of pods correct?" - but currently it can block deployments. In my original PR I changed the same KW Wait For Pods Numbers to only log a [WARN] if the number of pods is not as expected by ODS-CI, without failing the run. This was confusing for some of us, so I made a new KW Wait For Deployment Replica To Be Ready instead (which does not warn if the pod number is not as expected by ODS-CI).

Yes, but there has been a change made for the ODH nightly which reduced the pod number for the model controller to 1. We should have been informed so that we could change our repo accordingly; I think that was missing here, and because of that the model-controller pod check was failing. So for ODH it is 1, for RHOAI it is 3.

jgarciao pushed a commit to jgarciao/ods-ci that referenced this pull request Oct 1, 2024
…red-hat-data-services#1867)

* New KW `Wait For Deployment Replica To Be Ready` for oc_install.robot

This Keyword waits for all Deployment pods with a specified label,
to have its Replica-Set ready (desired pods running).

Signed-off-by: manosnoam <[email protected]>
jgarciao added a commit that referenced this pull request Oct 1, 2024
* Enhance keyword "Run And Verify Command" returning stdout
Signed-off-by: Jorge Garcia Oncins <[email protected]>

* Disable cache in version-test and take-nap pipeline samples
Signed-off-by: Jorge Garcia Oncins <[email protected]>

* Use kw "Run And Verify Command" in DataSciencePipelinesBackend
Signed-off-by: Jorge Garcia Oncins <[email protected]>

* Fix PIP_TRUSTED_HOST in GPU testing sample
Signed-off-by: Jorge Garcia Oncins <[email protected]>

* Revert "Disable cache in version-test and take-nap pipeline samples"

This reverts commit 336790bc5482dc708e9d0824f879e3de14a680ae.

* Add initial DSP upgrade testing tests
Signed-off-by: Jorge Garcia Oncins <[email protected]>

* Fix linter errors
Signed-off-by: Jorge Garcia Oncins <[email protected]>

* Rename keyword and fix typo
Signed-off-by: Jorge Garcia Oncins <[email protected]>

* New KW `Wait For Deployment Replica To Be Ready` for oc_install.robot (#1867)

* New KW `Wait For Deployment Replica To Be Ready` for oc_install.robot

This Keyword waits for all Deployment pods with a specified label,
to have its Replica-Set ready (desired pods running).

Signed-off-by: manosnoam <[email protected]>

---------

Signed-off-by: manosnoam <[email protected]>
Co-authored-by: Noam Manos <[email protected]>