
Refactor test_response to use Requests instead of curl #96

Open · wants to merge 2 commits into base: main
Conversation

SlouchyButton (Collaborator)

I think we should prefer a native Python approach when one is available. This will result in clearer code.

Also, we don't need two requests to check the status code and the body text; we can check both in one response.

SlouchyButton requested a review from phracek on December 4, 2024, 08:51
SlouchyButton self-assigned this on December 4, 2024
@phracek (Member) left a comment

Thanks for this pull request!

Please address my comments.

container_ci_suite/engines/s2i_container.py (outdated, resolved)
container_ci_suite/engines/s2i_container.py (outdated, resolved)
container_ci_suite/engines/s2i_container.py (resolved)
SlouchyButton (Collaborator, Author) commented Dec 4, 2024

@phracek One thing that also came to mind is whether we might want to throw an error instead of returning False. A thrown error will be caught by PyTest directly, without us having to deal with it in the test.

phracek (Member) commented Dec 9, 2024

@phracek One thing that also came to mind is whether we might want to throw an error instead of returning False. A thrown error will be caught by PyTest directly, without us having to deal with it in the test.

This could be problematic. E.g. in case we call .test_response and it fails with a traceback, we would have to do something like:

with pytest.raises(ConnectionError):
    <API>.test_response()

which could be problematic.

The True / False return value is easy to use with assert.
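The two styles under discussion can be sketched side by side. `FakeResponse`, `check_response_bool`, and `check_response_raise` are hypothetical stand-ins used only to make the contrast concrete; they are not part of the suite's API.

```python
import pytest


class FakeResponse:
    """Hypothetical stand-in for a requests.Response object."""

    def __init__(self, status_code):
        self.status_code = status_code


def check_response_bool(response, expected_code=200):
    # Boolean style preferred above: return True/False, assert at the call site.
    return response.status_code == expected_code


def check_response_raise(response, expected_code=200):
    # Raising style: a failure surfaces as a traceback, but an *expected*
    # failure then needs pytest.raises at the call site.
    if response.status_code != expected_code:
        raise ConnectionError(f"unexpected status {response.status_code}")


def test_boolean_style():
    assert check_response_bool(FakeResponse(200))


def test_raising_style():
    with pytest.raises(ConnectionError):
        check_response_raise(FakeResponse(500))
```

The boolean style keeps the happy-path test a one-line assert, at the cost of collapsing all failure modes into False.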

@phracek (Member) left a comment

Last two issues. Otherwise LGTM. We are more or less done. Thanks for your engagement.

"""
Test HTTP response of specified url

If expected output is empty, check will be skipped
phracek (Member):

It does not make sense to make a request if we do not want to check the output.
We always want to check the output to know that the connection works properly.

SlouchyButton (Collaborator, Author):

Imagine that we use some external library as a test sample app (which we do in the case of Ruby, for example). We might want to check that the app works, i.e. that code 200 is returned, while the content may be irrelevant to us, since it can change without our knowledge.

I feel like having an option to pass the test based only on the correct HTTP response code, ignoring the content, is worth having (and even preferred for test stability).

to know the connection works properly

If the connection does not work, you won't get an HTTP 200 code.

This feature of the API is optional, so it is up to the test developer to correctly assess whether checking the response body is needed.

# Log both code and text together as text can show error
# message
if not checks_passed:
    logger.error("Unexpected code %d or output %s, \
phracek (Member):

Please also use "f" string output, like:
logger.error(f"Unexpected code {response.status_code} or output {response.text}....")

SlouchyButton (Collaborator, Author):

https://pylint.readthedocs.io/en/latest/user_guide/messages/warning/logging-fstring-interpolation.html
https://docs.python.org/3/howto/logging.html#optimization

TL;DR: It is preferred to use '%s formatting' in logging calls because it allows the logger to interpolate lazily; the message string is built only if the record is actually emitted, which is preferred for optimization purposes.

SlouchyButton (Collaborator, Author):

@phracek One thing that also came to mind is whether we might want to throw an error instead of returning False. A thrown error will be caught by PyTest directly, without us having to deal with it in the test.

This could be problematic. E.g. in case we call .test_response and it fails with a traceback, we would have to do something like:

with pytest.raises(ConnectionError):
    <API>.test_response()

which could be problematic.

The True / False return value is easy to use with assert.

In the rare case where we want the request to fail (and I don't know whether we even need that anywhere as of now), you are right that we would have to use pytest.raises. On the other hand, I think that if we asserted inside test_response directly, it would allow PyTest to work its magic of showing detail on demand. We wouldn't have to rely on logging as much, and we could debug tests more easily just by increasing verbosity. PyTest can show you the exact differences and the whole values of the compared variables when you increase verbosity.

Both approaches seem valid, though one uses PyTest's internal functionality more than the other.

See further reading: https://docs.pytest.org/en/stable/how-to/output.html
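For comparison, the assert-inside variant being suggested could look like the sketch below. This is a hypothetical alternative, not what this PR merges, and the name `assert_response` is invented. Note that pytest's assertion rewriting (which produces the detailed failure output) applies to test modules by default; a helper module like this would need `pytest.register_assert_rewrite` to get the same introspection.

```python
import requests


def assert_response(url, expected_code=200, expected_output=None):
    """Hypothetical assert-based variant of test_response: a failed check
    raises AssertionError, so the test body needs no explicit assert.
    """
    response = requests.get(url)
    assert response.status_code == expected_code
    if expected_output is not None:
        assert expected_output in response.text
```

A test would then reduce to a bare call, e.g. `assert_response(app_url, expected_output="Hello")`, with PyTest reporting the compared values on failure.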
