Refactor test_response to use Requests instead of curl #96
base: main
Conversation
Thanks for this pull request!!!
Please address my comments below.
@phracek One thing that also came to mind is whether we might want to throw an error instead of returning False. A thrown error will be caught by pytest directly, without us having to deal with it in the test.
This could be problematic. E.g. if we call .test_response and it fails with a traceback, we would have to do something like wrapping the call to catch the exception, which could be problematic. The True/False return value is easy to use with assert.
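A minimal sketch of the two approaches under discussion, assuming a hypothetical check_response helper and a placeholder URL (neither name is part of the actual test suite):

import pytest
import requests

def check_response_bool(url, expected_code=200):
    # Return True/False so the test can simply assert the result.
    response = requests.get(url)
    return response.status_code == expected_code

def check_response_raises(url, expected_code=200):
    # Raise on failure so pytest reports the traceback directly.
    response = requests.get(url)
    if response.status_code != expected_code:
        raise AssertionError(f"Unexpected code {response.status_code} for {url}")

def test_app_is_up():
    assert check_response_bool("http://localhost:8080")

def test_expected_failure():
    # A check that is expected to fail has to be wrapped explicitly.
    with pytest.raises(AssertionError):
        check_response_raises("http://localhost:8080/missing")

The passing tests read the same either way; the difference only shows up when a failure is expected and has to be caught.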
Last two issues. Otherwise LGTM. We are more or less done. Thanks for your engagement.
""" | ||
Test HTTP response of specified url | ||
|
||
If expected output is empty, check will be skipped |
It does not make sense to make the request if we do not want to check the response.
We always want to check the output to know the connection works properly.
Imagine that we use some external library (which we do in the case of Ruby, for example) as a test sample app. We might want to check that the app works - code 200 is returned - while the content might be irrelevant for us, as it can change without our knowledge.
I feel like having an option to pass the test based only on a correct HTTP response code, ignoring the content, is worth having (and even preferred for test stability).
to know the connection works properly
If the connection is not working, you won't get a 200 HTTP code.
This feature of the API is optional, so it is up to the test developer to assess whether checking the response body is needed.
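For illustration, a sketch of how the optional body check might look (the function and parameter names here are assumptions, not the module's actual signature):

def response_checks_passed(response, expected_code=200, expected_output=""):
    # Always verify the status code - this is the minimal "connection works" check.
    if response.status_code != expected_code:
        return False
    # The body check is optional: with an empty expected_output the test
    # passes on the status code alone, which is useful for external sample
    # apps whose content can change without our knowledge.
    if not expected_output:
        return True
    return expected_output in response.text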
# Log both code and text together as text can show error
# message
if not checks_passed:
    logger.error("Unexpected code %d or output %s, \
Please also use an f-string for the output, like
logger.error(f"Unexpected code {response.status_code} or output {response.text}....)
https://pylint.readthedocs.io/en/latest/user_guide/messages/warning/logging-fstring-interpolation.html
https://docs.python.org/3/howto/logging.html#optimization
TL;DR It is preferred to use '%s formatting' in logging functions, as it lets the logger do the formatting lazily - so it is preferred for optimization purposes.
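To illustrate the difference (logger name and URL below are placeholders, not the module's actual code):

import logging
import requests

logger = logging.getLogger(__name__)
response = requests.get("http://localhost:8080")  # placeholder URL

# %-style formatting is lazy: the arguments are only interpolated if the
# record is actually emitted by a handler.
logger.error("Unexpected code %d or output %s", response.status_code, response.text)

# The f-string variant is formatted eagerly, even if the record is later
# filtered out - this is what pylint's logging-fstring-interpolation flags.
logger.error(f"Unexpected code {response.status_code} or output {response.text}")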
In the rare case where we want the request to fail (which I'm not sure we even need anywhere as of now?), you are right that we would have to catch the exception ourselves. Both approaches seem valid, though one uses pytest's internal functionality more than the other. See further reading: https://docs.pytest.org/en/stable/how-to/output.html
I think we should prefer the native Python approach when available. This will result in clearer code.
Also, we don't need two requests to check the code and the text; we can check both on a single response.
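Putting these points together, a hedged sketch of the single-request version (the signature and names are illustrative only, not the final implementation):

import logging
import requests

logger = logging.getLogger(__name__)

def test_response(url, expected_code=200, expected_output=""):
    # One request is enough: both the status code and the body come from
    # the same Response object, unlike the separate curl calls used before.
    response = requests.get(url)
    checks_passed = response.status_code == expected_code
    if expected_output:
        checks_passed = checks_passed and expected_output in response.text
    if not checks_passed:
        # Log both code and text together, as the text can show the error message.
        logger.error("Unexpected code %d or output %s from %s",
                     response.status_code, response.text, url)
    return checks_passed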