tests: add a test for missing runmode v3 #1454
Conversation
Ticket: #5711
We're getting there!
The way to check for the error message needs some fixes, but I suggest you wait for the PR changes to be accepted before you try to get this test right, as any changes happening on the Suricata side might require an update to the test.
- file:
    args: "No capture mode specified as a command-line argument, specify one from the --list-runmodes" stderr
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I don't think this works as intended. Please take a better look at the example from https://github.com/OISF/suricata-verify/blob/master/tests/detect-strip_whitespace-01/test.yaml#L9-L12 to understand how to check for an error message in the stderr file and count its occurrences ;)
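For reference, the pattern that example points at is the standard suricata-verify shell check, which runs a command and compares its stdout against an expect value. A rough sketch of that shape, with the grep string shortened to a substring of this PR's error message (the exact command in the linked test differs):

checks:
  - shell:
      args: grep 'No capture mode specified' stderr | wc -l | xargs
      expect: 1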
Thank you for your feedback. I've carefully reviewed the suggestion and verified that the error message in the stderr file matches the specified one exactly, with no extra occurrences, using grep and wc in the checks like the example you gave. In my case the error message occurs exactly once in the stderr file, so I did not explicitly specify expect: 1.
If it's beneficial for clarity, I can include it, but I wanted to confirm whether my understanding is correct and whether it's necessary in this specific scenario.
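(A manual spot-check along those lines, run from the directory containing the run's stderr file, would look like this; the expected output is the line count:)

$ grep 'No capture mode specified as a command-line argument, specify one from the --list-runmodes' stderr | wc -l
1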
I understand your reasoning, but have you run this test locally to see if it works as expected?
What I meant to say in my other comment is that the check itself doesn't work as is. Since the grep is missing there, there is nothing for SV to compare against to indicate whether the test fails or succeeds, and therefore it doesn't interpret this as a test. Does this help?
Okay, I understand now, and I just ran this test again with this:
checks:
  - shell:
      args: "grep 'No capture mode specified as a command-line argument, specify one from the --list-runmodes' stderr | wc -l | xargs"
      expect: 1
and here is the result:
Finished release [optimized] target(s) in 0.80s
Number of concurrent jobs: 2
===> runmode-test: OK
PASSED: 1
FAILED: 0
SKIPPED: 0
But surprisingly, when I tried using grep a lot of times earlier, while incorporating feedback before I decided to open the PR, it didn't say any test passed or failed; it just indicated 0 for both pass and fail. Maybe there was a typo I overlooked.
That is very possible. One thing I still want to improve in SV is getting better error messages when a test fails to run, instead of just showing nothing, as happens now and as happened to you. Glad to know that things are making sense! :)
Thank you. So do I go ahead and incorporate this feedback, or should I wait for the PR changes?
The format of the test itself needs to be updated, regardless of future decisions on what the output of that command should look like. So I think you are safe to incorporate these changes, but you can hold off on submitting a new PR for them, for now.
Please see my reply and inline comment ;)
requires:
  min-version: 7.0.3
  pcap: false

exit-code: 1

args:
nit: While reviewing locally, I've noticed some trailing whitespace at the end of the min-version and args: lines. Could you remove those, please?
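Putting the pieces from this thread together, the final test.yaml would look roughly like this. This is a sketch: the value of args was elided in the diff above, so it is left unfilled here as well.

requires:
  min-version: 7.0.3
  pcap: false

args:

exit-code: 1

checks:
  - shell:
      args: grep 'No capture mode specified as a command-line argument, specify one from the --list-runmodes' stderr | wc -l | xargs
      expect: 1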
Hi there, according to our guidelines for stale PRs, I'm closing this. Thanks for your interest in contributing to our project, and feel free to reach out in case you have time and want to contribute to Suricata again, or if you're still working on this task! :) https://docs.suricata.io/en/latest/devguide/contributing/github-pr-workflow.html#mergeable-pull-requests
Ticket: 5711
Describe changes:
Previous PR: #1442
Suricata PR: OISF/suricata#9740
Redmine ticket: https://redmine.openinfosecfoundation.org/issues/5711