tests: add a test for missing runmode v3 #1454

Closed · wants to merge 1 commit

11 changes: 11 additions & 0 deletions tests/runmode-test/README.md
@@ -0,0 +1,11 @@
# Suricata Missing Runmode Issue

This directory contains test files and documentation for addressing the "Missing runmode" issue 5711 in Suricata.

## Issue Description

The issue relates to Suricata not providing adequate feedback when the runmode is missing, leading to confusion for users.

## Related Issue

https://redmine.openinfosecfoundation.org/issues/5711
2 changes: 2 additions & 0 deletions tests/runmode-test/suricata.yaml
@@ -0,0 +1,2 @@
%YAML 1.1
---
13 changes: 13 additions & 0 deletions tests/runmode-test/test.yaml
@@ -0,0 +1,13 @@
requires:
  min-version: 7.0.3
pcap: false

exit-code: 1

args:
Comment on lines +1 to +7

Contributor

nit: While reviewing locally, I've noticed some trailing whitespace at the end of min-version and args: lines. Could you remove those, please?

- -c suricata.yaml

checks:

- file:
args: "No capture mode specified as a command-line argument, specify one from the --list-runmodes" stderr
Comment on lines +12 to +13

Contributor

I don't think this works as intended. Please take a better look at the example from https://github.com/OISF/suricata-verify/blob/master/tests/detect-strip_whitespace-01/test.yaml#L9-L12 to understand how to check for an error message in the stderr file and count its occurrences ;)
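
For readers following the thread, a minimal sketch of the check pattern the reviewer is pointing to, assuming the suricata-verify shell-check form that appears later in this discussion (the linked test's exact contents may differ):

```yaml
checks:
  # Run a shell command and compare its stdout against "expect".
  - shell:
      # Count occurrences of the error message in the stderr log;
      # the trailing xargs trims any whitespace that wc -l may emit.
      args: "grep 'No capture mode specified as a command-line argument, specify one from the --list-runmodes' stderr | wc -l | xargs"
      expect: 1
```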

Author

> I don't think this works as intended. Please take a better look at the example from https://github.com/OISF/suricata-verify/blob/master/tests/detect-strip_whitespace-01/test.yaml#L9-L12 to understand how to check for an error message in the stderr file and count its occurrences ;)

Thank you for your feedback. I've carefully reviewed the suggestion, and using grep and wc in the checks like the example you gave, I verified that the error message in the stderr file matches the specified one exactly, with no extra occurrences. In my case the error message occurs exactly once in the stderr file, so I did not explicitly specify expect: 1.
If it's beneficial for clarity, I can include it, but I wanted to confirm whether my understanding is correct and whether it's necessary in this specific scenario.

Contributor

I understand your reasoning, but have you run this test locally to see if it works as expected?

What I meant to say with my other comment is that the check itself doesn't work as is. Since we are missing the grep there, there isn't something for SV to compare against and indicate whether the test fails or succeeds, and therefore it doesn't interpret this as a test. Does this help?

Author
@comfort619 Nov 14, 2023


> I understand your reasoning, but have you run this test locally to see if it works as expected?
>
> What I meant to say with my other comment is that the check itself doesn't work as is. Since we are missing the grep there, there isn't something for SV to compare against and indicate whether the test fails or succeeds, and therefore it doesn't interpret this as a test. Does this help?

Okay, I understand now. I just ran this test again with this:

checks:
  - shell:
      args: "grep 'No capture mode specified as a command-line argument, specify one from the --list-runmodes' stderr | wc -l | xargs"
      expect: 1

and here is the result:
Finished release [optimized] target(s) in 0.80s
Number of concurrent jobs: 2
===> runmode-test: OK

PASSED: 1
FAILED: 0
SKIPPED: 0

But surprisingly, when I tried using grep many times earlier, while incorporating feedback before I decided to open the PR, it didn't say any test passed or failed; it just indicated 0 for both passed and failed. Maybe there was a typo I neglected.

Contributor

That is very possible. One thing I still want to improve with SV is to get better error messages when a test fails to run, instead of just showing nothing, as happens now and as has happened to you. Glad to know that things are making sense! :)

Author

> That is very possible. One thing I still want to improve with SV is to get better error messages when a test fails to run, instead of just showing nothing, as happens now and as has happened to you. Glad to know that things are making sense! :)

Thank you. So do I go ahead and incorporate this feedback, or should I wait for the PR changes?

Contributor

The format of the test itself needs to be updated, regardless of future decisions on what the output of that command should look like. So I think you are safe to incorporate these changes, but can hold off on submitting a new PR for them for now.
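
Putting the review feedback together, the updated test.yaml would look roughly like the sketch below (trailing whitespace removed, and the file check replaced by the shell/grep check from the discussion above). Treat this as a summary of the thread rather than the final committed version:

```yaml
requires:
  min-version: 7.0.3

pcap: false

exit-code: 1

args:
  - -c suricata.yaml

checks:
  # Expect the missing-runmode error exactly once in Suricata's stderr log.
  - shell:
      args: "grep 'No capture mode specified as a command-line argument, specify one from the --list-runmodes' stderr | wc -l | xargs"
      expect: 1
```

The trailing xargs strips the padding that wc -l can add on some platforms, so the command's output compares cleanly against expect: 1.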