7.0.x: backport treating unknown requirements as unsatisfied with opt-out flag - v3 #12238
Closed
Conversation
Sphinx embeds a date in the generated man pages, and to provide reproducible builds this date needs to be provided to Sphinx, otherwise it will use the current date. If building from Git, extract the date from the most recent commit. In a release, this commit would be the commit that sets the version, so the date is accurate. If .git does not exist, use the most recent date found in the ChangeLog. The ChangeLog is not used when building from git, as the main/master branch may not have recent enough timestamps. This should provide a consistent date when re-building the distribution from the same non-git archive, or from the same git commit. Ticket: OISF#6911 (cherry picked from commit b58dd5e)
In worktree scenarios, .git is a file. Assuming it is a directory causes the release date to be taken from the ChangeLog instead of the last commit; while not a big issue, this can be confusing.
When --enable-unittests is used without --enable-debug. (cherry picked from commit e651cf9)
v4 was doing redundant recursion level setup. v6 was missing PKT_REBUILT_FRAGMENT flag. (cherry picked from commit af97316)
Eve's packet_info.linktype should correctly indicate what the `packet` field contains. Until now it was using DLT_RAW even if Ethernet or other L2+ headers were present. This commit records the datalink of the packet creating the first fragment, which can include the L2+ header data. Bug: OISF#6887. (cherry picked from commit 49c67b2)
Commit b8b8aa6 used tm_name of the first StatsRecord of a thread block as key for the "threads" object. However, depending on the type of thread, tm_name can be NULL and would result in no entry being included for that thread at all. This caused non-worker metrics to vanish from the "threads" object in the dump-counters output. This patch fixes this by remembering the first occurrence of a valid tm_name within the per-thread block and adds another unittest to cover this scenario. (cherry picked from commit f172041)
New suricata-verify test listens on loopback interface, resulting in the capture and in_iface fields in the stats and event objects. (cherry picked from commit f9cf87a)
Issue: 6957 Rather than selecting the thread_id index based on whether packets travel to the server, use the flow flags. If the flow has been reversed, the second slot represents the thread id to be used. (cherry picked from commit c305ed1)
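As a hedged sketch of the idea (the flag and function names here are illustrative, not the actual Suricata flow API), selecting the slot from flow state rather than packet direction could look like:

```c
#include <stdbool.h>

/* Hypothetical flag: set once the flow has been detected as reversed.
 * Suricata's real flag and field names differ; this only illustrates
 * deriving the thread_id slot from flow state instead of from the
 * direction of the current packet. */
#define FLOW_DIR_REVERSED (1u << 0)

/* Second slot (index 1) once the flow has been reversed, first slot
 * (index 0) otherwise. */
static int flow_thread_id_slot(unsigned int flow_flags)
{
    return (flow_flags & FLOW_DIR_REVERSED) ? 1 : 0;
}
```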
Ticket: 6878 Follow-up on 1564942. When adding many sequence nodes, either from a start or scalar event, we add "sequence nodes" whose name is an integer (cf. sequence_node_name) and then run ConfNodeLookupChild to see if the node had already been set (from the command line, cf. the comment in the code). ConfNodeLookupChild iterates the whole linked list: 1. We add node 1. 2. To add node 2, we check if node 1 equals this new node's name. 3. To add node 3, we check if node 1 or 2 equals this new node's name. And so on. This commit avoids these checks if the list is empty at the beginning. (cherry picked from commit 240e068)
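A minimal sketch of the fix in spirit, using a hypothetical list type rather than the real ConfNode API: when the child list was empty before the sequence nodes started being appended, none of them can pre-exist, so the per-insert lookup can be skipped.

```c
#include <stddef.h>
#include <string.h>

/* Hypothetical minimal node list; the real code uses Suricata's
 * ConfNode, and ConfNodeLookupChild walks the child list. */
struct node {
    const char *name;
    struct node *next;
};

/* O(n) lookup over the list, as ConfNodeLookupChild does. */
static struct node *lookup_child(struct node *head, const char *name)
{
    for (struct node *n = head; n != NULL; n = n->next)
        if (strcmp(n->name, name) == 0)
            return n;
    return NULL;
}

/* The optimization: if the list was empty when we started appending
 * sequence nodes, none of them can already exist (e.g. set from the
 * command line), so skip the lookup and avoid the quadratic scan. */
static struct node *find_existing(struct node *head, const char *name,
                                  int list_was_empty)
{
    if (list_was_empty)
        return NULL;
    return lookup_child(head, name);
}
```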
(cherry picked from commit 365a66a)
Minor changes to improve readability, remove extraneous include files. (cherry picked from commit c27dee7)
Issue: 6864 Reduce complexity by eliminating the PCRE logic and adding a unittest to validate null/empty string handling (cherry picked from commit ee94239)
Issue: 6864 Multiple IP options were not handled properly as the value being OR'd into the packet's ip option variable were enum values instead of bit values. (cherry picked from commit d7026b7)
(cherry picked from commit d4085fc)
Ticket: 6872 (cherry picked from commit 10590e6)
Ticket: 6948 http.response_body keyword did not enforce a direction, and thus could match on files sent with POST requests (cherry picked from commit e6895b8)
Unsafe handling of the buffer offset and the to-be-inserted data's length could lead to an integer overflow. This in turn would skip growing the target buffer, which would then be memcpy'd into, leading to an out-of-bounds write. This issue shouldn't be reachable through any of the consumers of the API, but to be sure some debug validation checks have been added. Bug: OISF#6903. (cherry picked from commit cf6278f)
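A hedged sketch of an overflow-safe capacity check (the names are hypothetical, not the actual streaming-buffer API): the wrap-around in `offset + len` is detected before it can make the grow check pass spuriously.

```c
#include <stdint.h>
#include <stdbool.h>

/* Return true if the buffer must grow to hold `len` bytes at `offset`.
 * Sets *overflow when offset + len would wrap past UINT32_MAX; in that
 * case the caller must reject the insert instead of memcpy'ing, which
 * is the out-of-bounds write described above. */
static bool buffer_needs_grow(uint32_t offset, uint32_t len,
                              uint32_t capacity, bool *overflow)
{
    if (len > UINT32_MAX - offset) {
        *overflow = true;  /* offset + len wraps: unsafe to proceed */
        return false;
    }
    *overflow = false;
    return offset + len > capacity;
}
```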
Improve it for af-packet, dpdk, netmap. The check would not consider an interface to be in IDS mode if the `default` section contained a copy-mode field. (cherry picked from commit 58bff9b)
For the capture methods that support livedev and IPS, livedev.use-for-tracking is not supported. This setting causes major flow tracking issues, as both sides of a flow would be tracked in different flows. This patch disables the livedev.use-for-tracking setting if it is set to true. A warning will be issued. Ticket: OISF#6726. (cherry picked from commit 08841f2)
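As a sketch of the affected configuration (the setting name comes from the patch description; its exact placement in suricata.yaml as shown here is an assumption):

```yaml
# When an IPS-capable capture method is in use, this setting is forced
# off; if set to true, Suricata issues a warning and disables it, since
# it would split the two sides of a flow into different flows.
livedev:
  use-for-tracking: false
```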
- typo in comment - remove debug function that is not used and no longer valid (cherry picked from commit 276d3d6)
Make tests more readable for comparing to the paper "Target-Based Fragmentation Reassembly". (cherry picked from commit 6339dea)
Use a more consistent naming scheme between ipv4 and ipv6. (cherry picked from commit 2f00b58)
(cherry picked from commit bdd17de)
Instead of breaking the loop when the current fragment does not have any more fragments, set a flag and continue to the next fragment as the next fragment may have data that occurs before this fragment, but overlaps it. Then break if the next fragment does not overlap the previous. Bug: OISF#6668 (cherry picked from commit d0fd078)
Commit changes are made to avoid possible memory leaks. If the parser was initialized before the configuration file check, there was no deinit call before the function returned. Check the config file's existence and type before YAML parser initialization, so we don't need to deinit the parser before exiting the function. Bug: OISF#7302 (cherry picked from commit 87e6e93)
The profiling arrays are incorrectly sized by the number of thread modules. Since they contain app-layer protocol data, they should be sized by ALPROTO_MAX. (cherry picked from commit 799822c)
The current GetBlock degrades the SBB search from an rb-tree lookup to a linear scan, which costs much CPU time, and can be replaced by SBB_RB_FIND_INCLUSIVE. This reduces the time complexity from O(nlogn) to O(logn). Ticket: 7208. (cherry picked from commit 951bcff)
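A hedged illustration of the idea behind an "inclusive find", using a sorted array with binary search instead of the actual SBB red-black tree: locate the block whose range contains a given offset in O(log n) rather than walking the blocks linearly.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for a stream buffer block. Blocks are assumed
 * sorted by offset and non-overlapping. */
struct block {
    uint64_t offset;
    uint32_t len;
};

/* Inclusive find: return the index of the block containing `pos`, or
 * -1 if none does. Binary search gives the O(log n) behavior that an
 * rb-tree inclusive find provides over a linear walk. */
static ptrdiff_t find_inclusive(const struct block *blocks, size_t n,
                                uint64_t pos)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (pos < blocks[mid].offset) {
            hi = mid;                      /* pos lies left of mid */
        } else if (pos >= blocks[mid].offset + blocks[mid].len) {
            lo = mid + 1;                  /* pos lies right of mid */
        } else {
            return (ptrdiff_t)mid;         /* mid's range includes pos */
        }
    }
    return -1;
}
```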
(cherry picked from commit b44fc62)
Ticket: 7366 Ticket: 6186 (cherry picked from commit dd71ef0)
Ticket: 7326 Having a lower progress than one where we can actually get occurrences of the multi-buffer made prefilter bail out too early, not having found a buffer in the multi-buffer that matched the prefilter. For example, we registered http_request_header with progress 0 instead of progress HTP_REQUEST_HEADERS==2, and if the first packet had only the request line, we would consider that signatures with http_request_header as prefilter/fast_pattern could not match for this transaction, even if they in fact could have a later packet with matching headers. Hence we got false negatives if http.request_header or http.response_header was used as fast pattern, the request or response came in multiple packets, and the first of these packets did not have enough data (like only the http request line) while the next packets did have the matching data. (cherry picked from commit cca59cd)
The returned event_id was being set to -1, but the function wasn't returning -1 to indicate error. Ticket: OISF#7361
instead of writing to a temporary buffer and then copying, to save the cost of copying. Ticket: 7229 Not a cherry-pick as we do not put the transforms in rust, but just do this optimization in C
Issue: 7295 The sticky buffer name was incorrectly set to method; this commit fixes the typo by naming it stat_code.
If there is a transform before dotprefix, it operates in place in a single buffer, and must therefore use memmove instead of memcpy to avoid UB. Ticket: 7229
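A hedged sketch of why memmove is required here (the function name is illustrative, not the actual transform code): when shifting bytes within one buffer, the source and destination regions overlap, and memcpy over overlapping regions is undefined behavior.

```c
#include <string.h>

/* In-place dotprefix-style transform: shift `len` bytes right by one
 * and write '.' at the front. The source [buf, buf+len) and the
 * destination [buf+1, buf+1+len) overlap, so memmove is required;
 * memcpy would be UB here. The caller must ensure buf has room for
 * len + 1 bytes. */
static void dot_prefix_inplace(char *buf, size_t len)
{
    memmove(buf + 1, buf, len); /* safe for overlapping regions */
    buf[0] = '.';
}
```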
Backport of commit 5d82521. Ticket: 7323
instead of stopping on the first message if it does not have a reason code, like conn and conn_ack. Fixed in master by the big refactor 0a1062f
In the situation where the mem buffer cannot be expanded to the requested size, drop the log message. For each JSON log context, a warning will be emitted once, with a partial bit of the dropped log record included to identify what event types may be leading to large log records. This also fixes the call to MemBufferExpand, which is supposed to be passed the amount to expand by, not the new size required. Ticket: OISF#7300 (cherry picked from commit d39e427)
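A hedged sketch of the expand-by vs. new-size confusion (the struct and function names are hypothetical; only the expand-by semantics of MemBufferExpand come from the description above):

```c
#include <stdint.h>
#include <stdbool.h>

/* Hypothetical buffer with expand-by semantics, like MemBufferExpand:
 * the argument is how many bytes to ADD, not the new total size. */
struct membuf {
    uint32_t size;
};

static bool membuf_expand(struct membuf *b, uint32_t expand_by)
{
    b->size += expand_by;
    return true;
}

/* Correct call site: compute the missing amount first. Passing the
 * required total as if it were the increment would over-allocate. */
static bool membuf_ensure(struct membuf *b, uint32_t required)
{
    if (required <= b->size)
        return true;
    return membuf_expand(b, required - b->size);
}
```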
(cherry picked from commit 287d836)
For example, "requires: foo bar" is an unknown requirement; however it's not tracked, nor is it an error, as it follows the syntax. Instead, record these unknown keywords, and fail the requirements check if any are present. A future version of Suricata may have new requires keywords, for example a check for keywords. Ticket: OISF#7418 (cherry picked from commit 820a3e5)
The new behavior in Suricata 8, now backported, is to treat unknown requirements as unsatisfied requirements. For 7.0.8, add a configuration option, "ignore-unknown-requirements", to completely ignore unknown requirements, effectively treating them as available. Ticket: OISF#7434
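As a sketch of the opt-out (the option name comes from the description above; its exact location within suricata.yaml is not stated here, so treat the placement as an assumption):

```yaml
# Opt out of the new behavior: ignore unknown "requires" keywords,
# effectively treating them as available instead of unsatisfied.
# Placement of this key in suricata.yaml is hypothetical.
ignore-unknown-requirements: true
```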
jasonish requested review from jufajardini, catenacyber, victorjulien and a team as code owners on December 5, 2024 14:43
Previous PR: #12226
Changes:
SV_BRANCH=OISF/suricata-verify#2162