7.0.x: backport treating unknown requirements as unsatisfied with opt-out flag - v3 #12238

Closed
wants to merge 384 commits

Conversation

jasonish
Member

@jasonish jasonish commented Dec 5, 2024

Previous PR: #12226
Changes:

  • Note that the new option will only exist in the 7.0 release series, and not in 8.

SV_BRANCH=OISF/suricata-verify#2162

jasonish and others added 30 commits April 9, 2024 19:29
Sphinx embeds a date in the generated man pages, and to provide
reproducible builds this date needs to be provided to Sphinx,
otherwise it will use the current date.

If building from Git, extract the date from the most recent commit. In
a release, this commit would be the commit that sets the version, so
the date is accurate.

If .git does not exist, use the most recent date found in the
ChangeLog.

The ChangeLog is not used when building from git, as the main/master
branch may not have recent enough timestamps.

This should provide a consistent date when re-building the
distribution from the same non-git archive, or from the same git
commit.

Ticket: OISF#6911
(cherry picked from commit b58dd5e)
In worktree scenarios, .git is a file. Assuming it's a directory causes
the release date to be taken from the ChangeLog instead of the last
commit, which, while not a big issue, can be confusing.
Function prototype has changed in a recent release. Rather than dealing
with detecting that, fall back to our regular pattern of using
pcre2_substring_copy_bynumber().

Bug: OISF#6918.
(cherry picked from commit b224209)
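
For illustration, a minimal standalone sketch of the fallback pattern described above, using pcre2_substring_copy_bynumber() to copy a capture group into a caller-supplied buffer (not Suricata's actual code; the pattern and buffer size are arbitrary):

```c
/* Illustrative only; build with: cc demo.c -lpcre2-8 */
#define PCRE2_CODE_UNIT_WIDTH 8
#include <pcre2.h>
#include <stdio.h>

int main(void)
{
    int errcode;
    PCRE2_SIZE erroffset;
    pcre2_code *re = pcre2_compile((PCRE2_SPTR)"(\\w+)", PCRE2_ZERO_TERMINATED,
            0, &errcode, &erroffset, NULL);
    if (re == NULL)
        return 1;

    pcre2_match_data *md = pcre2_match_data_create_from_pattern(re, NULL);
    PCRE2_SPTR subject = (PCRE2_SPTR)"hello world";
    int rc = pcre2_match(re, subject, PCRE2_ZERO_TERMINATED, 0, 0, md, NULL);
    if (rc > 0) {
        /* Copy capture group 1 into a fixed buffer instead of relying on
         * pointer-returning helpers whose prototypes changed between
         * pcre2 releases. */
        PCRE2_UCHAR copy[64];
        PCRE2_SIZE copylen = sizeof(copy);
        if (pcre2_substring_copy_bynumber(md, 1, copy, &copylen) == 0)
            printf("group 1: %s\n", copy);
    }

    pcre2_match_data_free(md);
    pcre2_code_free(re);
    return 0;
}
```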
When --enable-unittests w/o --enable-debug is used.

(cherry picked from commit e651cf9)
When outputting a float, check whether it is infinity or not a number,
and output a null instead.

Using a null was chosen as this is what serde_yaml, Firefox, Chrome,
Node, etc. do.

Ticket: OISF#6921
(cherry picked from commit 71f59e5)
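
A hedged sketch of the idea in plain C (not the actual EVE/JSON output code): non-finite floats are emitted as JSON null, matching what serde_yaml and the browsers do:

```c
#include <math.h>
#include <stdio.h>

/* Emit a float as a JSON value; non-finite values become a JSON null. */
static void print_json_float(FILE *out, double val)
{
    if (isnan(val) || isinf(val)) {
        fputs("null", out);
    } else {
        fprintf(out, "%g", val);
    }
}

int main(void)
{
    print_json_float(stdout, 3.14);     /* prints 3.14 */
    putchar('\n');
    print_json_float(stdout, INFINITY); /* prints null */
    putchar('\n');
    print_json_float(stdout, NAN);      /* prints null */
    putchar('\n');
    return 0;
}
```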
v4 was doing redundant recursion level setup.

v6 was missing PKT_REBUILT_FRAGMENT flag.

(cherry picked from commit af97316)
Eve's packet_info.linktype should correctly indicate what the `packet`
field contains. Until now it was using DLT_RAW even if Ethernet or other
L2+ headers were present.

This commit records the datalink of the packet creating the first
fragment, which can include the L2+ header data.

Bug: OISF#6887.
(cherry picked from commit 49c67b2)
This is just another variant of DLT_RAW.

Ticket: OISF#6943.
(cherry picked from commit 7632236)
Commit b8b8aa6 used tm_name of the
first StatsRecord of a thread block as key for the "threads" object.
However, depending on the type of thread, tm_name can be NULL and would
result in no entry being included for that thread at all. This caused
non-worker metrics to vanish from the "threads" object in the
dump-counters output.

This patch fixes this by remembering the first occurrence of a valid
tm_name within the per-thread block and adds another unittest to
cover this scenario.

(cherry picked from commit f172041)
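
A simplified sketch of the idea (hypothetical struct and names, not the actual counters code): use the first non-NULL tm_name within the per-thread block as the key, instead of unconditionally taking the first record's name:

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical, simplified stand-in for a per-thread stats record. */
struct stats_record {
    const char *tm_name; /* may be NULL for some thread types */
    long value;
};

/* Return the first non-NULL tm_name in the block, or NULL if none. */
static const char *block_key(const struct stats_record *recs, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        if (recs[i].tm_name != NULL)
            return recs[i].tm_name;
    }
    return NULL;
}

int main(void)
{
    struct stats_record block[] = {
        { NULL, 1 },    /* first record has no name */
        { "FM#01", 2 }, /* with the fix, this name becomes the key */
    };
    const char *key = block_key(block, sizeof(block) / sizeof(block[0]));
    printf("threads key: %s\n", key ? key : "(none)");
    return 0;
}
```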
A new suricata-verify test listens on the loopback interface, resulting
in the capture and in_iface fields appearing in the stats and event objects.

(cherry picked from commit f9cf87a)
Issue: 6957

Rather than selecting the thread_id index by packets traveling to the
server, use the flow flags. If the flow has been reversed, the second
slot represents the thread id to be used.

(cherry picked from commit c305ed1)
Ticket: 6878

Follow up on 1564942

When adding many sequence nodes, either from a start or a scalar event:

We add "sequence nodes" whose name is an integer cf sequence_node_name
and then run ConfNodeLookupChild to see if it had been already set
(from the command line cf comment in the code)
And ConfNodeLookupChild iterates the whole linked list...

1. We add node 1
2. To add node 2, we check if node 1's name equals the new node's name
3. To add node 3, we check if node 1's or node 2's name equals the new node's name
And so on...

This commit avoids these checks if the list is empty at the beginning.

(cherry picked from commit 240e068)
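
Roughly, the optimization looks like this (a generic linked list with hypothetical names; the real code works on ConfNode and ConfNodeLookupChild): if the list was empty before the sequence is added, skip the per-node lookup entirely:

```c
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical config node: a singly linked list of named children. */
struct node {
    char name[16];
    struct node *next;
};

static struct node *children = NULL;

/* O(n) lookup over the child list, standing in for ConfNodeLookupChild. */
static struct node *lookup_child(const char *name)
{
    for (struct node *n = children; n != NULL; n = n->next)
        if (strcmp(n->name, name) == 0)
            return n;
    return NULL;
}

static void add_sequence_nodes(int count)
{
    /* If there were no children before we start, nothing could have been
     * set from the command line, so skip the per-node lookups entirely. */
    bool check_existing = (children != NULL);

    for (int i = 0; i < count; i++) {
        char name[16];
        snprintf(name, sizeof(name), "%d", i);
        if (check_existing && lookup_child(name) != NULL)
            continue; /* already set, e.g. from the command line */
        struct node *n = calloc(1, sizeof(*n));
        if (n == NULL)
            return;
        snprintf(n->name, sizeof(n->name), "%s", name);
        n->next = children;
        children = n;
    }
}

int main(void)
{
    add_sequence_nodes(10000); /* no quadratic lookups when starting empty */
    printf("done\n");
    return 0;
}
```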
Datasets that hit the memcap limit need to be discarded; otherwise they
remain loaded with partial data while the signature itself is not loaded
due to the memcap error.

Ticket: OISF#6678
(cherry picked from commit 1f9600e)
Minor changes to improve readability, remove extraneous include files.

(cherry picked from commit c27dee7)
Issue: 6864

Reduce complexity by eliminating the PCRE logic and adding a unittest to
validate null/empty string handling

(cherry picked from commit ee94239)
Issue: 6864

Multiple IP options were not handled properly, as the values being OR'd
into the packet's ip option variable were enum values instead of bit
values.

(cherry picked from commit d7026b7)
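
A small self-contained illustration of this class of bug (hypothetical names, not the real decoder constants): OR-ing sequential enum values collides, while one-bit-per-option flags compose correctly:

```c
#include <stdint.h>
#include <stdio.h>

/* Buggy: sequential ordinals. ORing RR (1) and TS (2) yields 3, which is
 * indistinguishable from SEC (3). */
enum ipopt_ordinal {
    IPOPT_ORD_EOL = 0,
    IPOPT_ORD_RR  = 1,
    IPOPT_ORD_TS  = 2,
    IPOPT_ORD_SEC = 3,
};

/* Fixed: one bit per option, so combinations never collide. */
#define IPOPT_FLAG_RR  (1u << 0)
#define IPOPT_FLAG_TS  (1u << 1)
#define IPOPT_FLAG_SEC (1u << 2)

int main(void)
{
    uint32_t buggy = IPOPT_ORD_RR | IPOPT_ORD_TS;   /* == 3 == IPOPT_ORD_SEC */
    uint32_t fixed = IPOPT_FLAG_RR | IPOPT_FLAG_TS; /* SEC bit stays clear */

    printf("buggy looks like SEC: %s\n", buggy == IPOPT_ORD_SEC ? "yes" : "no");
    printf("fixed has SEC bit:    %s\n", (fixed & IPOPT_FLAG_SEC) ? "yes" : "no");
    return 0;
}
```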
(cherry picked from commit d4085fc)
Ticket: 6872

(cherry picked from commit 10590e6)
Ticket: 6948

http.response_body keyword did not enforce a direction, and thus
could match on files sent with POST requests

(cherry picked from commit e6895b8)
Unsafe handling of the buffer offset and the length of the data to be
inserted could lead to an integer overflow. This in turn would skip
growing the target buffer, which would then be memcpy'd into, leading to
an out-of-bounds write.

This issue shouldn't be reachable through any of the consumers of
the API, but to be sure some debug validation checks have been
added.

Bug: OISF#6903.
(cherry picked from commit cf6278f)
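
A hedged sketch of the kind of check involved (generic buffer code, not the actual API): verify that offset + length cannot wrap before deciding whether the target buffer needs to grow:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Insert len bytes at offset, growing the buffer if needed.
 * Returns false on overflow or allocation failure. */
static bool buffer_insert(uint8_t **buf, uint32_t *buf_size,
                          uint32_t offset, const uint8_t *data, uint32_t len)
{
    /* Overflow check: without it, offset + len can wrap around, the grow
     * step is skipped, and the memcpy below writes out of bounds. */
    if (len > UINT32_MAX - offset)
        return false;

    uint32_t needed = offset + len;
    if (needed > *buf_size) {
        uint8_t *tmp = realloc(*buf, needed);
        if (tmp == NULL)
            return false;
        *buf = tmp;
        *buf_size = needed;
    }
    memcpy(*buf + offset, data, len);
    return true;
}

int main(void)
{
    uint8_t *buf = NULL;
    uint32_t size = 0;
    bool ok = buffer_insert(&buf, &size, 4, (const uint8_t *)"abcd", 4);
    printf("insert ok: %s, size: %u\n", ok ? "yes" : "no", size);
    free(buf);
    return 0;
}
```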
Improve it for af-packet, dpdk, and netmap. The check would not consider
an interface to be IDS if the `default` section contained a copy-mode
field.

(cherry picked from commit 58bff9b)
For the capture methods that support livedev and IPS,
livedev.use-for-tracking is not supported.

This setting causes major flow tracking issues, as both sides of
a flow would be tracked in different flows.

This patch disables the livedev.use-for-tracking setting if it
is set to true. A warning will be issued.

Ticket: OISF#6726.
(cherry picked from commit 08841f2)
- typo in comment
- remove debug function that is not used and no longer valid

(cherry picked from commit 276d3d6)
Make tests more readable for comparing to the paper "Target-Based
Fragmentation Reassembly".

(cherry picked from commit 6339dea)
Use a more consistent naming scheme between ipv4 and ipv6.

(cherry picked from commit 2f00b58)
(cherry picked from commit bdd17de)
Instead of breaking the loop when the current fragment does not have
any more fragments, set a flag and continue to the next fragment as
the next fragment may have data that occurs before this fragment, but
overlaps it.

Then break if the next fragment does not overlap the previous.

Bug: OISF#6668
(cherry picked from commit d0fd078)
zemeteri and others added 24 commits October 21, 2024 08:32
The detect engine tenant reloading function has no engine release call
under its error label, so a memory leak is possible if initialization of
the new detect engine fails.

Bug: OISF#7303
(cherry picked from commit adcac9e)
These changes avoid possible memory leaks. If the parser was initialized
before the configuration file check, there was no deinit call before the
function returned. Check the config file's existence and type before
YAML parser initialization, so the parser does not need to be
deinitialized before exiting the function.

Bug: OISF#7302
(cherry picked from commit 87e6e93)
The profiling arrays are incorrectly sized by the number of thread
modules. Since they contain app-layer protocol data, they should be
sized by ALPROTO_MAX.

(cherry picked from commit 799822c)
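
Schematically (hypothetical constants, not the real enums), the fix is to size the per-protocol profiling array by the protocol count rather than the thread-module count:

```c
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical stand-ins for the real counts. */
enum { THREAD_MODULES = 32 }; /* number of thread modules */
enum { ALPROTO_MAX    = 64 }; /* number of app-layer protocols */

struct alproto_profile {
    unsigned long long ticks;
    unsigned long long calls;
};

int main(void)
{
    /* Buggy sizing: calloc(THREAD_MODULES, ...) is too small whenever
     * ALPROTO_MAX > THREAD_MODULES, so indexing by protocol overflows. */

    /* Fixed sizing: the array is indexed by app-layer protocol id. */
    struct alproto_profile *prof = calloc(ALPROTO_MAX, sizeof(*prof));
    if (prof == NULL)
        return 1;
    prof[ALPROTO_MAX - 1].calls = 1; /* in bounds with the corrected size */
    printf("last slot calls: %llu\n", prof[ALPROTO_MAX - 1].calls);
    free(prof);
    return 0;
}
```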
The current GetBlock degrades the sbb search from an rb-tree lookup to a
linear scan, which costs a lot of CPU time, and can be replaced by
SBB_RB_FIND_INCLUSIVE. This reduces the time complexity from O(n log n)
to O(log n).

Ticket: 7208.
(cherry picked from commit 951bcff)
Rust was using i8 as the return type, while C uses int. As of Rust
1.82, the return value is turned to garbage over the FFI boundary.

Ticket: OISF#7338
(cherry picked from commit 45384ef)
Ticket: 7366
Ticket: 6186
(cherry picked from commit dd71ef0)
Ticket: 7326

Having a lower progress than the one where we can actually get
occurrences of the multi-buffer made the prefilter
bail out too early, not having found a buffer in the multi-buffer
that matched the prefilter.

For example, we registered http_request_header with progress 0
instead of progress HTP_REQUEST_HEADERS==2, and if the first
packet had only the request line, we would consider
that signatures with http_request_header as prefilter/fast_pattern
could not match for this transaction, even if they in fact
could have a later packet with matching headers.

Hence, we got false negatives if http.request_header or
http.response_header was used as a fast pattern, the request
or response came in multiple packets, the first of these packets
did not have enough data (like only the http request line),
and the next packets did have the matching data.

(cherry picked from commit cca59cd)
The returned event_id was being set to -1, but the function wasn't
returning -1 to indicate error.

Ticket: OISF#7361
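
A minimal sketch of the intended contract (hypothetical lookup helper, not the real registry code): on failure, set the output id to -1 and also return -1 so callers can detect the error:

```c
#include <stdio.h>
#include <string.h>

static const char *event_names[] = { "not_request", "not_response" };

/* Returns 0 on success, -1 on error; *event_id is -1 on error. */
static int get_event_id(const char *name, int *event_id)
{
    int count = (int)(sizeof(event_names) / sizeof(event_names[0]));
    for (int i = 0; i < count; i++) {
        if (strcmp(event_names[i], name) == 0) {
            *event_id = i;
            return 0;
        }
    }
    *event_id = -1;
    return -1; /* the bug was returning success here despite event_id == -1 */
}

int main(void)
{
    int id;
    printf("known:   rc=%d id=%d\n", get_event_id("not_request", &id), id);
    printf("unknown: rc=%d id=%d\n", get_event_id("bogus", &id), id);
    return 0;
}
```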
- not_a_request to not_request
- not_a_response to not_response

Ticket: OISF#7361
(cherry picked from commit 833c7c6)
- weak_crypto_nodh -> weak_crypto_no_dh
- weak_crypto_noauth -> weak_crypto_no_auth

Ticket: OISF#7361
(cherry picked from commit b44ba32)
The event "modbus.invalid_unit_identifier" no longer exists.

Ticket: OISF#7361
(cherry picked from commit a55960e)
Rename InvalidHTTP1Settings to InvalidHttp1Settings so it gets the
expected name transformation of "invalid_http1_settings".

Ticket: OISF#7361
(cherry picked from commit b1c26dc)
instead of writing to a temporary buffer and then copying,
to save the cost of copying.

Ticket: 7229

Not a cherry-pick as we do not put the transforms in rust,
but just do this optimization in C
Issue: 7295

The sticky buffer name was incorrectly set to method; this commit fixes
the typo by using stat_code.
If there is a transform before dotprefix, it operates in place
in a single buffer, and must therefore use memmove instead of memcpy
to avoid UB.

Ticket: 7229
Backport of commit 5d82521.
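
A small standalone illustration of why memmove is required when source and destination overlap inside a single buffer (generic code, not the dotprefix implementation itself):

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Prefix "example.com" with a dot, in place, inside one buffer.
     * Source and destination overlap by all but one byte, so memcpy
     * would be undefined behaviour; memmove handles the overlap. */
    char buf[32] = "example.com";
    size_t len = strlen(buf);

    memmove(buf + 1, buf, len + 1); /* shift right, keeping the NUL */
    buf[0] = '.';

    printf("%s\n", buf); /* .example.com */
    return 0;
}
```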

Ticket: 7323
instead of stopping on the first message if it does not
have a reason code, like conn and conn_ack

Was fixed in master by big refactor 0a1062f
In the situation where the mem buffer cannot be expanded to the
requested size, drop the log message.

For each JSON log context, a warning will be emitted once with a partial
bit of the log record being dropped to identify what event types may be
leading to large log records.

This also fixes the call to MemBufferExpand, which is supposed to be
passed the amount to expand by, not the new size required.

Ticket: OISF#7300
(cherry picked from commit d39e427)
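
A hedged sketch of the argument semantics being fixed (hypothetical buffer type and helper; MemBufferExpand itself is only named in the message above, its real signature is not reproduced here): the grow function takes the amount to expand by, so the caller passes the shortfall rather than the new total size:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical buffer type and grow-by helper standing in for the real API. */
struct membuf {
    uint8_t *data;
    uint32_t size;
    uint32_t offset;
};

/* Grows the buffer BY expand_by bytes (not TO expand_by bytes). */
static bool membuf_expand_by(struct membuf *b, uint32_t expand_by)
{
    uint8_t *tmp = realloc(b->data, b->size + expand_by);
    if (tmp == NULL)
        return false;
    b->data = tmp;
    b->size += expand_by;
    return true;
}

static bool membuf_write(struct membuf *b, const uint8_t *data, uint32_t len)
{
    if (b->offset + len > b->size) {
        /* Fix: pass the shortfall (amount to expand by), not the new total. */
        uint32_t expand_by = (b->offset + len) - b->size;
        if (!membuf_expand_by(b, expand_by))
            return false; /* cannot grow: the caller drops the log record */
    }
    memcpy(b->data + b->offset, data, len);
    b->offset += len;
    return true;
}

int main(void)
{
    struct membuf b = { 0 };
    bool ok = membuf_write(&b, (const uint8_t *)"eve", 3);
    free(b.data);
    return ok ? 0 : 1;
}
```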
For example, "requires: foo bar" is an unknown requirement, however
its not tracked, nor an error as it follows the syntax. Instead,
record these unknown keywords, and fail the requirements check if any
are present.

A future version of Suricata may have new requires keywords, for
example a check for keywords.

Ticket: OISF#7418
(cherry picked from commit 820a3e5)
The new behavior in 8, which is backported here, is to treat unknown
requirements as unsatisfied requirements.

For 7.0.8, add a configuration option, "ignore-unknown-requirements"
to completely ignore unknown requirements, effectively treating them
as available.

Ticket: OISF#7434
@jasonish jasonish closed this Dec 5, 2024
@jasonish jasonish deleted the 7.0.x-7434-requires/v3 branch December 12, 2024 15:14