Fm row fix/v3 #12218
Conversation
In multi-instance flow manager setups, each flow manager gets a slice of the hash table to manage. Due to a logic error in the chunked scanning of the hash slice, instances beyond the first would always rescan the same (first) sub-slice of their slice. The `pos` variable, which holds the starting position for the next scan, was treated as if it held a value relative to the bounds of the slice. It was, however, holding an absolute position. As a result, the bounds check always considered it out of bounds and reset the sub-slice to be scanned to the first part of the instance's slice. This patch addresses the issue by correctly handling the fact that the value is absolute.

Bug: OISF#7365.
Fixes: e9d2417 ("flow/manager: adaptive hash eviction timing")
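To make the off-by-slice logic concrete, here is a minimal C sketch of a chunked hash-slice scan. All names (`ScanChunk`, `CHUNK`, the slice bounds) are invented for illustration and do not mirror the actual Suricata flow manager code; the sketch only reproduces the relative-vs-absolute `pos` mistake described above.

```c
/* Hypothetical sketch of the relative-vs-absolute `pos` bug; the real
 * Suricata code is structured differently. */
#include <stdio.h>
#include <stdint.h>

#define CHUNK 4 /* rows scanned per pass */

/* Scan one chunk of this instance's slice [start, end) of the global
 * hash table. `pos` is the absolute row to resume from; the return
 * value is the absolute resume position for the next pass. */
static uint32_t ScanChunk(uint32_t pos, uint32_t start, uint32_t end, int fixed)
{
    if (fixed) {
        /* Correct: `pos` is absolute, so compare it against the absolute
         * slice bounds and wrap to the slice start when it runs past. */
        if (pos < start || pos >= end)
            pos = start;
    } else {
        /* Buggy: compares the absolute `pos` against the slice *length*.
         * For any instance with start > 0, `pos` (>= start) exceeds the
         * length on every pass, so the check fires every time and the
         * scan keeps resetting to the first chunk of the slice. */
        if (pos >= end - start)
            pos = start;
    }

    uint32_t stop = (pos + CHUNK > end) ? end : pos + CHUNK;
    printf("scanning rows [%u, %u)\n", pos, stop);
    return stop;
}

int main(void)
{
    uint32_t pos = 100; /* second instance: owns rows [100, 116) */

    puts("buggy: rescans the first chunk forever");
    for (int i = 0; i < 4; i++)
        pos = ScanChunk(pos, 100, 116, 0);

    pos = 100;
    puts("fixed: walks the whole slice in chunks");
    for (int i = 0; i < 4; i++)
        pos = ScanChunk(pos, 100, 116, 1);
    return 0;
}
```

The fixed branch matches the patch conceptually: `pos` is kept as an absolute hash-row index and checked against the instance's absolute slice bounds, so instances other than the first actually advance through their slice instead of restarting at its first chunk.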
Codecov Report: all modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@            Coverage Diff            @@
##           master   #12218    +/-   ##
=========================================
  Coverage   83.18%   83.18%
=========================================
  Files         912      912
  Lines      257169   257171       +2
=========================================
+ Hits       213914   213935      +21
+ Misses      43255    43236      -19

Flags with carried forward coverage won't be shown.
Tests well. LGTM 🚀
Information: ERROR: QA failed on SURI_TLPR1_alerts_cmp.
Pipeline 23717

Information: ERROR: QA failed on SURI_TLPR1_alerts_cmp.
Pipeline 23720
@ct0br0 for the alert top 500 redistribution, how do the numbers look for the alerts that dropped out of the top 500? Do they have at least the same counts as in master?
Merged in #12245, thanks!
Flow manager multi-instance fixes.
Fix start issue with #12208