Request: paging/limit results sets #24
Comments
@edperry trying to think about this with you - wouldn't it be an option to simply split the walk into multiple sub-walks, which would produce individual documents per sub-walk?
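A minimal sketch of that idea, assuming the input's documented `hosts`/`walk` options (the host address, community string, and sub-tree OIDs are just placeholders): one snmp input per sub-tree, so each poll emits its own, smaller document.

```
input {
  # Walk narrower sub-trees in separate inputs instead of one
  # giant walk of 1.3.6.1; each input emits its own document.
  snmp {
    hosts => [{host => "udp:10.0.0.1/161" community => "public"}]
    walk  => ["1.3.6.1.2.1.1"]   # mib-2 system
    interval => 60
  }
  snmp {
    hosts => [{host => "udp:10.0.0.1/161" community => "public"}]
    walk  => ["1.3.6.1.2.1.2"]   # mib-2 interfaces
    interval => 60
  }
}
```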
Sorry for the delay, I was on vacation. That may be a good way of doing it; I was sort of focused on the size of the payload going through Kafka, but let me give that a try. I totally missed the idea of handling it as a filter. 👍
Hmmm, I just looked at the data structure and I'm not sure how I would split this data; it's not a JSON array.
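For anyone following along: the walk result is roughly one flat map of OID fields on a single event, not an array, so the stock split filter has nothing to grab onto. An illustrative (not exact) shape:

```
{
  "@timestamp": "2021-01-01T00:00:00.000Z",
  "host": "10.0.0.1",
  "iso.org.dod.internet.mgmt.mib-2.system.sysDescr.0": "Linux gw1 5.4.0",
  "iso.org.dod.internet.mgmt.mib-2.system.sysUpTime.sysUpTimeInstance": 123456,
  "iso.org.dod.internet.mgmt.mib-2.interfaces.ifNumber.0": 4
}
```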
So, thinking about what you wrote some more: I think walking the MIB by groups would be nicer, but you still have to worry about the size of the results. For example, if I polled the "Packages" or "Processes" running, the result could still easily exceed any limit. I think an option to just have a flag that creates an event for every MIB object would work. This would create a lot of little objects, but it also provides the ability to attach the META information/description for each MIB.
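That flag doesn't exist as far as I can tell, but the same effect can be approximated in the pipeline: fold the flat OID fields into an array, then split it into one event per entry. A hedged sketch using the stock ruby and split filters (the `oids`, `oid_name`, and `oid_value` names are mine, not the plugin's):

```
filter {
  ruby {
    # Fold every non-reserved field into an array of name/value pairs.
    code => '
      oids = []
      event.to_hash.each do |k, v|
        next if k.start_with?("@") || k == "host"
        oids << { "oid_name" => k, "oid_value" => v }
        event.remove(k)
      end
      event.set("oids", oids)
    '
  }
  # Emit one event per pair; each clone keeps @timestamp and host.
  split { field => "oids" }
}
```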
Issue logstash-plugins/logstash-integration-snmp#25: I was thinking about converting from the current all-in-one document to something like one small name/value document per OID (a hedged sketch is below), so this would make 3 options, starting with mode => batch.
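A hedged reconstruction of the before/after shapes, with field names borrowed from the later comments in this thread rather than from #25 itself:

```
# from: one batch document carrying every walked OID as its own field
{ "sysDescr.0": "Linux gw1 5.4.0", "sysUpTime.0": 123456, "ifNumber.0": 4 }

# to: one small document per OID
{ "oid_name": "sysDescr.0",  "oid_value": "Linux gw1 5.4.0" }
{ "oid_name": "sysUpTime.0", "oid_value": 123456 }
{ "oid_name": "ifNumber.0",  "oid_value": 4 }
```

mode => batch would presumably keep today's behavior, with the other two options selecting some form of split output.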
IDK if raising the total fields limit and depth limit to higher values is good enough, but let me know what you think.
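If the answer is just to raise those Elasticsearch mapping limits, that's a per-index setting; a minimal sketch (index name and values are placeholders):

```
PUT snmp-walks/_settings
{
  "index.mapping.total_fields.limit": 5000,
  "index.mapping.depth.limit": 30
}
```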
looking at this and #42 - will get back to you shortly 👀
I was just thinking about this: oid_value could be either a string, an int, etc., so I don't think this format will work, or maybe we just create multiple fields.
I am not sure it is a great idea now to create a mode => single_name_value, because it is not a straightforward string: value combination when it comes to ELK.
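The "multiple fields" variant could sidestep that mapping conflict by keying each value by type, e.g. (hypothetical field names):

```
{ "oid_name": "sysDescr.0",  "oid_value_string": "Linux gw1 5.4.0" }
{ "oid_name": "sysUpTime.0", "oid_value_long":   123456 }
```

Each `oid_value_*` field would then map to a single Elasticsearch type, instead of one `oid_value` field flip-flopping between string and numeric.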
When walking a large tree such as 1.3.6.1, the result is sometimes very large and creates huge documents. Obviously I could raise the maximum size of messages going to Kafka (a sketch of that workaround is below), but I think it would be great if there was a feature to split the document output by the input, based on some criteria.
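For reference, the Kafka workaround would look roughly like this on the output side (values are placeholders, and the broker/topic `max.message.bytes` has to be raised to match):

```
output {
  kafka {
    bootstrap_servers => "kafka:9092"
    topic_id => "snmp-walks"
    # Producer-side cap on a single request, in bytes.
    max_request_size => 10485760
  }
}
```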
Some ideas I have are: