Filter randomly drops events #31

Open
michaeleino opened this issue Apr 30, 2017 · 9 comments

Comments

@michaeleino

michaeleino commented Apr 30, 2017

This filter is really great, but I have noticed that it randomly drops some events: the end events show up tagged with "elapsed_end_without_start".

I have tried setting the pipeline workers to 1, but it didn't help; it just degraded performance badly.

My system config:
Ubuntu 16.04
logstash 5.3.2
logstash-filter-elapsed (4.0.1)
Elasticsearch 5.3.1

elapsed {
  periodic_flush => true
  start_tag => "startevent"
  end_tag => "endevent"
  unique_id_field => "ID"
  timeout => 600
  new_event_on_match => false
  add_tag => [ "autoelapsed" ]
}
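
For reference, a minimal sketch of how the single-worker setting mentioned above is typically applied; the config path shown is a placeholder:

# logstash.yml -- run the filter stage with a single worker thread
pipeline.workers: 1

# or equivalently on the command line (placeholder config path)
bin/logstash -w 1 -f /etc/logstash/conf.d/elapsed.conf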

@melbouhy

melbouhy commented Jun 7, 2017

Hello,

Did you find any solution? I have the exact same problem.

Thanks

@michaeleino
Author

It seems there is no solution for the elapsed plugin... adding it takes a lot of CPU.

I don't use it anymore; I calculate the duration with scripted fields in Kibana instead.

@melbouhy

@michaeleino Could you please give me an example of how you are doing this? Which Kibana version are you using?

@michaeleino
Author

michaeleino commented Jun 15, 2017

In Kibana, go to Management > Index Patterns > Scripted fields.
Add a new field, set its language to Painless, and use this simple script:

!doc['End_Time'].empty ? ((doc['End_Time'].value - doc['Start_Time'].value) / 1000) : ''

Replace "End_Time"/"Start_Time" with your own field names.

The subtraction gives milliseconds; I divide by 1000 to get the output in seconds.

The drawback is that you cannot directly calculate and aggregate on this field outside Kibana; it has to be recomputed on each search.
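
A slightly more defensive sketch of the same idea, guarding both fields so the scripted field always returns a number; the field names are placeholders and it assumes dates are exposed as epoch milliseconds, as in the one-liner above:

// return the duration in seconds, or 0 if either timestamp is missing
if (!doc['End_Time'].empty && !doc['Start_Time'].empty) {
  return (doc['End_Time'].value - doc['Start_Time'].value) / 1000;
}
return 0;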

@sivachandragit

Hi,

I have only one field, called timestamp, and I want to know the processing time of each request, i.e. the time difference between two documents that share a unique identifier.

@DirkRichter

It seems that the tag "elapsed_end_without_start" occurs with multiple pipeline workers. Try to isolate the elapsed computation into its own pipeline and use pipeline.workers=1 for that pipeline, as sketched below.
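
A minimal sketch of that layout, assuming a Logstash version with pipelines.yml support (6.0+); the pipeline ids and config paths are placeholders:

# pipelines.yml
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/main.conf"
- pipeline.id: elapsed
  path.config: "/etc/logstash/conf.d/elapsed.conf"
  pipeline.workers: 1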

@gomclifton

I have this issue as well. Setting pipeline.workers=1 reduced the elapsed_end_without_start tags a lot, but we still get a small number of them. Is any more work being done on this?

@gomclifton

Are there any plans to fix the elapsed_end_without_start errors? I've also reduced pipeline.workers to 1, but these errors still occur.

@gvorobyo

Still relevant; same issue for me.
