cron job executes successfully a few times, but later gets stuck without any error log #88
Comments
I assume you're using logstash-integration-jdbc >= 5.0.1? Any chance you could downgrade to 5.0.1 with:
And see if the issue still occurs?
For the record, we've seen a similar issue (using multiple pipelines/inputs); downgrading to ...
@szmengran we're trying to reproduce scenarios such as yours but could use details if you still have some for us. Were there any other pipelines involved in your LS installation besides the configuration you shared?
How long did you wait before restarting LS - did it never recover even after several minutes? Also, knowing your LS version ...
It looks like we are facing a similar issue in our project.
Each pipeline has its own value instead of 30. The pipelines may execute without any issues for a long time; however, sometimes a random pipeline out of those 25 suddenly stops executing its prepared statement.
and at some point the logs don't show up anymore. The pipeline doesn't recover, and we need to restart the Logstash instance in order to fix the inconsistency between our DB and Elasticsearch. Sometimes the pipeline stays in this invalid state for a day or two when there are no new records in the DB and thus no inconsistencies. Recently we updated our Logstash Docker image to the latest version available - 8.4.2 - however the issue still persists without any errors in the logs. @kares have you managed to resolve the production issues you mentioned in the Logstash meta issue on improving scheduler performance and reliability? It looks like we have a configuration similar to the one listed in that issue:
If you managed to resolve the issues, does Logstash require some additional configuration (usage of a specific rufus-scheduler version, for instance) or does it work as expected out of the box?
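For context, a minimal sketch of the kind of pipeline described above (one jdbc input running a prepared statement on a cron schedule); every identifier here - driver, connection string, table, schedule - is a made-up placeholder, not taken from the actual setup:

```
input {
  jdbc {
    jdbc_driver_library => "/opt/drivers/postgresql.jar"        # placeholder path
    jdbc_driver_class => "org.postgresql.Driver"                 # placeholder driver
    jdbc_connection_string => "jdbc:postgresql://db:5432/app"    # placeholder DSN
    jdbc_user => "logstash"
    schedule => "*/1 * * * *"                                    # rufus-scheduler cron syntax
    use_prepared_statements => true
    prepared_statement_name => "fetch_changed_rows"
    prepared_statement_bind_values => [":sql_last_value"]
    statement => "SELECT * FROM events WHERE updated_at > ? ORDER BY updated_at"
    use_column_value => true
    tracking_column => "updated_at"
    tracking_column_type => "timestamp"
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "events"
  }
}
```

When a pipeline like this stops firing its schedule, nothing downstream runs, which matches the "no errors, no new documents" symptom described above.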
There might be an update from the LS team; the last resolution I was involved with ended up NOT being scheduler-gem dependent - the user ended up having long query response times (e.g. due to swapping). As a potential "improvement" I would recommend that you try spacing out your pipelines (using the cron expression) so they do not trigger JDBC execution at the exact same second. That's all I have - please do not ping me directly, as it's been some time and most of what there is to this issue from my perspective is out in the comments - folks are watching the issue, so you'll get help if anyone has anything to add.
I would follow the suggestion from Kares and try not to trigger multiple JDBC executions at the same time, then check whether that fixes it. To investigate further, it would be great if you could provide some kind of reproducible configuration that triggers the problem.
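A minimal sketch of what that staggering could look like, assuming two pipelines that each poll every five minutes (the schedules below are illustrative, not taken from this thread): give each pipeline a different minute offset so their queries never fire at the same moment.

```
# pipeline-a.conf - runs at minutes 0, 5, 10, ... past the hour
input {
  jdbc {
    # ... connection and statement settings ...
    schedule => "*/5 * * * *"
  }
}

# pipeline-b.conf - runs at minutes 2, 7, 12, ... so it never coincides with pipeline A
input {
  jdbc {
    # ... connection and statement settings ...
    schedule => "2-59/5 * * * *"
  }
}
```

The same idea extends to more pipelines by giving each one its own offset.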
FYI: is it possible to protect pipelines from such issues on the Logstash side? Is there a way to specify a read timeout value for a query at the pipeline level?
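One driver-level workaround (an assumption on my part, not something confirmed in this thread, and a JDBC-driver setting rather than a Logstash pipeline option) is to put a connect/read timeout directly into the JDBC connection string, so a hung query eventually fails with an error instead of blocking silently. With MySQL Connector/J, for example, the connectTimeout and socketTimeout URL parameters (in milliseconds) do this:

```
input {
  jdbc {
    # Placeholder connection string; connectTimeout and socketTimeout are
    # MySQL Connector/J driver parameters (milliseconds), not Logstash options.
    jdbc_connection_string => "jdbc:mysql://db:3306/app?connectTimeout=10000&socketTimeout=60000"
    # ... rest of the input configuration ...
  }
}
```

Other drivers expose similar parameters (the PostgreSQL JDBC driver, for instance, has a socketTimeout parameter measured in seconds).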
config like this:
The results are as follows:
The next job should execute at 2021-10-28T18:54:05,752, but it always gets stuck without any error log.