Error creating table. {:exception=>java.lang.NullPointerException} #49
Comments
I'm having the same issue in Logstash 6.3. @cagriersen-omma, have you figured it out?
@tomleib Unfortunately, I didn't. The good news is that this issue doesn't seem to be related to the dropped logs.
@cagriersen-omma What caused the dropped logs, then?
Any news, guys?
I recently hit this error as well. It's actually failing when it goes to check whether the table already exists. I verified that the dataset and table passed to that call were valid, which makes me think the problem is actually in the BigQuery client. On a side note, the BigQuery library referenced is version 1.24.1, which is outdated; perhaps updating to the latest 1.x version would fix the problem. My machine isn't really set up for plugin development at the moment, otherwise I'd try to validate this myself. If anyone else can do this, it would be much appreciated.
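For illustration only, here is a minimal Java sketch of a defensive "check table, then create it" call against the google-cloud-bigquery client. This is not the plugin's actual code, and the dataset/table names and schema are made up; the point is just that getTable() returns null for a missing table, so that result has to be checked before it is used.

import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.Field;
import com.google.cloud.bigquery.LegacySQLTypeName;
import com.google.cloud.bigquery.Schema;
import com.google.cloud.bigquery.StandardTableDefinition;
import com.google.cloud.bigquery.Table;
import com.google.cloud.bigquery.TableId;
import com.google.cloud.bigquery.TableInfo;

public class EnsureTableExample {
    public static void main(String[] args) {
        // Uses application default credentials; dataset and table names are placeholders.
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();
        TableId tableId = TableId.of("my_dataset", "logstash_2019_01_01T00_00");

        // getTable() returns null when the table does not exist, so check the
        // result explicitly instead of dereferencing it right away.
        Table existing = bigquery.getTable(tableId);
        if (existing == null) {
            Schema schema = Schema.of(Field.of("message", LegacySQLTypeName.STRING));
            bigquery.create(TableInfo.of(tableId, StandardTableDefinition.of(schema)));
        }
    }
}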
Hey, any updates, just in case?
I'm hitting the same issue.
Any news about this issue?
Hi all,
I've been testing the plugin to stream JSON-based log content to BigQuery for a while. As far as I've tested, there is a glitch when creating tables dynamically. When the plugin tries to create a table, it throws a NullPointerException and retries again and again whenever the batch_size or flush_interval_secs threshold is reached. Fortunately, the table does get created after a few tries and streaming begins. However, I think this leads to some portion of the logs being dropped, especially when the event rate increases to ~1000 events per second.
This also happens every hour, since we use the default hourly date_pattern ("%Y-%m-%dT%H:00").
Although the ratio of dropped logs is relatively small (for example, my batch_size is 250 and generally the first chunks are dropped), it's unacceptable for our use case.
Here are the details:
Version:
6.5.4
Operating System:
Amazon Linux 2 (Logstash runs in a Docker container)
Config File:
pipeline.yaml
/usr/share/logstash/pipeline/whatever.conf
Just started streaming to a blank BigQuery dataset.
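For reference, a minimal sketch of the kind of output block in use (the project_id, dataset, key path, and flush_interval_secs values here are placeholders rather than my real config; the date_pattern and batch_size match what I described above):

output {
  google_bigquery {
    project_id          => "my-gcp-project"      # placeholder
    dataset             => "my_dataset"          # placeholder
    json_key_file       => "/path/to/key.json"   # placeholder
    date_pattern        => "%Y-%m-%dT%H:00"      # default hourly tables
    batch_size          => 250
    flush_interval_secs => 5                     # value not stated above, shown for illustration
  }
}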