I'm using Logstash to read data from an Oracle DB & ingest it into a Kafka topic, then read the data from the topic & ingest it into ES (basically Oracle -> Kafka, Kafka -> ES). The query captures data for a specific timestamp & fetches 52 rows. Everything works as expected up to that point, but after inserting all the rows, Logstash starts inserting the garbage value shown further below into the Kafka topic.
I checked the Kafka topic data from the consumer console & could see the garbage values continuously flowing in, so I removed the old data from the topic (set the retention to 1000ms) & used the same query & config parameters, this time directly from Oracle to ES, & it worked fine without any garbage value. Below is the config I used:
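(A minimal sketch of its shape; the driver path, connection string, credentials, query, ES hosts, index name & the location of logstash-kafka.txt below are placeholders rather than my exact values.)

```
input {
  jdbc {
    jdbc_driver_library => "/path/to/ojdbc8.jar"                          # placeholder path
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@//dbhost:1521/SERVICE"   # placeholder
    jdbc_user => "user"
    jdbc_password => "password"
    # placeholder query filtering on the specific timestamp
    statement => "SELECT * FROM my_table WHERE updated_ts = TIMESTAMP '2020-08-28 06:00:00'"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]   # placeholder
    index => "my-index"                  # placeholder
  }
  # debug copy of every event into the logstash-kafka.txt file mentioned below
  file {
    path => "/tmp/logstash-kafka.txt"    # directory is a placeholder
  }
}
```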
I had the consumer console open while running this config file & could see garbage data flowing into Kafka, but correct data in logstash-kafka.txt. The same thing happened when I tried Kafka along with ES: data was getting correctly ingested into ES via Logstash in the absence of Kafka in between.
Another thing I observed in the Kafka console was that the garbage value below occurred 52 times, which is the number of rows fetched from Oracle, & then it stopped sending any more data/garbage:
2020-08-28T06:26:33.657Z %{host} 2020-08-28T06:26:33.523Z %{host} 2020-08-28T06:26:33.330Z %{host} 2020-08-28T06:26:33.174Z %{host} 2020-08-28T06:26:33.007Z %{host} 2020-08-28T06:26:32.863Z %{host} 2020-08-28T06:26:32.752Z %{host} 2020-08-28T06:26:32.638Z %{host} 2020-08-28T06:26:32.528Z %{host} 2020-08-28T06:26:32.387Z %{host} 2020-08-28T06:26:32.190Z %{host} 2020-08-28T06:26:32.072Z %{host} 2020-08-28T06:26:31.951Z %{host} 2020-08-28T06:26:31.775Z %{host} 2020-08-28T06:26:31.547Z %{host} 2020-08-28T06:26:22.638Z %{host} 2020-08-28T06:26:22.509Z %{host} 2020-08-28T06:26:22.397Z %{host} 2020-08-28T06:26:22.283Z %{host} 2020-08-28T06:26:22.169Z %{host} 2020-08-28T06:26:22.050Z %{host} 2020-08-28T06:26:21.925Z %{host} 2020-08-28T06:26:21.800Z %{host} 2020-08-28T06:26:21.622Z %{host} 2020-08-28T06:26:21.489Z %{host} 2020-08-28T06:26:21.349Z %{host} 2020-08-28T06:26:20.732Z %{host} %{message}
Configs that I'm using:
Oracle -> Kafka:
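(A minimal sketch of this pipeline's shape; the driver path, connection string, credentials, query, broker address, topic name & codec below are placeholders rather than my exact settings.)

```
input {
  jdbc {
    jdbc_driver_library => "/path/to/ojdbc8.jar"                          # placeholder path
    jdbc_driver_class => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string => "jdbc:oracle:thin:@//dbhost:1521/SERVICE"   # placeholder
    jdbc_user => "user"
    jdbc_password => "password"
    # placeholder query filtering on the specific timestamp
    statement => "SELECT * FROM my_table WHERE updated_ts = TIMESTAMP '2020-08-28 06:00:00'"
  }
}
output {
  kafka {
    bootstrap_servers => "localhost:9092"   # placeholder broker
    topic_id => "my-topic"                  # placeholder topic
    codec => json                           # assumed codec
  }
}
```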
Kafka -> ES:
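(A minimal sketch of this pipeline's shape; the broker address, topic name, consumer group, codec, ES hosts & index name below are placeholders rather than my exact settings.)

```
input {
  kafka {
    bootstrap_servers => "localhost:9092"   # placeholder broker
    topics => ["my-topic"]                  # placeholder topic
    group_id => "logstash-es"               # placeholder consumer group
    codec => json                           # assumed codec
    auto_offset_reset => "earliest"
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]      # placeholder
    index => "my-index"                     # placeholder
  }
}
```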
I didn't observe any errors in the logs, not even while the garbage value was flowing into the Kafka topic, just this:
Not sure if this has something to do with the config I'm using; could someone please help in fixing this?