"No Available connections" container vs host #16099

Closed
ronaldpetty opened this issue Apr 20, 2024 · 3 comments
@ronaldpetty

Logstash information:

Please include the following information:

% docker container run logstash:8.13.0 --version
Using bundled JDK: /usr/share/logstash/jdk
logstash 8.13.0
  2. Logstash installation source (e.g. built from source, with a package manager: DEB/RPM, expanded from tar or zip archive, docker)

docker pull logstash:8.13.0

  3. How is Logstash being run (e.g. as a service/service manager: systemd, upstart, etc. Via command line, docker/kubernetes)
% docker container run -d -p 8080:8080 --name logstash logstash:8.13.0 -e 'input {  http {    port => 8080    codec => json {      target => "doc"    }  }}output {  elasticsearch {    hosts => ["https://172.17.0.2:9200"]    api_key => "A0R6-I4B5Jb3ztqmqrF8:URNxt_FcSFyb0utJwOQnrw"     index => "twitter"    document_id => "%{[doc][id]}"     ssl_verification_mode => none   } }'

Plugins installed: (bin/logstash-plugin list --verbose)

% docker container run logstash:8.13.0 /usr/share/logstash/bin/logstash-plugin list --verbose 
Using bundled JDK: /usr/share/logstash/jdk
logstash-codec-avro (3.4.1)
logstash-codec-cef (6.2.7)
logstash-codec-collectd (3.1.0)
logstash-codec-dots (3.0.6)
logstash-codec-edn (3.1.0)
logstash-codec-edn_lines (3.1.0)
logstash-codec-es_bulk (3.1.0)
logstash-codec-fluent (3.4.2)
logstash-codec-graphite (3.0.6)
logstash-codec-json (3.1.1)
logstash-codec-json_lines (3.1.0)
logstash-codec-line (3.1.1)
logstash-codec-msgpack (3.1.0)
logstash-codec-multiline (3.1.1)
logstash-codec-netflow (4.3.2)
logstash-codec-plain (3.1.0)
logstash-codec-rubydebug (3.1.0)
logstash-filter-aggregate (2.10.0)
logstash-filter-anonymize (3.0.7)
logstash-filter-cidr (3.1.3)
logstash-filter-clone (4.2.0)
logstash-filter-csv (3.1.1)
logstash-filter-date (3.1.15)
logstash-filter-de_dot (1.0.4)
logstash-filter-dissect (1.2.5)
logstash-filter-dns (3.2.0)
logstash-filter-drop (3.0.5)
logstash-filter-elastic_integration (0.1.8)
logstash-filter-elasticsearch (3.16.1)
logstash-filter-fingerprint (3.4.4)
logstash-filter-geoip (7.2.13)
logstash-filter-grok (4.4.3)
logstash-filter-http (1.5.1)
logstash-filter-json (3.2.1)
logstash-filter-kv (4.7.0)
logstash-filter-memcached (1.2.0)
logstash-filter-metrics (4.0.7)
logstash-filter-mutate (3.5.8)
logstash-filter-prune (3.0.4)
logstash-filter-ruby (3.1.8)
logstash-filter-sleep (3.0.7)
logstash-filter-split (3.1.8)
logstash-filter-syslog_pri (3.2.1)
logstash-filter-throttle (4.0.4)
logstash-filter-translate (3.4.2)
logstash-filter-truncate (1.0.6)
logstash-filter-urldecode (3.0.6)
logstash-filter-useragent (3.3.5)
logstash-filter-uuid (3.0.5)
logstash-filter-xml (4.2.0)
logstash-input-azure_event_hubs (1.4.5)
logstash-input-beats (6.8.2)
└── logstash-input-elastic_agent (alias)
logstash-input-couchdb_changes (3.1.6)
logstash-input-dead_letter_queue (2.0.0)
logstash-input-elastic_serverless_forwarder (0.1.4)
logstash-input-elasticsearch (4.20.2)
logstash-input-exec (3.6.0)
logstash-input-file (4.4.6)
logstash-input-ganglia (3.1.4)
logstash-input-gelf (3.3.2)
logstash-input-generator (3.1.0)
logstash-input-graphite (3.0.6)
logstash-input-heartbeat (3.1.1)
logstash-input-http (3.8.0)
logstash-input-http_poller (5.5.1)
logstash-input-jms (3.2.2)
logstash-input-pipe (3.1.0)
logstash-input-redis (3.7.0)
logstash-input-snmp (1.3.3)
logstash-input-snmptrap (3.1.0)
logstash-input-stdin (3.4.0)
logstash-input-syslog (3.7.0)
logstash-input-tcp (6.4.1)
logstash-input-twitter (4.1.1)
logstash-input-udp (3.5.0)
logstash-input-unix (3.1.2)
logstash-integration-aws (7.1.6)
 ├── logstash-codec-cloudfront
 ├── logstash-codec-cloudtrail
 ├── logstash-input-cloudwatch
 ├── logstash-input-s3
 ├── logstash-input-sqs
 ├── logstash-output-cloudwatch
 ├── logstash-output-s3
 ├── logstash-output-sns
 └── logstash-output-sqs
logstash-integration-elastic_enterprise_search (3.0.0)
 ├── logstash-output-elastic_app_search
 └──  logstash-output-elastic_workplace_search
logstash-integration-jdbc (5.4.9)
 ├── logstash-input-jdbc
 ├── logstash-filter-jdbc_streaming
 └── logstash-filter-jdbc_static
logstash-integration-kafka (11.3.4)
 ├── logstash-input-kafka
 └── logstash-output-kafka
logstash-integration-logstash (1.0.2)
 ├── logstash-input-logstash
 └── logstash-output-logstash
logstash-integration-rabbitmq (7.3.3)
 ├── logstash-input-rabbitmq
 └── logstash-output-rabbitmq
logstash-output-csv (3.0.10)
logstash-output-elasticsearch (11.22.3)
logstash-output-email (4.1.3)
logstash-output-file (4.3.0)
logstash-output-graphite (3.1.6)
logstash-output-http (5.6.0)
logstash-output-lumberjack (3.1.9)
logstash-output-nagios (3.0.6)
logstash-output-null (3.0.5)
logstash-output-pipe (3.0.6)
logstash-output-redis (5.0.0)
logstash-output-stdout (3.1.4)
logstash-output-tcp (6.2.0)
logstash-output-udp (3.2.0)
logstash-output-webhdfs (3.1.0)
logstash-patterns-core (4.3.4)

JVM (e.g. java -version):

In the container:

% docker container run logstash:8.13.0 /usr/share/logstash/jdk/bin/java -version
openjdk version "17.0.10" 2024-01-16
OpenJDK Runtime Environment Temurin-17.0.10+7 (build 17.0.10+7)
OpenJDK 64-Bit Server VM Temurin-17.0.10+7 (build 17.0.10+7, mixed mode, sharing)

OS version (uname -a if on a Unix-like system):

% uname -a
Darwin memememem 23.4.0 Darwin Kernel Version 23.4.0: Fri Mar 15 00:11:05 PDT 2024; root:xnu-10063.101.17~1/RELEASE_X86_64 x86_64

Description of the problem including expected versus actual behavior:

It seems Logstash makes a connection (once) but then loses it. It is as if host.docker.internal is reachable at first, but Logstash then reloads the config, fails parsing, and falls back to a default elasticsearch output? (Just a hunch.)

If I keep ES in a container (Docker Desktop on Mac) and instead run Logstash on the host with the exact same config, it works (see below).

Steps to reproduce:

On a MacBook:

  • Start ES - docker container run -d --name elasticsearch -p 9200:9200 -m 1GB elasticsearch:8.13.0
  • Run Kibana - docker container run -d --name kibana -p 5601:5601 kibana:8.13.0
  • do password dance, get api key
  • Start Logstash
% docker container run -d -p 8080:8080 --name logstash logstash:8.13.0 -e 'input {  http {    port => 8080    codec => json {      target => "doc"    }  }}output {  elasticsearch {    hosts => ["https://host.docker.internal:9200"]    api_key => "A0R6-I4B5Jb3ztqmqrF8:URNxt_FcSFyb0uQnr<BREAKIT>"     index => "twitter"    document_id => "%{[doc][id]}"     ssl_verification_mode => none   } }'
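
For readability, that inline -e pipeline is the following config, just reformatted (the api_key value is copied verbatim from the command above):

input {
  http {
    port => 8080
    codec => json {
      target => "doc"
    }
  }
}
output {
  elasticsearch {
    hosts => ["https://host.docker.internal:9200"]
    api_key => "A0R6-I4B5Jb3ztqmqrF8:URNxt_FcSFyb0uQnr<BREAKIT>"
    index => "twitter"
    document_id => "%{[doc][id]}"
    ssl_verification_mode => none
  }
}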

Provide logs (if relevant):

% docker logs -f logstash
Using bundled JDK: /usr/share/logstash/jdk
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/concurrent-ruby-1.1.9/lib/concurrent-ruby/concurrent/executor/java_thread_pool_executor.rb:13: warning: method redefined; discarding old to_int
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/concurrent-ruby-1.1.9/lib/concurrent-ruby/concurrent/executor/java_thread_pool_executor.rb:13: warning: method redefined; discarding old to_f
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2024-04-20T00:59:29,060][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2024-04-20T00:59:29,065][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.13.0", "jruby.version"=>"jruby 9.4.5.0 (3.1.4) 2023-11-02 1abae2700f OpenJDK 64-Bit Server VM 17.0.10+7 on 17.0.10+7 +indy +jit [x86_64-linux]"}
[2024-04-20T00:59:29,067][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[2024-04-20T00:59:29,069][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[2024-04-20T00:59:29,069][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[2024-04-20T00:59:29,078][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2024-04-20T00:59:29,080][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2024-04-20T00:59:29,238][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2024-04-20T00:59:29,248][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"76774059-c3d1-40c7-ad2e-cdbad12233d9", :path=>"/usr/share/logstash/data/uuid"}
[2024-04-20T00:59:29,636][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2024-04-20T00:59:29,637][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
Please configure Elastic Agent to monitor Logstash. Documentation can be found at: 
https://www.elastic.co/guide/en/logstash/current/monitoring-with-elastic-agent.html
[2024-04-20T00:59:29,988][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2024-04-20T00:59:30,035][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"elasticsearch: Name or service not known", :exception=>Manticore::ResolutionFailure, :cause=>#<Java::JavaNet::UnknownHostException: elasticsearch: Name or service not known>}
[2024-04-20T00:59:30,036][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2024-04-20T00:59:30,044][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"elasticsearch", :exception=>Manticore::ResolutionFailure, :cause=>#<Java::JavaNet::UnknownHostException: elasticsearch>}
[2024-04-20T00:59:30,045][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2024-04-20T00:59:30,046][WARN ][logstash.licensechecker.licensereader] Attempt to fetch Elasticsearch cluster info failed. Sleeping for 0.02 {:fail_count=>1, :exception=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
[2024-04-20T00:59:30,068][ERROR][logstash.licensechecker.licensereader] Unable to retrieve Elasticsearch cluster info. {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError}
[2024-04-20T00:59:30,069][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2024-04-20T00:59:30,079][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
[2024-04-20T00:59:30,167][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-04-20T00:59:30,420][INFO ][org.reflections.Reflections] Reflections took 101 ms to scan 1 urls, producing 132 keys and 468 values
[2024-04-20T00:59:30,649][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-04-20T00:59:30,661][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://host.docker.internal:9200"]}
[2024-04-20T00:59:30,662][WARN ][logstash.outputs.elasticsearch][main] You have enabled encryption but DISABLED certificate verification, to make sure your data is secure set `ssl_verification_mode => full`
[2024-04-20T00:59:30,669][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://host.docker.internal:9200/]}}
[2024-04-20T00:59:30,879][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://host.docker.internal:9200/"}
[2024-04-20T00:59:30,880][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.13.0) {:es_version=>8}
[2024-04-20T00:59:30,881][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2024-04-20T00:59:30,891][INFO ][logstash.outputs.elasticsearch][main] Not eligible for data streams because config contains one or more settings that are not compatible with data streams: {"index"=>"twitter"}
[2024-04-20T00:59:30,891][INFO ][logstash.outputs.elasticsearch][main] Data streams auto configuration (`data_stream => auto` or unset) resolved to `false`
[2024-04-20T00:59:30,901][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
[2024-04-20T00:59:30,915][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>4, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>500, "pipeline.sources"=>["config string"], :thread=>"#<Thread:0x470a5c1a /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-04-20T00:59:31,446][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.53}
[2024-04-20T00:59:31,459][INFO ][logstash.codecs.json     ][main] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2024-04-20T00:59:31,552][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-04-20T00:59:31,556][INFO ][logstash.inputs.http     ][main][ff72fcf038f153649348cc7b47f7b71d480e6bafb3d4077c6071dc1c6ea292b7] Starting http input listener {:address=>"0.0.0.0:8080", :ssl=>"false"}
[2024-04-20T00:59:31,563][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2024-04-20T01:00:00,084][ERROR][logstash.licensechecker.licensereader] Unable to retrieve Elasticsearch cluster info. {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError}
[2024-04-20T01:00:00,084][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"elasticsearch: Name or service not known", :exception=>Manticore::ResolutionFailure, :cause=>#<Java::JavaNet::UnknownHostException: elasticsearch: Name or service not known>}
[2024-04-20T01:00:00,085][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2024-04-20T01:00:00,088][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}

Here is where I run Logstash on the host instead of in a container (Elasticsearch is still in a container, though).

% ./logstash-8.13.2/bin/logstash -e 'input {  http {    port => 8080    codec => json {      target => "doc"    }  }}output {  elasticsearch {    hosts => ["https://host.docker.internal:9200"]    api_key => "A0R6-I4B5Jb3ztqmqrF8:URNxt_FcSFyb0utJwOQnrw"     index => "twitter"    document_id => "%{[doc][id]}"     ssl_verification_mode => none   } }'
Using bundled JDK: /Users/ronaldpetty/github.com/rx-m/dantooine/app-logging/11-app-logging/logstash-8.13.2/jdk.app/Contents/Home
/Users/ronaldpetty/github.com/rx-m/dantooine/app-logging/11-app-logging/logstash-8.13.2/vendor/bundle/jruby/3.1.0/gems/concurrent-ruby-1.1.9/lib/concurrent-ruby/concurrent/executor/java_thread_pool_executor.rb:13: warning: method redefined; discarding old to_int
/Users/ronaldpetty/github.com/rx-m/dantooine/app-logging/11-app-logging/logstash-8.13.2/vendor/bundle/jruby/3.1.0/gems/concurrent-ruby-1.1.9/lib/concurrent-ruby/concurrent/executor/java_thread_pool_executor.rb:13: warning: method redefined; discarding old to_f
Sending Logstash logs to /Users/ronaldpetty/github.com/rx-m/dantooine/app-logging/11-app-logging/logstash-8.13.2/logs which is now configured via log4j2.properties
[2024-04-19T18:03:22,711][INFO ][logstash.runner          ] Log4j configuration path used is: /Users/ronaldpetty/github.com/rx-m/dantooine/app-logging/11-app-logging/logstash-8.13.2/config/log4j2.properties
[2024-04-19T18:03:22,715][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.13.2", "jruby.version"=>"jruby 9.4.5.0 (3.1.4) 2023-11-02 1abae2700f OpenJDK 64-Bit Server VM 17.0.10+7 on 17.0.10+7 +indy +jit [x86_64-darwin]"}
[2024-04-19T18:03:22,717][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[2024-04-19T18:03:22,719][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[2024-04-19T18:03:22,719][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[2024-04-19T18:03:22,753][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2024-04-19T18:03:23,179][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-04-19T18:03:23,452][INFO ][org.reflections.Reflections] Reflections took 109 ms to scan 1 urls, producing 132 keys and 468 values
[2024-04-19T18:03:23,688][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-04-19T18:03:23,697][INFO ][logstash.outputs.elasticsearch][main] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["https://host.docker.internal:9200"]}
[2024-04-19T18:03:23,700][WARN ][logstash.outputs.elasticsearch][main] You have enabled encryption but DISABLED certificate verification, to make sure your data is secure set `ssl_verification_mode => full`
[2024-04-19T18:03:23,773][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[https://host.docker.internal:9200/]}}
[2024-04-19T18:03:23,978][WARN ][logstash.outputs.elasticsearch][main] Restored connection to ES instance {:url=>"https://host.docker.internal:9200/"}
[2024-04-19T18:03:23,979][INFO ][logstash.outputs.elasticsearch][main] Elasticsearch version determined (8.13.0) {:es_version=>8}
[2024-04-19T18:03:23,979][WARN ][logstash.outputs.elasticsearch][main] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>8}
[2024-04-19T18:03:23,988][INFO ][logstash.outputs.elasticsearch][main] Not eligible for data streams because config contains one or more settings that are not compatible with data streams: {"index"=>"twitter"}
[2024-04-19T18:03:23,988][INFO ][logstash.outputs.elasticsearch][main] Data streams auto configuration (`data_stream => auto` or unset) resolved to `false`
[2024-04-19T18:03:23,996][INFO ][logstash.outputs.elasticsearch][main] Using a default mapping template {:es_version=>8, :ecs_compatibility=>:v8}
[2024-04-19T18:03:24,000][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>1500, "pipeline.sources"=>["config string"], :thread=>"#<Thread:0x4c0efd99 /Users/ronaldpetty/github.com/rx-m/dantooine/app-logging/11-app-logging/logstash-8.13.2/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-04-19T18:03:24,523][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.52}
[2024-04-19T18:03:24,542][INFO ][logstash.codecs.json     ][main] ECS compatibility is enabled but `target` option was not specified. This may cause fields to be set at the top-level of the event where they are likely to clash with the Elastic Common Schema. It is recommended to set the `target` option to avoid potential schema conflicts (if your data is ECS compliant or non-conflicting, feel free to ignore this message)
[2024-04-19T18:03:24,622][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-04-19T18:03:24,623][INFO ][logstash.inputs.http     ][main][ff72fcf038f153649348cc7b47f7b71d480e6bafb3d4077c6071dc1c6ea292b7] Starting http input listener {:address=>"0.0.0.0:8080", :ssl=>"false"}
[2024-04-19T18:03:24,634][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}

In either case, I hit Logstash with curl:

curl -H "content-type: application/json" -XPUT 'http://127.0.0.1:8080/twitter/tweet/2' -d '{"user" : "kimchy","post_date" : "2024-02-26T14:12:13","message" : "comparing loggers"}'
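
To confirm an event actually landed, the document can be read back from Elasticsearch. A minimal sketch, assuming the same API key as the pipeline config (<your-api-key> is a placeholder); -k skips certificate verification, matching ssl_verification_mode => none:

% curl -sk -H "Authorization: ApiKey <your-api-key>" 'https://127.0.0.1:9200/twitter/_doc/2'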

It's probably my setup, but I'm just reporting what I'm seeing.

@mashhurs
Contributor

In the logstash.yml config of the docker image, monitoring.elasticsearch.hosts: http://elasticsearch:9200 is defined, and Logstash enables monitoring by default. You have no Elasticsearch host reachable at http://elasticsearch:9200, so you are hitting this error.
To overcome this error, you need to either disable monitoring or set the proper host; refer to this guide page:
https://www.elastic.co/guide/en/logstash/current/docker-config.html#docker-env-config
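
For example, a sketch of the first option using the environment-variable convention from that guide (an uppercase, underscore-separated variable maps onto the matching logstash.yml setting; the exact variable name here is my reading of the guide, so verify it):

% docker container run -e XPACK_MONITORING_ENABLED=false -d -p 8080:8080 --name logstash logstash:8.13.0 -e '<your pipeline config>'

Alternatively, keep monitoring enabled but point monitoring.elasticsearch.hosts at a host that resolves from inside the container (for this setup, https://host.docker.internal:9200).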

@ronaldpetty
Author

@mashhurs awesome! Thank you, it didn't dawn on me to compare against the default configuration. I was hung up on -e acting as a (complete) override. I'll give it a shot in a little bit, thank you!

@ronaldpetty
Author

ronaldpetty commented Apr 22, 2024

Got it working. Just for anyone else who is green on all the various settings: I had to use XPACK_MONITORING_ENABLED=false instead of MONITORING_ENABLED=false. Here is my working example:

$ docker container run -e "XPACK_MONITORING_ENABLED=false" -d -p 8080:8080 --name logstash logstash:8.13.0 -e 'input {  http {    port => 8080    codec => json { target => "doc" } }} output { elasticsearch { hosts => ["https://host.docker.internal:9200"] api_key => "A0R6-I4B5Jb3ztqmqrF8:URNxt_FcSFyb0utJwOQnrW" index => "twitter" document_id => "%{[doc][id]}" ssl_verification_mode => none } }'
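
If I read the env-var mapping convention right, that variable corresponds to the following logstash.yml setting, which would also explain why the MONITORING_ prefix alone didn't take (it would map to a different setting name):

xpack.monitoring.enabled: false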

Thank you once more @mashhurs
