LS_JAVA_OPTS env var could lead to duplication of java options #16078

Closed
kaisecheng opened this issue Apr 11, 2024 · 2 comments · Fixed by #16079
kaisecheng (Contributor) commented Apr 11, 2024

The environment variable LS_JAVA_OPTS is supposed to take precedence and override the default JVM settings. However, setting it to `-Dio.netty.allocator.maxOrder=14 -Xmx2g -Xms2g` does not change io.netty.allocator.maxOrder to 14; instead, the default `-Dio.netty.allocator.maxOrder=11` is appended at the end.

Version

8.13.2

Steps to reproduce

docker run --rm -e LS_JAVA_OPTS="-Dio.netty.allocator.maxOrder=14 -Xmx2G" docker.elastic.co/logstash/logstash:8.13.2

You can see `-Dio.netty.allocator.maxOrder=11` appended to the end of the JVM bootstrap flags in the log.

Log

Using bundled JDK: /usr/share/logstash/jdk
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/concurrent-ruby-1.1.9/lib/concurrent-ruby/concurrent/executor/java_thread_pool_executor.rb:13: warning: method redefined; discarding old to_int
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/concurrent-ruby-1.1.9/lib/concurrent-ruby/concurrent/executor/java_thread_pool_executor.rb:13: warning: method redefined; discarding old to_f
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2024-04-11T21:48:05,290][INFO ][logstash.runner          ] Log4j configuration path used is: /usr/share/logstash/config/log4j2.properties
[2024-04-11T21:48:05,292][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"8.13.2", "jruby.version"=>"jruby 9.4.5.0 (3.1.4) 2023-11-02 1abae2700f OpenJDK 64-Bit Server VM 17.0.10+7 on 17.0.10+7 +indy +jit [aarch64-linux]"}
[2024-04-11T21:48:05,293][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, -Dlogstash.jackson.stream-read-constraints.max-number-length=10000, -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/, -Dio.netty.allocator.maxOrder=14, -Xmx2G, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED, -Dio.netty.allocator.maxOrder=11]
[2024-04-11T21:48:05,293][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-string-length` configured to `200000000`
[2024-04-11T21:48:05,294][INFO ][logstash.runner          ] Jackson default value override `logstash.jackson.stream-read-constraints.max-number-length` configured to `10000`
[2024-04-11T21:48:05,296][INFO ][logstash.settings        ] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2024-04-11T21:48:05,297][INFO ][logstash.settings        ] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2024-04-11T21:48:05,366][INFO ][logstash.agent           ] No persistent UUID file found. Generating new UUID {:uuid=>"4cf3d751-2e9f-4e49-8f04-a874818ba35f", :path=>"/usr/share/logstash/data/uuid"}
[2024-04-11T21:48:05,504][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2024-04-11T21:48:05,505][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
Please configure Elastic Agent to monitor Logstash. Documentation can be found at:
https://www.elastic.co/guide/en/logstash/current/monitoring-with-elastic-agent.html
[2024-04-11T21:48:05,609][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2024-04-11T21:48:05,632][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"elasticsearch: Name or service not known", :exception=>Manticore::ResolutionFailure, :cause=>#<Java::JavaNet::UnknownHostException: elasticsearch: Name or service not known>}
[2024-04-11T21:48:05,633][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error {:url=>"http://elasticsearch:9200/", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2024-04-11T21:48:05,635][INFO ][logstash.licensechecker.licensereader] Failed to perform request {:message=>"elasticsearch", :exception=>Manticore::ResolutionFailure, :cause=>#<Java::JavaNet::UnknownHostException: elasticsearch>}
[2024-04-11T21:48:05,635][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2024-04-11T21:48:05,636][WARN ][logstash.licensechecker.licensereader] Attempt to fetch Elasticsearch cluster info failed. Sleeping for 0.02 {:fail_count=>1, :exception=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
[2024-04-11T21:48:05,657][ERROR][logstash.licensechecker.licensereader] Unable to retrieve Elasticsearch cluster info. {:message=>"No Available connections", :exception=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::NoConnectionAvailableError}
[2024-04-11T21:48:05,657][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2024-04-11T21:48:05,665][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
[2024-04-11T21:48:05,698][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2024-04-11T21:48:05,766][INFO ][org.reflections.Reflections] Reflections took 43 ms to scan 1 urls, producing 132 keys and 468 values
/usr/share/logstash/vendor/bundle/jruby/3.1.0/gems/amazing_print-1.6.0/lib/amazing_print/formatter.rb:37: warning: previous definition of cast was here
[2024-04-11T21:48:05,870][INFO ][logstash.javapipeline    ] Pipeline `main` is configured with `pipeline.ecs_compatibility: v8` setting. All plugins in this pipeline will default to `ecs_compatibility => v8` unless explicitly configured otherwise.
[2024-04-11T21:48:05,879][INFO ][logstash.javapipeline    ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>6, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>750, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x72e7012 /usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:134 run>"}
[2024-04-11T21:48:06,119][INFO ][logstash.javapipeline    ][main] Pipeline Java execution initialization time {"seconds"=>0.24}
[2024-04-11T21:48:06,123][INFO ][logstash.inputs.beats    ][main] Starting input listener {:address=>"0.0.0.0:5044"}
[2024-04-11T21:48:06,128][INFO ][logstash.javapipeline    ][main] Pipeline started {"pipeline.id"=>"main"}
[2024-04-11T21:48:06,142][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2024-04-11T21:48:06,159][INFO ][org.logstash.beats.Server][main][0710cad67e8f47667bc7612580d5b91f691dd8262a4187d9eca8cf87229d04aa] Starting server on port: 5044
^C[2024-04-11T21:48:07,572][WARN ][logstash.runner          ] SIGINT received. Shutting down.
[2024-04-11T21:48:12,578][WARN ][logstash.runner          ] Received shutdown signal, but pipeline is still waiting for in-flight events
to be processed. Sending another ^C will force quit Logstash, but this may cause
data loss.
[2024-04-11T21:48:13,754][INFO ][logstash.javapipeline    ][main] Pipeline terminated {"pipeline.id"=>"main"}
[2024-04-11T21:48:14,685][INFO ][logstash.pipelinesregistry] Removed pipeline from registry successfully {:pipeline_id=>:main}
[2024-04-11T21:48:14,694][INFO ][logstash.runner          ] Logstash shut down.
andsel (Contributor) commented Apr 12, 2024

Tested with

andrea:logstash_andsel (tags/v8.13.2) % export LS_JAVA_OPTS="-Dio.netty.allocator.maxOrder=6" && bin/Logstash -e "input{stdin{}} output{stdout{codec => rubydebug}}"
Using system java: /Users/andrea/.sdkman/candidates/java/current/bin/java
.....
[2024-04-12T10:00:56,760][INFO ][logstash.runner          ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, 
-Djruby.compile.invokedynamic=true, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, 
-Dlog4j2.isThreadContextMapInheritable=true, -Dlogstash.jackson.stream-read-constraints.max-string-length=200000000, 
-Dlogstash.jackson.stream-read-constraints.max-number-length=10000, 

-Dio.netty.allocator.maxOrder=6, 

-Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, 
--add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, 
--add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, 
--add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, 
--add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, 
--add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, 
--add-opens=java.base/java.security=ALL-UNNAMED, 
--add-opens=java.base/java.io=ALL-UNNAMED, 
--add-opens=java.base/java.nio.channels=ALL-UNNAMED, 
--add-opens=java.base/sun.nio.ch=ALL-UNNAMED, 
--add-opens=java.management/sun.management=ALL-UNNAMED]

The env var LS_JAVA_OPTS is not listed as a Docker configuration environment variable: https://www.elastic.co/guide/en/logstash/current/docker-config.html#docker-env-config.

However, all Logstash settings are exposed as environment variables (the -e switch on the Docker command line) through the env2yaml tool (https://github.com/elastic/logstash/blob/v8.13.2/docker/data/logstash/env2yaml/env2yaml.go#L27), while the OS environment variable LS_JAVA_OPTS has to be exported in another way.

So I would consider this a feature request rather than a bug.
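For context, env2yaml translates settings-style environment variables passed with -e into logstash.yml entries, whereas LS_JAVA_OPTS is a plain OS environment variable read by the launcher and never goes through that mapping. The snippet below is a rough, hypothetical Java rendering of the kind of name normalization env2yaml performs (the real tool is a Go program and only recognizes known Logstash settings); the class and method names here are illustrative only.

```java
import java.util.Locale;
import java.util.Map;

public class Env2YamlSketch {

    // e.g. PIPELINE_WORKERS=4 -> "pipeline.workers: 4"
    static String toYamlLine(String envName, String value) {
        String settingName = envName.toLowerCase(Locale.ROOT).replace('_', '.');
        return settingName + ": " + value;
    }

    public static void main(String[] args) {
        // Settings-style variables passed to `docker run -e ...` (illustrative).
        Map<String, String> dockerEnv = Map.of(
                "PIPELINE_WORKERS", "4",
                "LOG_LEVEL", "debug");
        dockerEnv.forEach((name, value) -> System.out.println(toYamlLine(name, value)));
    }
}
```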

Running a simple bash shell, the env vars are present:

% docker run -ti --rm -e LS_JAVA_OPTS="-Dio.netty.allocator.maxOrder=14" docker.elastic.co/logstash/logstash:8.13.2 bash
logstash@8d9113afd0b1:~$ env
HOSTNAME=8d9113afd0b1
ELASTIC_CONTAINER=true
PWD=/usr/share/logstash
HOME=/usr/share/logstash
LANG=en_US.UTF-8
TERM=xterm
LS_JAVA_OPTS=-Dls.cgroup.cpuacct.path.override=/ -Dls.cgroup.cpu.path.override=/ -Dio.netty.allocator.maxOrder=14
SHLVL=1
LC_ALL=en_US.UTF-8
PATH=/usr/share/logstash/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
_=/usr/bin/env
logstash@8d9113afd0b1:~$

So I think it's related to the fact that

handleJvmOptions(args, System.getenv("LS_JAVA_OPTS"));
for some reason that is not clear to me doesn't pick up the environment variable when run inside Docker.

andsel (Contributor) commented Apr 12, 2024

Description of the problem

The root cause is that the JvmOptionsParser treats LS_JAVA_OPTS as a single option and doesn't consider that it could be a list of space-delimited options, so it adds the entire content of the environment variable as a whole.
When Logstash is launched from the local host, outside any container, with just one value in the variable, it works as expected.
However, examining the value of LS_JAVA_OPTS inside the container:

% docker run -ti --rm -e LS_JAVA_OPTS="-Dio.netty.allocator.maxOrder=13" docker.elastic.co/logstash/logstash:8.13.2 bash
logstash@0729b9bb4079:~$ /usr/share/logstash/jdk/bin/jshell
Apr 12, 2024 9:10:17 AM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
|  Welcome to JShell -- Version 17.0.10
|  For an introduction type: /help intro

jshell> System.getenv("LS_JAVA_OPTS")
$1 ==> "-Dls.cgroup.cpuacct.path.override=/ -Dls.cgroup.cpu.path.override=/ -Dio.netty.allocator.maxOrder=13"

jshell>

it becomes evident that LS_JAVA_OPTS contains multiple option definitions, while the JvmOptionsParser uses it as a whole.
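To illustrate the direction of a fix, here is a minimal sketch (not the actual change in #16079; class and method names are made up for the example): splitting the environment variable on whitespace before merging it with the jvm.options defaults lets a -D property from LS_JAVA_OPTS override the default value instead of being carried around as one opaque string.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LsJavaOptsSplitSketch {

    public static void main(String[] args) {
        // Illustrative subset of what config/jvm.options would contribute.
        String[] defaults = {"-Xms1g", "-Xmx1g", "-Dio.netty.allocator.maxOrder=11"};
        // What the container actually puts in the variable: several space-delimited options.
        String lsJavaOpts = "-Dls.cgroup.cpuacct.path.override=/ "
                + "-Dls.cgroup.cpu.path.override=/ -Dio.netty.allocator.maxOrder=14";

        Map<String, String> merged = new LinkedHashMap<>();
        for (String opt : defaults) {
            merged.put(keyOf(opt), opt);
        }
        // The split is the step the issue describes as missing: without it the whole
        // string is one "option", its key never matches the default's key, and both
        // maxOrder values end up on the command line.
        for (String opt : lsJavaOpts.trim().split("\\s+")) {
            merged.put(keyOf(opt), opt);
        }
        System.out.println(merged.values());
        // [-Xms1g, -Xmx1g, -Dio.netty.allocator.maxOrder=14,
        //  -Dls.cgroup.cpuacct.path.override=/, -Dls.cgroup.cpu.path.override=/]
    }

    // Deduplication key: the property name for -Dname=value options, the whole flag otherwise.
    private static String keyOf(String option) {
        if (option.startsWith("-D") && option.contains("=")) {
            return option.substring(0, option.indexOf('='));
        }
        return option;
    }
}
```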

andsel self-assigned this Apr 12, 2024
andsel changed the title -Dio.netty.allocator.maxOrder in LS_JAVA_OPTS got overwritten LS_JAVA_OPTS env var could lead to duplication of java options Apr 12, 2024