Jetty version(s)
11.0.11
Java version/vendor
17.0.7
OS type/version
Linux
Description
I'm testing a traffic scenario at ~1000 TPS. The HTTP/2 server has a high response delay, so all the messages sent by the Jetty HTTP/2 client to the server time out. Below are my observations from this run.
1000+ Jetty threads are created
Among them, ~200 threads are in the BLOCKED state (stack trace added below)
The application becomes unresponsive at this stage and eventually dies with an OutOfMemoryError (OOM)
Request:
Control Mechanism: Is there a mechanism to limit the maximum number of threads that can be spawned?
Root Cause Analysis: Are there any tools or methods to help identify the reasons behind the thread blocking?
Additional Information:
Stack trace of one of the blocked threads:
"@365a6a43-35" #35 prio=5 os_prio=0 cpu=28.09ms elapsed=689.64s tid=0x00007f9741f42d20 nid=0x3e waiting for monitor entry [0x00007f970c8fa000]
java.lang.Thread.State: BLOCKED (on object monitor)
at java.io.PrintStream.write(java.base@17.0.7/PrintStream.java:695)
- waiting to lock <0x0000000680c98068> (a java.io.PrintStream)
at java.io.PrintStream.print(java.base@17.0.7/PrintStream.java:863)
at java.lang.ThreadGroup.uncaughtException(java.base@17.0.7/ThreadGroup.java:1084)
at java.lang.ThreadGroup.uncaughtException(java.base@17.0.7/ThreadGroup.java:1077)
at reactor.core.scheduler.Schedulers.handleError(Schedulers.java:1208)
at reactor.core.scheduler.ExecutorScheduler.schedule(ExecutorScheduler.java:70)
at reactor.core.publisher.MonoPublishOn$PublishOnSubscriber.trySchedule(MonoPublishOn.java:146)
at reactor.core.publisher.MonoPublishOn$PublishOnSubscriber.onNext(MonoPublishOn.java:120)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:200)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onNext(FluxOnErrorResume.java:79)
at reactor.core.publisher.Operators$ScalarSubscription.request(Operators.java:2398)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.set(Operators.java:2194)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onSubscribe(FluxOnErrorResume.java:74)
at reactor.core.publisher.MonoJust.subscribe(MonoJust.java:55)
at reactor.core.publisher.Mono.subscribe(Mono.java:4400)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:103)
at reactor.core.publisher.SerializedSubscriber.onError(SerializedSubscriber.java:124)
at reactor.core.publisher.SerializedSubscriber.onError(SerializedSubscriber.java:124)
at reactor.core.publisher.FluxTimeout$TimeoutMainSubscriber.onError(FluxTimeout.java:219)
at reactor.core.publisher.MonoFlatMap$FlatMapMain.onError(MonoFlatMap.java:172)
at reactor.core.publisher.Operators$MultiSubscriptionSubscriber.onError(Operators.java:2063)
at reactor.core.publisher.FluxOnAssembly$OnAssemblySubscriber.onError(FluxOnAssembly.java:544)
at reactor.core.publisher.FluxMap$MapSubscriber.onError(FluxMap.java:134)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:106)
at reactor.core.publisher.Operators.error(Operators.java:198)
at reactor.core.publisher.MonoErrorSupplied.subscribe(MonoErrorSupplied.java:56)
at reactor.core.publisher.Mono.subscribe(Mono.java:4400)
at reactor.core.publisher.FluxOnErrorResume$ResumeSubscriber.onError(FluxOnErrorResume.java:103)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:222)
at reactor.core.publisher.FluxPeek$PeekSubscriber.onError(FluxPeek.java:222)
at reactor.core.publisher.MonoIgnoreThen$ThenIgnoreMain.onError(MonoIgnoreThen.java:278)
at org.eclipse.jetty.reactive.client.internal.AbstractSingleProcessor.onError(AbstractSingleProcessor.java:120)
at org.eclipse.jetty.reactive.client.internal.ResponseListenerProcessor.onComplete(ResponseListenerProcessor.java:141)
at org.eclipse.jetty.client.ResponseNotifier.notifyComplete(ResponseNotifier.java:213)
at org.eclipse.jetty.client.ResponseNotifier.notifyComplete(ResponseNotifier.java:205)
at org.eclipse.jetty.client.HttpReceiver.terminateResponse(HttpReceiver.java:477)
at org.eclipse.jetty.client.HttpReceiver.terminateResponse(HttpReceiver.java:457)
at org.eclipse.jetty.client.HttpReceiver.abort(HttpReceiver.java:553)
at org.eclipse.jetty.client.HttpChannel.abortResponse(HttpChannel.java:149)
at org.eclipse.jetty.client.HttpSender.terminateRequest(HttpSender.java:298)
at org.eclipse.jetty.client.HttpSender.abortRequest(HttpSender.java:280)
at org.eclipse.jetty.client.HttpSender.abort(HttpSender.java:385)
at org.eclipse.jetty.client.HttpSender.lambda$executeAbort$0(HttpSender.java:252)
at org.eclipse.jetty.client.HttpSender$$Lambda$2278/0x0000000801a3a400.run(Unknown Source)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:894)
at org.eclipse.jetty.util.thread.QueuedThreadPool$Runner.run(QueuedThreadPool.java:1038)
at java.lang.Thread.run(java.base@17.0.7/Thread.java:833)
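Note that the frames above show the thread is not stuck inside Jetty or Reactor work itself: it is waiting to lock a shared java.io.PrintStream. Reactor's Schedulers.handleError falls through to ThreadGroup.uncaughtException, which writes the exception to System.err, so when hundreds of exchanges time out at once the worker threads serialize on that stream's monitor. A minimal sketch of taking System.err out of that path, assuming no default uncaught-exception handler is currently installed (which the ThreadGroup frames suggest); the class and field names are illustrative:

import java.util.concurrent.atomic.AtomicLong;

public final class UncaughtErrorCounter {
    public static final AtomicLong UNCAUGHT_ERRORS = new AtomicLong();

    public static void install() {
        // Without a default handler, ThreadGroup.uncaughtException() prints to System.err,
        // and that synchronized PrintStream is the monitor the BLOCKED threads queue on.
        // A handler that only counts (or hands off to an asynchronous logger) removes the
        // shared lock from the error path.
        Thread.setDefaultUncaughtExceptionHandler((thread, error) ->
                UNCAUGHT_ERRORS.incrementAndGet());
    }
}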
What is your configuration for the Jetty HttpClientExecutor?
> I'm testing a traffic scenario with TPS of ~1000, the HTTP/2 server has a high delay hence all the message send by Jetty HTTP/2 client to the server gets timed out.
Can you please detail exactly what you are doing?
How long is the timeout?
Is it the client or the server timing out?
How long is the "high delay" on the server?
Jetty will not spawn more threads than are configured in the HttpClientExecutor, so if you see a larger number of threads than configured, they are not created by Jetty.
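For reference, the executor is bounded by setting it on the HttpClient before it starts; a minimal sketch assuming the Jetty 11 HTTP/2 client API (the thread counts and pool name are illustrative):

import org.eclipse.jetty.client.HttpClient;
import org.eclipse.jetty.http2.client.HTTP2Client;
import org.eclipse.jetty.http2.client.http.HttpClientTransportOverHTTP2;
import org.eclipse.jetty.util.thread.QueuedThreadPool;

public final class BoundedHttp2Client {
    public static HttpClient create() throws Exception {
        // Bounded pool: at most 200 threads, at least 8.
        QueuedThreadPool clientThreads = new QueuedThreadPool(200, 8);
        clientThreads.setName("h2-client");

        HttpClient httpClient = new HttpClient(new HttpClientTransportOverHTTP2(new HTTP2Client()));
        httpClient.setExecutor(clientThreads);
        httpClient.start();
        return httpClient;
    }
}

With such a pool, jobs beyond the maximum wait in the pool's queue rather than creating new threads, so a thread count well above the configured maximum points at threads created elsewhere in the application.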
Do you have a full thread dump?
How do you know there are 1000+ threads? Via thread dump or some monitoring console?
On the client or on the server?
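A full dump can be captured externally with jstack <pid> or jcmd <pid> Thread.print. To check in-process how many threads exist and which states they are in, a small sketch (the class name is illustrative):

import java.util.EnumMap;
import java.util.Map;
import java.util.Set;

public class ThreadStateReport {
    public static void main(String[] args) {
        // Counts live threads by state. Run inside the client JVM (for example from a
        // periodic task) to see whether the thread count really exceeds the configured bound.
        Set<Thread> threads = Thread.getAllStackTraces().keySet();
        Map<Thread.State, Integer> byState = new EnumMap<>(Thread.State.class);
        for (Thread t : threads) {
            byState.merge(t.getState(), 1, Integer::sum);
        }
        System.out.println("total threads: " + threads.size());
        byState.forEach((state, n) -> System.out.println(state + ": " + n));
    }
}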