ViteWebsocketConnection hangs when running with low resources #20586

Closed
mcollovati opened this issue Dec 1, 2024 · 2 comments · Fixed by #20587

Comments

@mcollovati
Collaborator

Description of the bug

When running in dev mode on a machine with low resources, ViteWebsocketConnection sometimes hangs while waiting on locks.

I've seen this happen mostly on Quarkus projects running tests in dev mode on GitHub Actions,
but it can be reproduced locally by decreasing the size of the fork join pool and the Vert.x event loop pool.
It looks like the problem is that sometimes the clientWebsocket future in ViteWebsocketConnection never completes, probably because a send is invoked before whenComplete can finish its work.

A potential solution is to complete the future in onOpen, so that a send operation will not block indefinitely waiting for whenComplete.
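
For illustration, here is a minimal sketch of the pattern described above and of the proposed change, assuming the java.net.http.WebSocket API visible in the thread dump below; the class name, constructor and method bodies are simplified placeholders, not the actual ViteWebsocketConnection source.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.WebSocket;
    import java.util.concurrent.CompletableFuture;

    // Minimal sketch, not the actual Flow source.
    class ViteConnectionSketch implements WebSocket.Listener {

        private final CompletableFuture<WebSocket> clientWebsocket = new CompletableFuture<>();

        ViteConnectionSketch(URI viteUri) {
            HttpClient.newHttpClient().newWebSocketBuilder()
                    .subprotocols("vite-hmr")
                    .buildAsync(viteUri, this)
                    .whenComplete((socket, failure) -> {
                        if (failure != null) {
                            clientWebsocket.completeExceptionally(failure);
                        } else {
                            // Current behavior: completing only here can be too late when
                            // send() is already blocked on the future (see thread dump below).
                            clientWebsocket.complete(socket);
                        }
                    });
        }

        @Override
        public void onOpen(WebSocket webSocket) {
            // Proposed fix: complete the future as soon as the connection is open,
            // so a concurrent send() does not depend on the whenComplete stage.
            // complete() is a no-op if the future is already done, so keeping the
            // whenComplete branch above is harmless.
            clientWebsocket.complete(webSocket);
            webSocket.request(1);
        }

        void send(String message) throws Exception {
            // Blocks until clientWebsocket has been completed.
            clientWebsocket.get().sendText(message, true);
        }
    }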

The issue does not happen with Vaadin 24.5, so it could be related to changes introduced by Vite 6.

Below is a thread dump from a hanging application:

"ForkJoinPool.commonPool-worker-2" #98 daemon prio=5 os_prio=0 cpu=175,10ms elapsed=86,24s tid=0x000077438c001710 nid=0x2c4a in Object.wait()  [0x00007744a4fef000]
   java.lang.Thread.State: WAITING (on object monitor)
	at java.lang.Object.wait([email protected]/Native Method)
	- waiting on <no object reference available>
	at java.lang.Object.wait([email protected]/Object.java:338)
	at io.netty.util.concurrent.DefaultPromise.await(DefaultPromise.java:254)
	- locked <0x0000000719c82c70> (a io.netty.channel.DefaultChannelPromise)
	at io.netty.channel.DefaultChannelPromise.await(DefaultChannelPromise.java:131)
	at io.netty.channel.DefaultChannelPromise.await(DefaultChannelPromise.java:30)
	at io.netty.util.concurrent.DefaultPromise.get(DefaultPromise.java:338)
	at io.undertow.websockets.WebSocketSessionRemoteEndpoint$BasicWebSocketSessionRemoteEndpoint.sendText(WebSocketSessionRemoteEndpoint.java:257)
	at com.vaadin.base.devserver.viteproxy.ViteWebsocketProxy.lambda$new$0(ViteWebsocketProxy.java:55)
	at com.vaadin.base.devserver.viteproxy.ViteWebsocketProxy$$Lambda$1617/0x00007744548b77e0.accept(Unknown Source)
	at com.vaadin.base.devserver.viteproxy.ViteWebsocketConnection.onText(ViteWebsocketConnection.java:117)
	at jdk.internal.net.http.websocket.WebSocketImpl$ReceiveTask.processText([email protected]/WebSocketImpl.java:635)
	at jdk.internal.net.http.websocket.WebSocketImpl$ReceiveTask.run([email protected]/WebSocketImpl.java:443)
	at jdk.internal.net.http.common.SequentialScheduler$CompleteRestartableTask.run([email protected]/SequentialScheduler.java:149)
	at jdk.internal.net.http.common.SequentialScheduler$TryEndDeferredCompleter.complete([email protected]/SequentialScheduler.java:347)
	at jdk.internal.net.http.common.SequentialScheduler$CompleteRestartableTask.run([email protected]/SequentialScheduler.java:151)
	at jdk.internal.net.http.common.SequentialScheduler$SchedulableTask.run([email protected]/SequentialScheduler.java:230)
	at jdk.internal.net.http.HttpClientImpl$DelegatingExecutor.execute([email protected]/HttpClientImpl.java:158)
	at jdk.internal.net.http.common.SequentialScheduler.runOrSchedule([email protected]/SequentialScheduler.java:305)
	at jdk.internal.net.http.common.SequentialScheduler.runOrSchedule([email protected]/SequentialScheduler.java:274)
	at jdk.internal.net.http.websocket.WebSocketImpl.signalOpen([email protected]/WebSocketImpl.java:713)
	at jdk.internal.net.http.websocket.WebSocketImpl.newInstance([email protected]/WebSocketImpl.java:155)
	at jdk.internal.net.http.websocket.WebSocketImpl.lambda$newInstanceAsync$0([email protected]/WebSocketImpl.java:125)
	at jdk.internal.net.http.websocket.WebSocketImpl$$Lambda$1621/0x000077445471a488.apply([email protected]/Unknown Source)
	at java.util.concurrent.CompletableFuture$UniApply.tryFire([email protected]/CompletableFuture.java:646)
	at java.util.concurrent.CompletableFuture.postComplete([email protected]/CompletableFuture.java:510)
	at java.util.concurrent.CompletableFuture.postFire([email protected]/CompletableFuture.java:614)
	at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire([email protected]/CompletableFuture.java:844)
	at java.util.concurrent.CompletableFuture$Completion.exec([email protected]/CompletableFuture.java:483)
	at java.util.concurrent.ForkJoinTask.doExec([email protected]/ForkJoinTask.java:373)
	at java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec([email protected]/ForkJoinPool.java:1182)
	at java.util.concurrent.ForkJoinPool.scan([email protected]/ForkJoinPool.java:1655)
	at java.util.concurrent.ForkJoinPool.runWorker([email protected]/ForkJoinPool.java:1622)
	at java.util.concurrent.ForkJoinWorkerThread.run([email protected]/ForkJoinWorkerThread.java:165)




"vert.x-eventloop-thread-0" #118 prio=5 os_prio=0 cpu=191,55ms elapsed=84,71s tid=0x00007742f87003f0 nid=0x2c76 waiting on condition  [0x000077449a6fa000]
   java.lang.Thread.State: WAITING (parking)
	at jdk.internal.misc.Unsafe.park([email protected]/Native Method)
	- parking to wait for  <0x0000000716270af0> (a java.util.concurrent.CompletableFuture$Signaller)
	at java.util.concurrent.locks.LockSupport.park([email protected]/LockSupport.java:211)
	at java.util.concurrent.CompletableFuture$Signaller.block([email protected]/CompletableFuture.java:1864)
	at java.util.concurrent.ForkJoinPool.unmanagedBlock([email protected]/ForkJoinPool.java:3465)
	at java.util.concurrent.ForkJoinPool.managedBlock([email protected]/ForkJoinPool.java:3436)
	at java.util.concurrent.CompletableFuture.waitingGet([email protected]/CompletableFuture.java:1898)
	at java.util.concurrent.CompletableFuture.get([email protected]/CompletableFuture.java:2072)
	at com.vaadin.base.devserver.viteproxy.ViteWebsocketConnection.send(ViteWebsocketConnection.java:134)
	at com.vaadin.base.devserver.viteproxy.ViteWebsocketProxy.onMessage(ViteWebsocketProxy.java:88)
	at com.vaadin.base.devserver.viteproxy.ViteWebsocketProxy.onMessage(ViteWebsocketProxy.java:33)
	at io.undertow.websockets.FrameHandler$5.run(FrameHandler.java:355)
	at io.undertow.websockets.ServerWebSocketContainer$1.call(ServerWebSocketContainer.java:143)
	at io.undertow.websockets.ServerWebSocketContainer$1.call(ServerWebSocketContainer.java:140)
	at io.quarkus.websockets.client.runtime.WebsocketCoreRecorder$4$1.call(WebsocketCoreRecorder.java:181)
	at io.undertow.websockets.ServerWebSocketContainer.invokeEndpointMethod(ServerWebSocketContainer.java:536)
	at io.undertow.websockets.ServerWebSocketContainer.invokeEndpointMethod(ServerWebSocketContainer.java:525)
	at io.undertow.websockets.FrameHandler.invokeTextHandler(FrameHandler.java:335)
	at io.undertow.websockets.FrameHandler.onText(FrameHandler.java:250)
	at io.undertow.websockets.FrameHandler.processFrame(FrameHandler.java:156)
	at io.undertow.websockets.FrameHandler.channelRead0(FrameHandler.java:131)
	at io.undertow.websockets.FrameHandler.channelRead0(FrameHandler.java:61)
	at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:346)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:318)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:444)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:289)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:442)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:412)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:440)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:420)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:788)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.lang.Thread.run([email protected]/Thread.java:840)

Expected behavior

Vite websocket connection should complete successfully and not block the application.

Minimal reproducible example

  • Clone the base-starter-flow-quarkus project
  • Add the following to application.properties
    quarkus.log.category."com.vaadin.base.devserver.viteproxy".level=TRACE
    quarkus.vertx.event-loops-pool-size=2
    
  • Fix the size of the fork join pool in the quarkus-maven-plugin configuration and enable Vaadin frontend hot deploy
            <plugin>
                <groupId>io.quarkus.platform</groupId>
                <artifactId>quarkus-maven-plugin</artifactId>
                <version>${quarkus.platform.version}</version>
                <extensions>true</extensions>
                <executions>
                    <execution>
                        <goals>
                            <goal>build</goal>
                            <goal>generate-code</goal>
                            <goal>generate-code-tests</goal>
                        </goals>
                    </execution>
                </executions>
                <configuration>
                    <jvmArgs>-Dvaadin.frontend.hotdeploy=true -Djava.util.concurrent.ForkJoinPool.common.parallelism=1</jvmArgs>
                </configuration>
            </plugin>
  • Start the application in dev mode with mvn quarkus:dev
  • Open the browser at http://localhost:8080 and reload the page a couple of times until the application stops responding. Take a thread dump and verify that calls from ViteWebsocketConnection/ViteWebsocketProxy are waiting for locks to be released

Versions

  • Vaadin / Flow version: 24.6-SNAPSHOT
  • Java version: 21
  • OS version: Linux
  • Application Server: Quarkus in dev mode
@mcollovati
Collaborator Author

Logs from a completed connection

2024-11-30 14:42:10,061 DEBUG [com.vaa.bas.dev.vit.ViteWebsocketEndpoint] (vert.x-eventloop-thread-0) Browser (wWEcg1CIhgOC3FDg9Hmcuxp4MjBiBm9npVmGFv_f) connected to Vite proxy
2024-11-30 14:42:10,071 DEBUG [com.vaa.bas.dev.vit.ViteWebsocketProxy] (vert.x-eventloop-thread-0) Got message from browser: {"type":"custom","event":"vite-plugin-checker","data":{"event":"runtime-loaded"}}
2024-11-30 14:42:10,120 DEBUG [com.vaa.bas.dev.vit.ViteWebsocketConnection] (ForkJoinPool.commonPool-worker-1) Connected using the vite-hmr protocol
2024-11-30 14:42:10,120 DEBUG [com.vaa.bas.dev.vit.ViteWebsocketConnection] (ForkJoinPool.commonPool-worker-1) Connection to ws://127.0.0.1:39371/VAADIN/ using the vite-hmr protocol established
2024-11-30 14:42:10,121 DEBUG [com.vaa.bas.dev.vit.ViteWebsocketConnection] (HttpClient-5-Worker-2) Message from Vite: {"type":"connected"}
2024-11-30 14:42:10,123 DEBUG [com.vaa.bas.dev.vit.ViteWebsocketProxy] (vert.x-eventloop-thread-0) Sent message to Vite: {"type":"custom","event":"vite-plugin-checker","data":{"event":"runtime-loaded"}}
2024-11-30 14:42:10,124 DEBUG [com.vaa.bas.dev.vit.ViteWebsocketProxy] (HttpClient-5-Worker-2) Message sent to browser: {"type":"connected"}

Logs from a blocked connection attempt

2024-11-30 14:42:32,732 DEBUG [com.vaa.bas.dev.vit.ViteWebsocketEndpoint] (vert.x-eventloop-thread-0) Browser (mh2uuDc_ozJN6sSfkbyZuSkIitVYCRhgE4K--zcF) connected to Vite proxy
2024-11-30 14:42:32,748 DEBUG [com.vaa.bas.dev.vit.ViteWebsocketProxy] (vert.x-eventloop-thread-0) Got message from browser: {"type":"custom","event":"vite-plugin-checker","data":{"event":"runtime-loaded"}}
2024-11-30 14:42:32,760 DEBUG [com.vaa.bas.dev.vit.ViteWebsocketConnection] (ForkJoinPool.commonPool-worker-1) Connected using the vite-hmr protocol
2024-11-30 14:42:32,761 DEBUG [com.vaa.bas.dev.vit.ViteWebsocketConnection] (ForkJoinPool.commonPool-worker-1) Message from Vite: {"type":"connected"}

You can see in the first logs that there's Connected using the vite-hmr protocol from the onOpen method, immediately followed by Connection to ws://127.0.0.1:39371/VAADIN/ using the vite-hmr protocol established from the whenComplete callback, and then Message from Vite: {"type":"connected"} from onText.

In the other logs, there's only Connected using the vite-hmr protocol and Message from Vite: {"type":"connected"}, meaning that the client connection future is never completed.
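
For reference, a hypothetical sketch of where those log lines come from, assuming the java.net.http.WebSocket.Listener callbacks visible in the stack traces; the class name and method bodies are simplified and not the actual Flow source.

    import java.net.http.WebSocket;
    import java.util.concurrent.CompletionStage;

    // Hypothetical listener sketch, aligned with the log excerpts above.
    class ViteListenerSketch implements WebSocket.Listener {

        @Override
        public void onOpen(WebSocket webSocket) {
            // Would log "Connected using the vite-hmr protocol" -- present in both excerpts.
            webSocket.request(1);
        }

        @Override
        public CompletionStage<?> onText(WebSocket webSocket, CharSequence data, boolean last) {
            // Would log "Message from Vite: {...}" -- also present in both excerpts. In the
            // hanging case this runs before the whenComplete stage that would have logged
            // "Connection to ... established" and completed the client connection future.
            webSocket.request(1);
            return null;
        }
    }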

mcollovati added a commit that referenced this issue Dec 1, 2024
Completing the websocket future in the onOpen event prevents the connection from hanging
indefinitely when the application runs with low resources.

Fixes #20586
mcollovati added a commit that referenced this issue Dec 1, 2024
Completing the websocket future in the onOpen event prevents the connection from hanging
indefinitely when the application runs with low resources.

Fixes #20586
@mshabarov mshabarov added the bug label Dec 3, 2024
@mshabarov mshabarov moved this from 🆕 Needs triage to 🏗 WIP in Vaadin Flow bugs & maintenance (Vaadin 10+) Dec 3, 2024
@mshabarov mshabarov moved this to 🔎Iteration reviews in Vaadin Flow ongoing work (Vaadin 10+) Dec 3, 2024
@mshabarov mshabarov added Impact: Low Severity: Major vite Tickets related to vite support labels Dec 3, 2024
mcollovati added a commit that referenced this issue Dec 10, 2024
Completing the websocket future in the onOpen event prevents the connection from hanging
indefinitely when the application runs with low resources.

Fixes #20586
mshabarov pushed a commit that referenced this issue Dec 10, 2024
* fix: complete client websocket future on open

Completing the websocket future in the onOpen event prevents the connection from hanging
indefinitely when the application runs with low resources.

Fixes #20586

* apply review suggestions
@github-project-automation github-project-automation bot moved this from 🔎Iteration reviews to Done in Vaadin Flow ongoing work (Vaadin 10+) Dec 10, 2024
vaadin-bot pushed a commit that referenced this issue Dec 10, 2024
* fix: complete client websocket future on open

Completing the websocket future in the onOpen event prevents the connection from hanging
indefinitely when the application runs with low resources.

Fixes #20586

* apply review suggestions
vaadin-bot added a commit that referenced this issue Dec 10, 2024
* fix: complete client websocket future on open

Completing the websocket future in the onOpen event prevents the connection from hanging
indefinitely when the application runs with low resources.

Fixes #20586

* apply review suggestions

Co-authored-by: Marco Collovati <[email protected]>
@vaadin-bot
Collaborator

This ticket/PR has been released with Vaadin 24.6.0.rc1 and is also targeting the upcoming stable 24.6.0 version.
