After switching to the fixed thread pool dispatcher in ktor (https://github.com/ktorio/ktor/commit/990ccfb2ab5371d6b1babf7645b2c5da17a46236), CPU utilization with `GlobalScope.launch` is much higher than with `client.launch`:
val client = HttpClient {
    install(WebSockets)
}

/* client.launch */
GlobalScope.launch {
    client.ws {
        while (true) {
            // ping
            session?.send(Frame.Text("Hello"))
            // pong
            val frame = session?.incoming?.receive()
        }
    }
}
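For comparison, this is roughly what the `client.launch` variant mentioned in the comment above looks like. This is a minimal sketch of my own, assuming the same `session` reference and imports as the reporter's snippet (ktor's `HttpClient` implements `CoroutineScope`, so coroutines can be launched on it directly):

// Sketch only: the same loop, launched in the HttpClient's own scope instead of GlobalScope.
// `session` is assumed to come from the reporter's surrounding code, exactly as above.
client.launch {
    client.ws {
        while (true) {
            // ping
            session?.send(Frame.Text("Hello"))
            // pong
            val frame = session?.incoming?.receive()
        }
    }
}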
It looks like the same issue as in #840; I will investigate further.
I have the same issue with ktor. Here is the stack trace for the thread with heavy CPU utilization:
"DefaultDispatcher-worker-1" #38 daemon prio=5 os_prio=0 cpu=567406063.38ms elapsed=714692.38s tid=0x00007fdc4c01b000 nid=0x56ea runnable [0x00007fdc50b38000]
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPoll.wait(java.base/Native Method)
at sun.nio.ch.EPollSelectorImpl.doSelect(java.base/EPollSelectorImpl.java:120)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(java.base/SelectorImpl.java:124)
- locked <0x00000000fcdb65e8> (a sun.nio.ch.Util$2)
- locked <0x00000000fcdb6498> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.selectNow(java.base/SelectorImpl.java:146)
at io.ktor.network.selector.ActorSelectorManager.process(ActorSelectorManager.kt:77)
at io.ktor.network.selector.ActorSelectorManager$process$1.invokeSuspend(ActorSelectorManager.kt)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:32)
at kotlinx.coroutines.DispatchedTask.run(Dispatched.kt:233)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:594)
at kotlinx.coroutines.scheduling.CoroutineScheduler.access$runSafely(CoroutineScheduler.kt:60)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:742)
The related code is:
webSocket("/api/ws-endpoint") {
    ...
    try {
        while (true) {
            val frame = incoming.receive()
            if (frame is Frame.Text) call.log.trace("message received: ${frame.readText()}")
        }
    } catch (ex: ClosedReceiveChannelException) {
        call.log.info("ws: ${ex.message}")
    }
}
I will investigate further
@qwwdfsad when will your investigation be finished?
Same here; this project only uses the "Raw Sockets" feature from ktor (this instance was using only the client).
(I submitted it here because the ktor issue is waiting for the fix on the coroutines side.)
By the way, running on JDK 11

I'm having the same issue as @MrPowerGamerBR above, with code like this:
withTimeout(timeoutMillis) {
    aSocket(ActorSelectorManager(Dispatchers.IO))
        .tcp()
        .connect(InetSocketAddress(host, port)).use { socket ->
            val writeChannel = socket.openWriteChannel()
            val readChannel = socket.openReadChannel()
            writeChannel.writeStringUtf8(input.vowpalWabbitFormat) // just a string ending with two newlines (`\n`)
            writeChannel.flush()
            val payload = readChannel.readUTF8Line()
            RawOutput(payload!!) // return the result
        }
}
I've also tried to force-close the read channel like this:
val payload = readChannel.readUTF8Line()
readChannel.cancel()
…but it seems useless.
Is there any progress on this issue? It is making me avoid libraries that rely on coroutines (like ktor, even though I really like it), and because this is a major bug it becomes hard to stay with ktor if it makes your application consume 100% CPU for no reason while other libraries work fine.
Yes, we are targeting the fix for 1.3.2.
Looks like this fix didn't make it into 1.3.2. Any updated forecasts?
As announced in Slack, we've shifted all the changes for 1.3.2 to 1.3.3 :(
The 1.3.2 release had no API changes or bug fixes at all; it contained only the changes necessary for Spring 5.2 GA (some of the reactive Flow bridges became public).
Meanwhile, most of the work is done, and I am now mostly in the evaluation and small-performance-tweaks phase.
I'm having the same issue using ktor: 100% CPU utilization.
It appears randomly and stays at 100% until the process is restarted. Is there any workaround until 1.3.3?
DefaultDispatcher-worker-5" - Thread t@45
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <752867a8> (a sun.nio.ch.Util$3)
- locked <5fce1119> (a java.util.Collections$UnmodifiableSet)
- locked <63b5314> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.selectNow(SelectorImpl.java:105)
at io.ktor.network.selector.ActorSelectorManager.process(ActorSelectorManager.kt:81)
at io.ktor.network.selector.ActorSelectorManager$process$1.invokeSuspend(ActorSelectorManager.kt)
at kotlin.coroutines.jvm.internal.BaseContinuationImpl.resumeWith(ContinuationImpl.kt:33)
at kotlinx.coroutines.DispatchedTask.run(Dispatched.kt:241)
at kotlinx.coroutines.scheduling.CoroutineScheduler.runSafely(CoroutineScheduler.kt:594)
at kotlinx.coroutines.scheduling.CoroutineScheduler.access$runSafely(CoroutineScheduler.kt:60)
at kotlinx.coroutines.scheduling.CoroutineScheduler$Worker.run(CoroutineScheduler.kt:740)
I've tried to close the ActorSelectorManager explicitly (via `use`) and it seems to help:
ActorSelectorManager(Dispatchers.IO).use {
    aSocket(it)
        .tcp()
        .connect(InetSocketAddress(host, port)).use { socket ->
            val writeChannel = socket.openWriteChannel()
            val readChannel = socket.openReadChannel()
            writeChannel.writeStringUtf8(input.vowpalWabbitFormat) // just a string ending with two newlines (`\n`)
            writeChannel.flush()
            val payload = readChannel.readUTF8Line()
            RawOutput(payload!!) // return the result
        }
}
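A note on that workaround (a sketch of my own, not taken from this thread): instead of constructing and closing a selector manager on every call, a single shared ActorSelectorManager can be reused for the lifetime of the application and closed once on shutdown. The names `selectorManager`, `queryVowpalWabbit`, and `shutdown` below are illustrative, and the imports are assumed to be the same as in the snippets above:

// Sketch: one shared selector manager, closed once instead of per request.
val selectorManager = ActorSelectorManager(Dispatchers.IO)

suspend fun queryVowpalWabbit(host: String, port: Int, request: String): String =
    aSocket(selectorManager)
        .tcp()
        .connect(InetSocketAddress(host, port)).use { socket ->
            val writeChannel = socket.openWriteChannel()
            val readChannel = socket.openReadChannel()
            writeChannel.writeStringUtf8(request)
            writeChannel.flush()
            readChannel.readUTF8Line() ?: error("connection closed before a reply was received")
        }

// Call once on application shutdown so the selector resources are released.
fun shutdown() = selectorManager.close()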
This issue looks like https://stackoverflow.com/questions/25922809/glassfish-4-grizzly-threads-heavy-cpu-usage
It looks like the issue is being worked on there.