micronaut-core
Consider Project Loom Support
Issue description
Allow support for configuring a VirtualThreadPool when configuring `micronaut.executors`. This way you could configure the I/O thread pool or the Netty worker thread pool to use virtual threads instead of traditional thread pools.
https://cr.openjdk.java.net/~rpressler/loom/Loom-Proposal.html
This would be great for those of us that aren't using the reactive model.
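For readers unfamiliar with the JDK side of the request: the plain-JDK building block would be `Executors.newVirtualThreadPerTaskExecutor()` (final API since JDK 21). A minimal sketch of swapping a platform pool for a virtual-thread one — the actual `micronaut.executors` configuration keys are not shown here, this is just the underlying idea:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class VirtualExecutorSketch {
    public static void main(String[] args) throws Exception {
        // Classic platform-thread pool, e.g. what an I/O executor uses today.
        ExecutorService platformPool = Executors.newFixedThreadPool(8);

        // The Loom equivalent: one cheap virtual thread per submitted task.
        ExecutorService virtualPool = Executors.newVirtualThreadPerTaskExecutor();

        Future<String> f = virtualPool.submit(() ->
                "ran on " + Thread.currentThread());
        System.out.println(f.get());

        platformPool.shutdown();
        virtualPool.shutdown();
    }
}
```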
I'm not familiar with Netty or Micronaut internals, but will it be as simple as swapping out the executor? On the Netty project I haven't found much in the way of answers. There was this discussion and then this PR where it seems a patch to OpenJDK was needed.
I'm not familiar with Micronaut internals, so maybe it is too tied to Netty for this to work, but I wonder if something like Helidon Nima could work as a drop-in replacement for Netty for those using the blocking model. From the introduction post on Medium:
The Helidon Níma web server intends to replace Netty in the Helidon ecosystem. It also can be used by other frameworks as an embedded web server component.
we have already discussed with the Helidon team that possibility, but it is too early at this stage for anything concrete to emerge
This worked for server threads for me some time ago, probably with Micronaut 2.x, and definitely with a Loom build of an earlier JDK (JDK 17 based).
```kotlin
@Factory
class ThreadingConfig {

    /**
     * FIXME: Needed to force isKeepAlive = true.
     * TODO: Report to Micronaut.
     */
    @Primary
    @Bean
    @Replaces(NettyHttpServer::class)
    fun nettyHttpServer(
        serverConfiguration: NettyHttpServerConfiguration,
        applicationContext: ApplicationContext,
        router: Router,
        requestArgumentSatisfier: RequestArgumentSatisfier,
        mediaTypeCodecRegistry: MediaTypeCodecRegistry,
        customizableResponseTypeHandlerRegistry: NettyCustomizableResponseTypeHandlerRegistry,
        resourceResolver: StaticResourceResolver,
        @Named(TaskExecutors.IO) ioExecutor: ExecutorService,
        @Named(NettyThreadFactory.NAME) threadFactory: ThreadFactory,
        executorSelector: ExecutorSelector,
        @Nullable serverSslBuilder: ServerSslBuilder?,
        outboundHandlers: List<ChannelOutboundHandler>,
        eventLoopGroupFactory: EventLoopGroupFactory,
        httpCompressionStrategy: HttpCompressionStrategy,
        httpContentProcessorResolver: HttpContentProcessorResolver
    ): NettyHttpServer {
        return object : NettyHttpServer(
            serverConfiguration,
            applicationContext,
            router,
            requestArgumentSatisfier,
            mediaTypeCodecRegistry,
            customizableResponseTypeHandlerRegistry,
            resourceResolver,
            ioExecutor,
            threadFactory,
            executorSelector,
            serverSslBuilder,
            outboundHandlers,
            eventLoopGroupFactory,
            httpCompressionStrategy,
            httpContentProcessorResolver
        ) {
            override fun isKeepAlive(): Boolean = true
        }
    }

    @Primary
    @Bean("netty")
    @Replaces(ThreadFactory::class)
    fun threadFactory(): ThreadFactory =
        Thread.builder().virtual().name("task-vt#", 0).factory()
        // Thread.ofVirtual().name("task-vt#", 0).factory() // with newer JDK API
}
```
Hey @soberich thanks for posting that!
@graemerocher @sdelamo could you comment on whether this approach seems OK?
Will have to verify this approach works. @yawkat will take a look at this before 4.0
Hi, I am not familiar with Micronaut, but simply replacing threads with virtual threads may not bring performance benefits at the moment, because a virtual thread will pin (i.e. block its carrier thread until the operation finishes, e.g. during a monitor wait) on object monitors and epoll waits.
So I did some experiments modifying Netty so that Netty with Loom could see some benefits, but this is not mandatory; I believe most frameworks could start up on virtual threads, I'm just not sure how much they would benefit from that.
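To make the pinning point concrete, here is a small Java sketch (my own illustration, not code from Netty or Micronaut): on JDK 21, blocking inside a `synchronized` block pins the virtual thread to its carrier thread, whereas blocking under a `ReentrantLock` lets the virtual thread unmount so the carrier can run other virtual threads. Running with `-Djdk.tracePinnedThreads=full` reports the pinned stack for the first variant.

```java
import java.util.concurrent.locks.ReentrantLock;

public class PinningSketch {
    private final Object monitor = new Object();
    private final ReentrantLock lock = new ReentrantLock();

    // Blocking while holding a monitor pins the virtual thread to its
    // carrier (platform) thread for the whole duration.
    void pinningVersion() throws InterruptedException {
        synchronized (monitor) {
            Thread.sleep(10); // carrier thread stays pinned here
        }
    }

    // ReentrantLock parks the virtual thread instead, freeing the carrier.
    void friendlyVersion() throws InterruptedException {
        lock.lock();
        try {
            Thread.sleep(10); // virtual thread unmounts; carrier is reusable
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) throws Exception {
        PinningSketch s = new PinningSketch();
        Thread.ofVirtual().start(() -> {
            try {
                s.pinningVersion();
                s.friendlyVersion();
            } catch (InterruptedException ignored) {
            }
        }).join();
        System.out.println("done");
    }
}
```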
@joeyleeeeeee97 do you have references to those experiments?
https://github.com/netty/netty/issues/12348 @graemerocher
This should be reproducible; it is based on https://github.com/TechEmpower/FrameworkBenchmarks/ .
Some data from the Spring experiments: https://github.com/openjdk/loom/pull/166
@joeyleeeeeee97 the best netty perf is offered by netty's own epoll transport, which uses netty's own native code, which would pin any loom thread as you say. There are also some conceptual issues with the design of the event loop that make it less suitable to run on virtual threads.
I haven't had the time to look at this in detail yet, but I expect that we will integrate loom further downstream, e.g. by making controller methods execute on a virtual thread pool by default, but keeping the actual netty side unchanged.
right, and we would need to profile whether that is a good default, since the extra thread context switch might not be ideal if your application is fully non-blocking. For blocking operations, where you switch to a different thread to run controller methods anyway, virtual threads may make sense.
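A rough sketch of what such downstream integration could look like, with invented names (`handleRequest` and `BLOCKING_EXECUTOR` are placeholders, not Micronaut API): the Netty event loop stays untouched, and only the potentially blocking controller body is dispatched to a virtual thread, handing a future back for the event loop to complete the response from.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class OffloadSketch {
    // One virtual thread per request body; blocking here is cheap.
    static final ExecutorService BLOCKING_EXECUTOR =
            Executors.newVirtualThreadPerTaskExecutor();

    // Called from the event loop; returns immediately with a future.
    static CompletableFuture<String> handleRequest(String request) {
        return CompletableFuture.supplyAsync(() -> {
            // Blocking work (JDBC, file I/O, ...) is fine on a virtual
            // thread and never ties up an event-loop thread.
            try {
                Thread.sleep(5); // stand-in for a blocking call
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "handled " + request + " on " + Thread.currentThread();
        }, BLOCKING_EXECUTOR);
    }

    public static void main(String[] args) {
        System.out.println(handleRequest("GET /hello").join());
        BLOCKING_EXECUTOR.shutdown();
    }
}
```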
Hi all, I noticed #8180 got merged. This looks awesome and I'm really looking forward to using it - thank you @yawkat and everyone else involved! :tada:
Should we close this issue now?
> we have already discussed with the Helidon team that possibility, but it is too early at this stage for anything concrete to emerge
Hi @graemerocher, since Helidon 4 was released in October, are there any updates on this?
@pkomuda there are fundamental issues related to thread management and loom at the moment that prevent netty from becoming "fully" loom-compatible. At the same time nima is slower than netty and will likely remain so for the same reasons. So we don't really have a path forward in the short term.