Loïc Mathieu
More information to help diagnose the issue: a single `StreamingByteBody` is holding 6 million `DelayedExecutionFlowImpl$OnErrorResume` objects inside a `RequestLifecycle` lambda, retaining 1.6GB.
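For anyone who wants to look at their own instance, here is a rough sketch of the kind of commands this analysis relies on (`<pid>` is the Kestra JVM's process id; the heap dump file name is just an example, and you'd open the dump in a tool such as Eclipse MAT to see retained sizes):

```
# Find the pid of the Kestra JVM; jps is just one way to locate it
jps -l

# Class histogram of live objects, to see which classes accumulate
jmap -histo:live <pid> | head -n 30

# Heap dump of live objects, to inspect retained sizes afterwards
jmap -dump:live,format=b,file=kestra-heap.hprof <pid>
```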
@yawkat it's very problematic as I didn't manage to reproduce the problem; that's why I added as much information as I could. Users don't seem to be using form/multipart that much,...
Thanks @yawkat, we will test it; meanwhile I'll try my best to make a reproducer.
@yawkat we cannot use `micronaut.server.netty.server-type: full_content`, it crashes for all requests with:

```
2024-04-09 11:33:36,466 WARN default-nioEventLoopGroup-1-3 io.netty.channel.ChannelInitializer Failed to initialize a channel. Closing: [id: 0x646fd7cb, L:/[0:0:0:0:0:0:0:1]:8080 - R:/[0:0:0:0:0:0:0:1]:48850]
java.lang.IllegalArgumentException:...
```
@katoquro to check if it's the same issue, you can try the following command (against the Kestra JVM's pid) to see if the same objects are accumulating:

```
jmap -histo:live <pid> | grep io.micronaut.core.execution
```
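If it helps, here is a rough sketch of how that check could be scripted (assuming a single Kestra JVM that `jps` can find by its main class name; the pid lookup and the 60-second interval are just examples to adjust to your setup):

```
# Sample the io.micronaut.core.execution object counts over time
PID=$(jps -l | grep -i kestra | awk '{print $1}')
while true; do
  date
  jmap -histo:live "$PID" | grep io.micronaut.core.execution
  sleep 60
done
```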
@yawkat one user confirms that the following configuration fixes the issue (or works around it):

```yaml
configuration:
  micronaut:
    server:
      max-request-size: 1GB
      netty:
        server-type: full_content
```
@yawkat with this configuration, files of more than 1GB lead to a request that seems to be "blocked forever" without an exception. So it's a workaround for some of our...
@katoquro remove the grep and look at the most frequent objects in the histogram: `jmap -histo:live <pid>`. Take multiple histograms and check which objects grow in number; this could...
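As an illustration, one way to capture and compare two snapshots (the file names and the delay are just examples):

```
# Take two histograms a few minutes apart
jmap -histo:live <pid> > histo-1.txt
sleep 300
jmap -histo:live <pid> > histo-2.txt

# Compare the top entries of both snapshots and see which class counts grew
head -n 40 histo-1.txt histo-2.txt
```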
@graemerocher unfortunately, no; that's why I added as much information as I could.
@graemerocher yes, you can either run it from its [repository](https://github.com/kestra-io/kestra) or its [docker image](https://kestra.io/docs/installation/docker). But what really annoys me is that I cannot reproduce it myself; some users report the...