micronaut-core
[LEAK] Chunked Multipart upload produces memory leaks
Issue description
I have a controller that receives a file and a set of fields.
I've created a curl command for testing this endpoint:
curl "http://localhost:8087/api/storage/production%2Fdrive%2F2640%2F1033819%2Fcontent%2Fac4667be031b80e6830b8638becd72bb%2F2e79d1250160d4449cb411a13af4e53" \
-H 'Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7' \
-H 'Connection: keep-alive' \
-H 'x-spanning-correlationid: d4r32f5s' \
-H 'Content-Type: multipart/form-data' \
-H 'Origin: http://localhost:8087' \
-H 'Transfer-Encoding: chunked' \
-H 'accept: application/json' \
-F 'tenantId=1' \
-F 'resourceId=1' \
-F 'objectType=MAIL' \
-F 'encryptionKey=1' \
-F 'retentionTime=1' \
-F 'tags={"Compression-Type": "ZSTD"}' \
-F 'file=@file_beggier_than_100_mb.tar.gz' \
-v \
--compressed &
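For context, a controller signature matching this request would look roughly like the sketch below. This is my reconstruction (the actual controller isn't shown in the report): the path variable and part names are taken from the form fields in the curl command, and CompletedFileUpload is the binding that buffers the whole part before the method runs.

```java
import io.micronaut.http.HttpResponse;
import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Part;
import io.micronaut.http.annotation.PathVariable;
import io.micronaut.http.annotation.Post;
import io.micronaut.http.multipart.CompletedFileUpload;

@Controller("/api/storage")
public class StorageController {

    // Sketch only: each form field is bound by name, and the file arrives as a
    // CompletedFileUpload, i.e. it is fully received before the method is invoked.
    @Post(value = "/{objectId}", consumes = MediaType.MULTIPART_FORM_DATA)
    public HttpResponse<String> upload(@PathVariable String objectId,
                                       @Part String tenantId,
                                       @Part String resourceId,
                                       @Part String objectType,
                                       @Part String encryptionKey,
                                       @Part String retentionTime,
                                       @Part String tags,
                                       @Part CompletedFileUpload file) {
        return HttpResponse.ok("received " + file.getSize() + " bytes");
    }
}
```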
I've tried to do this with the micronaut.server.multipart.mixed: true upload type. Since the content length is not known, by default the server uses the in-memory file upload, and it produces memory leaks before a breakpoint in the controller is even hit.
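For reference, the multipart settings discussed in this thread normally live in application.yml; below is a minimal sketch of an embedded server started with them programmatically, assuming the standard micronaut.server.multipart.* property keys.

```java
import io.micronaut.context.ApplicationContext;
import io.micronaut.runtime.server.EmbeddedServer;

import java.util.Map;

public class MultipartConfigDemo {
    public static void main(String[] args) {
        // mixed=true keeps small parts in memory and spills to disk past the threshold;
        // disk=true (tried later in this report) writes every part to disk instead.
        EmbeddedServer server = ApplicationContext.run(EmbeddedServer.class, Map.of(
                "micronaut.server.multipart.enabled", true,
                "micronaut.server.multipart.mixed", true,
                "micronaut.server.multipart.disk", false
        ));
        System.out.println("Listening on " + server.getURL());
    }
}
```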
LEAK: ByteBuf.release() was not called before it's garbage-collected. See https://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
Created at:
io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:403)
io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:188)
io.netty.buffer.UnsafeByteBufUtil.copy(UnsafeByteBufUtil.java:436)
io.netty.buffer.PooledUnsafeDirectByteBuf.copy(PooledUnsafeDirectByteBuf.java:216)
io.netty.buffer.AbstractByteBuf.copy(AbstractByteBuf.java:1194)
io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.loadDataMultipartOptimized(HttpPostMultipartRequestDecoder.java:1196)
io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.getFileUpload(HttpPostMultipartRequestDecoder.java:926)
io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.decodeMultipart(HttpPostMultipartRequestDecoder.java:572)
io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.parseBodyMultipart(HttpPostMultipartRequestDecoder.java:463)
io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.parseBody(HttpPostMultipartRequestDecoder.java:432)
io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.offer(HttpPostMultipartRequestDecoder.java:347)
io.netty.handler.codec.http.multipart.HttpPostMultipartRequestDecoder.offer(HttpPostMultipartRequestDecoder.java:54)
io.micronaut.http.server.netty.FormDataHttpContentProcessor.onData(FormDataHttpContentProcessor.java:148)
io.micronaut.http.server.netty.AbstractHttpContentProcessor.doOnNext(AbstractHttpContentProcessor.java:79)
io.micronaut.http.server.netty.AbstractHttpContentProcessor.doOnNext(AbstractHttpContentProcessor.java:36)
io.micronaut.core.async.subscriber.CompletionAwareSubscriber.onNext(CompletionAwareSubscriber.java:56)
io.micronaut.http.netty.reactive.HandlerPublisher.publishMessage(HandlerPublisher.java:393)
io.micronaut.http.netty.reactive.HandlerPublisher.flushBuffer(HandlerPublisher.java:470)
io.micronaut.http.netty.reactive.HandlerPublisher.publishMessageLater(HandlerPublisher.java:360)
io.micronaut.http.netty.reactive.HandlerPublisher.channelRead(HandlerPublisher.java:323)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
io.micronaut.http.netty.stream.HttpStreamsHandler.handleReadHttpContent(HttpStreamsHandler.java:316)
io.micronaut.http.netty.stream.HttpStreamsHandler.channelRead(HttpStreamsHandler.java:282)
io.micronaut.http.netty.stream.HttpStreamsServerHandler.channelRead(HttpStreamsServerHandler.java:134)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:93)
io.netty.handler.codec.http.websocketx.extensions.WebSocketServerExtensionHandler.channelRead(WebSocketServerExtensionHandler.java:99)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:103)
io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:93)
io.netty.handler.codec.http.HttpServerKeepAliveHandler.channelRead(HttpServerKeepAliveHandler.java:64)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
io.netty.handler.flow.FlowControlHandler.dequeue(FlowControlHandler.java:200)
io.netty.handler.flow.FlowControlHandler.channelRead(FlowControlHandler.java:162)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436)
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:327)
io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:314)
io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:435)
io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:279)
io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:286)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357)
io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379)
io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365)
io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919)
io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:166)
io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:722)
io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:658)
io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:584)
io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:496)
io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:995)
io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
java.base/java.lang.Thread.run(Thread.java:833)
I've connected to the Java process using VisualVM, and I can see that every upload leaves a 4 MB chunk in memory.

After that I decided to use micronaut.server.multipart.disk: true. The leak disappeared, but the server started to throw OOM. I tried to debug the issue and found out that NettyHttpRequest.release() calls buildBody() and reads the whole file into memory.

None of these chunks are released, and after a few uploads this produces an out-of-memory error.
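As an aside (my own illustration, not part of the original report and not a fix for the underlying leak), the usual way to avoid assembling a whole part in memory on the application side is to bind it as a StreamingFileUpload and transfer it straight to a file:

```java
import io.micronaut.http.HttpResponse;
import io.micronaut.http.HttpStatus;
import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Post;
import io.micronaut.http.multipart.StreamingFileUpload;
import reactor.core.publisher.Mono;

import java.io.File;
import java.io.IOException;

@Controller("/api/storage")
public class StreamingStorageController {

    // Sketch: the part is written to a temp file chunk by chunk as it arrives
    // instead of being buffered in memory before the controller method runs.
    @Post(value = "/stream", consumes = MediaType.MULTIPART_FORM_DATA, produces = MediaType.TEXT_PLAIN)
    public Mono<HttpResponse<String>> upload(StreamingFileUpload file) throws IOException {
        File tempFile = File.createTempFile(file.getFilename(), ".upload");
        return Mono.from(file.transferTo(tempFile))
                .map(success -> success
                        ? HttpResponse.ok("Uploaded")
                        : HttpResponse.<String>status(HttpStatus.CONFLICT).body("Upload failed"));
    }
}
```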

OS: Monterey 12.3.1
Java:
openjdk 17.0.3 2022-04-19
OpenJDK Runtime Environment Temurin-17.0.3+7 (build 17.0.3+7)
OpenJDK 64-Bit Server VM Temurin-17.0.3+7 (build 17.0.3+7, mixed mode, sharing)
Is it a bug, or am I doing something wrong?
Amazing issue, we also had a similar problem.
We still don't know how to resolve it. It would be nice to understand whether we can improve the Netty wrapper or handle lazy loading of the file.
Thanks a lot for the help!
What Micronaut version do you have?
Sorry, I haven't mentioned it.
<io.micronaut.version>3.5.3</io.micronaut.version>
Can you provide an example that reproduces the issue please?
Here is an example of a simple controller and tests for it: https://github.com/andrey-kozel/micronaut-error-demo
In all the tests I'm uploading a file of ~20 MB as a byte array.
The first test works as expected.

The second test uses the Apache HttpClient multipart upload and produces the leak in the logs (it happens after a few retries).
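The Apache HttpClient request in that test is roughly of the following shape (a sketch under assumptions: the URL, file path and field values are placeholders; the exact code is in the linked repo):

```java
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpPost;
import org.apache.http.entity.ContentType;
import org.apache.http.entity.mime.MultipartEntityBuilder;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;

import java.io.File;

public class MultipartUploadClientTest {
    public static void main(String[] args) throws Exception {
        try (CloseableHttpClient client = HttpClients.createDefault()) {
            HttpPost post = new HttpPost("http://localhost:8087/api/storage/some-object-id");
            post.setEntity(MultipartEntityBuilder.create()
                    .addTextBody("tenantId", "1")
                    .addTextBody("resourceId", "1")
                    .addTextBody("objectType", "MAIL")
                    .addTextBody("encryptionKey", "1")
                    .addTextBody("retentionTime", "1")
                    .addTextBody("tags", "{\"Compression-Type\": \"ZSTD\"}")
                    .addBinaryBody("file", new File("/tmp/some-20mb-file.bin"),
                            ContentType.APPLICATION_OCTET_STREAM, "some-20mb-file.bin")
                    .build());
            // With a File part the entity typically sends a known Content-Length,
            // unlike the curl command above, which forces Transfer-Encoding: chunked.
            try (CloseableHttpResponse response = client.execute(post)) {
                System.out.println(response.getStatusLine());
            }
        }
    }
}
```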

The third test, with disk=true, throws an OOM with -Xmx500m.

The fourth test works as expected using the Apache HTTP client with -Xmx600m.

A curl request also produces the leak with a ~20 MB file:

curl "http://localhost:8087/api/storage/production%2Fdrive%2F2640%2F1033819%2Fcontent%2Fac4667be031b80e6830b8638becd72bb%2F2e79d1250160d4449cb411a13af4e53" \
-H 'Accept-Language: ru-RU,ru;q=0.9,en-US;q=0.8,en;q=0.7' \
-H 'Connection: keep-alive' \
-H 'x-spanning-correlationid: d4r32f5s' \
-H 'Content-Type: multipart/form-data' \
-H 'Origin: http://localhost:8087' \
-H 'Transfer-Encoding: chunked' \
-H 'accept: application/json' \
-F 'tenantId=1' \
-F 'resourceId=1' \
-F 'objectType=MAIL' \
-F 'encryptionKey=1' \
-F 'retentionTime=1' \
-F 'tags={"Compression-Type": "ZSTD"}' \
-F '[email protected]' \
-v \
--compressed
Looks like it depends on the client.
Hi @yawkat,
Do you have any ideas about the possible fix and the timeline?
Thanks for your assistance here.
I can reproduce the issue, but it might take a bit to find the root cause.
@yawkat Thank you for the feedback!!
Caused by https://github.com/netty/netty/issues/12627
I can fix this in Micronaut, but it's ugly, so I'd prefer to wait on a Netty fix. An easy workaround is to not use mixed mode for upload management.
@yawkat Thank you! But I've attached a demo where the disk upload throws OOM. Have I configured the client wrong?
There are some issues in the Micronaut HTTP client as well, but I don't see OOM with the Apache client.
@sdelamo what's the current timeline for 3.6? I think the Netty issue will still take a while, so we could implement my workaround for 3.6. It involves copying the relevant classes from Netty and adjusting their behavior, so it's a bit hacky, but we could remove them again once the issue is fixed on the Netty side.
@yawkat a temporary workaround implementation is ok
Could you tell us, please, when it is going to be released?
with 3.6
I mean, maybe you have some timeline for this release?
We're targeting the end of this week.
Thank you!
@sdelamo Could you please share the timeline for this release with the fix?
The plan is to release today.
Was the release postponed for some reason?
@yawkat @graemerocher I found another issue related to this problem.
I have a controller with a definition like this:

If I upload files and set all the fields, it behaves normally, but if I don't send the "tags" field, the following problem happens:
- AbstractRouteMatch treats it as an unresolved argument

- And it again reads the body, with the extremely large "file" part, into memory

- OOM
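
For reference, here is a sketch of the kind of controller definition being described (my reconstruction, since the original snippet isn't included here); marking the optional part as @Nullable is one possible way to keep a missing "tags" field from being treated as an unresolved argument:

```java
import io.micronaut.core.annotation.Nullable;
import io.micronaut.http.HttpResponse;
import io.micronaut.http.MediaType;
import io.micronaut.http.annotation.Controller;
import io.micronaut.http.annotation.Part;
import io.micronaut.http.annotation.Post;
import io.micronaut.http.multipart.CompletedFileUpload;

@Controller("/api/storage")
public class OptionalPartController {

    // Sketch: "tags" is optional, so it is marked @Nullable; without that, a
    // request that omits the part leaves the argument unresolved and, as
    // reported above, the route match falls back to reading the whole body.
    @Post(value = "/{objectId}", consumes = MediaType.MULTIPART_FORM_DATA)
    public HttpResponse<String> upload(String objectId,
                                       @Part String tenantId,
                                       @Part @Nullable String tags,
                                       @Part CompletedFileUpload file) {
        return HttpResponse.ok("ok");
    }
}
```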

This looks unrelated. Please make a new issue
Here is the issue https://github.com/micronaut-projects/micronaut-core/issues/7939