Fix OOME when sending huge files over the network
Motivation
Sending huge files (hundreds of MBs) over the network causes an OOME. Receiving a lot of packets also makes the packet queue grow longer and longer if the PacketDispatchers can't keep up.
Modification
I set a max queue length for the PacketDispatchers (a little over one packet per thread, to ensure all available threads are kept busy).
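As a rough illustration of what I mean (a minimal sketch, not the actual PacketDispatcher code; the class name, capacity and rejection strategy are my assumptions):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

final class BoundedDispatchers {

  // Work queue capped a little above one pending packet per thread; once
  // the cap is hit, the submitting thread executes the task itself
  // (CallerRunsPolicy), which applies backpressure instead of letting the
  // queue grow without bound and eat memory.
  static ExecutorService newPacketDispatcher() {
    int threads = Runtime.getRuntime().availableProcessors();
    return new ThreadPoolExecutor(
      threads, threads,
      0L, TimeUnit.MILLISECONDS,
      new LinkedBlockingQueue<>(threads + 2),
      new ThreadPoolExecutor.CallerRunsPolicy());
  }
}
```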
The sending part is fixed by reducing the default chunk size from 50MB to 1MB. It has to be this low because every byte of a chunk needs at least 3 bytes of memory (because of copying from one array/buffer to another). I also tested with a 5MB default chunk size, but even that caused an OOME. When sending files, we could additionally check whether enough memory is available and, if not, try to trigger the GC and simply wait for the memory - however, this is not yet in the commits.
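To make the numbers concrete (using the rough 3-bytes-per-chunk-byte estimate from above; illustrative figures only):

```
50 MB chunk × 3 ≈ 150 MB transient memory per in-flight chunk → blows a 256 MB limit almost immediately
 5 MB chunk × 3 ≈  15 MB per in-flight chunk                  → still caused an OOME in my tests
 1 MB chunk × 3 ≈   3 MB per in-flight chunk                  → fits comfortably
```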
Result
No more OOME
Other context
How huge are your files? I've just used a 4GB template on two nodes with 256MB memory each.
(After fixing an integer overflow)
My files were about 400MB. Your setup might work because you have faster hard drive write speeds and the wrapper doesn't have a problem keeping up (there are fewer - probably only one - packets in flight). How did you send/receive the files? I used TemplateStorage#zipTemplate.
Here is the OOME in the node:
[21.08 14:48:18.794] SEVERE: Exception in thread "pool-10-thread-1"
[21.08 14:48:18.794] SEVERE: java.lang.OutOfMemoryError: Cannot reserve 52428867 bytes of direct buffer memory (allocated: 253784980, limit: 268435456)
[21.08 14:48:18.795] SEVERE: at java.base/java.nio.Bits.reserveMemory(Bits.java:178)
[21.08 14:48:18.795] SEVERE: at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:121)
[21.08 14:48:18.795] SEVERE: at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:332)
[21.08 14:48:18.795] SEVERE: at io.netty5.buffer.bytebuffer.ByteBufferMemoryManager.allocateShared(ByteBufferMemoryManager.java:51)
[21.08 14:48:18.795] SEVERE: at io.netty5.buffer.pool.UnpooledUntetheredMemory.<init>(UnpooledUntetheredMemory.java:36)
[21.08 14:48:18.795] SEVERE: at io.netty5.buffer.pool.PoolArena.allocateHuge(PoolArena.java:226)
[21.08 14:48:18.795] SEVERE: at io.netty5.buffer.pool.PoolArena.allocate(PoolArena.java:126)
[21.08 14:48:18.795] SEVERE: at io.netty5.buffer.pool.PooledBufferAllocator.allocateUntethered(PooledBufferAllocator.java:343)
[21.08 14:48:18.795] SEVERE: at io.netty5.buffer.pool.PooledBufferAllocator.allocate(PooledBufferAllocator.java:319)
[21.08 14:48:18.795] SEVERE: at io.netty5.buffer.bytebuffer.NioBuffer.ensureWritable(NioBuffer.java:454)
[21.08 14:48:18.795] SEVERE: at io.netty5.buffer.Buffer.writeBytes(Buffer.java:471)
[21.08 14:48:18.795] SEVERE: at eu.cloudnetservice.driver.network.netty.buffer.NettyMutableDataBuf.writeByteArray(NettyMutableDataBuf.java:133)
[21.08 14:48:18.795] SEVERE: at eu.cloudnetservice.driver.network.chunk.network.ChunkedPacket.createChunk(ChunkedPacket.java:99)
[21.08 14:48:18.795] SEVERE: at eu.cloudnetservice.driver.network.chunk.network.ChunkedPacket.createChunk(ChunkedPacket.java:64)
[21.08 14:48:18.795] SEVERE: at eu.cloudnetservice.driver.network.chunk.defaults.DefaultFileChunkPacketSender.lambda$transferChunkedData$0(DefaultFileChunkPacketSender.java:93)
[21.08 14:48:18.795] SEVERE: at eu.cloudnetservice.common.concurrent.Task.lambda$supply$1(Task.java:71)
[21.08 14:48:18.795] SEVERE: at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
[21.08 14:48:18.795] SEVERE: at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
[21.08 14:48:18.795] SEVERE: at java.base/java.lang.Thread.run(Thread.java:833)
[21.08 14:59:54.087] INFO:
[21.08 14:59:54.088] INFO: CloudNet Blizzard 4.0.0-RC9 f6ca4c38
[21.08 14:59:54.088] INFO: Discord: <https://discord.cloudnetservice.eu/>
[21.08 14:59:54.088] INFO:
[21.08 14:59:54.089] INFO: ClusterId: 9da8725e-****-481a-****-4b9b7fbc608f
[21.08 14:59:54.089] INFO: NodeId: Node-1
[21.08 14:59:54.089] INFO: Head-NodeId: Node-1
[21.08 14:59:54.089] INFO: CPU usage: (P/S) 1.08/17.11/100%
[21.08 14:59:54.090] INFO: Node services memory allocation (U/R/M): 1024/1024/4096 MB
[21.08 14:59:54.090] INFO: Threads: 40
[21.08 14:59:54.090] INFO: Heap usage: 60/256MB
[21.08 14:59:54.090] INFO: JVM: Eclipse Adoptium 17 (OpenJDK 64-Bit Server VM 17.0.6+10)
[21.08 14:59:54.091] INFO: Update Repo: CloudNetService/launchermeta, Update Branch: beta
[21.08 14:59:54.091] INFO:
If I give the node enough memory:
[21.08 16:08:56.808] INFO: &b[lobby-1] [16:08:55 ERROR]: Exception in network handler
[21.08 16:08:56.809] INFO: &b[lobby-1] java.lang.OutOfMemoryError: Cannot reserve 67043328 bytes of direct buffer memory (allocated: 233459679, limit: 268435456)
[21.08 16:08:56.809] INFO: &b[lobby-1] at java.nio.Bits.reserveMemory(Bits.java:178) ~[?:?]
[21.08 16:08:56.810] INFO: &b[lobby-1] at java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:121) ~[?:?]
[21.08 16:08:56.810] INFO: &b[lobby-1] at java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:332) ~[?:?]
[21.08 16:08:56.810] INFO: &b[lobby-1] at io.netty5.buffer.bytebuffer.ByteBufferMemoryManager.allocateShared(ByteBufferMemoryManager.java:51) ~[netty5-buffer-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.810] INFO: &b[lobby-1] at io.netty5.buffer.pool.UnpooledUntetheredMemory.<init>(UnpooledUntetheredMemory.java:36) ~[netty5-buffer-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.810] INFO: &b[lobby-1] at io.netty5.buffer.pool.PoolArena.allocateHuge(PoolArena.java:226) ~[netty5-buffer-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.811] INFO: &b[lobby-1] at io.netty5.buffer.pool.PoolArena.allocate(PoolArena.java:126) ~[netty5-buffer-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.811] INFO: &b[lobby-1] at io.netty5.buffer.pool.PooledBufferAllocator.allocateUntethered(PooledBufferAllocator.java:343) ~[netty5-buffer-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.811] INFO: &b[lobby-1] at io.netty5.buffer.pool.PooledBufferAllocator.allocate(PooledBufferAllocator.java:319) ~[netty5-buffer-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.811] INFO: &b[lobby-1] at io.netty5.buffer.bytebuffer.NioBuffer.ensureWritable(NioBuffer.java:454) ~[netty5-buffer-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.811] INFO: &b[lobby-1] at io.netty5.buffer.Buffer.ensureWritable(Buffer.java:675) ~[netty5-buffer-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.812] INFO: &b[lobby-1] at io.netty5.handler.codec.ByteToMessageDecoder$MergeCumulator.expandCumulationAndWrite(ByteToMessageDecoder.java:526) ~[netty5-codec-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.812] INFO: &b[lobby-1] at io.netty5.handler.codec.ByteToMessageDecoder$MergeCumulator.cumulate(ByteToMessageDecoder.java:508) ~[netty5-codec-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.812] INFO: &b[lobby-1] at io.netty5.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:200) ~[netty5-codec-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.812] INFO: &b[lobby-1] at io.netty5.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:455) ~[netty5-transport-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.812] INFO: &b[lobby-1] at io.netty5.channel.DefaultChannelHandlerContext.findAndInvokeChannelRead(DefaultChannelHandlerContext.java:445) ~[netty5-transport-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.813] INFO: &b[lobby-1] at io.netty5.channel.DefaultChannelHandlerContext.fireChannelRead(DefaultChannelHandlerContext.java:426) ~[netty5-transport-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.813] INFO: &b[lobby-1] at io.netty5.channel.ChannelHandler.channelRead(ChannelHandler.java:235) ~[netty5-transport-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.813] INFO: &b[lobby-1] at io.netty5.channel.DefaultChannelHandlerContext.invokeChannelRead(DefaultChannelHandlerContext.java:455) ~[netty5-transport-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.813] INFO: &b[lobby-1] at io.netty5.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:838) ~[netty5-transport-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.814] INFO: &b[lobby-1] at io.netty5.channel.AbstractChannel$ReadSink.processRead(AbstractChannel.java:1975) ~[netty5-transport-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.814] INFO: &b[lobby-1] at io.netty5.channel.epoll.EpollSocketChannel.epollInReadyBytes(EpollSocketChannel.java:443) ~[netty5-transport-classes-epoll-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.814] INFO: &b[lobby-1] at io.netty5.channel.epoll.EpollSocketChannel.epollInReady(EpollSocketChannel.java:411) ~[netty5-transport-classes-epoll-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.814] INFO: &b[lobby-1] at io.netty5.channel.epoll.AbstractEpollChannel.doReadNow(AbstractEpollChannel.java:305) ~[netty5-transport-classes-epoll-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.814] INFO: &b[lobby-1] at io.netty5.channel.AbstractChannel$ReadSink.readLoop(AbstractChannel.java:2035) ~[netty5-transport-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.815] INFO: &b[lobby-1] at io.netty5.channel.AbstractChannel.readNow(AbstractChannel.java:910) ~[netty5-transport-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.815] INFO: &b[lobby-1] at io.netty5.channel.epoll.AbstractEpollChannel.access$000(AbstractEpollChannel.java:48) ~[netty5-transport-classes-epoll-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.815] INFO: &b[lobby-1] at io.netty5.channel.epoll.AbstractEpollChannel$1.run(AbstractEpollChannel.java:56) ~[netty5-transport-classes-epoll-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.815] INFO: &b[lobby-1] at io.netty5.util.concurrent.SingleThreadEventExecutor.runTask(SingleThreadEventExecutor.java:338) ~[netty5-common-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.815] INFO: &b[lobby-1] at io.netty5.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:361) ~[netty5-common-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.816] INFO: &b[lobby-1] at io.netty5.channel.SingleThreadEventLoop.run(SingleThreadEventLoop.java:180) ~[netty5-transport-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.816] INFO: &b[lobby-1] at io.netty5.util.concurrent.SingleThreadEventExecutor.lambda$doStartThread$4(SingleThreadEventExecutor.java:774) ~[netty5-common-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.816] INFO: &b[lobby-1] at io.netty5.util.internal.ThreadExecutorMap.lambda$apply$1(ThreadExecutorMap.java:68) ~[netty5-common-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.817] INFO: &b[lobby-1] at io.netty5.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) [netty5-common-5.0.0.Alpha5.jar:5.0.0.Alpha5]
[21.08 16:08:56.817] INFO: &b[lobby-1] at java.lang.Thread.run(Thread.java:833) [?:?]
The lobby-1 service:
[21.08 16:12:56.235] INFO:
[21.08 16:12:56.236] INFO: * CloudService: 79a0d506-7820-4a29-bc78-36a3c6e0ad3c
[21.08 16:12:56.236] INFO: * Name: lobby-1
[21.08 16:12:56.236] INFO: * Node: Node-1
[21.08 16:12:56.236] INFO: * Address: 37.114.47.76:44967
[21.08 16:12:56.236] INFO: * Connected: 21.08.2023 16:10:09
[21.08 16:12:56.236] INFO: * Lifecycle: RUNNING
[21.08 16:12:56.236] INFO: * Groups: lobby
[21.08 16:12:56.236] INFO:
[21.08 16:12:56.236] INFO: * ServiceInfoSnapshot | 21.08.2023 16:12:56
[21.08 16:12:56.236] INFO: PID: 12697
[21.08 16:12:56.236] INFO: CPU usage: 1.4%
[21.08 16:12:56.236] INFO: Threads: 48
[21.08 16:12:56.236] INFO: Heap usage: 132/256MB
[21.08 16:12:56.236] INFO:
And the node (just to show the memory):
[21.08 16:14:04.036] INFO:
[21.08 16:14:04.037] INFO: CloudNet Blizzard 4.0.0-RC9 f6ca4c38
[21.08 16:14:04.037] INFO: Discord: <https://discord.cloudnetservice.eu/>
[21.08 16:14:04.038] INFO:
[21.08 16:14:04.038] INFO: ClusterId: 9da8725e-****-481a-****-4b9b7fbc608f
[21.08 16:14:04.038] INFO: NodeId: Node-1
[21.08 16:14:04.038] INFO: Head-NodeId: Node-1
[21.08 16:14:04.038] INFO: CPU usage: (P/S) .67/9.4/100%
[21.08 16:14:04.038] INFO: Node services memory allocation (U/R/M): 768/768/4096 MB
[21.08 16:14:04.038] INFO: Threads: 45
[21.08 16:14:04.038] INFO: Heap usage: 454/2560MB
[21.08 16:14:04.038] INFO: JVM: Eclipse Adoptium 17 (OpenJDK 64-Bit Server VM 17.0.6+10)
[21.08 16:14:04.038] INFO: Update Repo: CloudNetService/launchermeta, Update Branch: beta
[21.08 16:14:04.038] INFO:
Just to add more context: it does sometimes work, even with the 50MB buffer size. To make the issue more likely to trigger, starting multiple transfers at the same time is an option. Also, in the wrapper not every OOME gets logged; most of the time (even though there is an OOME) the InputStream after the transfer completes is just null.
Another reason to reduce the buffer size: if a cluster is NOT on the same local network, 50MB packets over TCP use up the entire bandwidth, and no other packets get sent while a file is being transferred. Using a smaller chunk size helps to let other packets through more often.
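Rough back-of-the-envelope numbers for a non-local link (the 100 Mbit/s figure is just an assumed example):

```
100 Mbit/s ≈ 12.5 MB/s
50 MB chunk → the link is busy for ~4 s per chunk; other packets stall behind it
 1 MB chunk → ~80 ms per chunk, leaving frequent gaps for other traffic
```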
So basically you're proposing the solution to be: in case there are too many messages, block the netty event loop threads (by processing incoming packets on them) and hope that the memory consumption is not too high? I could see disabling auto-read in case the memory consumption is too high (or maybe there is a better way to solve this, I didn't look into the specifics yet), but this doesn't solve the issue of the outgoing memory consumption being too high...
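For reference, the auto-read switch I mean is roughly this (netty 4-style API shown for brevity - I believe netty 5 exposes the same thing as a channel option; the threshold and the re-enable logic are assumptions and left out):

```java
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;

// Sketch: stop reading from the socket while we are under memory
// pressure. With auto-read disabled, TCP flow control pushes the
// backpressure all the way back to the sender. Re-enabling auto-read
// (e.g. from a periodic task once memory frees up) is omitted here.
final class MemoryAwareReadHandler extends ChannelInboundHandlerAdapter {

  @Override
  public void channelRead(ChannelHandlerContext ctx, Object msg) {
    if (underMemoryPressure()) {
      ctx.channel().config().setAutoRead(false);
    }
    ctx.fireChannelRead(msg);
  }

  private static boolean underMemoryPressure() {
    Runtime rt = Runtime.getRuntime();
    long free = rt.maxMemory() - (rt.totalMemory() - rt.freeMemory());
    return free < 64 * 1024 * 1024; // illustrative threshold: 64 MB headroom
  }
}
```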
Yes, for incoming packets I propose putting them into a queue with a max size so they don't use up memory. Uploading will take longer, but honestly that's better than an infinitely growing queue of unhandled packets eating memory. You'll also be able to roughly see how many packets have been handled by looking at how many packets were uploaded - no more "Hey, I uploaded this packet 5 minutes ago, why is it only being handled now?" caused by a giant packet queue on the receiving end.
For outgoing packets, normal packets should just stay the way they are. For files I'd just reduce the buffer size from 50MB, which is a lot if we only have 256MB of RAM. We need the array here, the buffer here, the buffer here and the buffer here. With all this copying we need roughly 4 bytes of memory per byte we want to send, so 200MB for every 50MB file chunk. Then we also have to hope that the garbage collector runs to clean up the references to the ByteBuffers, which is not guaranteed. Reducing the chunk size would greatly reduce the memory cost at the small overhead of sending more packets.
Just want to point out 2 things here:
- the byte array for reading is shared; the 50MB are only allocated once and then re-used for each chunk (see the sketch after this list)
- we're using direct buffers for netty, therefore the memory is released immediately and there is no need for the GC to run
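A minimal sketch of that shared read buffer (assumed shape, not the actual DefaultFileChunkPacketSender code):

```java
import java.io.IOException;
import java.io.InputStream;

// The chunk buffer is allocated once and refilled for every chunk, so
// the heap cost stays at one chunk regardless of the file size.
final class SharedChunkReader {

  private final byte[] chunkBuffer;

  SharedChunkReader(int chunkSize) {
    this.chunkBuffer = new byte[chunkSize];
  }

  // Fills the shared buffer with the next chunk; returns the number of
  // bytes that were actually read (0 once the stream is exhausted).
  int nextChunk(InputStream source) throws IOException {
    return source.readNBytes(this.chunkBuffer, 0, this.chunkBuffer.length);
  }

  byte[] buffer() {
    return this.chunkBuffer;
  }
}
```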
But I'm still thinking about how we can prevent memory-related issues cleanly in the future. On 19.09 - when Java 21 is released and we're going to switch - we'll be using virtual threads anyway, which might even increase the issues we have with memory usage. I think we need to somehow keep track of memory and just not read messages until enough memory is available to receive the next bytes. But this is easier said than done...
Uhm, either I or you don't understand how netty handles memory. To answer your first point: yeah, that's still +50MB on the heap for each upload task. If you take a closer look at ByteBuffer.allocateDirect, you will see that the memory behind the buffer only gets freed once the ByteBuffer isn't referenced any more. This essentially means the garbage collector frees the memory, be it direct or not - so we still have to rely on the garbage collector... If the garbage collector can't keep up, or if it just decides not to run while only 40MB of memory are available, then allocating 50MB will fail.
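This is easy to demonstrate in isolation (standalone example, nothing CloudNet-specific; run with e.g. -XX:MaxDirectMemorySize=256m):

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public class DirectBufferOome {

  public static void main(String[] args) {
    List<ByteBuffer> retained = new ArrayList<>();
    while (true) {
      // Each buffer stays strongly referenced, so the GC/Cleaner can
      // never release the native memory behind it - once the direct
      // memory limit is reached, Bits.reserveMemory throws the same
      // "Cannot reserve ... bytes" OOME seen in the logs above.
      retained.add(ByteBuffer.allocateDirect(50 * 1024 * 1024));
      System.out.println("retained " + retained.size() * 50 + " MB of direct memory");
    }
  }
}
```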
Another option is to discard the custom chunk protocol completely and not send the data over the network via packets, but rather directly. This way we'd avoid errors caused by all the buffer copying, like this one:
[22.08 16:12:59.520] SEVERE: Caused by: java.lang.OutOfMemoryError: Cannot reserve 4194304 bytes of direct buffer memory (allocated: 266401649, limit: 268435456)
[22.08 16:12:59.520] SEVERE: at java.base/java.nio.Bits.reserveMemory(Bits.java:178)
[22.08 16:12:59.520] SEVERE: at java.base/java.nio.DirectByteBuffer.<init>(DirectByteBuffer.java:121)
[22.08 16:12:59.520] SEVERE: at java.base/java.nio.ByteBuffer.allocateDirect(ByteBuffer.java:332)
[22.08 16:12:59.520] SEVERE: at io.netty5.buffer.bytebuffer.ByteBufferMemoryManager.allocateShared(ByteBufferMemoryManager.java:51)
[22.08 16:12:59.520] SEVERE: at io.netty5.buffer.pool.PoolChunk.<init>(PoolChunk.java:196)
[22.08 16:12:59.520] SEVERE: at io.netty5.buffer.pool.PoolArena.newChunk(PoolArena.java:454)
[22.08 16:12:59.520] SEVERE: at io.netty5.buffer.pool.PoolArena.allocateNormal(PoolArena.java:212)
[22.08 16:12:59.520] SEVERE: at io.netty5.buffer.pool.PoolArena.tcacheAllocateNormal(PoolArena.java:180)
[22.08 16:12:59.520] SEVERE: at io.netty5.buffer.pool.PoolArena.allocate(PoolArena.java:121)
[22.08 16:12:59.520] SEVERE: at io.netty5.buffer.pool.PooledBufferAllocator.allocateUntethered(PooledBufferAllocator.java:343)
[22.08 16:12:59.520] SEVERE: at io.netty5.buffer.pool.PooledBufferAllocator.allocate(PooledBufferAllocator.java:319)
[22.08 16:12:59.520] SEVERE: at io.netty5.buffer.DefaultBufferAllocators$UncloseableBufferAllocator.allocate(DefaultBufferAllocators.java:122)
[22.08 16:12:59.520] SEVERE: at eu.cloudnetservice.driver.network.netty.codec.VarInt32FramePrepender.allocateBuffer(VarInt32FramePrepender.java:37)
[22.08 16:12:59.520] SEVERE: at eu.cloudnetservice.driver.network.netty.codec.VarInt32FramePrepender.allocateBuffer(VarInt32FramePrepender.java:26)
[22.08 16:12:59.520] SEVERE: at io.netty5.handler.codec.MessageToByteEncoder.write(MessageToByteEncoder.java:79)
[22.08 16:12:59.520] SEVERE: ... 15 more
I also wanted to remark that if a file transfer fails due to an exception, that exception is silently ignored and the transfer is just marked as failed without displaying any reason. We might want to change that to at least display the message.
Another option to actually free the ByteBuffers as soon as possible is to use netty's UnsafeMemoryManager instead of the ByteBufferMemoryManager.
We did enable that already (https://github.com/CloudNetService/CloudNet-v3/commit/ae1735e8abfe4ecde9b8ad9f64c14adcfa0c37f9), but it's still not a full fix for the issue. We're looking into properly implementing this memory handling when we have time.
Didn't see that one, but it does work wonders. I'm leaving this open until you have time to implement it
> We need the array here, the buffer here, the buffer here and the buffer here.
I went ahead and removed the last full copy of the packet buffer into a new buffer with the frame length prepended. I wasn't able to test this yet, but it should do the trick: https://github.com/CloudNetService/CloudNet-v3/commit/f603e1f7c577476db9d6e3af216cbebadbf93640
Edit: was just looking at the commit - we don't even need to split the message again, I think simply retaining it should be sufficient.
Edit 2: went ahead and improved it even further, removed the need for the list as we always know how many items are going to be returned: https://github.com/CloudNetService/CloudNet-v3/commit/10e5c485cfe582d20181a7071d4f7cc1ed383692
Seems promising so far. Have you figured out yet whether you want to remove the need for the intermediate Buffer in ChunkedPacket? Or maybe do something similar in the NettyPacketEncoder?
We need to copy the buffer at least once, as we allow (and chunked packet sending actually relies on this) the same packet to be sent to multiple channels, so the same buffer is re-used. If we don't copy the buffer there, the buffer might get closed by one write, or changes made after sending the packet for the first time become visible to the other channels.
Maybe instead of storing the data of the packet directly in a field, we could store it in a supplier that can create as many Buffers as needed?
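Something along these lines (purely illustrative - Packet/DataBuf only loosely follow the driver API here, and the supplier contract is exactly the assumption under discussion):

```java
import java.util.function.Supplier;

// Stand-in for the driver's buffer abstraction.
interface DataBuf extends AutoCloseable {
}

final class SuppliedPacket {

  private final Supplier<DataBuf> bodyFactory;

  SuppliedPacket(Supplier<DataBuf> bodyFactory) {
    this.bodyFactory = bodyFactory;
  }

  // Every channel write asks for its own buffer instance, so a write
  // closing "its" buffer can no longer affect other channels, and no
  // up-front copy per target channel is needed.
  DataBuf newBody() {
    return this.bodyFactory.get();
  }
}
```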
But this introduces other assumptions. Let's stay with chunked packet sending: in that case we have a single backing array and would have to allocate the buffer x times, plus wait until the packet was actually processed, as we cannot continue until each packet was sent. Or am I understanding something wrong?
I was just throwing ideas out there, but yes, so far I think you got it. I don't yet see the problem (other than breaking the contract of the Packet class, of course) with that approach. Allocating a buffer shouldn't be too expensive (and we'd be allocating less, not more, as we'd be able to take ownership of the buffer generated from the Packet and just keep using it).
Again, another option is to ditch the entire packet approach (for file transfers) and use a single byte (or multiple bytes) at the beginning of each frame to specify flags (kind of like prioritized). One bit would then specify whether the input is a Packet or raw data to be handled some other way. That would remove the need to modify any Packet contracts, and we could implement it in the NettyPacketEncoder/Decoder without modifying any other behaviour.
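As a tiny illustration of the header-byte idea (made-up flag layout, not an actual wire format):

```java
// One header byte per frame; bit 0 marks raw chunk data so the decoder
// can route it around the packet pipeline, bit 1 mirrors the existing
// "prioritized" flag mentioned above.
final class FrameFlags {

  static final int FLAG_RAW_DATA = 0b0000_0001;
  static final int FLAG_PRIORITIZED = 0b0000_0010;

  static boolean isRawData(int headerByte) {
    return (headerByte & FLAG_RAW_DATA) != 0;
  }

  static boolean isPrioritized(int headerByte) {
    return (headerByte & FLAG_PRIORITIZED) != 0;
  }
}
```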
You mind accepting my friend request on discord? 😄
The network refactor that I've been working on for the past weeks now actually immediately frees the allocated memory and implements your packet queue size setting. This should prevent further OOMEs from occurring, even when running on small amounts of memory.