
Reduce allocations by using pooled memory and recycling memory streams

stebet opened this issue 4 years ago • 8 comments

The RabbitMQ client currently allocates a lot of unnecessary memory, which results in significant GC overhead.
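
The core trick is renting buffers from the shared `ArrayPool<byte>` instead of allocating a fresh `byte[]` per frame. Here is a minimal sketch of the pattern; it is not the client's actual code, and the `PooledFrameWriter` name and 8-byte header size are illustrative:

```csharp
using System;
using System.Buffers;

static class PooledFrameWriter
{
    // Illustrative only: rent a buffer from the shared pool instead of
    // allocating a fresh byte[] per frame, then return it when done.
    public static void WriteFrame(ReadOnlySpan<byte> payload)
    {
        // Rent may return a larger array than requested; that's fine.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(payload.Length + 8);
        try
        {
            payload.CopyTo(buffer.AsSpan(8)); // leave room for a frame header
            // ... write buffer[0 .. payload.Length + 8] to the socket ...
        }
        finally
        {
            ArrayPool<byte>.Shared.Return(buffer);
        }
    }
}
```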

I'm currently working on a PR to reduce those allocations, and I'll probably introduce some *Async overloads as well, since they help reduce lock contention.
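
To illustrate why async overloads help with lock contention: a synchronous `lock` blocks a thread whenever another publisher holds it, while awaiting a `SemaphoreSlim` frees the thread back to the pool. A hypothetical sketch of the general pattern (the `AsyncChannelWriter` type is made up, not the client's API):

```csharp
using System.Threading;
using System.Threading.Tasks;

class AsyncChannelWriter
{
    private readonly SemaphoreSlim _writeLock = new SemaphoreSlim(1, 1);

    // A blocking lock() parks a thread while another publisher is writing;
    // awaiting a SemaphoreSlim instead returns the thread to the pool.
    public async Task PublishAsync(byte[] frame)
    {
        await _writeLock.WaitAsync().ConfigureAwait(false);
        try
        {
            // ... write the frame to the connection asynchronously ...
        }
        finally
        {
            _writeLock.Release();
        }
    }
}
```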

Here is the current progress I have made with a test app that opens two connections. One connection bulk-publishes 50,000 messages in 100-message batches, each message containing just a 4-byte integer as payload. The other connection receives those messages, so the test mostly measures frame serialize/deserialize overhead.
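
For reference, the publishing side of the test app looks roughly like this. This is an illustrative reconstruction against the 5.x-era `IModel` API, not the exact benchmark code; the queue name and loop structure are assumptions:

```csharp
using System;
using RabbitMQ.Client;

class PublishBenchmark
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare("bench", durable: false, exclusive: false,
                                 autoDelete: true, arguments: null);

            // 500 batches x 100 messages = 50,000 messages,
            // each carrying a single 4-byte integer as its payload.
            for (int batch = 0; batch < 500; batch++)
            {
                for (int i = 0; i < 100; i++)
                {
                    byte[] payload = BitConverter.GetBytes(batch * 100 + i);
                    channel.BasicPublish(exchange: "",
                                         routingKey: "bench",
                                         basicProperties: null,
                                         body: payload);
                }
            }
        }
    }
}
```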

Before: [image]

After: [image]

I'll follow up this issue with discussions and link the PR once it's ready for review.

— stebet, Jan 23 '20

Some explanations: the "before" run uses the current NuGet release of the RabbitMQ.Client library. The "after" run uses code based on the latest master branch, which will become the 6.0 release.

— stebet, Jan 23 '20

So a 25% reduction in this specific run. Looks promising!

— michaelklishin, Jan 23 '20

Related to https://github.com/rabbitmq/rabbitmq-dotnet-client/pull/452

— lechu445, Jan 23 '20

I have got System.IO.Pipelines working on the socket connection as well, and will push the PR soon for review and testing. The improvements are quite impressive; I'll add details and screenshots later :)
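
For those curious what Pipelines on the socket looks like, here is a minimal sketch of a `PipeReader` read loop. The `TryParseFrame` stub stands in for the real AMQP frame parser; none of this is the PR's actual code:

```csharp
using System.Buffers;
using System.IO.Pipelines;
using System.Net.Sockets;
using System.Threading.Tasks;

static class PipelinedFrameReader
{
    // Illustrative read loop: PipeReader handles buffering, so the parser
    // works over ReadOnlySequence<byte> without copying bytes around.
    public static async Task ReadFramesAsync(Socket socket)
    {
        var reader = PipeReader.Create(new NetworkStream(socket, ownsSocket: true));

        while (true)
        {
            ReadResult result = await reader.ReadAsync();
            ReadOnlySequence<byte> buffer = result.Buffer;

            // Consume as many complete frames as the buffer contains.
            while (TryParseFrame(ref buffer))
            {
            }

            // Everything before buffer.Start is consumed; the rest is
            // kept and extended by the next ReadAsync call.
            reader.AdvanceTo(buffer.Start, buffer.End);

            if (result.IsCompleted)
                break;
        }

        await reader.CompleteAsync();
    }

    private static bool TryParseFrame(ref ReadOnlySequence<byte> buffer)
    {
        // Stub: a real implementation checks for a complete AMQP frame,
        // slices `buffer` past it, and returns true while frames remain.
        return false;
    }
}
```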

— stebet, Jan 24 '20

So here's where I'm at. Similar scenario as above: one sender, one receiver, bulk-sending 50,000 messages in 500 batches of 100, but now with 512-byte, 4 kB, and 16 kB payloads.

Before

- 512-byte payload: [image]
- 4 kB payload: [image]
- 16 kB payload: [image]

After (using pooled arrays, recyclable memory streams, and System.IO.Pipelines)

- 512-byte payload: [image]
- 4 kB payload: [image]
- 16 kB payload: [image]
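
The recyclable memory streams mentioned above come from the `Microsoft.IO.RecyclableMemoryStream` package. A minimal sketch of the usual usage pattern (the `StreamPool` wrapper is illustrative, not the PR's code):

```csharp
using Microsoft.IO;

static class StreamPool
{
    // One manager per process; it hands out MemoryStream instances backed
    // by pooled buffers, so large payloads avoid the Large Object Heap.
    private static readonly RecyclableMemoryStreamManager Manager =
        new RecyclableMemoryStreamManager();

    public static void SerializeFrame(byte[] payload)
    {
        // Disposing the stream returns its buffers to the pool.
        using (var stream = Manager.GetStream("frame-serialization"))
        {
            stream.Write(payload, 0, payload.Length);
            // ... hand the stream contents to the socket writer ...
        }
    }
}
```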

— stebet, Jan 25 '20

More progress :)

- 512-byte payloads: [image]
- 4 kB payloads: [image]
- 16 kB payloads: [image]

To summarize:

- 512-byte payloads: down from 556 MB to 238 MB (57% reduction)
- 4 kB payloads: down from 1.96 GB to 411 MB (79% reduction)
- 16 kB payloads: down from 7.14 GB to 1.00 GB (86% reduction)

What's left: to finish up the PR I need to get all the tests passing, which will require a little refactoring around catching the exceptions/errors that occur now that the Pipelines take care of reading and writing the sockets, as they are a little harder to reach and parse. Once that's ready, I'll submit the PR for further work and discussion on what APIs (if any) might need to change.

— stebet, Jan 29 '20

@stebet impressive 💪

— michaelklishin, Jan 29 '20

This will be addressed by either #706 or #707

— lukebakken, Feb 06 '20

#1445 appears to be the "final word" on this issue. Closing.

— lukebakken, Dec 28 '23