firecracker
[Bug] Massive memory allocation in IO hotpath
Describe the bug
In `IOVecBuffer::from_descriptor_chain`, a vector is allocated without a capacity hint. In an IO hot path like this, any allocation should be avoided.
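For illustration, here is a minimal sketch of the pattern being described. The `Descriptor` type and the iterator are simplified stand-ins, not the actual Firecracker types:

```rust
use libc::iovec;

// Hypothetical, simplified descriptor; the real guest-memory types differ.
struct Descriptor {
    addr: *mut libc::c_void,
    len: usize,
}

// Building the iovec list with `Vec::new()` gives the vector no capacity hint,
// so it may reallocate (and copy) repeatedly as descriptors are pushed, and it
// performs at least one heap allocation per request.
fn from_descriptor_chain(chain: impl Iterator<Item = Descriptor>) -> Vec<iovec> {
    let mut vecs = Vec::new();
    for desc in chain {
        vecs.push(iovec {
            iov_base: desc.addr,
            iov_len: desc.len,
        });
    }
    vecs
}
```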
This is a trade-off between memory usage and latency; we do not know the required capacity beforehand, so it is unclear where meaningful gains can be made.
Could you provide more details on the performance impact and use case?
This vector contains the buffer elements, not the buffer itself, so a reasonable upper limit can be determined. If a request needs more memory than that, it can always be split into two requests to the underlying IO.
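As a sketch of that suggestion (the upper bound below is hypothetical, not taken from the virtio spec or the Firecracker code):

```rust
// Hypothetical upper limit on iovec entries per request, chosen for illustration.
const MAX_IOVECS_PER_REQUEST: usize = 128;

// One allocation of a known size up front; requests whose descriptor chains
// exceed the limit would be split into multiple submissions to the underlying IO.
fn preallocated_iovecs() -> Vec<libc::iovec> {
    Vec::with_capacity(MAX_IOVECS_PER_REQUEST)
}
```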
I used heaptrack to check the memory usage. I don't have performance numbers on hand because I don't have a fix yet.
Thanks for raising this @howard0su. I'm trying to understand what it is that you are worried about here. Is it the amount of memory (heap size) or the time overhead we spend making the memory allocations? In any case, it would be interesting if you could post here the results of your analysis (heaptrack output, maybe).
Hi, sorry for accidentally closing this, didn't realize linking the issue as related to my PR would close it. We've merged a heuristic fix for this that tries to allocate the buffer on the stack for short descriptor chains (e.g. up to length 4). Depending on how long the guest driver's descriptor chains are, this might already reduce the number of allocations.
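For readers following along, a small-buffer heuristic of this kind could look roughly like the sketch below, using the `smallvec` crate purely as an illustration; the merged change may be implemented differently:

```rust
use smallvec::SmallVec;

// Chains of up to 4 descriptors keep their iovec entries inline (no heap
// allocation); longer chains spill to a single heap allocation.
type IoVecVec = SmallVec<[libc::iovec; 4]>;

fn collect_iovecs(chain: impl Iterator<Item = libc::iovec>) -> IoVecVec {
    let mut vecs = IoVecVec::new();
    for iov in chain {
        vecs.push(iov);
    }
    vecs
}
```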
Hi @howard0su, thanks again for the report. We have committed a complete fix for the memory allocations on the network TX path in #4589.
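One common way to remove per-request allocations entirely is to keep the iovec buffer as part of the device state and reuse it across requests; the sketch below only illustrates that idea and is not necessarily how #4589 is implemented:

```rust
// Illustrative only: a reusable iovec buffer owned by the TX handler.
struct TxHandler {
    iovecs: Vec<libc::iovec>, // allocated once, reused for every TX request
}

impl TxHandler {
    fn load(&mut self, chain: impl Iterator<Item = libc::iovec>) -> &[libc::iovec] {
        self.iovecs.clear();       // keeps the existing capacity
        self.iovecs.extend(chain); // allocates only if the chain outgrows it
        &self.iovecs
    }
}
```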