neolink
feat(rtsp): fix file descriptor exhaustion and memory fragmentation
Previously, a new buffer pool was allocated for every received frame-sized package. This led to several critical issues:
- Excessive memory consumption: H264 frames vary greatly in size, causing significant memory usage over time.
- File descriptor leaks: Each invocation of `gstreamer::BufferPool::new()` created a new socketpair, resulting in a steady increase in open file descriptors. This can be observed with `watch -n1 "ls -l /proc/PID/fd | wc -l"`. Over time, this would exhaust the available file descriptors.
- Application instability: On devices such as the Lumus cam, memory usage would continuously rise (over 7-8 GiB after 5 hours), eventually leading to a crash.
This commit resolves these issues by reusing buffer pools where possible and preventing unnecessary allocation of resources. This allocates slightly more memory than needed for each frame, since the buffer size is rounded up to the next power of two, but it's worth it to stabilize the application.
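The reuse strategy described above can be sketched as follows. This is a minimal illustration, not the actual neolink code: the `PoolCache` type and a plain `Vec<u8>` standing in for `gstreamer::BufferPool` are hypothetical, but the key idea matches the commit: round the frame length up to the next power of two and cache one pool per bucket size, so pools (and their underlying socketpairs) are created once instead of per frame.

```rust
use std::collections::HashMap;

/// Hypothetical sketch: pools cached by bucket size, where the bucket is
/// the frame length rounded up to the next power of two. In the real code
/// the cached value would be a configured gstreamer::BufferPool.
struct PoolCache {
    pools: HashMap<usize, Vec<u8>>,
}

impl PoolCache {
    fn new() -> Self {
        Self { pools: HashMap::new() }
    }

    /// Return a pool large enough for `frame_len`, allocating it only
    /// once per bucket size.
    fn pool_for(&mut self, frame_len: usize) -> &mut Vec<u8> {
        let bucket = frame_len.next_power_of_two();
        self.pools
            .entry(bucket)
            .or_insert_with(|| Vec::with_capacity(bucket))
    }
}

fn main() {
    let mut cache = PoolCache::new();
    // Frames of 40_000 and 60_000 bytes share the same 65_536-byte bucket,
    // so only one pool (one socketpair, in the gstreamer case) is created.
    let a = cache.pool_for(40_000).capacity();
    let b = cache.pool_for(60_000).capacity();
    println!("capacities: {} {}", a, b);
    println!("pools allocated: {}", cache.pools.len());
}
```

Bucketing by powers of two keeps the number of distinct pools small even though H264 frame sizes vary wildly, at the cost of over-allocating by at most 2x per frame.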
Tested-by: Wadim Mueller [email protected]
It seems even the max buffer isn't large enough for some 4K cameras: it filled instantly and was still unable to stream. I have 16 of them, though, so maybe my use case is an outlier.
I've seen packets up to ~500 KB as well, so the 64 KB size seems too limited.
Fair enough! I increased the bucket size to 1M; hopefully that should be enough for all frames. But is this project dead? It looks like nobody is planning to merge this :(
@wafgo I'll give it a try today! Don't lose hope :)
edit: no luck, still running into an absolutely obliterated buffer even when testing 1 camera:
```
[2025-10-12T17:11:16Z WARN neolink_core::bc_protocol::connection::bcconn] Reaching limit of channel
[2025-10-12T17:11:16Z INFO neolink::rtsp::factory] Buffer full on vidsrc pausing stream until client consumes frames
[2025-10-12T17:11:16Z INFO neolink::rtsp::factory] Failed to send to source: App source is closed
[2025-10-12T17:11:16Z WARN neolink_core::bc_protocol::connection::bcconn] Remaining: 0 of 100 message space for 28 (ID: 3)
[2025-10-12T17:11:16Z WARN neolink_core::bc_protocol::connection::bcconn] Reaching limit of channel
[2025-10-12T17:11:16Z WARN neolink_core::bc_protocol::connection::bcconn] Remaining: 0 of 100 message space for 28 (ID: 3)
[2025-10-12T17:11:16Z INFO neolink::rtsp::factory] New BufferPool (Bucket) allocated: size=65536
[2025-10-12T17:11:16Z INFO neolink::rtsp::factory] Buffer full on audsrc pausing stream until client consumes frames
[2025-10-12T17:11:16Z INFO neolink::rtsp::factory] Buffer full on audsrc pausing stream until client consumes frames
[2025-10-12T17:11:16Z INFO neolink::rtsp::factory] Buffer full on audsrc pausing stream until client consumes frames
[2025-10-12T17:11:16Z INFO neolink::rtsp::factory] Buffer full on audsrc pausing stream until client consumes frames
[2025-10-12T17:11:18Z INFO neolink::rtsp::factory] Failed to send to source: App source is closed
[2025-10-12T17:11:18Z INFO neolink::rtsp::factory] New BufferPool (Bucket) allocated: size=65536
[2025-10-12T17:11:18Z INFO neolink::rtsp::factory] New BufferPool (Bucket) allocated: size=1024
[2025-10-12T17:11:18Z INFO neolink::rtsp::factory] New BufferPool (Bucket) allocated: size=512
```
This is even after attempting `const MAX_BUCKET: usize = 131072 * 1024;`, so I'm unsure whether this means we need to go even bigger or whether we're approaching it wrong. (I know nothing about Rust.)
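For reference, the value tried above works out to 128 MiB per bucket, far above the ~500 KB packets reported earlier; a quick check of the arithmetic (this is only the constant from the comment, not an analysis of the actual failure):

```rust
fn main() {
    // The limit attempted in the comment above: 131072 * 1024 bytes.
    const MAX_BUCKET: usize = 131072 * 1024;

    // 131072 is 128 * 1024, so MAX_BUCKET is 128 * 1024 * 1024 = 128 MiB.
    assert_eq!(MAX_BUCKET, 128 * 1024 * 1024);
    println!("MAX_BUCKET = {} bytes ({} MiB)", MAX_BUCKET, MAX_BUCKET / (1024 * 1024));
}
```

Since the bucket limit is already orders of magnitude larger than any observed packet, the remaining "Buffer full" and "Reaching limit of channel" messages may point at backpressure elsewhere (the appsrc queue or the channel's 100-message limit) rather than at bucket size, though that is speculation on my part.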