Libvips does not release memory
I am writing an image processing server using libvips and hyper. After processing around 1k requests, memory usage climbs to ~1.2 GB and stays there. None of the images are larger than 500 KB. I am using the `thumbnail_source_with_opts` function. Disabling the libvips cache has no effect. Libvips concurrency is set to 2.
Monitoring the process with top, I can see that the number of threads drops to 5-6 after the operation, but the memory is not released. I have also tried setting `max_blocking_threads` to 1 for the tokio runtime, to no avail.
Is there a way to lower this memory usage?
Not that I'm aware of. I also developed a web server for image processing using it, and I noticed this behavior very occasionally.
I couldn't figure out why. All the structs have a custom Drop implementation that does what should be done according to the libvips docs.
There was a problem with the way I was configuring the threads. Using `tokio::task::spawn_blocking` for the image resize operation seems to have been the culprit. Removing it and limiting the number of threads for the tokio runtime massively improved memory consumption.
```rust
use libvips::VipsApp;
use tokio::runtime::Builder;

fn main() {
    // Initialize libvips once for the whole process.
    let app = VipsApp::new("Test Libvips", true).expect("Cannot initialize libvips");
    // Limit libvips' internal worker pool.
    app.concurrency_set(3);
    // Disable the operation cache entirely.
    app.cache_set_max(0);
    app.cache_set_max_mem(0);

    // Run the resize operations on a fixed pool of runtime workers
    // instead of handing them to spawn_blocking.
    Builder::new_multi_thread()
        .worker_threads(2)
        .enable_io()
        .build()
        .unwrap()
        .block_on(async {
            // hyper server setup code
        });
}
```
It seems like the server takes around 150 MB of memory for every worker thread in the runtime.
I've also noticed that libvips spawns separate threads for every worker.
Example:
- `app.concurrency_set(3)` and `worker_threads(2)` gives 13 total threads
- `app.concurrency_set(3)` and `worker_threads(3)` gives 19 total threads
- `app.concurrency_set(3)` and `worker_threads(4)` gives 25 total threads

Which is basically `(concurrency * worker_threads * 2) + 1`.
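For what it's worth, the counts above are consistent with that formula. A quick stdlib-only check (note the formula is just an empirical observation from `top`, not documented libvips or tokio behavior):

```rust
// Empirical thread count observed in `top` as a function of
// libvips concurrency and tokio worker threads.
fn total_threads(concurrency: u32, workers: u32) -> u32 {
    concurrency * workers * 2 + 1
}

fn main() {
    // The three measurements from above.
    assert_eq!(total_threads(3, 2), 13);
    assert_eq!(total_threads(3, 3), 19);
    assert_eq!(total_threads(3, 4), 25);
    println!("formula matches the observed counts");
}
```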
Is this expected?
For this you have to ask the libvips author. I have no knowledge of how it works internally.
I have never seen a Rust project segfault so often. Undefined behavior everywhere; it's almost impossible to successfully clone an image. Even creating a new image from the same file leads to changes affecting both instances, and freeing a single instance of an image causes the other clones to break as well.
Anyway @ramitmittal, you might have some luck with `image.set_kill(true)`, which seems to free the image. The Drop implementation of VipsImage should be doing this, I think, but doesn't.
Perhaps instead of complaining and criticising you could submit a PR to fix any issue you're facing, @LevitatingBusinessMan.
I obviously didn't have the time or resources to test every function generated by these bindings...
@augustocdias Yes, I might look into the issues later today; maybe I can contribute. It's not just certain functions, though. There seems to be a problem with the whole implementation of VipsApp.
I can confirm the problem: memory is constantly increasing. I'm not quite sure how to catch it; on a flamegraph I do not see VipsApp memory, only my application. But I'm sure it's not a leak in my own code, since it's easy to reproduce with the CLI application.