John Cupitt
> We are in the process of implementing a chunked input API for encoding (see ...

Great! Thank you!

> On the decoder side, there already is a callback-based decoding API, ...
> The fact that the current push model may deliver decoded lines in any order makes it difficult to optimize around.

Ah, that's very hard. Then I think JXL will...
Sure, that sounds great. libvips is demand-driven, so "please fetch this rectangle of pixels" is exactly what works best. If the rectangles are on a regular grid, even better. If...
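To make that concrete, here's a rough sketch of what a demand-driven loader looks like on the libvips side, using the public `vips_image_generate()` mechanism. The `jxl_fetch_rect()` call is a hypothetical stand-in for whatever rectangle-fetch entry point libjxl might expose, so it's commented out.

```
/* Sketch of a demand-driven libvips source: libvips calls the generate
 * callback once per output region, and the callback only has to fill
 * the requested rectangle. jxl_fetch_rect() is a hypothetical stand-in
 * for a future "decode this rectangle" call in libjxl.
 */
#include <vips/vips.h>

static int
jxl2vips_generate(VipsRegion *out_region,
        void *seq, void *a, void *b, gboolean *stop)
{
        VipsRect *r = &out_region->valid;

        /* r->left, r->top, r->width, r->height is exactly the rectangle
         * libvips wants right now; nothing else needs to be decoded.
         */
        /* jxl_fetch_rect(a, r,
         *         VIPS_REGION_ADDR(out_region, r->left, r->top));
         */

        return 0;
}

/* Wired up with something like:
 *   vips_image_generate(out, NULL, jxl2vips_generate, NULL, decoder, NULL);
 */
```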
libvips has its own threading system, and parallel decode in libjxl would "fight" that somewhat. For us, the best thing would be for libjxl to do no threading at all...
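For the record, the zero-threading case is simple: if no parallel runner is set (NULL is also the default), libjxl keeps all decode work on the calling thread. A minimal sketch:

```
/* Sketch: keep libjxl single-threaded so it can't fight the host
 * application's own threadpool.
 */
#include <jxl/decode.h>

static JxlDecoder *
make_single_threaded_decoder(void)
{
        JxlDecoder *dec = JxlDecoderCreate(NULL);

        if (!dec)
                return NULL;

        /* Explicitly request no internal threading (NULL is the default). */
        if (JxlDecoderSetParallelRunner(dec, NULL, NULL) != JXL_DEC_SUCCESS) {
                JxlDecoderDestroy(dec);
                return NULL;
        }

        return dec;
}
```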
At the moment we have a compromise: the parallel runner is given 50% of the hardware threads (I think?), and libvips mostly shuts down during calls into libjxl.
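Roughly, that compromise looks like this, a sketch assuming the stock `JxlThreadParallelRunner` (the exact fraction libvips uses may differ):

```
/* Sketch of the compromise: hand libjxl its stock threaded runner, but
 * with only about half of the hardware threads, so the two threadpools
 * don't oversubscribe the machine too badly.
 */
#include <jxl/decode.h>
#include <jxl/thread_parallel_runner.h>

static void *
attach_half_size_runner(JxlDecoder *dec)
{
        size_t n_threads =
                JxlThreadParallelRunnerDefaultNumWorkerThreads() / 2;
        void *runner = JxlThreadParallelRunnerCreate(NULL,
                n_threads > 0 ? n_threads : 1);

        if (!runner)
                return NULL;

        if (JxlDecoderSetParallelRunner(dec, JxlThreadParallelRunner,
                runner) != JXL_DEC_SUCCESS) {
                JxlThreadParallelRunnerDestroy(runner);
                return NULL;
        }

        /* Caller frees with JxlThreadParallelRunnerDestroy() after the
         * decoder has been destroyed.
         */
        return runner;
}
```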
The libvips threadpool is auto-sizing, so it watches mutexes and grows and shrinks depending on lock contention (using that as a proxy for available parallelism). I think we'd need to...
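Not the actual libvips code, but the idea behind the heuristic is roughly this (a glib-based sketch with invented thresholds): time how long workers block on a shared lock and use that as the contention signal.

```
/* Rough sketch (not the libvips implementation): measure how long
 * workers block on the shared work lock and treat long waits as a sign
 * that fewer threads can usefully run. Thresholds are made up for
 * illustration; pool->lock is initialised elsewhere with g_mutex_init().
 */
#include <glib.h>

typedef struct {
        GMutex lock;
        double wait_total;      /* seconds spent blocked on lock */
        int n_acquires;
        int n_threads;          /* current target pool size */
} Pool;

/* Lock the pool, recording how long we had to wait for it. */
static void
pool_lock(Pool *pool)
{
        gint64 start = g_get_monotonic_time();

        g_mutex_lock(&pool->lock);
        pool->wait_total +=
                (g_get_monotonic_time() - start) / (double) G_USEC_PER_SEC;
        pool->n_acquires += 1;
}

/* Call periodically while holding pool->lock: shrink when threads
 * mostly wait on each other, grow when the lock is barely contended.
 */
static void
pool_maybe_resize(Pool *pool)
{
        double mean_wait = pool->n_acquires > 0 ?
                pool->wait_total / pool->n_acquires : 0.0;

        if (mean_wait > 0.001 && pool->n_threads > 1)
                pool->n_threads -= 1;
        else if (mean_wait < 0.0001)
                pool->n_threads += 1;

        pool->wait_total = 0.0;
        pool->n_acquires = 0;
}
```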
Then I agree a parallel runner that pushes tasks to the libvips threadpool is the best solution. Thanks!
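For reference, the runner contract in `jxl/parallel_runner.h` is small, so a custom runner is not much code. This sketch just runs the whole range on the calling thread; a real integration would replace the loop body with a push onto the libvips threadpool plus a wait.

```
/* Sketch of a custom JxlParallelRunner against libjxl's public
 * parallel_runner.h contract. This version runs everything on the
 * calling thread; a host integration would instead hand each value
 * to its own threadpool and wait for completion before returning.
 */
#include <jxl/parallel_runner.h>
#include <stdint.h>

static JxlParallelRetCode
host_pool_runner(void *runner_opaque, void *jpegxl_opaque,
        JxlParallelRunInit init, JxlParallelRunFunction func,
        uint32_t start_range, uint32_t end_range)
{
        JxlParallelRetCode ret;
        uint32_t value;

        (void) runner_opaque;

        /* Tell libjxl how many workers will call func(); 1 here. */
        ret = init(jpegxl_opaque, 1);
        if (ret != 0)
                return ret;

        /* Each value in [start_range, end_range) is an independent task.
         * This loop is where tasks would be pushed to the libvips
         * threadpool instead of being run inline.
         */
        for (value = start_range; value < end_range; value++)
                func(jpegxl_opaque, value, /* thread_id */ 0);

        return 0;
}

/* Installed with, for example:
 *   JxlDecoderSetParallelRunner(dec, host_pool_runner, NULL);
 */
```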
Hi @Tulzke, libvips is still using the original one-shot encode mode for libjxl, so yes, memory consumption is high for large images. I tried with a largish image here:

```
...
```
I tried your image and saw:

```
$ /usr/bin/time -f %M:%e vips copy ~/to_jxl_example.jpg x.jxl
memory: high-water mark 1.22 GB
48146648:66.13
```

So 48 GB of memory and 66s of real...
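For context, "one-shot encode mode" means the whole decompressed image is handed to libjxl as a single buffer, which is why peak memory tracks the full image size. A sketch against libjxl's public encoder API (not the actual libvips save code):

```
/* Sketch of the one-shot encode pattern: the entire image must exist
 * in memory before JxlEncoderAddImageFrame() is called.
 */
#include <jxl/encode.h>
#include <stdint.h>

static int
encode_one_shot(JxlEncoder *enc,
        const uint8_t *pixels, size_t pixels_size,
        uint32_t width, uint32_t height)
{
        JxlBasicInfo info;
        JxlColorEncoding color;
        JxlPixelFormat format = { 3, JXL_TYPE_UINT8, JXL_NATIVE_ENDIAN, 0 };
        JxlEncoderFrameSettings *settings =
                JxlEncoderFrameSettingsCreate(enc, NULL);

        JxlEncoderInitBasicInfo(&info);
        info.xsize = width;
        info.ysize = height;
        if (JxlEncoderSetBasicInfo(enc, &info) != JXL_ENC_SUCCESS)
                return -1;

        JxlColorEncodingSetToSRGB(&color, JXL_FALSE);
        if (JxlEncoderSetColorEncoding(enc, &color) != JXL_ENC_SUCCESS)
                return -1;

        /* The whole image goes in as one buffer -- this is the call
         * that forces the full image to be assembled in memory first.
         */
        if (JxlEncoderAddImageFrame(settings, &format,
                pixels, pixels_size) != JXL_ENC_SUCCESS)
                return -1;
        JxlEncoderCloseInput(enc);

        /* ... then drain compressed bytes with JxlEncoderProcessOutput(). */

        return 0;
}
```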
Yes, it's a nice improvement! We've started building libvips binaries with 0.10.1, e.g.: https://github.com/jcupitt/vipsdisp/releases/tag/v3.0.4