Inconsistent output of exported images compared to darktable's preview
I stumbled across this after I got struck by #14215. I did some experiments to figure out what's going on and maybe alleviate the issue, but it turns out I've possibly hit another bug. (Original: https://github.com/darktable-org/darktable/issues/14215#issuecomment-1685275139)
I tried 3.8.1 (the earliest flatpak'ed version) and 4.4.2. Turning off masks entirely did not make a visible difference for the issue at hand.
After playing around a bit more I think it's related to the local contrast module. Please take a look at the following screenshots (to the left the exported image, a full-resolution TIFF, displayed with Geeqie; to the right darktable's preview):
[screenshot]
When zoomed in, the contrast differs notably, depending on zoom level:
[screenshot]
However, on certain zoom levels the pictures look almost the same:
[screenshot]
All masks disabled; note that the pictures still differ in contrast:
[screenshot]
After disabling local contrast, the pictures look the same.
I use two instances of local contrast, one masked and one without:
Turning off the mask does not make a difference for the outcome described above! Hence I suspect this might be a different bug or a user error; in that case I'd be happy about some advice. I tried with 3.8.1 and 4.4.2 (latest flatpak version); there is no difference. Both versions generate the following OpenCL error for local contrast:
```
48.3955 [dt_opencl_enqueue_kernel_2d] kernel 34 on device 0: CL_MEM_OBJECT_ALLOCATION_FAILURE
48.3955 [local laplacian cl] couldn't enqueue kernel! CL_MEM_OBJECT_ALLOCATION_FAILURE
48.6234 [pixelpipe_process_CL] [export] bilat ( 0/ 0) 4932x6533 scale=1.0000 --> ( 0/ 0) 4932x6533 scale=1.0000 couldn't run module on GPU, falling back to CPU
```
Thanks!
Darktable uses a downsampled version of the image for speed when zoomed out. This affects the results of any algorithm that uses the neighborhood around a pixel to determine its value, particularly when one of the algorithm's parameters is the size of the neighborhood (e.g. any time there is a "radius" slider). dt does try to compensate for that by scaling the radius, and it does so if you use the "bilateral" method in the local contrast module. But there isn't an explicit radius for the "local laplacians" method, just a varying number of wavelet levels depending on the overall resolution of the input image.
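To make that concrete, here is a minimal sketch of the kind of radius compensation described above, with hypothetical names rather than darktable's actual code:

```c
#include <math.h>
#include <stdio.h>

/* Hypothetical illustration: scale a user-facing radius (defined in
 * full-resolution pixels) by the current pipeline scale, so that a
 * preview rendered at e.g. 16% zoom uses a proportionally smaller
 * neighborhood. Names are illustrative, not darktable's API. */
static float scaled_radius(float user_radius, float pipe_scale)
{
  /* never let the effective radius collapse below one pixel */
  return fmaxf(1.0f, user_radius * pipe_scale);
}

int main(void)
{
  printf("radius at 100%%: %.1f\n", scaled_radius(50.0f, 1.0f));  /* 50.0 */
  printf("radius at  16%%: %.1f\n", scaled_radius(50.0f, 0.16f)); /*  8.0 */
  return 0;
}
```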
To be absolutely sure that what you see in darkroom view matches the exported output, you need to view at 100% and enable high quality on the export, so that both ways use all of the pixels of the image file at 1:1.
@ralfbrown Hi Ralf, thanks for the explanation!
I tried your recommendations but I'm still experiencing inconsistent output, even for 100% 1:1 crops.
To rule out any color profile issues I've exported the image without any processing (all images are now 100% 1:1 crops / screenshots, left export, right dt's preview):
The same image, but with local contrast applied:
settings:
Out of curiosity I took the bilateral grid mode for a spin and I'd say it produces a consistent output, even on heavy settings.
Very strange that 100% view doesn't match an un-resized (output dimensions set to 0x0) export for local laplacians. We'll need the image and sidecar to investigate further. Does this happen with OpenCL disabled? Do you get the OpenCL error you mentioned in your original post both in darkroom and while exporting?
> Very strange that 100% view doesn't match an un-resized (output dimensions set to 0x0) export for local laplacians. We'll need the image and sidecar to investigate further. Does this happen with OpenCL disabled? Do you get the OpenCL error you mentioned in your original post both in darkroom and while exporting?
Hi Ralf,
- Enabling or disabling OpenCL does not make a difference
- I get the error on export. I think that's because for the preview darktable uses the `bilat` mode:
```
9.8343 [pixelpipe_process_CL] [full] bilat ( 0/ 0) 857x1285 scale=0.1613 --> ( 0/ 0) 857x1285 scale=0.1613 cl input data to host
```
I've uploaded some samples here (the link expires in 1 week, just tell me if you need it after that and I'll upload it again): https://we.tl/t-iIY4hon0iB
There's a RAW file and a cropped version which shows the problem somewhat more prominently, at least when zooming in and out.
I noticed that the output and the preview start to look more alike as long as darktable's preview isn't cropped, i.e. at or below the fit-to-screen zoom level. I've made two short screencasts to highlight this:
Screencast from 23.08.2023 15:12:57.webm Screencast from 23.08.2023 15:14:06.webm
I hope this helps to narrow this down. Let me know if I can do anything else!
"bilat" is just the internal name of the module, it does both modes (local laplacians was a later addition).
I think I know what the issue is now - when you're zoomed in and viewing only a part of the image, only a part of the image gets processed. So the highest levels of the laplacian pyramid don't get info fed in from the area outside the viewport, or get skipped entirely if the viewport is small enough. Unfortunately, the fix for that, while simple, will have a major impact on editing performance (everything before the local contrast module will have to process the entire image).
@TurboGit what are your thoughts on having the module expand roi_in to cover the entire image in local laplacian mode if my analysis is correct? Perhaps add a checkbox for "high quality preview"?
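For illustration, a minimal sketch of the kind of region-of-interest expansion being proposed, using a simplified stand-in struct rather than darktable's actual `dt_iop_roi_t` and `modify_roi_in` callback (all names here are illustrative):

```c
#include <stdio.h>

/* Simplified stand-in for a pipeline region of interest; darktable's
 * real dt_iop_roi_t and modify_roi_in callback differ in detail. */
typedef struct roi_t
{
  int x, y;          /* offset of the region in scaled coordinates */
  int width, height; /* size of the region */
  float scale;       /* downscaling factor relative to full resolution */
} roi_t;

/* Expand the input ROI to the whole (scaled) image so that the local
 * laplacian pyramid sees data from outside the visible viewport. */
static void expand_roi_to_full_image(roi_t *roi_in, int full_width, int full_height)
{
  roi_in->x = 0;
  roi_in->y = 0;
  roi_in->width  = (int)(full_width * roi_in->scale + 0.5f);
  roi_in->height = (int)(full_height * roi_in->scale + 0.5f);
}

int main(void)
{
  roi_t roi = { 1200, 800, 857, 1285, 1.0f }; /* a zoomed-in viewport */
  expand_roi_to_full_image(&roi, 4932, 6533);
  printf("expanded roi: %dx%d at (%d,%d)\n", roi.width, roi.height, roi.x, roi.y);
  return 0;
}
```

This is exactly why the fix is costly: every module upstream of local contrast would have to produce that full-image input instead of just the viewport.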
@ralfbrown Thank you for your assessment!
If I may add my suggestions from a user perspective: if you add a high quality preview option, it would be very helpful if it could be toggled quickly, e.g. from the bottom bar where the buttons for over/underexposure, gamut checking, etc. reside:
In addition, it would be nice to have a small warning (maybe floating right above those buttons) when we hit a condition where a high quality preview would be preferable, something like "Warning: preview might be inaccurate, use HQ preview for 100% accuracy".
Finally, I thought a bit about the way the algorithm works, at least from what you explained above. Would it be possible (and make sense) to first downscale the picture and then feed it into the processing pipeline? Or even simpler, leave out every second, third... nth pixel to generate a preview?
For local laplacians at least, both downscaling and cropping (which is effectively what the region-of-interest processing does) affect the result. Downscaling by just dropping pixels is merely a much cruder form of what is currently being done. It would be pretty simple (but basically pointless) to add as an alternative rescaling algorithm to go with the bilinear, bicubic, and Lanczos you can select in global preferences. It's called nearest-neighbor if you ever want to look it up.
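For the record, a minimal sketch of that nearest-neighbor decimation (illustrative only, single-channel, not darktable code):

```c
#include <stdio.h>
#include <stdlib.h>

/* Nearest-neighbor decimation: keep every `step`-th pixel in each
 * dimension of a single-channel image. Caller frees the result. */
static float *decimate(const float *src, int w, int h, int step,
                       int *out_w, int *out_h)
{
  *out_w = w / step;
  *out_h = h / step;
  float *dst = malloc((size_t)(*out_w) * (size_t)(*out_h) * sizeof(float));
  if(!dst) return NULL;
  for(int y = 0; y < *out_h; y++)
    for(int x = 0; x < *out_w; x++)
      dst[y * (*out_w) + x] = src[(y * step) * w + (x * step)];
  return dst;
}

int main(void)
{
  enum { W = 8, H = 8 };
  float img[W * H];
  for(int i = 0; i < W * H; i++) img[i] = (float)i; /* simple gradient */
  int dw, dh;
  float *preview = decimate(img, W, H, 2, &dw, &dh);
  if(preview) printf("decimated to %dx%d, first pixel %.0f\n", dw, dh, preview[0]);
  free(preview);
  return 0;
}
```

Because it throws away pixels instead of averaging them, it aliases badly, which is why the existing bilinear/bicubic/Lanczos options are preferred.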
@ralfbrown I think your analysis is fully correct. Though I think we could probably use the downscaler module used for exports to do the final scaling, and implement full roi_in there.
Edit: we also have modules behaving differently depending on full data, and we have the fast pipe mode too, so this issue overall is a duplicate of many earlier issues.
Also, don't underestimate the effects of the downscaling algorithm.
> @TurboGit what are your thoughts on having the module expand roi_in to cover the entire image in local laplacian mode if my analysis is correct? Perhaps add a checkbox for "high quality preview"?
I'm really not sure; I have no strong opinion. Such an option feels appealing, but I fear the performance could be very bad. Let me check which modules come before local laplacian. Also, IIRC we have the same issue with the haze removal module.
Indeed, my fear is confirmed: local contrast sits very late in the pipe.
The fact that local contrast is so late in the pipe is what led me to suggest having a check-box, so that we don't take the big performance hit all the time.
Haze removal doesn't have a global dependence in the same way that local laplacians do - it computes box_min and box_max over regions with a radius of 9 pixels, so worst-case, roi_in would need to be expanded by 18 pixels in each dimension (I just looked, and its tiling_callback doesn't specify any overlap, which it probably should). The D&S dehaze preset uses a large radius, but D&S already expands its roi_in to account for that.
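To contrast this with the full-image dependence of local laplacians, here is a minimal sketch of the fixed border padding that suffices for a radius-9 box min/max (illustrative names, not darktable's tiling API):

```c
#include <stdio.h>

/* Pad a region of interest by a fixed filter radius, clamped to the
 * image bounds. For a radius-9 box min/max this grows the region by at
 * most 18 pixels per dimension, rather than requiring the full image. */
typedef struct { int x, y, width, height; } region_t;

static region_t pad_region(region_t r, int radius, int img_w, int img_h)
{
  region_t p;
  p.x = r.x - radius < 0 ? 0 : r.x - radius;
  p.y = r.y - radius < 0 ? 0 : r.y - radius;
  const int x2 = r.x + r.width  + radius > img_w ? img_w : r.x + r.width  + radius;
  const int y2 = r.y + r.height + radius > img_h ? img_h : r.y + r.height + radius;
  p.width  = x2 - p.x;
  p.height = y2 - p.y;
  return p;
}

int main(void)
{
  region_t viewport = { 1000, 1000, 857, 1285 }; /* a zoomed-in view */
  region_t padded = pad_region(viewport, 9, 4932, 6533);
  printf("padded: %dx%d at (%d,%d)\n", padded.width, padded.height, padded.x, padded.y);
  return 0;
}
```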
Hello,
I'm facing the same bug with the haze removal module. I understand why this is happening technically, but as a simple end user I can't tolerate different results between the darkroom and the exported JPG. There can be no room for chance here! Otherwise I can't trust the darkroom anymore, which is the core concept of darktable.
Moreover, according to @ralfbrown, the correction seems possible without a heavy performance cost.
Please, could this bug be analyzed and fixed in a future version?
By the way, thanks a lot to all the developers for this great software! :)
> The fact that local contrast is so late in the pipe is what led me to suggest having a check-box
Where would the check box be located? I would be more in favor of having it among the bottom action buttons in the darkroom than in the module itself. It could then also be used for haze removal, for example. A high quality preview would then trigger a full pipe rendering. But again, I have no strong opinion about this.
@TurboGit I started on this: full ROI processing until late in the pipe, using a button at the lower right side of the darkroom. Quite processing-heavy...
This issue has been marked as stale due to inactivity for the last 60 days. It will be automatically closed in 300 days if no update occurs. Please check if the master branch has fixed it and report again or close the issue.
Resolved by #15910.
> Resolved by #15910.
Thank you very much! :partying_face: