--icc-force-contrast does not work with --vo=gpu-next
Config used:
icc-profile-auto
icc-force-contrast=inf

--vo=gpu:

--icc-force-contrast=inf should turn the BT.1886 gamma curve into a pure power 2.4 curve, but it doesn't with --vo=gpu-next:

--vf=format:gamma=gamma2.4 can be used as a workaround.
Log: output.txt
Latest mpv and libplacebo git master as of today. Tested on Linux with Xorg.
By the way, I still think pure power gamma 2.4 would be a better default for Rec. 709 content, as BT.1886 just looks too bright with most content on any display that doesn't have an extremely low black point.
Please remove the label "os:linux"; it applies to Windows as well. In addition, "gamma-factor" doesn't work.
Is bt.1886 treated like something in between sRGB and gamma 2.2 because a 1000:1 contrast is hard-coded into the spec's formula? That's definitely how it feels to me after experimenting.
Yes and no. libplacebo assumes 1000:1 contrast by default for SDR displays with unknown contrast information. If you use an ICC profile, it will instead take the effective contrast of the ICC profile's LUTs. (Note that --icc-force-contrast is currently ignored by --vo=gpu-next, this is a known limitation.)
In theory BT.1886 can be tuned to any contrast point you want, but the mpv integration does not expose all options.
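For reference, the BT.1886 parameterization can be sketched in a few lines (illustrative Python, not libplacebo code; the Lw/Lb values are arbitrary examples). As Lb goes to 0, i.e. infinite contrast, the black-lift term b vanishes and the curve degenerates into a pure power 2.4:

```python
def bt1886_eotf(V, Lw=100.0, Lb=0.1, gamma=2.4):
    """BT.1886 EOTF: maps a signal value V in [0, 1] to luminance in cd/m^2,
    given the display's white level Lw and black level Lb."""
    a = (Lw ** (1 / gamma) - Lb ** (1 / gamma)) ** gamma
    b = Lb ** (1 / gamma) / (Lw ** (1 / gamma) - Lb ** (1 / gamma))
    return a * max(V + b, 0.0) ** gamma

# 1000:1 contrast (the default assumption): black is lifted to Lb.
print(bt1886_eotf(0.0, Lw=100.0, Lb=0.1))   # ~0.1, the lifted black level
# Lb = 0 (infinite contrast): b = 0, a = Lw, i.e. pure power 2.4.
print(bt1886_eotf(0.5, Lw=100.0, Lb=0.0))   # equals 100 * 0.5 ** 2.4
```

This is why forcing the contrast to infinity is expected to produce a pure power curve: the whole display dependence sits in the a and b terms.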
If I understand correctly, the reason the bt.1886 EOTF depends on the display's minimum and maximum brightness is that dark areas should have similar contrast on displays with imperfect black (e.g. LCD) and perfect black (e.g. OLED), so that on an LCD it is as easy to see objects in a dark scene as on a CRT.
Perhaps, instead of applying a display-dependent bt.1886 EOTF, one could use a pure 2.4 gamma plus a shader that sharpens edges in dark areas, where the sharpening strength depends on the contrast difference between a display with perfect black and the display currently in use. I assume sharpening is one way to make objects easier to distinguish in dark areas.
Compared to bt.1886, this might make perceived colour hues more consistent across displays. On the other hand, if the author of a video has tuned it so that it looks good on various displays with a bt.1886 EOTF, then bt.1886 should be the better choice. I suspect most videos are from authors who don't bother with this.
I don't know whether my idea of using sharpening instead of display-dependent bt.1886 is good or bad (probably it's bad).
I'm not really sure what you mean by sharpening dark areas. You would apply pure 2.4, and then what exactly? What can a shader do? If you start processing the image after applying pure 2.4, you most probably end up applying another curve on top, which makes no sense. The values are not linear at that point, so a shader is more likely to make things look worse than better.
I think that after applying the 2.4 gamma, the values are linear. The sharpening shader would modify the linear values using neighbouring pixels, without changing the average brightness in smooth regions. The linear values would then be mapped to values for the display. Since the display cannot (or not necessarily) reproduce the darkest values, a toe (like in common tone-mapping curves) would squash them together near the darkest black the display supports. Clipping dark values instead of using a toe should also work, but can lose information (in addition to the loss caused by quantisation).
I made an image with GIMP to illustrate my idea:
Top: image with perfect black, i.e. the ideal image.
Middle: image whose values are clamped to an elevated black level, i.e. identical to the ideal image wherever the values are sufficiently bright. Details are no longer visible in dark areas because of the imperfect black.
Bottom: image where dark areas are sharpened and then clamped to an elevated black level, i.e. my suggested idea. The details remain visible despite the imperfect black, unlike in the middle image.
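The toe-versus-clip part of this idea can be sketched as follows (hypothetical helper functions, not actual mpv shaders; the black and knee values are made-up examples):

```python
def clip_to_black(L, black=0.05):
    # Hard clip: everything below the display's black level collapses to it,
    # so distinct dark values become indistinguishable.
    return max(L, black)

def toe_to_black(L, black=0.05, knee=0.15):
    # Soft toe: values below `knee` are compressed smoothly into the
    # [black, knee] range instead of being discarded, preserving ordering.
    if L >= knee:
        return L
    t = L / knee  # position within the toe region, 0..1
    return black + (knee - black) * t * t  # quadratic ease-in
```

With the clip, two different dark values like 0.01 and 0.02 both end up at 0.05; with the toe they stay distinct, which is the "information loss" distinction made above.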
After applying the 2.4 gamma curve, the values remain non-linear. Gamma correction is specifically designed to match the non-linear way humans perceive light, so the data is adjusted accordingly. The idea that these values become linear after gamma correction is incorrect.
Because the data stays non-linear, applying sharpening in this space could introduce artifacts or inconsistent results across different displays. Just knowing the brightest and darkest points isn't sufficient for accurate image adjustment; mid-tones, color accuracy, and overall contrast are equally critical. A shader applied in this context would likely disrupt those aspects, potentially leading to worse outcomes on certain displays.
This is what you might end up with IMO: https://github.com/user-attachments/assets/66fc817c-9314-4926-92c9-cd625b29a5f0
