ddcutil regression: setvcp no longer functions (bisected)
Hello,
Using setvcp on my monitor fails after upgrading to Debian 13, which brought me from ddcutil 1.4.1 to 2.2.1.
I performed git bisect and found the first bad commit to be 790c8e8530b0aff891689cfb090b5cbd8839c0f1.
Reverting this commit with a simple conflict resolution restores functionality.
What debugging information would maintainers like to see?
commit dc55ec527662a1b82b6bd21851ef08ea46becd56 (HEAD -> master)
Author: Aaron Opfer <[email protected]>
Date: Fri Aug 15 19:04:51 2025 -0500
Revert "ddc_get_nontable_vcp_value(): use MAX_DDC_PACKET_SIZE for maximum bytes to read"
This reverts commit 790c8e8530b0aff891689cfb090b5cbd8839c0f1.
diff --git a/src/ddc/ddc_vcp.c b/src/ddc/ddc_vcp.c
index 97eb9230..cebc86cb 100644
--- a/src/ddc/ddc_vcp.c
+++ b/src/ddc/ddc_vcp.c
@@ -223,8 +223,13 @@ ddc_get_nontable_vcp_value(
Byte expected_response_type = DDC_PACKET_TYPE_QUERY_VCP_RESPONSE;
Byte expected_subtype = feature_code;
-
- int max_read_bytes = MAX_DDC_PACKET_SIZE;
+ // int max_read_bytes = 20; // actually 3 + 8 + 1, or is it 2 + 8 + 1?
+ // expected response size:
+ // (src addr == x6e) (length) (response contents) (checkbyte)
+ // 1 + 1 + 8 + 1 == 11
+ // alternative is DDC Null Response, which is shorter
+ // N. response does not include initial destination address byte of DDC/CI spec
+ int max_read_bytes = 11;
// DBGTRC_NOPREFIX(debug, TRACE_GROUP,
// "before ddc_write_read_with_retry(): communication flags: %s",
So I did a little bit more experimentation: it seems like setting max_read_bytes to 38 or to 40 both work, but setting it to exactly 39 (the value of MAX_DDC_PACKET_SIZE) does not.
Thank you for the detailed report. I have some questions.
Background: It is desirable to read more than the expected number of bytes from the I2C bus to aid in diagnosing invalid DDC response packets, e.g. doubled bytes in the response.
How does the failure manifest itself? Invalid response value, ddcutil error message, segfault, or some other way?
Does the failure occur with a specific feature or multiple features?
What is the exact command used that manifests the error?
Please execute sudo ddcutil interrogate and submit the output as an attachment.
Thank you.
$ ddcutil --bus 7 setvcp 10 1
When I perform this command on the Debian 13 ddcutil 2.2.1, nothing happens other than this output:
(set_connector_for_businfo_using_edid) Failed to find connector name for /dev/i2c-7 using EDID 0x55c0ff0b3784
Failed to find connector name for /dev/i2c-7, set_connector_for_businfo_using_edid at line 1236 in file i2c_bus_core.c.
I2C_Bus_Info at: 0x55c0ff0a9330
Flags: I2C_BUS_EXISTS | I2C_BUS_ACCESSIBLE | I2C_BUS_PROBED | I2C_BUS_X50_EDID
Bus /dev/i2c-7 found: true
Bus /dev/i2c-7 probed: true
errno for open: OK(0): success
drm_connector_found_by: DRM_CONNECTOR_NOT_CHECKED (0)
last_checked_asleep: false
Display connectors reported by /sys:
When I perform this command on the ddcutil built from source with my patch, there is no output at all, to stderr or stdout, and the command works.
When I perform this command on the UNPATCHED ddcutil from git, there is no output at all, but the command does NOT work.
ddcutil interrogate output is attached; this was run using my patched ddcutil built from source.
Ah, I should mention that this problem only affects one of my two monitors, the BenQ. The ASUS monitor works fine with setvcp.
Sorry for leaving all of my research splayed across a billion different comments 🙏
First, I want you to know that I don't expect to have much time for ddcutil in the coming week. I do regard this as a significant issue however, so let me apologize in advance if I am not very responsive for the next several days.
You're using the proprietary nvidia driver with DRM mode setting disabled. The "Failed to find connector name" message in 2.2.1 existed to draw loud attention to the fact that the expected /sys/class/drm nodes do not exist. That terminal message is no longer issued in 2.2.2-dev, but you will see a log file entry. The lack of /sys/class/drm nodes becomes critical in libddcutil, where they are used for detecting display connection and disconnection. For command line ddcutil the only impact is on some minor performance optimizations.
The function you've patched, ddc_get_nontable_vcp_value(), is used throughout ddcutil. If it's not working properly, no command that performs DDC communication will work.
Please perform tests using the 2.2.2-dev branch without your patch applied.
Using option --verbose on setvcp will provide more information on what is happening.
There are two ways to invoke driver i2c-dev, one using ioctl io and one using file io. They actually use different entry points into the nvidia driver, and it has happened that one works but the other doesn't. By default, ddcutil uses ioctl io. Try using option --use-file-io. Does it make a difference?
When setvcp fails, what do you see in the system log (journalctl -b -t ddcutil)?
Does specifying option --verify or --no-verify make a difference?
Finally, try the following command:
ddcutil --bus 7 setvcp 10 1 --verbose --trcfrom app_set_vcp_value --syslog debug
Please submit the output, as well as the output from journalctl -b -t ddcutil.
Thank you.
I have time again to devote to ddcutil. Please submit the output of the tests I outlined and I'll try to figure out what's going on.
I'm no longer able to reproduce this bug, as I no longer have an NVIDIA graphics card.