XRT
VITIS-5884 XGQ: Add clock throttling feature support on versal platforms
Adds clock throttling (scaling) feature support on Versal platforms, with all required changes in the tools, the user-space driver, and the XGQ driver. This PR enables the user to check the clock throttling (scaling) feature through the xbutil/xbmgmt tools' "cmc" report and also through XGQ's sysfs node.
Signed-off-by: Rajkumar Rampelli [email protected]
Problem solved by the commit
Adding clock scaling feature support in XRT
Bug / issue (if any) fixed, which PR introduced the bug, how it was discovered
VITIS-5870
How problem was solved, alternative solutions (if any) and why they were rejected
Added all required support in XRT to enable the clock scaling feature. The end user can read the clock scaling feature's default settings through the xbutil/xbmgmt tools' reports, and can configure clock throttling limits through XGQ's sysfs node.
Risks (if any) associated with the changes in the commit
NA
What has been tested and how, request additional testing if necessary
xbutil/xbmgmt reports, reading/configuring clock scaling feature limits.
Documentation impact (if any)
NA
Since the supporting changes are not yet ready in VMR, the results below were observed. I also tried with a temporary data set, filled in manually in the XGQ driver, in the clock_scaling opcode response.
Reading clock scaling default & override settings:

bash-4.2# cat /sys/bus/pci/devices/0000:d8:00.0/xgq_vmr.m.54525952/clk_scaling_stat_raw
HAS_CLOCK_THROTTLING:0
CLOCK_THROTTLING_ENABLED:0
POWER_SHUTDOWN_LIMIT:0
TEMP_SHUTDOWN_LIMIT:0
POWER_THROTTLING_LIMIT:0
TEMP_THROTTLING_LIMIT:0
POWER_THROTTLING_OVRD_LIMIT:0
TEMP_THROTTLING_OVRD_LIMIT:0
POWER_THROTTLING_OVRD_ENABLE:0
TEMP_THROTTLING_OVRD_ENABLE:0
CLOCK_THROTTLING_MODE:0
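The raw status node is a flat list of KEY:VALUE pairs. As a sketch only (the KEY:VALUE format is taken from the sample output in this PR; the parser itself is not part of the change), a small helper can turn that output into a dictionary for scripted checks:

```python
# Sketch: parse the clk_scaling_stat_raw output shown above into a dict.
# Assumes the whitespace-separated KEY:VALUE format from this PR's sample
# output; this helper is illustrative, not part of the XRT change itself.

def parse_clk_scaling_stat(raw: str) -> dict:
    """Parse whitespace/newline-separated KEY:VALUE pairs into {key: int}."""
    stats = {}
    for token in raw.split():
        key, _, value = token.partition(":")
        if value:
            stats[key] = int(value)
    return stats

sample = ("HAS_CLOCK_THROTTLING:1 CLOCK_THROTTLING_ENABLED:1 "
          "POWER_SHUTDOWN_LIMIT:69 TEMP_SHUTDOWN_LIMIT:100")
stats = parse_clk_scaling_stat(sample)
print(stats["POWER_SHUTDOWN_LIMIT"])  # -> 69
```

In practice the raw string would come from reading the clk_scaling_stat_raw sysfs node shown above.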
bash-4.2# xbmgmt examine -r cmc -d
1/1 [0000:d8:00.0] : xilinx_vck5000_gen4x8_qdma_base_1
CMC
  Heartbeat information unavailable
  Runtime Clock Scaling : Not supported
So I tried setting the clock scaling configuration manually in the XGQ driver to check how the output would look if VMR sent a proper response to the clock scaling opcode. Below is the result; the values are temporarily hard-coded in the XGQ driver, so don't take them as real readings.
Configuring clock scaling settings:
- Disable feature
- Enable feature
- Enable feature & set power limit to 60, temp limit to 50
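The exact write format of the clk_scaling_configure sysfs node is not shown in this PR, so the helper below is a purely hypothetical sketch of how the three scenarios above could be expressed, assuming a comma-separated "enable,power,temp" string:

```python
# Hypothetical sketch only: the actual write format of the
# clk_scaling_configure sysfs node is not documented in this PR.
# Assumed format: "<enable>,<power_limit>,<temp_limit>".

def build_clk_scaling_config(enable: bool,
                             power_limit: int = 0,
                             temp_limit: int = 0) -> str:
    """Compose the (assumed) string written to clk_scaling_configure."""
    return f"{int(enable)},{power_limit},{temp_limit}"

# The three scenarios above:
print(build_clk_scaling_config(False))         # disable feature -> "0,0,0"
print(build_clk_scaling_config(True))          # enable feature  -> "1,0,0"
print(build_clk_scaling_config(True, 60, 50))  # enable + limits -> "1,60,50"
```

The resulting string would then be echoed into the driver's clk_scaling_configure node; the real accepted syntax should be taken from the driver source.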
Build Passed!
retest this please
Build Failed! :(
retest this please
Build Passed!
Verified the PR on U2 board
I have verified the PR using a VMR.elf (local drop from Sandeep) which has clock scaling support. I am able to read the clock scaling default configuration during driver load time, and dmesg shows whether the clock scaling feature is supported/enabled. By default, clock scaling is not enabled.
[ 271.563056] xclmgmt 0000:d8:00.0: xgq_vmr.m.54525952 ffff95238c286010 xgq_vmr_probe: clock scaling feature is supported, and enable status: 0
Also, the override values are applied to the threshold limits, which is expected.
Enable & set power & temp overrides
dmesg log:
[Aug 8 22:17] xclmgmt 0000:d8:00.0: xgq_vmr.m.54525952 ffff9527a0ba9810 clk_scaling_configure_store: clock scaling feature is enabled
-bash-4.2# cat /sys/bus/pci/devices/0000:d8:00.0/xgq_vmr.m.54525952/clk_scaling_stat_raw
HAS_CLOCK_THROTTLING:1
CLOCK_THROTTLING_ENABLED:1
POWER_SHUTDOWN_LIMIT:69
TEMP_SHUTDOWN_LIMIT:100
POWER_THROTTLING_LIMIT:290
TEMP_THROTTLING_LIMIT:95
POWER_THROTTLING_OVRD_LIMIT:200
TEMP_THROTTLING_OVRD_LIMIT:80
POWER_THROTTLING_OVRD_ENABLE:200
TEMP_THROTTLING_OVRD_ENABLE:80
CLOCK_THROTTLING_MODE:2
Disable clock scaling and recheck the status using cmc report
dmesg log:
[Aug 8 22:18] xclmgmt 0000:d8:00.0: xgq_vmr.m.54525952 ffff9527a0ba9810 clk_scaling_configure_store: clock scaling feature is disabled
Build Passed!
Verified the PR using the latest qdma base-1 package taken from the 2022.2_daily_latest builds.
Flashable partitions running on FPGA
  Platform       : xilinx_vck5000_gen4x8_qdma_base_1
  SC Version     : 4.4.35
  Platform UUID  : 9A3B9518-4E5A-B131-5FC0-D76AFF4C8991
  Interface UUID : 527F6489-4668-77D5-E224-FDB2330ACB6B
Build Passed!
Hi @rozumx, could you please review this PR again? I have addressed your review comments. Thank you.
Hi @rajkumar-xilinx, as discussed offline, please address any required cleanup in the next PR.