ispd19_test6: global_route increases runtime from 50min to 2h45 compared to using route_guides
Describe the bug
This is a follow up to https://github.com/The-OpenROAD-Project/OpenROAD/issues/1888.
I've been running ispd19_test6 without read_guides and saw the runtime go from 50min to 2h45. Matt suggested running without M1, but that only reduced the runtime from 2h45 to 2h15.
global_route seems to severely overestimate the routing resources available for the std cell area.
With power pins on M1/M2 (M1 routing vertical, M2 horizontal), even at the low std cell utilization there is really no M1/M2 routing resource available, only room for pin access. Credit to the detailed router for being able to resolve this, but at the expense of 2.5x the runtime compared to the route guides (from a different tool).
It should also be noted that in May 2022 with route guides, the drt finished in 30min. See https://github.com/The-OpenROAD-Project/OpenROAD/issues/3535
Expected Behavior
global_route should block all of M1 and nearly all of M2 for anything but pin access.
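Until groute models this itself, per-layer adjustments could approximate that behavior; a sketch with illustrative values (I have not verified on this testcase that an adjustment of 1.0 is accepted):

# block M1 completely for signal routing (pin access is still done by drt)
set_global_routing_layer_adjustment Metal1 1.0
# leave only a small fraction of M2 for short jogs down to the M1 pins
set_global_routing_layer_adjustment Metal2 0.9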
log_groute_M1M9_Jun2023.gz log_groute_M2M9_Jun2023.gz
Environment
Jun 2023 OR build (--local) on M1 16GB macOS
To Reproduce
read_lef ispd19_test6/ispd19_test6.input.lef.gz
read_def ispd19_test6/ispd19_test6.input.def.gz
set_thread_count 8
#set_routing_layers -signal Metal2-Metal9 -clock Metal2-Metal9
#read_guides ispd19_test6/ispd19_test6.guide
global_route -verbose
detailed_route -output_drc 5_route_drc.rpt -verbose 1
Relevant log output
No response
Screenshots
No response
Additional Context
No response
Can you try
set_global_routing_layer_adjustment Metal2-Metal9 0.5
to see if it helps?
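As I understand the command, the adjustment is the fraction of a layer's nominal capacity that groute subtracts up front, so 0.5 halves what it considers available on Metal2-Metal9. A tiny illustration of that semantics (hypothetical helper, not OpenROAD API):

# hypothetical helper just to illustrate what the adjustment value means
proc adjusted_capacity {nominal adjustment} {
    expr {int($nominal * (1.0 - $adjustment))}
}
puts [adjusted_capacity 20 0.5]  ;# a 20-track gcell edge is treated as 10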
That did help, 2h50 -> 0h46 !!
[INFO DRT-0267] cpu time = 05:19:51, elapsed time = 00:46:20, memory = 6077.78 (MB), peak = 7793.19 (MB)
log_groute_M2M9_0.5_Jun2023.gz
Revisiting the May logs: back then, runtimes were 20min with route guides and 30min with global route, both leaving one macro pin access violation.
Feature request: it would be very helpful if the log contained the commands that were issued. Since it currently doesn't, here's the OR script I ran:
read_lef ispd19_test6/ispd19_test6.input.lef.gz
read_def ispd19_test6/ispd19_test6.input.def.gz
set_thread_count 8
set_routing_layers -signal Metal2-Metal9 -clock Metal2-Metal9
set_global_routing_layer_adjustment Metal2-Metal9 0.5
global_route -verbose
#read_guides ispd19_test6/ispd19_test6.guide
detailed_route -output_drc 5_route_drc.rpt -verbose 1
write_db route.db
Would you check if the fix for #3535 resolves this?
Why would it? I'm not using read_guides. You stated in https://github.com/The-OpenROAD-Project/OpenROAD/issues/3535:
The problem originates in GlobalRouter::updateDbCongestionFromGuides() which we only call when guides are read from a file. This will affect ispd benchmarks but not ORFS flows.
This bug report has nothing to do with reading route guides.
It's all about groute being way too optimistic about the routing resources in M1/M2, in light of std cells with horizontal M1/M2 power pins, vertical M1 routing, and M2 really only useful for short wires connecting vertical M3 down to the M1 pins.
Triggered by the (now closed) discussion about estimated congestion and the recent RUDY improvements (https://github.com/The-OpenROAD-Project/OpenROAD/discussions/4372), I've revisited this testcase using OpenROAD v2.0-11827-geee2ebfc8 from Jan 12 2024.
Looking at the routing congestion, the 3rd-party route guide congestion and the groute congestion look very similar; great.
But the fundamental problem of groute has not yet been addressed. It completely ignores the fact that for a design with std cells with M2 power rails, there are no routing resources in M1 or M2.
All the routing should be done in M3/4 and M5/6.
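One way to state that constraint directly in the script, assuming the same deck (a sketch; I have not benchmarked it):

# keep signal/clock routing off M1/M2 entirely; drt still does pin access
set_routing_layers -signal Metal3-Metal9 -clock Metal3-Metal9
global_route -verbose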
[INFO GRT-0096] Final congestion report:
Layer      Resource     Demand   Usage (%)   Max H / Max V / Total Overflow
---------------------------------------------------------------------------
Metal1      4325352     327236       7.57%        0 /     0 /     0
Metal2      6412381    1684746      26.27%        0 /     0 /     0
Metal3      7418859    1459620      19.67%        0 /     0 /     0
Metal4      7423216     490442       6.61%        0 /     0 /     0
Metal5      4807823     158454       3.30%        0 /     0 /     0
Metal6      3316645      87470       2.64%        0 /     0 /     0
Metal7      5444456     119793       2.20%        0 /     0 /     0
Metal8      5578624      31906       0.57%        0 /     0 /     0
Metal9       799092       2027       0.25%        0 /     0 /     0
---------------------------------------------------------------------------
Total      45526448    4361694       9.58%        0 /     0 /     0
Let's see whether detailed route will be able to route the design as it did before, but I suspect it's wasting a lot of time going up in the metal stack compared to the initial guides.
I also get a lot of the following warnings ...
[INFO DRT-0169] Post process guides.
..
[WARNING DRT-0225] pin1 867 pin not visited, fall back to feedthrough mode.
[WARNING DRT-0225] net135949 6 pin not visited, fall back to feedthrough mode.
[WARNING DRT-0225] net145841 6 pin not visited, fall back to feedthrough mode.
[WARNING DRT-0225] net38809 3 pin not visited, fall back to feedthrough mode.
[WARNING DRT-0225] net4417 1 pin not visited, fall back to feedthrough mode.
[WARNING DRT-0225] net10953 2 pin not visited, fall back to feedthrough mode.
..
[WARNING DRT-0225] message limit reached, this message will no longer print
...