
mpl2: add support for array of interconnected macros

Open AcKoucher opened this issue 1 year ago • 3 comments

This draft shows the approach so far, now that the mixed-leaves function has been refactored.

What is being done here:

  1. Adding another classification for single-macro clusters, based on interconnection, during mixed-leaves splitting.
  2. Using this classification to merge macro clusters (this is prioritized over merging due to the same connection signature).
  3. Adding a tight-packing tiling computation for arrays of interconnected macros (a sketch follows below).

With this we're able to generate the correct shape for mock-array. The next step is to centralize it.
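
For reviewers, the tight-packing computation is conceptually along these lines (a minimal Python sketch of the idea only; the real implementation is C++ inside mpl2, and the function name here is made up):

def tight_packing_tilings(num_macros, macro_w, macro_h):
    # For an array of identical interconnected macros, only allow full
    # grids (rows * cols == num_macros) with the macros abutted, so the
    # resulting cluster tiling has no whitespace between macros.
    tilings = []
    for rows in range(1, num_macros + 1):
        if num_macros % rows == 0:
            cols = num_macros // rows
            tilings.append((cols * macro_w, rows * macro_h))
    return tilings

# e.g. an 8x8 mock-array of 64 identical macros
print(tight_packing_tilings(64, 10.0, 12.0))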

[Two screenshots from 2024-01-16 attached.]

AcKoucher avatar Jan 16 '24 22:01 AcKoucher

@chevillotn Can you provide some more information on your test and DUT? It is hard to tell from this exactly what you are trying to do. If you are yielding on a trigger and that is where the simulation seems to "hang", then it is possible the trigger is never firing.

leftink avatar Jan 11 '16 17:01 leftink

Well, no, that's not the problem. The simulation never starts, i.e. it is stuck at 0ns, so there is no simulation at all. This happens when I add one more trigger on a signal I know is toggling. If I comment out this newly added Python line then the simulation works. I would tend to think the cocotb and Questa schedulers are independent, aren't they? So why would adding some more Python code block Questa completely? Looking on the internet about __select_nocancel I found this: http://stackoverflow.com/questions/19738300/what-is-the-issue-of-select-using-so-much-cpu-power - not sure it is relevant here. Is this call stack showing some exchange between the two schedulers? How could I force output of cocotb log messages? At the moment I get no cocotb log messages whatsoever.

chevillotn avatar Jan 12 '16 09:01 chevillotn

@chevillotn When you "yield" on a signal, i.e. yield RisingEdge(signal_hdl), the Python scheduler requests to be notified by the simulator when the "Rising Edge" event fires in the simulator for that signal. The Python scheduler will then wait until that event occurs before returning control.

To get some debug information, add COCOTB_LOG_LEVEL=DEBUG to your make command and then you should see some debug logging.

Another thing you can do that will point to an issue with how you are triggering is to "fork" that extra yield off so it won't block the rest of your test. While this isn't a solution, it will show that the issue is the fact that the Trigger is never firing.

For example, if your code is something like:

@cocotb.test()
def example_test(dut):
    ...
    yield RisingEdge(dut.clk)  # Assume this yield is what hangs everything up
    ...

Then you could do the following as a test:

@cocotb.coroutine
def example_RE_fork(signal):
    yield RisingEdge(signal)

@cocotb.test()
def example_test(dut):
    ...
    frk = cocotb.fork(example_RE_fork(dut.clk))  # Rest of simulation will continue
    ...
    yield Join(frk)  # This line will block until the forked coroutine has completed
    ...

As I said, this is not the solution, but it will show that cocotb is still running; you should look into what you are yielding on and verify that the Trigger should definitely be firing.

leftink avatar Jan 12 '16 15:01 leftink

I have already tried enabling debug by passing COCOTB_LOG_LEVEL=DEBUG, and all my yields on RisingEdge or FallingEdge are done through a fork. As said, the simulation starts and stays at 0ns, hence there cannot be any edge at all... However, there should be messages from cocotb in debug mode, but they are not dumped, and the simulator is blocked, i.e. I have to kill it completely from the command line. Hence my question about forcing the dump to happen... flush maybe?

chevillotn avatar Jan 12 '16 16:01 chevillotn

@chevillotn At this point without actually seeing your code (python and vhdl) it is hard to say. Is it possible for you to attach it so we can look at what you are trying to do?

leftink avatar Jan 12 '16 16:01 leftink

@chevillotn from the output I'm not sure that it is stuck in any cocotb code. Seems to be in the TCL interpreter and questa. Can you share any output, for instance the part where it prints the cocotb version number?

stuarthodgson avatar Jan 12 '16 18:01 stuarthodgson

Well, I know it's not easy to really understand what I'm trying to describe, but Questa is completely stuck when I push the start-simulation button... and the log is not dumped... so I have nothing to show... but I will try more.

chevillotn avatar Jan 13 '16 08:01 chevillotn

OK, so what I've done is put a raise_error in decorators.py line 194, in the send function, so the test never starts. Using my faulty Python test I manage to start the simulation and it stops fine in the send function:

#      0.00ns INFO     cocotb.gpi                                  gpi_embed.c:244  in embed_sim_init                  Running on ModelSim for Questa-64 version 10.2c_5 2013.11
#      0.00ns INFO     cocotb.gpi                                  gpi_embed.c:245  in embed_sim_init                  Python interpreter initialised and cocotb loaded!
#      0.00ns INFO     cocotb                                      __init__.py:112  in _initialise_testbench           Unable to determine Cocotb version from Unknown
#      0.00ns INFO     cocotb                                      __init__.py:131  in _initialise_testbench           Seeding Python random module with 1452675158
#      0.00ns DEBUG    cocotb.gpi                                GpiCommon.cpp:212  in gpi_get_root_handle             Looking for root handle 'fpga_tb_v' over 1 impls
#      0.00ns DEBUG    cocotb.gpi                                GpiCommon.cpp:220  in gpi_get_root_handle             Got a Root handle (fpga_tb_v) back from VPI
#      0.00ns DEBUG    cocotb.gpi                                GpiCommon.cpp:50   in check_and_store                 Checking fpga_tb_v exists
#      0.00ns DEBUG    cocotb.fpga_tb_v                              handle.py:90   in __init__                        Created
#      0.00ns INFO     cocotb.regression                         regression.py:161  in initialise                      Found test fpga_tb.fpga_test
#      0.00ns INFO     cocotb.regression                         regression.py:262  in execute                         Running test 1/1: fpga_test
#      0.00ns INFO     ..oroutine.fpga_test.0x7fffeb3692d0       decorators.py:189  in send                            Starting test: "fpga_test"
#                                                                                                                      Description: 
#                                                                                                                          Description of testbench, TBD
#                                                                                                                          
#      0.00ns DEBUG    ..oroutine.fpga_test.0x7fffeb3692d0       decorators.py:193  in send                            Sending trigger None
#      0.00ns ERROR    ..oroutine.fpga_test.0x7fffeb3692d0           result.py:51   in raise_error                     DBG exit

I will track down in which function I get stuck... that should help a little bit.

chevillotn avatar Jan 13 '16 08:01 chevillotn

If I comment out the gpi_function call in the following:

int GpiCbHdl::run_callback(void) {
    ...
    //this->gpi_function(m_cb_data);
    ...
}

then the simulation is functional. Of course nothing is really generated... but something's wrong in the callbacks.

chevillotn avatar Jan 13 '16 10:01 chevillotn

So I have enabled scheduler profiling and debug. I have forked a class that yields an Edge on a clock signal. Looking at the log, it looks like the scheduler is going in a loop and does not advance... The clock signal is ttc_320_clk.

(arguments before the questa call: COCOTB_SCHEDULER_DEBUG=1 COCOTB_ENABLE_PROFILING=1 COCOTB_LOG_LEVEL=DEBUG)

#      0.00ns DEBUG    cocotb.gpi                                 GpiCbHdl.cpp:128  in run_callback                    Generic run_callback
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:243  in react                           Trigger fired: _ReadWrite(readwritesync)
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:263  in react                           Writing cached signal updates
#      0.00ns DEBUG    cocotb.gpi                                 GpiCbHdl.cpp:134  in run_callback                    Generic run_callback done
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:243  in react                           Trigger fired: _Edge(ttc_320_clk)
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:322  in react                           1 pending coroutines for event _Edge(ttc_320_clk)
#                                                                                                                       _start
#      0.00ns DEBUG    ..b.coroutine._start.0x7fffe8698990        scheduler.py:460  in schedule                        Scheduling with _Edge(ttc_320_clk)
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:466  in schedule                        Coroutine _start yielded _Edge(ttc_320_clk) (mode 1)
# DBG1: handle=60280944 callback=<bound method Scheduler.react of <cocotb.scheduler.Scheduler object at 0x7fffe8afcf10>>
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:341  in react                           Scheduled coroutine _start
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:354  in react                           All coroutines scheduled, handing control back to simulator
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:243  in react                           Trigger fired: _Edge(ttc_320_clk)
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:322  in react                           1 pending coroutines for event _Edge(ttc_320_clk)
#                                                                                                                       _start
#      0.00ns DEBUG    ..b.coroutine._start.0x7fffe8698990        scheduler.py:460  in schedule                        Scheduling with _Edge(ttc_320_clk)
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:466  in schedule                        Coroutine _start yielded _Edge(ttc_320_clk) (mode 1)
# DBG1: handle=60280944 callback=<bound method Scheduler.react of <cocotb.scheduler.Scheduler object at 0x7fffe8afcf10>>
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:341  in react                           Scheduled coroutine _start
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:354  in react                           All coroutines scheduled, handing control back to simulator
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:243  in react                           Trigger fired: _Edge(ttc_320_clk)
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:322  in react                           1 pending coroutines for event _Edge(ttc_320_clk)
#                                                                                                                       _start
#      0.00ns DEBUG    ..b.coroutine._start.0x7fffe8698990        scheduler.py:460  in schedule                        Scheduling with _Edge(ttc_320_clk)
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:466  in schedule                        Coroutine _start yielded _Edge(ttc_320_clk) (mode 1)
# DBG1: handle=60280944 callback=<bound method Scheduler.react of <cocotb.scheduler.Scheduler object at 0x7fffe8afcf10>>
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:341  in react                           Scheduled coroutine _start
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:354  in react                           All coroutines scheduled, handing control back to simulator

The code for the class using the Edge is:

class TBWire(object):
    def __init__(self, signal_in, signal_out):
        self.signal_in = signal_in
        self.signal_out = signal_out
        self.log = SimLog("cocotb.%s.%s.%s" % (self.__class__.__name__, self.signal_in.name, self.signal_out.name))
        self.log.info("Starting TBWire: source: %s destination: %s" % (self.signal_in, self.signal_out))

    @cocotb.coroutine
    def _start(self):
        while True:
            yield Edge(self.signal_in)
            value = self.signal_in.value
            self.signal_out <= value

    @cocotb.coroutine
    def start(self):
        return cocotb.fork(self._start())

The signal ttc_320_clk is generated by a fork of cocotb's Clock.

chevillotn avatar Jan 13 '16 15:01 chevillotn

Here is the narrowed-down version of my test code which hangs:

#---------------------------------------------------------
#--! @file fpga_tb.py
#--! @brief Module fpga testbench
#---------------------------------------------------------

import cocotb
from cocotb.triggers import Timer, RisingEdge, FallingEdge, Edge
from cocotb.result import TestFailure
from cocotb.clock import Clock
from cocotb.log import SimLog

class TBClockGenerator(Clock):
    def __init__(self, signal, frequency, name="Clock"):
        # Period for cocotb (1/f seconds converted to ps)
        period = (1/frequency)/1e-12
        Clock.__init__(self, signal, period)
        self.name = name

    @cocotb.coroutine
    def start(self):
        self.log.info("Starting clock \"%s\"" % (self.name))
        return cocotb.fork(Clock.start(self))

class TBWire(object):
    def __init__(self, signal_in, signal_out):
        self.signal_in = signal_in
        self.signal_out = signal_out
        self.log = SimLog("cocotb.%s.%s.%s" % (self.__class__.__name__, self.signal_in.name, self.signal_out.name))
        self.log.info("Starting TBWire: source: %s destination: %s" % (self.signal_in, self.signal_out))

    @cocotb.coroutine
    def _start(self):
        while True:
            self.log.info("TBWire: signal_in=%s signal_out=%s (before yield)" % (self.signal_in, self.signal_out))
            yield Edge(self.signal_in)
            self.log.info("TBWire: signal_in=%s signal_out=%s (after yield)" % (self.signal_in, self.signal_out))
            value = self.signal_in.value
            self.signal_out <= value

    @cocotb.coroutine
    def start(self):
        return cocotb.fork(self._start())

class test_edge(object):
    def __init__(self, signal):
        self.signal = signal
        self.log = SimLog("cocotb.%s" % (self.__class__.__name__))
        self.log.info("Starting test_edge: signal: %s" % (self.signal))

    @cocotb.coroutine
    def _start(self):
        while True:
            yield FallingEdge(self.signal)
            self.log.info("    test_edge: falling edge: signal=%s" % (self.signal))

    @cocotb.coroutine
    def start(self):
        return cocotb.fork(self._start())

@cocotb.test()
def fpga_test(dut):
    """
    Description of testbench, TBD
    """

    TBClockGenerator(dut.ttc_320_clk, 320000000.0, "ttc_320_clk: 320MHz").start()

    TBWire(dut.ttc_320_clk, dut.lli_istage_ltdb_data_st_xcvr_rx_320_clk_a[0]).start()

    test_edge(dut.ttc_320_clk).start()

    yield Timer((3000e-6)/1e-12)  # 3 ms expressed in ps

chevillotn avatar Jan 13 '16 16:01 chevillotn

If test_edge is commented out then it works. If it is not commented out, I get this in the log:

#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:243  in react                           Trigger fired: _Edge(ttc_320_clk)
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:322  in react                           1 pending coroutines for event _Edge(ttc_320_clk)
#                                                                                                                       _start
#      0.00ns DEBUG    ..b.coroutine._start.0x7fffeb37b5d0        scheduler.py:460  in schedule                        Scheduling with _Edge(ttc_320_clk)
#      0.00ns INFO     ..ltdb_data_st_xcvr_rx_320_clk_a[0]          fpga_tb.py:36   in _start                          TBWire: signal_in=ttc_320_clk @0x30799b0 signal_out=lli_istage_ltdb_data_st_xcvr_rx_320_clk_a[0] @0x3972410 (after yield)
#      0.00ns INFO     ..ltdb_data_st_xcvr_rx_320_clk_a[0]          fpga_tb.py:34   in _start                          TBWire: signal_in=ttc_320_clk @0x30799b0 signal_out=lli_istage_ltdb_data_st_xcvr_rx_320_clk_a[0] @0x3972410 (before yield)
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:466  in schedule                        Coroutine _start yielded _Edge(ttc_320_clk) (mode 1)
# DBG1: handle=50829744 callback=<bound method Scheduler.react of <cocotb.scheduler.Scheduler object at 0x7fffeb3a2f10>>
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:341  in react                           Scheduled coroutine _start
#      0.00ns DEBUG    cocotb.scheduler                           scheduler.py:354  in react                           All coroutines scheduled, handing control back to simulator
I would expect that there are 2 pending coroutines here? Maybe not...

TBWire uses Edge as its trigger and test_edge uses FallingEdge.
If I use Edge for the test_edge class, there is no problem, i.e. I have 2 pending coroutines.
If I use RisingEdge for test_edge, I also have the problem, but I get alternating:
Trigger fired: _Edge(ttc_320_clk)
Trigger fired: _RisingEdge(ttc_320_clk)
...
still the time never advances...

Summary:
fork of Edge on Clock + fork on FallingEdge on Clock -> endless loop, time never advances
fork of Edge on Clock + fork on RisingEdge on Clock -> endless loop, time never advances
fork of Edge on Clock + fork on Edge on Clock -> working

chevillotn avatar Jan 13 '16 17:01 chevillotn

@chevillotn I have managed to create a test that duplicates the issue you are seeing. I haven't had a chance to dig in to see what the issue is, but from the logs I was able to capture, it looks like an issue with QuestaSim and their VPI implementation.

What I saw is that if multiple forked coroutines yielded triggers on the same signal for the same edge, all callbacks fired as expected and time progressed normally. But if the coroutines yielded on the same signal for different edges, i.e. one yielding on RisingEdge and the other on FallingEdge, then one of the Triggers would continue to fire immediately, and only once a single coroutine was left yielding on an edge would time progress.

So in your example everything hangs because you have two coroutines constantly re-scheduling Triggers for different edges, and one of those Triggers fires immediately.

It looks like the issue does not affect the FLI implementation, and a VPI test using IUS seemed to work correctly as well. You can see this in the attached logs.
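
For reference, the failing pattern boils down to something like this (a sketch of the shape of the test, not the exact code on my branch; the signal and timing values are made up):

import cocotb
from cocotb.clock import Clock
from cocotb.triggers import RisingEdge, FallingEdge, Timer

@cocotb.coroutine
def watch_rising(signal):
    while True:
        yield RisingEdge(signal)

@cocotb.coroutine
def watch_falling(signal):
    while True:
        yield FallingEdge(signal)

@cocotb.test()
def mixed_edge_test(dut):
    """Two forked coroutines on the same signal but different edges."""
    cocotb.fork(Clock(dut.clk, 10).start())
    cocotb.fork(watch_rising(dut.clk))    # RisingEdge on dut.clk
    cocotb.fork(watch_falling(dut.clk))   # FallingEdge on the same signal
    yield Timer(1000)  # never reached on affected Questa versions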

questasim_vhdl.txt ius_verilog.txt questasim_verilog.txt

@stuarthodgson I have my simple tests that show the problem and produced the logs above on my branch leftink/cocotb@8df653c7b5049bc44f4746d07ec8488e676c6397. While IUS seemed to work ok, could you check some of the simulators you guys have access to?

leftink avatar Jan 15 '16 02:01 leftink

IUS and Aldec work as expected; I'll need to download VCS and check that.

I'm inclined to write a small test program using VPI directly and supply that for you to try on Questa, since I do not have access. This would rule out cocotb as the cause and allow you to raise a bug with the vendor.

Might need a couple of days to find the time though.

stuarthodgson avatar Jan 15 '16 21:01 stuarthodgson

@stuarthodgson I had a similar thought on writing a simple VPI application to try. I figure if nothing else it gives me something to provide to Mentor for debugging. I may have some cycles to try this, but you could probably provide me something quicker.

I have posed the issue to Mentor to see if they are aware of a problem in this area.

leftink avatar Jan 15 '16 22:01 leftink

Thanks for your efforts; it looks like you've identified the same problem as I did. In the meantime I think I will implement Edge only and check the value of the signal, so I can make it behave like RisingEdge or FallingEdge. I also thought of trying a more recent version of Questa, i.e. 10.4b, but I have issues compiling the cocotb library. I will be happy to try the VPI test on Questa whenever it is available.

chevillotn avatar Jan 18 '16 08:01 chevillotn

@chevillotn Questa 10.4b is what I used to reproduce the problem so a later version won't help.

leftink avatar Jan 18 '16 13:01 leftink

I see... it was worth checking anyway. I'm using Edge only now, with no mixing of different edge types, as a workaround.

chevillotn avatar Jan 18 '16 13:01 chevillotn

@stuarthodgson I was able to create a simple VPI application that demonstrates this problem. It looks to be an issue with "removing" the callback and then "registering" the callback again within the callback itself. Somehow that causes the callback to fire immediately one more time. However, if you have more than one callback doing this, they seem to trigger each other and you get into this endless loop.

I have provided the example to Mentor and will let you know what they say.

leftink avatar Jan 22 '16 18:01 leftink

Hi Stuart, have you got any feedback from Mentor about this issue?

chevillotn avatar Mar 08 '16 08:03 chevillotn

@leftink did Mentor get back with anything?

stuarthodgson avatar Mar 09 '16 09:03 stuarthodgson

@stuarthodgson @chevillotn

From Mentor:
"Our R&D has filed a bug for this issue that I’ll be linking to your service request. They believe the root of this behavior is that we’re allowing adding callbacks to the list of active callbacks while callback functions are executing for all but the last callback left in the list(which is why it doesn’t loop with only one function registering)."

I informed them that we need the ability to register callbacks while in callback functions, so the solution really needs to address this. I'd venture to guess this will make it into one of their future releases.
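
To make that concrete, here is a toy Python model of the failure mode their R&D describes (purely illustrative; this is not Questa code): walking a live callback list means an entry that removes and re-registers itself can be picked up again in the same pass.

# Toy model of a simulator's active-callback list (illustrative only).
def fire_all(callbacks, max_fires=10):
    fires = 0
    # Iterating the live list: entries appended during the walk are
    # visited in the same pass.
    for cb in callbacks:
        cb(callbacks)
        fires += 1
        if fires >= max_fires:
            print("still firing after %d calls: would loop forever" % fires)
            return
    print("pass finished after %d call(s)" % fires)

def self_reregistering(name):
    def cb(callbacks):
        print("%s fired" % name)
        callbacks.remove(cb)   # "remove" the callback inside itself...
        callbacks.append(cb)   # ...then "register" it again
    return cb

# One callback: the re-appended entry lands behind the iterator, so the
# pass ends -- matching "it doesn't loop with only one function registering".
fire_all([self_reregistering("edge_a")])

# Two callbacks: the slot under the iterator keeps refilling, so the
# walk never terminates -- the endless loop seen in the traces.
fire_all([self_reregistering("edge_a"), self_reregistering("edge_b")])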

If you have access to see their tickets: SR 2780103251

leftink avatar Mar 09 '16 14:03 leftink

I see, thanks for the feedback. I guess there is no workaround possible in cocotb? I have used only one type of edge, actually Edge itself, and banned Rising/Falling, so I'm checking the value of the signal to know which edge has triggered. That works for me.
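
In case it is useful to anyone else, the workaround looks roughly like this (a sketch of the idea, not my exact code):

import cocotb
from cocotb.triggers import Edge

@cocotb.coroutine
def wait_rising(signal):
    # Emulate RisingEdge using only Edge: wait for any transition,
    # then keep only the 0 -> 1 ones.
    while True:
        yield Edge(signal)
        if signal.value.integer == 1:
            return

@cocotb.coroutine
def wait_falling(signal):
    # Same idea for FallingEdge: keep only the 1 -> 0 transitions.
    while True:
        yield Edge(signal)
        if signal.value.integer == 0:
            return

Then wherever I had yield RisingEdge(sig), I now do yield wait_rising(sig).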

chevillotn avatar Mar 09 '16 14:03 chevillotn

@chevillotn and @stuarthodgson I haven't tried the fix, but Mentor informed me that the fix for this is available in 10.5a release and should also be included in the upcoming 10.4e release.

leftink avatar Apr 11 '16 22:04 leftink

Ah, this is good information; however, I don't have access to those versions right now. Let's leave this open until I get hold of one of those versions, then I can test. Thanks.

chevillotn avatar Apr 12 '16 07:04 chevillotn

@chevillotn Could you try with the current version and report if this issue is still there?

themperek avatar Apr 16 '20 07:04 themperek

@themperek I rebased the test Lance wrote (leftink/cocotb@8df653c) on master and ran it with 10.5a, 10.7c, and 2020.1; the problem wasn't solved. 10.5a crashes instead of hanging, and both 10.7c and 2020.1 produce stack traces.

ktbarrett avatar Apr 23 '20 18:04 ktbarrett