cocotb-test
Draft PR: Add generic testbench CLI
This is something that I have been mulling over for a while to replace the need for makefiles. The idea is simple: set things up so that testbenches can be run both from the command line and via pytest with a minimal amount of per-testbench code, while making parameter values and other settings easy to adjust.
For reference, with Makefiles, you can specify parameter overrides via the command line and pass these through in a simulator-specific way. Usually I do something along these lines:
ifeq ($(SIM), icarus)
COMPILE_ARGS += $(foreach v,$(filter PARAM_%,$(.VARIABLES)),-P $(TOPLEVEL).$(subst PARAM_,,$(v))=$($(v)))
else ifeq ($(SIM), verilator)
COMPILE_ARGS += $(foreach v,$(filter PARAM_%,$(.VARIABLES)),-G$(subst PARAM_,,$(v))=$($(v)))
endif
This is great because parameters can be specified both in the makefile itself and via the command line, and parameter values can also be computed based on other parameter values. For example, the makefile could contain some lines like so:
export PARAM_DATA_WIDTH := 8
export PARAM_KEEP_ENABLE := $(shell echo $$(( $(PARAM_DATA_WIDTH) > 8 )))
export PARAM_KEEP_WIDTH := $(shell echo $$(( ( $(PARAM_DATA_WIDTH) + 7 ) / 8 )))
export PARAM_DEPTH := $(shell echo $$(( 1024 * $(PARAM_KEEP_WIDTH) )))
export PARAM_LAST_ENABLE := 1
export PARAM_ID_ENABLE := 1
Individual parameters can also be overridden on the command line like so:
make PARAM_DATA_WIDTH=16
And in the above example, this will also automatically propagate to KEEP_ENABLE and KEEP_WIDTH.
The question, then, is how to replicate this functionality in Python. It's simple enough to add a CLI entry point and use argparse, but handling parameter overrides in a similar way is a bit more complex, and it makes no sense to carry around a bunch of boilerplate CLI code in every testbench. Additionally, due to how pytest decorators work, it's necessary to support multiple entry points so that the CLI and pytest can coexist.
There are several things that need to be done for this to work correctly. The first is that cocotb_test.simulator.run cannot be called from the testbench itself; that has to be abstracted away a bit. The testbench configuration has to be defined piecemeal: part of it statically in the testbench script (names, paths, files, registering parameters, etc.), which can then be augmented as needed by both the CLI entry point and one or more pytest entry points. Ideally, the CLI entry point would be a single function call that passes control to a generic implementation, which handles argparse internally.
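For illustration only (none of this is the actual draft code), the shared entry point could look something like the sketch below, where tb stands in for the testbench configuration object and the --waves flag plus per-parameter override flags are assumptions:

import argparse

def cli_main(tb, argv=None):
    # Generic, testbench-agnostic CLI; individual testbenches just call this
    # (e.g. via tb.main()) instead of carrying their own argparse code.
    parser = argparse.ArgumentParser(description="generic testbench runner")
    parser.add_argument("--waves", action="store_true", help="dump waveforms")
    # one optional override flag per registered parameter (hypothetical scheme)
    for name in tb.parameters:
        parser.add_argument("--" + name, dest=name, metavar="VALUE",
                            help="override parameter %s" % name)
    args = parser.parse_args(argv)
    overrides = {k: v for k, v in vars(args).items()
                 if k != "waves" and v is not None}
    tb.waves = args.waves
    tb.run(parameters=overrides)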
Parameters (and potentially also defines) can then be registered with the generic CLI module, along with default values. The default values can be either raw values or computations involving other parameters. eval is used internally to evaluate the parameter values, while also taking into account any values overridden by command-line arguments.
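As a rough sketch of the idea (not the exact draft code), derived defaults can be stored as expression strings and resolved with eval against the parameters that have already been resolved, with explicit overrides taking precedence:

# Illustrative sketch of eval-based parameter resolution; names are hypothetical.
def resolve_parameters(defaults, overrides):
    """defaults: dict of name -> raw value or expression string, in order.
    overrides: dict of name -> value from the CLI or pytest."""
    resolved = {}
    for name, default in defaults.items():
        if name in overrides:
            value = overrides[name]
            # CLI values arrive as strings; evaluate so expressions work too
            if isinstance(value, str):
                value = eval(value, {}, resolved)
        elif isinstance(default, str):
            # expression referencing previously resolved parameters
            value = eval(default, {}, resolved)
        else:
            value = default
        resolved[name] = value
    return resolved

# Example mirroring the makefile above:
defaults = {
    "DATA_WIDTH": 8,
    "KEEP_ENABLE": "int(DATA_WIDTH > 8)",
    "KEEP_WIDTH": "(DATA_WIDTH + 7) // 8",
    "DEPTH": "1024 * KEEP_WIDTH",
}
print(resolve_parameters(defaults, {"DATA_WIDTH": "16"}))
# -> {'DATA_WIDTH': 16, 'KEEP_ENABLE': 1, 'KEEP_WIDTH': 2, 'DEPTH': 2048}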
As an example, here is an updated version of test_parameters.py:
#!/usr/bin/env python3

from cocotb_test.testbench import Testbench
import pytest
import os

import cocotb
from cocotb.triggers import Timer

tests_dir = os.path.dirname(__file__)


@cocotb.test()
def run_test_parameters(dut):
    yield Timer(1)
    WIDTH_IN = int(os.environ.get("WIDTH_IN", "8"))
    WIDTH_OUT = int(os.environ.get("WIDTH_OUT", WIDTH_IN))
    assert WIDTH_IN == len(dut.data_in)
    assert WIDTH_OUT == len(dut.data_out)


# static testbench configuration: sources, toplevel, registered parameters
tb = Testbench()

dut = "test_parameters"
tb.toplevel = dut
tb.module = os.path.splitext(os.path.basename(__file__))[0]

tb.verilog_sources = [
    os.path.join(tests_dir, dut + ".v"),
]

# parameter defaults; WIDTH_OUT defaults to the value of WIDTH_IN
tb.add_template_parameter('WIDTH_IN', 8)
tb.add_template_parameter('WIDTH_OUT', 'WIDTH_IN')

tb.python_search = [tests_dir]
tb.includes = [os.path.join(tests_dir, "includes")]
tb.defines = ["DEFINE=1"]
tb.force_compile = True

# CLI entry point
if __name__ == '__main__':
    tb.main()


# pytest entry point
@pytest.mark.skipif(os.getenv("SIM") == "ghdl", reason="Verilog not supported")
@pytest.mark.parametrize(
    "parameters", [{"WIDTH_IN": "8", "WIDTH_OUT": "16"}, {"WIDTH_IN": "16"}]
)
def test_testbench_pytest(request, parameters):
    tb.sim_build = os.path.join(tests_dir, "sim_build",
                                request.node.name.replace('[', '-').replace(']', ''))
    tb.extra_env = parameters
    tb.run(parameters=parameters)
The current implementation is very much a rough draft and has a number of pitfalls and limitations that would have to be addressed. But, I'm also thinking there may be some deeper changes worth considering in cocotb-test itself.
Primarily, it might be worth reworking some things in simulator.py so that all of the settings can be split off from the rest of the implementation. That way, the settings can be built up incrementally and passed around as a single object, instead of dealing with a stack of separate arguments or a dict. With the current setup, you either need a very cumbersome, long list of arguments, or a dict that might as well be a proper object. The current Simulator class and its subclasses are not appropriate: all of the settings get passed into the constructor, and adjusting the settings after the constructor has run is potentially problematic. The Simulator object itself also isn't meant to be used directly; instead, the proper subclass must be chosen based on the target simulator, and if the CLI is going to have a command-line option for the simulator, then the determination of which object to use would have to be done after running argparse, which is much too late.
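As a rough sketch of that direction (this is not existing cocotb-test API), the settings could live in a plain data object that is built up incrementally and only handed to the simulator-specific runner once the CLI has decided which simulator to use:

from dataclasses import dataclass, field

@dataclass
class SimSettings:
    # Plain container for the kind of settings that currently go to Simulator
    toplevel: str = ""
    module: str = ""
    verilog_sources: list = field(default_factory=list)
    includes: list = field(default_factory=list)
    defines: list = field(default_factory=list)
    parameters: dict = field(default_factory=dict)
    sim_build: str = "sim_build"
    waves: bool = False

# Built up in pieces: statically in the testbench, then by the CLI or pytest.
settings = SimSettings(toplevel="test_parameters", module="test_parameters")
settings.parameters["WIDTH_IN"] = 16   # e.g. a CLI or pytest override
# The simulator-specific class would be selected only after argparse has run,
# with the whole settings object passed to it in one go.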
Other things that would need to be resolved include proper handling of string parameters and HDL-style numbers (e.g. 16'hABCD, 32'd54321, etc.). The current CLI implementation also only supports parameters and --waves; this should be extended to support defines, simulator selection, and really anything else that might need to be adjusted at run time. There may also need to be some way of determining when the parameters/defines/etc. have changed, triggering a rebuild. I'm not sure if this should be done by adjusting sim_build based on some or all of the settings, by writing the configuration out to a file in sim_build and checking it against the current configuration, or perhaps some combination of the two.
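Purely as illustration of those two points, a sized Verilog-style literal could be split out with a small parser, and the rebuild check could compare the current configuration against one written into sim_build on the previous run (function and file names here are hypothetical):

import json
import os
import re

def parse_hdl_literal(text):
    # Parse sized Verilog-style literals such as 16'hABCD or 32'd54321;
    # anything else falls through to a plain integer conversion.
    m = re.match(r"(\d+)'([bodh])([0-9a-fA-F_]+)$", text)
    if not m:
        return int(text, 0)
    base = {"b": 2, "o": 8, "d": 10, "h": 16}[m.group(2)]
    return int(m.group(3).replace("_", ""), base)

def needs_rebuild(sim_build, config):
    # config: dict of parameters/defines/sources that affect compilation.
    config_file = os.path.join(sim_build, "tb_config.json")
    if os.path.exists(config_file):
        with open(config_file) as f:
            if json.load(f) == config:
                return False  # same configuration as the last build
    os.makedirs(sim_build, exist_ok=True)
    with open(config_file, "w") as f:
        json.dump(config, f, sort_keys=True)
    return True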
Also, hmmm, maybe one option for dealing with derived parameters is simply to not specify them and let the simulator figure it out. In other words, by default don't pass any parameters to the simulator, unless explicitly asked for (either in the testbench config or via CLI or pytest overrides). Then the testbench doesn't have to deal with computing parameter values based on other parameters. In previous versions of the makefiles, all of the arguments had to be explicitly listed in the makefile and were always passed to the simulator. But with the current setup, this can be done dynamically based on which ones are actually specified, even if the makefile doesn't explicitly have knowledge of a particular parameter.
Edit: upon further reflection, punting the computation to the module only works if the computation matches the module's parameter defaults. In some cases, including the parameters above, the testbench requires additional constraints that don't match what is in the actual HDL: here, the testbench needs the DEPTH parameter to be 1024*KEEP_WIDTH, but in the HDL the DEPTH parameter does not depend on any other parameters. So, it's still going to be necessary to support this capability on some level.
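With the expression-style defaults shown earlier, that constraint could be kept in the testbench configuration itself (reusing the hypothetical add_template_parameter API from the example above):

# Testbench-specific constraint, independent of the HDL's own DEPTH default.
tb.add_template_parameter('KEEP_WIDTH', '(DATA_WIDTH + 7) // 8')
tb.add_template_parameter('DEPTH', '1024 * KEEP_WIDTH')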