Remove knowledge of tests from the scheduler
Right now, the scheduler has a lot of special cases for scheduling tests in different and exciting ways.
Now that #640 is merged, this isn't really necessary - we can bootstrap the scheduler to support tests with something like:
```python
import cocotb
from cocotb.result import TestSuccess
from cocotb.triggers import Timer

@cocotb.coroutine
def run_all_tests():
    for test in tests:
        # most of the special casing right now deals with the fact that
        # exceptions could not previously be caught
        log_test_begin(test)
        try:
            yield test()
            raise TestSuccess
        except TestSuccess:
            log_test_pass(test)
        except Exception:
            log_test_failure(test)
        # this is probably the only special-casing within `scheduler`
        # that can't be avoided.
        scheduler.kill_all_forked_coroutines()
        scheduler.unprime_all_triggers()
        yield Timer(1)
```
This would also open the door to hooking in other test runners, like pytest (#494).
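For illustration, the bootstrap could be as small as queueing that coroutine as the only thing the scheduler sees at start-up. `scheduler.add()` here is an assumed entry point for scheduling a coroutine, not necessarily the exact API:

```python
# hypothetical bootstrap: the scheduler starts exactly one coroutine,
# and everything test-related lives inside it
scheduler.add(run_all_tests())

# a different runner (e.g. a pytest-driven one, per #494) would then just
# be a different top-level coroutine handed to the same entry point
```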
Something like `trio.Nursery` could be used to deal with cleaning up coroutines and triggers.
```python
async def run_all_tests():
    for test in tests:
        async with cocotb.nursery_thing():
            await test()
```
Any coroutines forked inside the block would be cleaned up and their triggers unprimed when the main task exits the `async with` block.
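A minimal sketch of what such a context manager might look like, assuming tasks are forked through the nursery and that the handle returned by `cocotb.fork()` supports `kill()` (which also removes the task from any trigger it was waiting on). `Nursery` and its `fork()` method are invented names, not an actual cocotb API:

```python
import cocotb

class Nursery:
    """Hypothetical context manager: tracks tasks forked through it and
    kills any still running when the block exits."""

    def __init__(self):
        self._tasks = []

    def fork(self, coro):
        # fork through the nursery so the task can be tracked for cleanup
        task = cocotb.fork(coro)
        self._tasks.append(task)
        return task

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        # killing a task also removes it from any trigger it was waiting
        # on, so there is no separate unprime step here
        for task in self._tasks:
            task.kill()
        return False  # never swallow exceptions from the test body
```

Forking through the nursery rather than through `cocotb.fork()` directly is what lets the cleanup stay scoped to the block.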
You could make handling of exceptions in forked tasks configurable, per the options in #922.
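At the call site, that configurability might look something like the following; `child_error` and its values, along with `watchdog()`, `stats_collector()`, and `drive_traffic()`, are invented stand-ins for whatever #922 settles on:

```python
import cocotb

@cocotb.test()
async def test_with_policies(dut):
    # a crash in the watchdog should take the whole test down with it
    async with cocotb.nursery_thing(child_error="fail_test") as nursery:
        nursery.fork(watchdog(dut))
        await drive_traffic(dut)

    # a crash in the stats collector is only worth a log message
    async with cocotb.nursery_thing(child_error="log") as nursery:
        nursery.fork(stats_collector(dut))
        await drive_traffic(dut)
```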
Making this a public interface that's nestable could allow for #1963 as well.
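Nesting would then scope cleanup to each level; the helper names here (`long_lived_monitor`, `short_lived_driver`, `phase_one`, `phase_two`) are illustrative only:

```python
async with cocotb.nursery_thing() as outer:
    outer.fork(long_lived_monitor(dut))      # lives for both phases
    async with cocotb.nursery_thing() as inner:
        inner.fork(short_lived_driver(dut))  # scoped to phase one only
        await phase_one(dut)
    # short_lived_driver has been killed; long_lived_monitor still runs
    await phase_two(dut)
```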
What's the argument for a single scheduler instance with the test runner running inside it, versus running the test runner directly and having the scheduler as a test fixture? Cleaning up the scheduler from a scheduled task is wonky; having it as a fixture means it's cleaned up perfectly every time.
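A rough sketch of the fixture approach, reusing the hypothetical cleanup names from the top of the thread (`Scheduler` and `run_one_test()` are stand-ins as well):

```python
import pytest

@pytest.fixture
def scheduler():
    sched = Scheduler()
    yield sched
    # teardown runs in the test runner, outside any scheduled task, so the
    # scheduler never has to clean itself up from within
    sched.kill_all_forked_coroutines()
    sched.unprime_all_triggers()

def test_something(scheduler):
    # each test gets a freshly cleaned-up scheduler instance
    scheduler.add(run_one_test())
```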
A pseudo-simultaneous but asymmetric coroutine system, like trio and what @garmin-mjames is talking about, would enable you to do that without special logic, but I haven't convinced myself it's a good idea. It would likely change a lot, and the end result is simply less flexible, though maybe that isn't an actual issue.