calyx
add assert primitive and basic test
TODO: get it working with runt. When I invoke the test with fud it (correctly) fails, but runt seems to complain that the primitive doesn't exist (perhaps for fud2 they have to be registered?):
```
runt -i assert -d

✗ primitives:primitives/assert.futil
  ~
   1 │- test
   1 │+ ---CODE---
   2 │+ 1
   3 │+ ---STDERR---
   4 │+ FAILED: interp_out.dump
   5 │+ /Users/elamdf/Documents/projects/inception/motifs_in_calyx/calyx/target/debug/cider -l /Users/elamdf/Documents/projects/inception/motifs_in_calyx/calyx --dump-registers pseudo_cider > interp_out.dump
   6 │+
   7 │+ thread 'main' panicked at cider/src/flatten/primitives/builder.rs:258:13:
   8 │+ not yet implemented: Primitives assert_prim not yet implemented
   9 │+ note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
  10 │+ ninja: build stopped: subcommand failed.
  11 │+ Error: ninja exited with exit status: 1
  12 │+
  ~
0 passing / 1 failing / 0 missing / 0 skipped / 0 remaining
```
The error originates from Cider, which is the Calyx interpreter. Cider reimplements each primitive in Rust. Unless @EclecticGriffin sees an obvious use case for this, we could just shim it out as a noop or a warning that is logged?
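For illustration only, here is a minimal sketch of the noop/logged-warning idea; the trait and types below are made-up stand-ins, not Cider's actual primitive interface:

```rust
// Made-up stand-ins, NOT Cider's real primitive API; this only illustrates the
// "shim assert out as a logged warning" option.
trait Primitive {
    /// Advance the primitive by one cycle.
    fn exec_cycle(&mut self);
}

/// A shimmed assert that never aborts the interpreter; it only logs when the
/// asserted condition is false.
struct AssertShim {
    condition: bool,
}

impl Primitive for AssertShim {
    fn exec_cycle(&mut self) {
        if !self.condition {
            eprintln!("warning: assertion failed (ignored by interpreter shim)");
        }
    }
}

fn main() {
    let mut shim = AssertShim { condition: false };
    shim.exec_cycle(); // logs the warning instead of crashing the interpreter
}
```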
It would be an easy implementation in Cider fwiw. And if we're including it in the standard library, albeit in the unsynthesizable section, I think we should implement it in Cider. Happy to take care of that tomorrow or offer pointers. If that would be a blocker, you can also elide this particular test case in the runt invocation for the cider test suite.
Ok, trying to implement a Cider assert, but now I'm seeing a nondescript error when trying to run the test in fud2. Running

```
'./target/debug/calyx' -s verilog.cycle_limit=500 -s jq.expr=".memories" tests/correctness/assert.futil -q
```

yields

```
Error: ninja exited with exit status: 1
```
with no further messaging (`-v` doesn't help). This test command (`fud e tests/correctness/assert.futil --through verilog --to dat`) runs as expected (it crashes the sim, as `assert` should), so I'm not sure how to proceed. Any tips? Should we do something more complicated so that `assert` passes the failure message along without crashing the sim (which might be being invoked by ninja with logging disabled)?
It looks like the (expected/correct) assert failure is written to `sim.log`, but since the assert makes Verilator return a nonzero exit code, ninja crashes and fails the test. Thinking about an elegant workaround.
@elamdf That seems fine to me? Would you rather have assertions log and not crash the program? (Software asserts generally crash the program as well.)
I agree it should crash the sim. I think it's a bit shady, however, to define the assert test success case as "ninja should crash", since that can mean multiple things (and ideally we have a more elegant way to debug failed assertions outside of this unit test).
Do you think there's an elegant/generic way to handle this that isn't "catch this hardcoded case in `emit_and_run` in fud2's `run.rs`"? I was thinking of maybe wrapping the Verilator binary execution so that `sim.log` is parsed for assert failures before a non-zero status code is returned (and then `sim.log` can be parsed by runt/fud2).
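For concreteness, a minimal sketch of that wrapper idea, assuming a sim binary name, log path, and assert message format (none of this is the actual fud2/Verilator plumbing):

```sh
#!/bin/sh
# Hypothetical wrapper sketch; the sim binary name, log path, and grep pattern
# are all assumptions.
# Run the Verilator simulation and capture everything it prints.
./Vsim "$@" > sim.log 2>&1
status=$?

# If the sim died because an assertion fired, keep sim.log around for runt/fud2
# to inspect, but report success so ninja doesn't treat it as a build failure.
if [ "$status" -ne 0 ] && grep -qi "assertion failed" sim.log; then
  exit 0
fi

exit "$status"
```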
This pull request has not seen activity in 14 days and is being marked as stale. If you're continuing work on this, please reply and state how to get this PR in a mergeable state or what issues it is blocked on. If the PR is not ready for review, please mark it as a draft. The PR will be closed in 7 days if there is no further activity.
Just trying to save this PR from staleness!
> I agree it should crash the sim. I think it's a bit shady, however, to define the assert test success case as "ninja should crash", since that can mean multiple things (and ideally we have a more elegant way to debug failed assertions outside of this unit test).
You know, taking a "perfect is the enemy of the good" approach here, maybe this situation is not so bad… just requiring some kind of crash seems like maybe it's a fine first step. @elamdf, what do you think about just keeping it that way for now? I think just making the Runt config a little smarter (grepping the log for an appropriate message) would be the appropriate next step, but just doing the obvious/dubious thing here seems like it might be better than nothing.
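If we do go the "smarter Runt config" route later, the entry could look roughly like the following; the test name, paths, and grep pattern are assumptions, not the repo's actual config:

```toml
# Hypothetical "grep the log" entry; names and pattern are assumptions, not the
# repo's actual runt.toml.
[[tests]]
name = "assert correctness"
paths = ["tests/correctness/assert.futil"]
cmd = """
fud e {} --through verilog --to dat 2>&1 | grep -i "assertion" || true
"""
```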
Done! The tests now pass (with output expecting ninja to crash).
Does this need anything else before merge? cc @rachitnigam
Workflows are running now!
Resolved comments.
@EclecticGriffin there seem to be a bunch of clippy/format errors? Any thoughts on what could be going wrong (for example, the recent change to the Rust version)?
That's strange. I'm not sure what could be causing that, but I'll take a look.
On further inspection, it seems that these are all legitimate warnings from clippy.
It looks like my brittle hack (expecting "ninja exited with status 1" as the log message for the assert tests) has shown its brittleness and doesn't work now, seemingly due to some new backtrace being emitted. Is there a way to wildcard log messages in runt, or some grepping that could be done?
Ah yes, this comes from anyhow. Toss this in front of the command for runt: `RUST_LIB_BACKTRACE=0`.
You can check the bottom of the Cider `runt.toml`.
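Concretely, the prefix goes on the command itself; a hypothetical entry (the paths and test name here are assumptions, not the actual Cider `runt.toml`) would look something like:

```toml
# Hypothetical entry; RUST_LIB_BACKTRACE=0 keeps anyhow from appending a
# backtrace to the error text that runt diffs against the expect file.
[[tests]]
name = "cider assert"
paths = ["primitives/assert.futil"]
cmd = """
RUST_LIB_BACKTRACE=0 ../target/debug/cider -l .. --dump-registers {}
"""
```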
This pull request has not seen activity in 14 days and is being marked as stale. If you're continuing work on this, please reply and state how to get this PR in a mergeable state or what issues it is blocked on. If the PR is not ready for review, please mark it as a draft. The PR will be closed in 7 days if there is no further activity.
This stale PR is being closed because it has not seen any activity in 7 days. If you're planning to continue work on it, please reopen it and mention how to make progress on it.