Zephyr FVP support
This PR adds the Corstone-300 FVP to the platforms supported by the Zephyr platform. It also changes the generated micro projects' build system from Make to Ninja.
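For reviewers unfamiliar with the microTVM project API, here is a minimal sketch of how a generated Zephyr project could target the Corstone-300 FVP. The board name (`mps3_an547`) and the `"use_fvp"` option key are assumptions for illustration and may differ from what this PR finally exposes; the "host" micro target is used only to keep the example small.

```python
import tvm
from tvm import relay
from tvm.relay.backend import Runtime
import tvm.micro

# A tiny identity-plus-one model, just so there is something to build.
x = relay.var("x", shape=(1, 4), dtype="float32")
mod = tvm.IRModule.from_expr(relay.Function([x], x + relay.const(1.0, "float32")))

# "host" keeps the sketch small; a real Corstone-300 build would use the
# matching Cortex-M55 target instead.
target = tvm.target.target.micro("host")
runtime = Runtime("crt", {"system-lib": True})
with tvm.transform.PassContext(opt_level=3, config={"tir.disable_vectorize": True}):
    module = relay.build(mod, target=target, runtime=runtime)

# Generate and build a Zephyr project for the Corstone-300 FVP board.
# The option keys ("board", "use_fvp") are assumed names for illustration;
# check the Zephyr template project's documented options for the exact keys.
template = tvm.micro.get_microtvm_template_projects("zephyr")
project = tvm.micro.generate_project(
    template,
    module,
    "/tmp/zephyr_fvp_project",
    {"board": "mps3_an547", "project_type": "host_driven", "use_fvp": True},
)
project.build()  # the generated project now builds with Ninja instead of Make
project.flash()
```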
cc @gromero
cc @mehrdadh
Thanks, @mehrdadh. I tested this commit with the microTVM hardware CI and compared the errors against nightly build 316. No new errors were introduced.
@tvm-bot rerun
@mkatanbaf @areusch @mehrdadh I've re-triggered the tests, but apparently @mkatanbaf had already done it once... it seems that one of the FVP processes got stuck for about 3h:
[2022-08-12T07:40:53.224Z] INFO:__main__:dummy log...
[2022-08-12T07:40:53.224Z]
[2022-08-12T07:40:53.224Z] INFO:__main__:microTVM Zephyr runtime - running
[2022-08-12T07:40:53.224Z]
[2022-08-12T07:40:53.224Z] INFO:__main__:IRIS semihosting initialized.
[2022-08-12T07:40:53.224Z] [07:40:53] /workspace/src/runtime/micro/micro_session.cc:368: remote: microTVM Zephyr runtime - running
[2022-08-12T10:25:46.234Z] Sending interrupt signal to process
[2022-08-12T10:25:50.665Z] script returned exit code 143
:(
Full log here: https://ci.tlcpack.ai/blue/rest/organizations/jenkins/pipelines/tvm/branches/PR-12125/runs/55/nodes/432/steps/1475/log/?start=0
@gromero Thanks for re-triggering the tests. This has happened locally a few times before (out of many runs!), and I talked with @areusch about it. It is difficult to reproduce; I ran the tests in a loop overnight for the past few nights, and the issue didn't come up!
There are some failing ethosu codegen tests on the rerun, but I don't think those are caused by this PR
@mkatanbaf Got it! Well, it feels like déjà vu to me (when using QEMU). When you ran locally in a loop, did you use any container? Anyway, I know how hard it can be. But how about the last error in test_binary_add_with_non_4d_shapes:
https://ci.tlcpack.ai/blue/rest/organizations/jenkins/pipelines/tvm/branches/PR-12125/runs/56/nodes/432/steps/1490/log/?start=0
It doesn't seem related to this change, so I'm confused...
> There are some failing ethosu codegen tests on the rerun, but I don't think those are caused by this PR
@mkatanbaf Yeah, I agree. So it means the CI is not "quite" stable... and that other glitches can happen :(
We can re-trigger and hope for the best, but I wonder if the first issue, the FVP hanging, is a blocker to merging this. @areusch Thoughts? :)
> @mkatanbaf Got it! Well, it feels like déjà vu to me (when using QEMU). When you ran locally in a loop, did you use any container?
Yes, I'm using the cortexm Docker container (formerly qemu). What I've seen (and am trying to reproduce) is that it gets stuck when it tries to close the transport. My guess is that something prevents the subprocess from terminating.
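Not the actual transport code, but for reference, here is a minimal sketch of the terminate-then-kill escalation that usually avoids this kind of hang when closing a subprocess-backed transport. The helper name and timeout are hypothetical and only illustrate the general technique:

```python
import subprocess


def stop_fvp(proc: subprocess.Popen, timeout: float = 10.0) -> None:
    """Hypothetical helper: ask the FVP subprocess to exit, then escalate."""
    if proc.poll() is not None:
        return  # already exited, nothing to do
    proc.terminate()  # SIGTERM first, so the FVP gets a chance to clean up
    try:
        proc.wait(timeout=timeout)
    except subprocess.TimeoutExpired:
        proc.kill()  # SIGKILL as a last resort instead of hanging forever
        proc.wait()
```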
@tvm-bot rerun
@tvm-bot rerun
@tvm-bot rerun
@tvm-bot rerun
@tvm-bot rerun
Ok, let's give this a shot; we can always disable the test in CI if it becomes flaky.
@areusch Really?! We did that for the MPS3 board with QEMU...
@gromero I think we should try it, and the CI Monitoring rotation will catch any flakes a lot faster.