carbon-lang
Implement basic support for documentation tests
From the outside, it is currently very hard to tell how much of the syntax described in the documentation is actually implemented by the Explorer (as also evidenced by #1891). And even if this were known, documentation and implementation are likely to drift apart over time as things evolve.
This PR provides:
- a (very) minimal inline DSL to transform Markdown fenced code blocks into runnable tests
- a `py_binary` target implementing the above DSL
- enough Bazel plumbing to make `bazel test docs/...` automatically generate and execute these tests
The idea would then be, over time, to mark (at least some of) the example code in the documentation as tests, and have Bazel automatically verify that the syntax used in the documentation is in fact valid.
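To illustrate the general approach (not the exact DSL in this PR), here is a minimal sketch of how such a tool might extract marked fenced code blocks from a Markdown file and write each one out as a standalone test input. The ` ```carbon test ` marker, the function names, and the output file naming are all hypothetical placeholders, not the actual syntax implemented here.

```python
import re
import sys

# Hypothetical marker: a fenced block whose info string is "carbon test"
# is extracted into a standalone test file. The real DSL may differ.
FENCE_RE = re.compile(r"```carbon test\n(.*?)```", re.DOTALL)

def extract_tests(markdown: str) -> list[str]:
    """Return the source of each fenced block marked as a test."""
    return [m.group(1) for m in FENCE_RE.finditer(markdown)]

if __name__ == "__main__":
    text = open(sys.argv[1]).read() if len(sys.argv) > 1 else ""
    for i, snippet in enumerate(extract_tests(text)):
        # One generated file per snippet; Bazel plumbing would then wrap
        # each generated file in a test target that runs the Explorer on it.
        with open(f"doc_test_{i}.carbon", "w") as out:
            out.write(snippet)
```

A generator like this pairs naturally with a `genrule` (or a custom Starlark rule) that invokes it on each Markdown file and feeds the resulting files to generated test targets, which is roughly what the Bazel plumbing in this PR automates.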
TODO
- [ ] better error handling (when a test fails, it's very hard to see why and trace the failure back to the corresponding Markdown line)
- [ ] review by someone with Bazel readability (I had never used Bazel before yesterday; there might be better ways to generate targets and tests dynamically)
- [ ] friendlier DSL syntax (plus possibly some heuristics to avoid having to write it at all whenever possible)