Test generator
This issue is to collect requirements and ideas for a test generator. We don't have specific implementation plans yet.
A test generator would be a script or project in this repository that:
- Reads the `tests.toml` file from a practice exercise and chooses test ids that are not marked with `include = false`
- Fetches the newest https://github.com/exercism/problem-specifications/blob/main/exercises/[exercise]/canonical-data.json and generates a test suite for the tests chosen from `tests.toml` (see the sketch below)
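For illustration, here is a minimal sketch of those two selection steps as a standalone script. It assumes the `toml` and `jason` hex packages (neither is a stated dependency of this repo), an exercise `slug` passed on the command line, and the raw.githubusercontent.com URL scheme; actually generating test code from the chosen cases is left out.

```elixir
# Hypothetical selection step: pick test ids from tests.toml, then fetch
# the matching cases from canonical-data.json.
Mix.install([{:toml, "~> 0.7"}, {:jason, "~> 1.4"}])

{:ok, _} = Application.ensure_all_started(:inets)
{:ok, _} = Application.ensure_all_started(:ssl)

[slug] = System.argv()

# Test ids from tests.toml that are not marked with include = false.
included_ids =
  "exercises/practice/#{slug}/.meta/tests.toml"
  |> File.read!()
  |> Toml.decode!()
  |> Enum.filter(fn {_uuid, opts} -> Map.get(opts, "include", true) end)
  |> MapSet.new(fn {uuid, _opts} -> uuid end)

# Newest canonical data for the exercise.
url =
  ~c"https://raw.githubusercontent.com/exercism/problem-specifications/main/exercises/#{slug}/canonical-data.json"

{:ok, {{_, 200, _}, _headers, body}} =
  :httpc.request(:get, {url, []}, [], body_format: :binary)

# Cases can be nested arbitrarily deep under "cases" keys; flatten them.
flatten = fn
  %{"cases" => cases}, recur -> Enum.flat_map(cases, &recur.(&1, recur))
  test_case, _recur -> [test_case]
end

chosen =
  body
  |> Jason.decode!()
  |> flatten.(flatten)
  |> Enum.filter(&MapSet.member?(included_ids, &1["uuid"]))

IO.puts("#{length(chosen)} test cases selected for #{slug}")
```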
The process of updating `tests.toml` to include the newest test cases is already automated with `bin/configlet sync`. The assumption is that we would run the test generator after `bin/configlet sync`.
It would be fine if the test generator didn't work for all exercises, as long as it worked for some of them. It would also be fine if it required a bit of manual work, or maybe per-exercise configuration stored in `exercises/practice/[exercise]/.meta/`.
Previous attempts:
- https://github.com/exercism/elixir/pull/521
- https://github.com/jiegillet/elixir/blob/92ba4784e6d0cd67da7e0bc730e04825a7b65d91/exercises/practice/complex-numbers/generate_tests.exs
An example of an exercise that might be problematic for the test generator: `kindergarten-garden` implements a few test cases from `tests.toml` in a single `test` block:
https://github.com/exercism/elixir/blob/4ab39ca23e6a3f04548ba6bd41056b55ef605ea9/exercises/practice/kindergarten-garden/.meta/tests.toml#L20-L30
https://github.com/exercism/elixir/blob/4ab39ca23e6a3f04548ba6bd41056b55ef605ea9/exercises/practice/kindergarten-garden/test/garden_test.exs#L42-L57
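For context, the shape of the problem looks roughly like this (a paraphrased illustration, not the exact repo code): one ExUnit `test` block covers several `tests.toml` cases, so there is no one-to-one mapping for a generator to target.

```elixir
# Several canonical cases ("for Alice", "for Bob", ...) are verified inside
# a single test, so individual uuids can't be matched to individual tests.
test "garden with all students" do
  garden = Garden.info("VRCGVVRVCGGCCGVRGCVCGCGV\nVRCCCGCRRGVCGCRVVCVGCGCV")

  assert garden[:alice] == {:violets, :radishes, :violets, :radishes}
  assert garden[:bob] == {:clover, :grass, :clover, :clover}
  # ...one assertion per student, each backed by its own tests.toml entry
end
```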
Is there any guarantee that all the tests in `tests.toml` have been implemented? Maybe another function of the tool could be to check that, probably based on the test names. That check would also be problematic in the case of `kindergarten-garden`, though.
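If the test names follow the `tests.toml` descriptions, a rough check is possible without any AST work. A sketch under that assumption (`slug` is a hypothetical exercise name, and the regexes are naive, so they would miss renamed or combined tests like the `kindergarten-garden` one):

```elixir
# Compare descriptions in tests.toml against ExUnit test names in the suite.
toml = File.read!("exercises/practice/#{slug}/.meta/tests.toml")

described =
  ~r/^description = "(.+)"$/m
  |> Regex.scan(toml, capture: :all_but_first)
  |> List.flatten()

implemented =
  "exercises/practice/#{slug}/test/*_test.exs"
  |> Path.wildcard()
  |> Enum.flat_map(fn file ->
    Regex.scan(~r/^\s*test "(.+?)"/m, File.read!(file), capture: :all_but_first)
  end)
  |> List.flatten()

IO.inspect(described -- implemented, label: "cases without a matching test")
```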
You're right to keep expectations realistic; there are going to be loads of exceptions :)
> Is there any guarantee that all the tests in tests.toml have been implemented?
There is a guarantee that a test in `tests.toml` will be defined in `canonical-data.json`, but not that it appears in our (Elixir) implementation of the exercise. The whole idea for adding new test cases is that you first run configlet, which just adds them to `tests.toml`, and then you can use the to-be-implemented generator to actually add them, somehow, to `*_test.exs` files.
About this: there are a lot of exercises where Elixir added some extra test cases that don't exist in problem specifications. I think the generator shouldn't try to remove those, but it should try to remove tests listed in `tests.toml` with `include = false` 🤔. In a perfect world 😁.
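In the same spirit as the check above, the excluded set at least is easy to compute; removing the corresponding test blocks cleanly is the hard part. A sketch, again assuming the `toml` package and a hypothetical `slug`:

```elixir
# Descriptions of tests that tests.toml now excludes; a generator would
# still have to locate and delete the matching test blocks.
excluded =
  "exercises/practice/#{slug}/.meta/tests.toml"
  |> File.read!()
  |> Toml.decode!()
  |> Enum.filter(fn {_uuid, opts} -> opts["include"] == false end)
  |> Enum.map(fn {_uuid, opts} -> opts["description"] end)
```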
Elixir 1.13 will have `Code.string_to_quoted_with_comments/2` and `Code.quoted_to_algebra/2`, which might make it easier to insert new test cases.
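For reference, the intended round-trip with those two functions looks something like this (the options follow the `Code.quoted_to_algebra/2` docs; the actual AST surgery to splice in new `test` nodes is the part a generator would still have to solve):

```elixir
source = File.read!("test/garden_test.exs")

# Parse while keeping comments and enough metadata to print the code back.
{quoted, comments} =
  Code.string_to_quoted_with_comments!(source,
    token_metadata: true,
    unescape: false,
    literal_encoder: &{:ok, {:__block__, &2, [&1]}}
  )

# ...insert new `test` AST nodes into `quoted` here...

formatted =
  quoted
  |> Code.quoted_to_algebra(comments: comments, escape: false)
  |> Inspect.Algebra.format(98)
  |> IO.iodata_to_binary()

File.write!("test/garden_test.exs", [formatted, "\n"])
```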
Is this topic still relevant?
In the ongoing implementation of `bottle-song`, we chose to use multi-line strings as output instead of the array/list of strings specified by the JSON specs.
Since JSON may be too limited to capture subtleties like this one, maybe some strongly structured comments in the TOML (the `comments` key of the specs, not regular TOML comments) could be used to pass options (something like `gen::output::prefer-multiline-string`)? It would act as a kind of "label", but I did not see such a key or feature in the current TOML specs structure.
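To make the idea concrete, if `tests.toml` grew a comment-style key (hypothetical here, as is the `gen::...` naming convention and the placeholder uuid), a labeled entry could look like:

```toml
[00000000-0000-0000-0000-000000000000]
description = "some bottle-song case"
comment = "gen::output::prefer-multiline-string"
```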
Concerning the identification of extra cases vs. spec cases (and `include = false` handling), is it conceivable to use a module attribute (similar in some respects to ExUnit's `@tag`) to keep track of the link? Something like this:
@uuid "2c698d68-b39d-4615-817f-2cfacc9aa318"
@tag :pending
test "Lorem ipsum" do
...
A downside of this approach is that this `@uuid` thing is presented to the student. In fact, they could delete it without harm, but still, we'd be showing them something they shouldn't need to care about! I don't feel like it's a big deal compared to the potential benefits, though.
Sorry for the lack of response...
It's been over 2 years since I created this issue and I have to admit that my need for a test generator is very low. The work it takes to occasionally update an exercise manually is not a lot, and there aren't many new exercises coming up that would need to be created with a test generator.
@jiegillet If you don't mind, I would close this issue.
Yes, I think this can be closed. The amount of work required for a generator that would cleanly do test updates is way more than updating by hand. And new exercises can use the current test generator. It's not perfect, but it works :)