Rewrite old elm examples and spark tests to use csv
Context
Our "old" tests involve:
- Writing elm code that describes the functionality you want to test.
- Running morphir to convert it into Spark code.
- Running a test that imports that code, creates test data by hand, runs the generated code against it, and manually verifies that the result is what we expect.
It would be much better if the elm output could be compared against the Spark output for all of them: we could then test against larger datasets with less effort, and wouldn't have to mentally work out every distinct possibility and check that it behaves as we expect.
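As a rough sketch of the idea, a Scala-side helper could load the input CSV and the expected-results CSV written by the elm tests, run the Morphir-generated Spark transformation over the input, and assert that the two result sets match. The directory, file-naming convention, and helper name below are assumptions for illustration, not existing code:

```scala
import org.apache.spark.sql.{DataFrame, SparkSession}

// Sketch only: the `spark_test_data` directory and the
// `<name>_input.csv` / `<name>_expected_results.csv` naming convention
// are assumptions, not part of the current test suite.
object CsvComparison {

  val spark: SparkSession = SparkSession.builder()
    .master("local")
    .appName("csv-comparison-sketch")
    .getOrCreate()

  // Read a CSV written by the elm tests, using the header row for column names.
  def readCsv(path: String): DataFrame =
    spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv(path)

  // Apply a Morphir-generated transformation to the input data and check that
  // it produces exactly the rows the elm implementation produced.
  def assertMatchesElmOutput(name: String, transform: DataFrame => DataFrame): Unit = {
    val input    = readCsv(s"spark_test_data/${name}_input.csv")
    val expected = readCsv(s"spark_test_data/${name}_expected_results.csv")
    val actual   = transform(input)

    // Comparing in both directions catches missing rows as well as extra ones.
    assert(actual.except(expected).isEmpty, "rows produced by Spark but not by elm")
    assert(expected.except(actual).isEmpty, "rows produced by elm but not by Spark")
  }
}
```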
Actions
- [ ] Check how much effort it would be to generate datasets for input/output types other than Antiques (Antiques seems to use `tests-integration/spark/elm-tests/src/AntiqueCsvEncoder.elm`, `tests-integration/spark/elm-tests/src/AntiqueCsvDecoder.elm`, and `tests-integration/spark/elm-tests/tests/GenerateAntiqueTestData.elm`).
- [ ] Write tests in `tests-integration/spark/elm-tests/tests/` that run every example in `tests-integration/spark/model/src/SparkTests/*.elm` (except `AntiqueRulesTests.elm`).
- [ ] Replace all the test cases in `tests-integration/spark/test/src/*.scala` (except `AntiqueRulesTests.scala`) with ones that compare against the output of running the elm tests (see the usage sketch after this list).
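With a helper along those lines, each replacement Scala test could shrink to one call per generated function. The suite base class and the generated function names below are assumptions used purely for illustration:

```scala
import org.scalatest.funsuite.AnyFunSuite

// Hypothetical replacement test class; `SparkTests.testBool` and
// `SparkTests.testString` stand in for whatever functions morphir generates.
class GeneratedModelTests extends AnyFunSuite {

  test("testBool matches the elm expected results") {
    CsvComparison.assertMatchesElmOutput("test_bool", SparkTests.testBool)
  }

  test("testString matches the elm expected results") {
    CsvComparison.assertMatchesElmOutput("test_string", SparkTests.testString)
  }
}
```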
#833 is about reducing the effort needed to write a new test, which would likely be useful for this issue.