
Add exit code tests

Open · joshbax189 opened this issue 1 year ago • 8 comments

Before changing the behavior of errors as discussed in #273, I wanted to add more tests that check the current behavior.

This PR adds tests which check, for each command:

  • bad input exits with an error
  • input with warnings exits with an error when --strict is set
  • commands with multiple inputs (i.e. commands that apply to multiple files or have multiple steps) execute as far as possible when --allow-error is set, then exit with an error

I'll add some script helpers that check error statuses and note any discrepancies I see. I'll comment out any currently failing expectations and mark them with #FIXME so that the CI run stays clean.
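For illustration, here is a minimal sketch of the kind of exit-status check intended, shown with Node's child_process rather than the shell helpers the PR actually uses; the fixture names are hypothetical:

```js
// Illustrative sketch only: the PR itself uses shell helpers; fixture names are hypothetical.
const { spawnSync } = require("child_process");

// Bad input should exit with a non-zero status.
const bad = spawnSync("eask", ["analyze", "Eask-broken"], { encoding: "utf8" });
console.assert(bad.status !== 0, "expected non-zero exit for bad input");

// Input that only warns should also fail once --strict is set.
const strict = spawnSync("eask", ["analyze", "--strict", "Eask-warn"], { encoding: "utf8" });
console.assert(strict.status !== 0, "expected non-zero exit for warnings under --strict");
```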

joshbax189 · Nov 05 '24

Right now, this only has checks for the analyze command, but I wanted to get feedback on how the helper functions under test/ are used. Sorry if it triggers a lot of CI failures!

joshbax189 · Nov 05 '24

I like the idea! These are my thoughts on the test implementation. :D

  • Open a new workflows/streams.yml and put its tests under the test/streams/ folder instead of testing each command separately. Since most commands are implemented similarly (Node.js forwards Emacs' output streams), testing them in one place could be more manageable.
  • Following the point above, move all test-related files under test/streams/ (testing.sh, Eask-normal, Eask-warn, etc.)

WDYT? :)

jcs090218 · Nov 05 '24

I think I chose the worst possible example command :boom: I want to test more than just Eask-file errors and warnings, which I might have implied by this choice...

A better example might be eask compile. I want a normal .el file, an .el file that compiles with a warning, and one that errors. I want to test specifically the "recover after error" behavior of the compile command itself, not just the warnings from the Eask file. E.g. eask compile --allow-error error.el normal.el should exit with a non-zero status, but still produce normal.elc. (It does, by the way.)
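A minimal sketch of that check, using Node's child_process directly (the fixture directory and file names are hypothetical, and eask is assumed to be on PATH):

```js
// Hypothetical sketch of the described behavior; not part of the PR.
const { spawnSync } = require("child_process");
const fs = require("fs");

const res = spawnSync(
  "eask",
  ["compile", "--allow-error", "error.el", "normal.el"],
  { cwd: "test/compile", encoding: "utf8" } // hypothetical fixture directory
);

// The failing file should make the whole run exit non-zero...
console.assert(res.status !== 0, "expected a non-zero exit status");
// ...but the good file should still have been compiled.
console.assert(fs.existsSync("test/compile/normal.elc"), "expected normal.elc to exist");
```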

I'll add a few more examples in a new subfolder of test/, as you suggested, then check in again later.

joshbax189 · Nov 05 '24

No worries! I'm more keen to test this feature explicitly, so it's easier to track and test in one place. 🤔

jcs090218 · Nov 06 '24

Hey @jcs090218 sorry this PR fell off the TODO list over the xmas period. Turns out Eask has a lot of features to test!

This PR has tests that cover a few different issues, so I'm gonna separate them out into smaller PRs for your consideration.

joshbax189 · Mar 17 '25

> Hey @jcs090218 sorry this PR fell off the TODO list over the xmas period. Turns out Eask has a lot of features to test!

Wow, I didn’t realize the PR had so many changes—87 files and 1,725 lines added! No problem at all! I truly appreciate and am grateful that people are willing to put in the time and effort to improve this project! 😄

> This PR has tests that cover a few different issues, so I'm gonna separate them out into smaller PRs for your consideration.

That sounds great! I'm not sure if I can handle reviewing such a massive PR! 👍

jcs090218 · Mar 17 '25

OK, I'm still trying to figure out the best way to break down this PR. But now we have more options after adding Jest!

I can:

  • port each of the new tests to Jest within this PR, or
  • make a new PR for each command, so the project files and tests can be checked individually

I think the choice partly depends on whether you want the new tests to go into exit-status.test.js or into the test files for each command, e.g. analyze.test.js in 1eddbb3.

Since Jest is better than a shell script at grouping tests within files and at reporting where a failure came from, I'd prefer to put the new tests in individual test files. I think the tests for --allow-error and --strict make more sense in the context of each command rather than in a group of their own. WDYT?
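For example, a per-command test file could group these checks roughly like this (a hypothetical sketch, not the actual contents of analyze.test.js; it assumes Jest and Node's child_process, with made-up fixture paths):

```js
// compile.test.js (hypothetical sketch)
const { spawnSync } = require("child_process");

// Hypothetical helper: run eask in a fixture directory and return the spawn result.
const runEask = (cwd, ...args) => spawnSync("eask", args, { cwd, encoding: "utf8" });

describe("compile exit codes", () => {
  it("exits non-zero on bad input", () => {
    expect(runEask("test/fixtures/compile", "compile", "error.el").status).not.toBe(0);
  });

  it("treats warnings as errors under --strict", () => {
    expect(runEask("test/fixtures/compile", "compile", "--strict", "warn.el").status).not.toBe(0);
  });

  it("keeps going under --allow-error, then exits non-zero", () => {
    const res = runEask("test/fixtures/compile", "compile", "--allow-error", "error.el", "normal.el");
    expect(res.status).not.toBe(0);
  });
});
```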

joshbax189 · Mar 31 '25

> I think the choice partly depends on whether you want the new tests to go into exit-status.test.js or into the test files for each command, e.g. analyze.test.js in https://github.com/emacs-eask/cli/commit/1eddbb32f65d3602f9324f866062305c88ffed8b

> Since Jest is better than a shell script at grouping tests within files and at reporting where a failure came from, I'd prefer to put the new tests in individual test files. I think the tests for --allow-error and --strict make more sense in the context of each command rather than in a group of their own. WDYT?

Agreed. "Test files for each command" is the way to go. It's more explicit. :)

jcs090218 · Mar 31 '25