Unify and standardize behavior and exit codes when k6 is stopped
k6 has multiple different ways to stop a test:

1. Single Ctrl+C for semi-graceful shutdown
2. Second Ctrl+C for immediate abort
3. The `test.abort()` JS API from `k6/execution`
4. JS script exception
5. Via the REST API, e.g. with `curl -i -X PATCH -d '{"data":{"type":"status","id":"default","attributes":{"stopped":true}}}' 'http://localhost:6565/v1/status'`
6. A threshold with `abortOnFail: true`
And, for reference, we currently have these exit codes predefined: https://github.com/grafana/k6/blob/1b9c5fa55598363501a3db7c44cccddc8430af8f/errext/exitcodes/codes.go#L10-L21
The problem is that stopping the test in different ways and at different times behaves very inconsistently and unpredictably, even when you account for the unavoidable peculiarities of k6... :sob: Here is what I've determined:
- While k6 is initializing for the first time (to get the exported `options`):
  - :negative_squared_cross_mark: 1. a single Ctrl+C will not be graceful at all, it will behave like 2.; it will also cause the process to abort with an exit code of `1`, which is apparently the default Go interrupt signal handling behavior when it has not yet been handled by k6
  - :negative_squared_cross_mark: 2. will not even be reached (see above)
  - :heavy_check_mark: 3. `test.abort()` will surprisingly work as expected and cause k6 to exit with `ScriptAborted`
  - :heavy_check_mark: 4. script exceptions work as expected, k6 exits with `ScriptException`
  - :black_square_button: 5. doesn't apply, since the REST API has not yet been started at this point (and probably can't be started so early), so it can't be used to stop the test
  - :black_square_button: 6. also doesn't apply, the test hasn't been started
- While k6 is initializing one of the actual test VUs:
  - :negative_squared_cross_mark: 1. is now handled by the k6 signal handler, but again it won't be graceful and the process will exit with code `GenericEngine`, the catch-all exit code for when k6 doesn't "know" what caused the failure... :disappointed:
  - :negative_squared_cross_mark: 2. will probably never be reached, since 1. is not graceful, though that might be fixed by https://github.com/grafana/k6/pull/2800
  - :white_check_mark: 3. `test.abort()` kind of works, k6 exits with `ScriptAborted`, and after https://github.com/grafana/k6/pull/2800 it should be graceful :crossed_fingers:
  - :heavy_check_mark: 4. script exceptions work as expected, k6 exits with `ScriptException`
  - :negative_squared_cross_mark: 5. the REST API replies with 200 when you try to stop the test with it, and it sets the status to `stopped`, however the VU initialization won't actually stop, even after https://github.com/grafana/k6/pull/2800 :disappointed: After VU initialization finishes, `setup()` is executed (:facepalm:), then no iterations are executed, then `teardown()` is executed and, finally, k6 exits with a 0 exit code
  - :white_check_mark: 6. this kind of works, assuming the user has set correct thresholds that will fail if no iterations were executed... the thresholds will be evaluated, but if all of them were like `http_req_duration: ['p(99)<100']`, they will happily pass with no iterations :disappointed:
- During `setup()` execution:
  - :negative_squared_cross_mark: 1. same as VU init :arrow_up:, it is now handled by the k6 signal handler, but again it won't be graceful and the process will exit with code `GenericEngine`, the catch-all exit code for when k6 doesn't "know" what caused the failure... :disappointed:
  - :negative_squared_cross_mark: 2. same as VU init :arrow_up:, it will not be reached since 1. is not graceful
  - :heavy_check_mark: 3. `test.abort()` works as expected and exits with `ScriptAborted`
  - :heavy_check_mark: 4. script exceptions work as expected, k6 exits with `ScriptException` and `handleSummary()` is even executed (after https://github.com/grafana/k6/pull/2798)
  - :white_check_mark: 5. REST API stop seems to work almost as expected, though it exits with a `GenericEngine` code and needs more tests
  - :white_check_mark: 6. threshold evaluation appears to not run during `setup()`, which is probably the correct behavior, though it needs to be evaluated and tested :thinking:
- During normal test run execution:
  - :question: 1. Ctrl+C gracefully stops the execution, though k6 exits with a 0 code, which is probably not what we want? :thinking:
  - :heavy_check_mark: 2. a second Ctrl+C works as expected and exits with `ExternalAbort`
  - :heavy_check_mark: 3. `test.abort()` works as expected and exits with `ScriptAborted`
  - :white_check_mark: 4. quite intentionally, script exceptions interrupt only the current iteration, not the whole test; users need to set thresholds or `try`/`catch` and `test.abort()` if they want an exception to abort the whole test, though there are certainly some UX improvements we can make by default (e.g. https://github.com/grafana/k6/issues/877)
  - :grey_question: 5. the REST API stop works as expected, it gracefully stops the execution, though again k6 will exit with a 0 code; there is a better argument to be made here that this is the correct behavior, compared to Ctrl+C, though again I think it needs its own dedicated non-zero exit code, for consistency :thinking:
  - :heavy_check_mark: 6. threshold aborting works as expected and exits with code `ThresholdsHaveFailed`
- During `teardown()` execution:
  - :question: 1. a single Ctrl+C doesn't actually stop the `teardown()` execution, which, IIRC, was an intentional decision... and probably the correct one - if the user interrupted the execution mid-test (i.e. the section :arrow_up:) and `setup()` had already executed, we probably want to make sure we run `teardown()` too :thinking: however, the exit code probably shouldn't be `0`
  - :heavy_check_mark: 2. a second Ctrl+C works as expected and exits with `ExternalAbort`, so it's not a big deal that the first Ctrl+C waits for teardown to finish
  - :heavy_check_mark: 3. `test.abort()` works as expected and exits with `ScriptAborted`
  - :heavy_check_mark: 4. script exceptions work as expected, k6 exits with `ScriptException` and `handleSummary()` is even executed (after https://github.com/grafana/k6/pull/2798)
  - :question: 5. the REST API's stop doesn't stop the `teardown()` execution, which makes some sense for similar reasons to Ctrl+C not stopping it, however the exit code should probably also not be 0
  - :white_check_mark: 6. thresholds are running and appear (from logs) to fail, and while that doesn't abort the `teardown()` execution (arguably the correct behavior), the exit code will be `ThresholdsHaveFailed` in the end, so it seems fine to me
- During `handleSummary()` execution:
  - :negative_squared_cross_mark: 1. the first Ctrl+C has no effect at all
  - :heavy_check_mark: 2. a second Ctrl+C works as expected and exits with `ExternalAbort`
  - :negative_squared_cross_mark: 3. `test.abort()` aborts the test run and the error is logged, but the exit code is 0... k6 doesn't fall back to the default built-in end-of-test summary, which might be a good idea, but needs evaluation to be sure
  - :question: 4. same as `test.abort()`, a script exception aborts the function and is logged, and k6 even falls back and runs the default built-in end-of-test summary - all of that is completely fine, but the exit code should probably not be 0 :thinking:
  - :grey_question: 5. the REST API stop has absolutely no effect, which might arguably be the correct behavior :thinking:
  - :black_square_button: 6. doesn't apply, metrics and threshold processing have finished before `handleSummary()` is called
- After everything has been executed, but k6 is waiting because of the `--linger` option:
  - :heavy_check_mark: 1. Ctrl+C aborts the test, and because it is what `--linger` is waiting for, the exit code is (and should be) `0`
  - :heavy_check_mark: 2. a second Ctrl+C works as expected too - it shouldn't be required, but it will exit immediately with `ExternalAbort` if there is some sort of a bug in k6's test run finishing logic
  - :black_square_button: 3. `test.abort()` can't be used at this point, no JS code is running
  - :black_square_button: 4. same for JS exceptions, no JS code is running
  - :grey_question: 5. the REST API's stop also doesn't do anything, which is probably what we want :thinking: we want to have access to the REST API, to be able to query metrics, but we don't want the method that stops the test to also stop `--linger`... we should add a separate endpoint (e.g. as a part of https://github.com/grafana/k6/issues/995) if we want to be able to clear the lingering state
  - :black_square_button: 6. doesn't apply, threshold crunching has long been stopped at this point
This is connected to https://github.com/grafana/k6/issues/2790 and https://github.com/grafana/k6/issues/1889, but goes way beyond them... All of these behaviors should be standardized before test suites (https://github.com/grafana/k6/issues/1342) can be implemented, for example.
In general, it makes sense for all external and internal test aborts to make k6 exit with a non-zero exit code. We could maybe have different exit codes depending on the cause, so users can filter out expected ones. Maybe we can even make some of these configurable (https://github.com/grafana/k6/issues/870, https://github.com/grafana/k6/issues/680), but the default behavior should be a non-zero exit code when the test was prematurely stopped in any way.
The only way a k6 process should exit with a 0 exit code is if the test finished normally and no thresholds failed.
There is actually a 7th way to stop a test - timeouts :sweat_smile:
Specifically, both `setup()` and `teardown()` have timeout values (60s by default and configurable via the `setupTimeout` and `teardownTimeout` options, respectively). `handleSummary()` also has a fixed 2-minute timeout that isn't configurable at the moment.
These timeouts have their own exit and run_status codes and everything, so they are not just a normal script error or something like that.
It's also worth considering and testing that there is probably a difference between script exceptions (e.g. `throw new Error('foo');`) and script errors (e.g. wrong JS syntax, or things like using `await` in a non-async function). For example:

- :negative_squared_cross_mark: script errors cause k6 to exit with a -1 exit code when k6 is initializing for the first time, instead of 107 (`ScriptException`)
@imiric raised a very good suggestion in https://github.com/grafana/k6/pull/2810#discussion_r1052129443 - now that https://github.com/grafana/k6/pull/2810 will add an `errext.AbortReason` type, which will track the internal k6 test run error, we can stop manually assigning exit codes to errors deep in the codebase :tada: We should be able to have an `AbortReason` -> `ExitCode` mapping the same way that PR adds an `AbortReason` -> `cloudapi.RunStatus` mapping!
https://github.com/grafana/k6/pull/2885 was a reminder that `k6 run --paused` is kind of its own weird in-between state, somewhere after initializing the VUs and before `setup()` execution... :disappointed: So many cases... :sob:
As https://github.com/grafana/k6/pull/2885 and https://github.com/grafana/k6/pull/2893 have proven, `--linger` is a somewhat big complication in the `k6 run` logic...
In general, if a script error or `test.abort()` occurs during the VU init phase, `--linger` should not apply and k6 should exit immediately with the appropriate exit code for whatever aborted the init. But if `test.abort()` is called during the test run itself, or the test was stopped in some other way besides Ctrl+C (e.g. the REST API, thresholds with `abortOnFail`), `--linger` means that k6 should not exit immediately after stopping the test.
Another way a k6 test (the 8th? :scream:) can be stopped is when a specific output tells it to :weary: See https://github.com/grafana/k6/blob/b85d09d53d4373b48e08eb33ca0e011bcebdffdc/output/types.go#L69-L76
Connected to https://github.com/grafana/k6/issues/2804#issuecomment-1414094937, the exit code of a `k6 run --out cloud` test with `K6_CLOUD_STOP_ON_ERROR=true` that is stopped from the k6 cloud app is currently wrong - it's -1 (i.e. 255) when it should probably be 105 (`ExternalAbort`).
Would it be possible to pass a custom exit code to `test.abort()` and also include the message and the code in the summary?
We have tests that are not interested in metrics, but in specific functional failures that occur only under load. Typically we need to know what happened and provide some contextual data, e.g.:
```js
if (res.status === 400 && res.json('error_code') === 666) {
  const context = JSON.stringify({
    reqBody,
    resStatus: res.status,
    resBody: res.body,
  });
  execution.test.abort(`error_code must not be 666: ${context}`);
}
```
I would probably like to have something like:
```js
execution.test.abort({
  exitCode: 1666,
  message: `error_code must not be 666`,
  context: whatever, // I assume this has to be serializable
});

/*
The summary would then contain:

type SummaryData = {
  result: {
    outcome: 'passed' | 'aborted' | ...
    exitCode: number,
    context: unknown,
  }
  // ...other props ...
}

Btw, the summary data type is not provided by types/k6
*/
```
We also have some post-test scripts that depend on what went wrong, so it's quite useful to reflect this in the exit code, without the need to inspect the summary/output.
There are workarounds, like a counter + a `['count<1']` threshold + a check with a dynamic name (contextual data serialized in the check name), or looking up the specific check in the output (contextual data in tags), but it's quite a hassle for such a simple thing.
I noticed discussions about "angry checks" (checks interrupting iteration), maybe there should be a "furious check" (check aborting a test) :D
Please excuse me if this is not a good place to discuss this.