Provide option to get result per iteration
Feature Description
Our team is using k6 for scale and reliability testing rather than for performance testing. The main reason we chose k6 is its built-in support for concurrency, multiple users, and multiple iterations (parallel execution).
If we used a functional testing framework like pytest, we would have to handle this concurrency and multi-user/multi-iteration scenario ourselves, so we decided to use k6. The problem is that k6 is meant for large-scale performance testing: the results/metrics are given in a consolidated manner across all iterations, while we need a result/metric for each iteration. I know k6 is intentionally designed for large-scale performance testing, but a per-iteration result would be really useful for simple scale and reliability tests.
The iteration result object should contain all the result parameters given by the --summary-export CLI option.
Suggested Solution (optional)
Currently, the handleSummary() callback function returns the end-of-test result. If a handleIterationSummary(data, scenarioData) {} callback were supported, this problem might be resolved.
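A purely hypothetical sketch of what the proposed callback might look like, modeled on the real handleSummary(data) callback; neither handleIterationSummary nor the shape of its scenarioData argument exists in k6 today:

```javascript
// HYPOTHETICAL: handleIterationSummary() is the proposed API and does
// not exist in k6. It mirrors the real end-of-test handleSummary(data)
// callback, but would run once per iteration instead of once per test.
export function handleIterationSummary(data, scenarioData) {
  // 'data' is assumed to have the same shape as the --summary-export JSON;
  // 'scenarioData.iterationInTest' is an assumed field, for illustration only
  return {
    [`iteration-${scenarioData.iterationInTest}.json`]: JSON.stringify(data, null, 2),
  };
}
```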
Already existing or connected issues / PRs (optional)
No response
Can you elaborate and give some more details about your use case? For example, what exactly do you mean by this:
Our team is using k6 for scale and reliability testing rather than for performance testing.
It means we test with multiple users and multiple iterations for a longer duration and check whether the application crashes when doing many iterations (a kind of soak testing). We are less concerned about performance metrics and more concerned about the reliability of our application. Often the application fails in one or two of the iterations, and we depend on console.log messages to find out in which iteration it failed.
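As an aside, the iteration number can be attached to those console.log messages with the k6/execution API, which exists today. A minimal sketch, assuming a placeholder URL and a simple status check:

```javascript
import http from 'k6/http';
import exec from 'k6/execution';

export default function () {
  const res = http.get('https://test.k6.io'); // placeholder URL
  if (res.status !== 200) {
    // exec.scenario.iterationInTest is the iteration's unique sequential
    // number across the whole test, so the failing iteration is identifiable
    console.log(`iteration ${exec.scenario.iterationInTest} failed with status ${res.status}`);
  }
}
```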
Hi @SamPosh, is the summary callback a requirement? You could already do this using an Output with the additional iter tag enabled, and then get filtered and/or aggregated metrics by the iteration's tag. Does that work for you?
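For reference, a minimal sketch of that approach: enable the non-default iter system tag so every metric sample is tagged with the iteration it was emitted in (the list below is k6's default system tags plus iter):

```javascript
export const options = {
  // k6's default system tags plus 'iter', which is off by default;
  // with it enabled, every metric sample carries its iteration number
  systemTags: [
    'proto', 'subproto', 'status', 'method', 'url', 'name', 'group',
    'check', 'error', 'error_code', 'tls_version', 'scenario',
    'service', 'expected_response', 'iter',
  ],
};
```

Running k6 run --out json=results.json script.js then emits samples whose tags.iter field can be filtered or aggregated per iteration.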
Pushing results to another platform and using them there is possible; currently we push them externally to ReportPortal. But I think that for small-scale tests, built-in per-iteration support in k6 would be much more helpful. [By small scale & reliability I mean that our tests may not have more than 10 users and might not cross 50 iterations. Even for this, normal functional tools carry a lot of overhead. So if k6 provided a built-in option for an iteration summary, it would be usable for small-scale tests as well. Currently k6 concentrates on large-scale tests, so it only gives aggregated output.]
Something like what you want will be quite difficult to implement in k6; it's architecturally almost impossible to capture the metrics every iteration emits separately. Every action that k6 measures (e.g. making an HTTP request) sends its metric samples on a shared channel in real time, as soon as the action has completed and been measured. Even if we wrote a custom output that could call a JS callback with metrics data, it would be difficult to batch metric samples by the iteration in which they were emitted.
@SamPosh, I am still not completely sure what your use case is, but won't it be covered if you have some wrappers around the k6/http API? You can check the properties of the Response object (status, for example) for every request you make and emit some custom metrics if things are not correct. Then you can have thresholds on the custom metrics (e.g. checking that the error count is 0 or something very low), ensuring that your application hasn't crashed or had some other issues during the test. What part of your use case is not covered with that approach?
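A minimal sketch of that approach, assuming a placeholder URL and a hypothetical custom metric name:

```javascript
import http from 'k6/http';
import { Counter } from 'k6/metrics';

// hypothetical custom metric counting iterations that hit an error
const iterationErrors = new Counter('iteration_errors');

export const options = {
  thresholds: {
    // fail the test if any iteration recorded an error
    iteration_errors: ['count==0'],
  },
};

export default function () {
  const res = http.get('https://test.k6.io'); // placeholder URL
  if (res.status !== 200) {
    iterationErrors.add(1);
  }
}
```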
In a roundabout way I can achieve what I need; as I mentioned earlier, it is achieved by pushing each iteration's result to ReportPortal. But built-in support would be cleaner. Due to k6's simplicity in handling multiple iterations and multiple users in parallel, it opens up opportunities for scale and reliability tests written at a smaller scale. So I am thinking of another option: there is already exec.scenario.iterationInTest, so couldn't we add exec.scenario.iterationTestSummary() to return the result? Only when iterationTestSummary() is called would it produce an object like the --summary-export JSON. Larger-scale tests that require aggregated output simply wouldn't call it.
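A purely hypothetical sketch of the proposed call site; exec.scenario.iterationInTest is a real property today, but iterationTestSummary() does not exist in k6:

```javascript
import exec from 'k6/execution';

export default function () {
  // ... iteration logic ...

  // HYPOTHETICAL: iterationTestSummary() is the proposed opt-in API.
  // Scripts that never call it would keep today's aggregated-only output.
  const summary = exec.scenario.iterationTestSummary();
  console.log(`iteration ${exec.scenario.iterationInTest}: ${JSON.stringify(summary)}`);
}
```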
Again, due to the metrics architecture of k6, something like this would be quite difficult to do. I may be missing something, but I think it would probably require significant refactoring, and that refactoring would conflict with other important things and potentially make them worse... If anyone has ideas on how something like this could be implemented nicely, please share :pray: