k6-operator
How to provide options in a JSON file in the K6 resource
We have followed the suggestion from the k6 documentation to store the test options in separate JSON files, for instance load.json, stress.json, soak.json, etc. At runtime we then pass the options file for the type of load test we are running on the command line. I do not see how to do that using the K6 resource. Do we just add the script.js and the load.json file to the ConfigMap, and then pass the load.json file as an argument in the runner section?
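For context, a minimal sketch of the command-line workflow being described (only the file names come from the question; the contents of load.json below are assumed for illustration):

load.json:
{
  "vus": 50,
  "duration": "10m"
}

k6 run --config load.json script.js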
Hi @xendren, do you mean that you're trying to use --config options.json as an argument? That JSON is not part of the script (ConfigMap) but a separate file, and the runner image just doesn't have it. I believe you can build a custom runner image which has your JSON file(s) copied into it, then pass the new image in the operator's YAML. Would that be acceptable?
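A minimal sketch of such a custom runner image, assuming grafana/k6 is a suitable base and that the options files sit next to the Dockerfile (the /options path is an assumption):

FROM grafana/k6:latest
# Bake the per-test-type options files into the image so --config can reference them at runtime
COPY load.json stress.json soak.json /options/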
Hey... yes. We run it on the command line now and pass the options for the type of load test using the --config parameter. This is critical to being able to execute different types of load tests without having to manage a test script per test type.
I wasn't aware of the option of rebuilding the runner image. That could work, but we would need to take over management of the runner image to keep it up to date with changes to our options files. So do you know for sure that it would work if we added all of the options files to the runner image and still used --config in the K6 resource to indicate which options file to use?
For anyone else looking for the answer, this did work. If we bake the options.json file into the runner image, we can provide the options at runtime. Hopefully, this will be an added feature at a later date.
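For reference, a sketch of what the K6 resource could look like in this setup (the image name, ConfigMap name, and the /options path are assumptions, not confirmed in this thread):

apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-load
spec:
  parallelism: 4
  arguments: --config /options/load.json
  script:
    configMap:
      name: k6-test
      file: script.js
  runner:
    image: registry.example.com/k6-runner-with-options:latest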
Hi @xendren, glad to know it worked for you! But what do you mean by "added feature" in this case? Could you please clarify?
@yorugac I have the same kind of trouble with scripts... I think we need something like that... We have this structure of tests:
configs/**.js
k6
resources/**.json
scenarios/**.js
services/**.js
test-script.js
Locally / in Docker we can run:
k6 run -c configs/config.js scenarios/test.js
But if I copy all these files into a PVC and mount it in the K6 CRD:
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-sample
spec:
  parallelism: 1
  script:
    volumeClaim:
      name: "test-pvc"
      file: "test-script.js"
in the file field I can only choose test-script.js; none of the files from the included directories work :( I mean, if I set
script:
  volumeClaim:
    name: "test-pvc"
    file: "scenarios/test.js"
it doesn't work for me. The modules imported inside it throw an exception like this:
time="2022-07-21T15:05:29Z" level=error msg="Module specifier \"services/test.js\" was tried to be loaded
as remote module by prepending \"https://\" to it, which didn't work.
If you are trying to import a nodejs module, this is not supported as k6 is _not_ nodejs based.
Please read https://k6.io/docs/using-k6/modules for more information.
Remote resolution error: \"Get \"https://services/test.js\": dial tcp:
lookup services on <IP>:53: no such host\"\n\tat go.k6.io/k6/js.(*InitContext).
Require-fm (native)\n\tat file:///test/test.js:1:0(25)\n" hint="script exception"
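For what it's worth, the "prepending https://" part of that error usually means the import used a bare specifier: in k6, local imports must start with ./ or ../, otherwise the specifier is treated as a remote module. A hypothetical example (the imported name is made up):

// works: relative import, resolved against the importing file
import { something } from './services/test.js';
// fails as in the error above: bare specifier, k6 tries to fetch https://services/test.js
// import { something } from 'services/test.js';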
I'm in the same scenario as @ksemele. My error looks different though, and there's a preceding /test/ path I don't know where it is coming from.
I have a PVC that is bound to a Pod used by my CI/CD to build and copy the modules to the volume. On that Pod I'm mounting it at the path /k6-modules. I've validated the files are there by exec'ing into the pod and checking the volume's content.
Output of ls on the pod:
/ # ls k6-modules/v1/
httpd_test.js script_test.js test.main.js test.main.js.map
/ #
Mount path for the pod used by CI/CD:
volumeMounts:
  - mountPath: "/k6-modules"
    name: k6-modules-directory
Error from my runner:
time="2022-08-06T00:16:19Z" level=error msg="The moduleSpecifier \"/test/v1/script_test.js\" couldn't be found on local disk. Make sure that you've specified the right path to the file. If you're running k6 using the Docker image make sure you have mounted the local directory (-v /local/path/:/inside/docker/path) containing your script and modules so that they're accessible by k6 from inside of the container, see https://k6.io/docs/using-k6/modules#using-local-modules-with-docker."
My K6 config:
# https://github.com/grafana/k6-operator/blob/main/README.md#parallelism
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-apache-stress
  namespace: chaos
spec:
  parallelism: 10
  arguments: --out statsd
  script:
    volumeClaim:
      name: k6-modules
      file: v1/script_test.js
  runner:
    env:
      - name: K6_STATSD_ENABLE_TAGS
        value: "true"
      - name: K6_STATSD_ADDR
        value: datadog.datadog:8125
What I expected the config to look like:
# https://github.com/grafana/k6-operator/blob/main/README.md#parallelism
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  name: k6-apache-stress
  namespace: chaos
spec:
  parallelism: 10
  arguments: --out statsd
  script:
    volumeMounts:
      name: k6-modules-directory
      file: v1/script_test.js
  volumes:
    - name: k6-modules-directory
      persistentVolumeClaim:
        claimName: k6-modules
  runner:
    env:
      - name: K6_STATSD_ENABLE_TAGS
        value: "true"
      - name: K6_STATSD_ADDR
        value: datadog.datadog:8125
Is there anything else I can provide to help debug?
@ksemele could you please re-check where your files are located once mounted? The k6-operator pods will be looking for them in the /test folder. Multi-file scripts mounted in the volume should work just like a one-file script, so long as the directory paths are correct. I'll be adding a note to the README about /test in volumes since I see now that it wasn't made obvious there. Reference issue: https://github.com/grafana/k6-operator/issues/143
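For anyone landing here, a sketch of a layout that lines up with the /test convention described above (the file names come from this thread; the exact layout is an assumption):

# Contents of the PVC, as the runner sees them once the claim is mounted at /test:
#   /test/scenarios/test.js
#   /test/services/test.js
script:
  volumeClaim:
    name: "test-pvc"
    file: "scenarios/test.js"   # resolved relative to /test

Inside scenarios/test.js, imports stay relative to the importing file, for example import { something } from '../services/test.js'; (the imported name is made up).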