k6-operator
Custom Image functionality not working
We are trying to use a custom image for the runner pods because we found a bug in the latest k6 image. However, when we create a K6 resource with the required image, the Job is always created with the default image `loadimpact/k6:latest`, regardless of the value we set in `spec.runner.image`. Below is one of the K6 resources we created and the corresponding Job the operator created for it. I tried multiple images and the default one was picked every time. Is this a bug in the operator?
```yaml
apiVersion: k6.io/v1alpha1
kind: K6
metadata:
  annotations:
    meta.helm.sh/release-name: test
    meta.helm.sh/release-namespace: default
  creationTimestamp: "2022-07-27T13:00:55Z"
  generation: 1
  labels:
    app.kubernetes.io/managed-by: Helm
  name: test-38-local-tag-logs
  namespace: default
  resourceVersion: "4051"
  uid: 5cbe68bf-075a-4b27-8fac-86ec84619223
spec:
  arguments: --out statsd
  parallelism: 2
  runner:
    env:
      - name: K6_STATSD_ADDR
        value: test-statsd.default.svc:8125
    image: k6-local
    resources:
      limits:
        cpu: 600m
        memory: 1000Mi
```
```yaml
apiVersion: batch/v1
kind: Job
metadata:
  creationTimestamp: "2022-07-27T15:41:09Z"
  labels:
    app: k6
    controller-uid: 21c0c40b-a473-44a8-8d42-bac37b1d11a0
    job-name: test-38-local-1
    k6_cr: test-38-local
  name: test-38-local-1
  namespace: default
  ownerReferences:
    - apiVersion: k6.io/v1alpha1
      blockOwnerDeletion: true
      controller: true
      kind: K6
      name: test-38-local
      uid: 23b77a56-fe62-44a0-8845-4f8df7c662b0
  resourceVersion: "69411"
  uid: 21c0c40b-a473-44a8-8d42-bac37b1d11a0
spec:
  backoffLimit: 6
  completions: 1
  parallelism: 1
  selector:
    matchLabels:
      controller-uid: 21c0c40b-a473-44a8-8d42-bac37b1d11a0
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: k6
        controller-uid: 21c0c40b-a473-44a8-8d42-bac37b1d11a0
        job-name: test-38-local-1
        k6_cr: test-38-local
    spec:
      containers:
        - command:
            - k6
            - run
            - --quiet
            - --execution-segment=0:1/2
            - --execution-segment-sequence=0,1/2,1
            - --out
            - statsd
            - /test/test.js
            - --address=0.0.0.0:6565
            - --paused
          env:
            - name: K6_STATSD_ADDR
              value: test-statsd.default.svc:8125
          image: loadimpact/k6:latest
          imagePullPolicy: Always
          name: k6
          ports:
            - containerPort: 6565
              protocol: TCP
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          volumeMounts:
            - mountPath: /test
              name: k6-test-volume
      dnsPolicy: ClusterFirst
      hostname: test-38-local-1
      restartPolicy: Never
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 0
      volumes:
        - configMap:
            defaultMode: 420
            name: test.js
          name: k6-test-volume
status:
  active: 1
  startTime: "2022-07-27T15:41:09Z"
```
Note: I was not able to find any image-related logs in the operator manager pod.
@akshaychopra5207, this description is insufficient: the error cannot be reproduced. When I use `image: k6-local`, I get `ErrImagePull` on all pods, both with the 0.7 and the latest operator.

Additionally, your spec doesn't contain a `script` field:
```yaml
spec:
  arguments: --out statsd
  parallelism: 2
  runner:
    env:
      - name: K6_STATSD_ADDR
        value: test-statsd.default.svc:8125
    image: k6-local
    resources:
      limits:
        cpu: 600m
        memory: 1000Mi
```
The above spec would fail validation, so it's not clear to me how you got that Job with `test.js` in a ConfigMap from this spec.
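For comparison, a minimal spec that should pass validation needs a `script` field pointing at the ConfigMap holding the test file. This is only a sketch: the ConfigMap name and file below are guesses based on the volume mounted in the Job above, and the exact shape of `script` depends on the operator version in use.

```yaml
spec:
  parallelism: 2
  # Required by the K6 CRD; name/file here are assumptions inferred
  # from the `test.js` ConfigMap mounted in the Job.
  script:
    configMap:
      name: test.js
      file: test.js
  runner:
    image: k6-local
```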
IOW, it seems like the info in this issue is incomplete.
Also, FYI, I believe the community forum is more suitable for questions of this type.
This seems quite outdated; closing it.