VerneMQ pods can't access volumes
Hi there,
VerneMQ pods fail to start up with this error message:

```
08:39:35.207 [error] Error creating /vernemq/data/generated.configs: permission denied
Error generating config with cuttlefish
run vernemq config generate -l debug for more information.
```

The PVCs are created successfully, but it appears the pods don't have permission to access the volumes. Any ideas?
Manifest:

```yaml
apiVersion: vernemq.com/v1alpha1
kind: VerneMQ
metadata:
  labels:
    vernemq: k8s
  name: k8s
  namespace: messaging-dev
spec:
  baseImage: vernemq/vernemq
  vmqConfig: 'accept_eula=yes'
  config:
    configs:
      - name: allow_register_during_netsplit
        value: "on"
      - name: allow_publish_during_netsplit
        value: "on"
      - name: allow_subscribe_during_netsplit
        value: "on"
      - name: allow_unsubscribe_during_netsplit
        value: "on"
      - name: allow_anonymous
        value: "on"
    listeners:
      - address: 0.0.0.0
        port: 1883
        allowedProtocolVersions: '3,4,5'
      - address: 0.0.0.0
        port: 8080
        websocket: true
        allowedProtocolVersions: '3,4,5'
    plugins: []
  storage:
    volumeClaimTemplate:
      metadata:
        name: data
        annotations: {}
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi
        storageClassName: "gp2"
  serviceAccountName: vernemq-k8s
  size: 2
  version: 1.10.3
```
@KeenanLawrence The cuttlefish error might indicate that there is an error in the conf file. Did you add any configs?
@ioolkos Updated original issue, MD formatting was wrong.
Yes, it looks like it's related to the generated config. Apart from the above, and adding some extra env vars to the deployment spec, I haven't altered anything else.
Deployment manifest:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: vmq-operator
    app.kubernetes.io/version: latest
    tags.datadoghq.com/env: dev
    tags.datadoghq.com/service: vernemq-dev
    tags.datadoghq.com/version: 1.10.3
  name: vmq-operator
  namespace: messaging-dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/name: vmq-operator
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/name: vmq-operator
        app.kubernetes.io/version: latest
        tags.datadoghq.com/env: dev
        tags.datadoghq.com/service: vernemq
        tags.datadoghq.com/version: 1.10.3
    spec:
      containers:
        - env:
            - name: WATCH_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: OPERATOR_NAME
              value: vmq-operator
            - name: DOCKER_DD_AGENT_HOST
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: DOCKER_DD_ENV
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['tags.datadoghq.com/env']
            - name: DOCKER_DD_SERVICE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['tags.datadoghq.com/service']
            - name: DOCKER_DD_VERSION
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['tags.datadoghq.com/version']
            - name: DOCKER_DD_LOGS_INJECTION
              value: 'true'
          name: vmq-operator
          image: vernemq/vmq-operator:latest
      nodeSelector:
        beta.kubernetes.io/os: linux
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534
      serviceAccountName: vmq-operator
The configs are generated at every boot, so the error likely means that `/vernemq/data` isn't writable by the VerneMQ process, rather than a format issue in the config itself (I still don't know why it isn't writable, though).
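For what it's worth, permission-denied errors on dynamically provisioned volumes are often resolved by setting `fsGroup` in the pod-level `securityContext`, so Kubernetes chowns the mounted volume to a group the container's user belongs to. A minimal sketch, assuming the VerneMQ container runs as a non-root user and that the operator's CRD passes a pod `securityContext` through (both need to be verified for your setup; the UID/GID of 10000 is an assumption, check the actual user in the `vernemq/vernemq` image):

```yaml
# Sketch only: the 10000 UID/GID is a placeholder -- inspect the image
# to find the user VerneMQ actually runs as, and confirm the operator
# exposes this field in its VerneMQ spec.
spec:
  securityContext:
    runAsUser: 10000
    runAsGroup: 10000
    fsGroup: 10000   # volume is chowned to this GID at mount time
```

With `fsGroup` set, the kubelet recursively applies that group (and group-write permission) to the volume when it is mounted, which would make `/vernemq/data` writable for the config generation step.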