VerneMQ Helm chart with Deployment kind: node '[email protected]' not responding to pings
Hello Team,
I am using the VerneMQ Helm chart for a Kubernetes deployment.
My requirement is to avoid running the chart as a StatefulSet because of an existing issue: since the pod name (and therefore the VerneMQ node name) stays the same, the SWC plugin somehow breaks when a node comes back with its old node name after a crash.
So I have made some simple changes to the chart's statefulset.yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "vernemq.fullname" . }}
labels:
app.kubernetes.io/name: {{ include "vernemq.name" . }}
helm.sh/chart: {{ include "vernemq.chart" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- if .Values.statefulset.labels }}
{{ toYaml .Values.statefulset.labels | nindent 4 }}
{{- end }}
{{- with .Values.statefulset.annotations }}
annotations:
{{ toYaml . | nindent 4 }}
{{- end }}
spec:
# serviceName: {{ include "vernemq.fullname" . }}-headless
replicas: {{ .Values.replicaCount }}
# podManagementPolicy: {{ .Values.statefulset.podManagementPolicy }}
strategy:
type: {{ .Values.statefulset.updateStrategy }}
selector:
matchLabels:
app.kubernetes.io/name: {{ include "vernemq.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
template:
metadata:
labels:
app.kubernetes.io/name: {{ include "vernemq.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- with .Values.statefulset.podAnnotations }}
annotations:
{{ toYaml . | nindent 8 }}
{{- end }}
spec:
serviceAccountName: {{ include "vernemq.serviceAccountName" . }}
terminationGracePeriodSeconds: {{ .Values.statefulset.terminationGracePeriodSeconds }}
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- containerPort: 1883
name: mqtt
- containerPort: 1884
name: mqttproxy
- containerPort: 8883
name: mqtts
- containerPort: 4369
name: epmd
- containerPort: 44053
name: vmq
- containerPort: 8080
name: ws
- containerPort: 8888
name: prometheus
{{- range tuple 9100 9101 9102 9103 9104 9105 9106 9107 9108 9109 }}
- containerPort: {{ . }}
{{- end }}
env:
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
# - name: DOCKER_VERNEMQ_KUBERNETES_NAMESPACE
# valueFrom:
# fieldRef:
# fieldPath: metadata.namespace
- name: DOCKER_VERNEMQ_DISCOVERY_KUBERNETES
value: "1"
- name: DOCKER_VERNEMQ_KUBERNETES_LABEL_SELECTOR
value: "app.kubernetes.io/name={{ include "vernemq.name" . }},app.kubernetes.io/instance={{ .Release.Name }}"
{{- /* Add this localhost listener in order to get the port forwarding working */}}
- name: DOCKER_VERNEMQ_LISTENER__TCP__LOCALHOST
value: "127.0.0.1:1883"
{{- if .Values.service.mqtts.enabled }}
- name: DOCKER_VERNEMQ_LISTENER__SSL__DEFAULT
value: "$(MY_POD_IP):{{ .Values.service.mqtts.port }}"
{{- end }}
{{- if .Values.additionalEnv }}
{{ toYaml .Values.additionalEnv | nindent 12 }}
{{- end }}
resources:
{{ toYaml .Values.resources | nindent 12 }}
livenessProbe:
httpGet:
path: /health
port: prometheus
scheme: HTTP
initialDelaySeconds: {{ .Values.statefulset.livenessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.statefulset.livenessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.statefulset.livenessProbe.timeoutSeconds }}
successThreshold: {{ .Values.statefulset.livenessProbe.successThreshold }}
failureThreshold: {{ .Values.statefulset.livenessProbe.failureThreshold }}
readinessProbe:
httpGet:
path: /health
port: prometheus
scheme: HTTP
initialDelaySeconds: {{ .Values.statefulset.readinessProbe.initialDelaySeconds }}
periodSeconds: {{ .Values.statefulset.readinessProbe.periodSeconds }}
timeoutSeconds: {{ .Values.statefulset.readinessProbe.timeoutSeconds }}
successThreshold: {{ .Values.statefulset.readinessProbe.successThreshold }}
failureThreshold: {{ .Values.statefulset.readinessProbe.failureThreshold }}
volumeMounts:
- name: logs
mountPath: /vernemq/log
- name: data
mountPath: /vernemq/data
{{- range .Values.secretMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
readOnly: true
{{- end }}
{{- range .Values.configMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
# subPath: {{ .subPath }}
# readOnly: true
{{- end }}
{{- range .Values.hostMounts }}
- name: {{ .name }}
mountPath: {{ .path }}
# subPath: {{ .subPath }}
readOnly: true
{{- end }}
{{- with .Values.statefulset.lifecycle }}
lifecycle:
{{- toYaml . | nindent 12 }}
{{- end }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{ toYaml . | nindent 8 }}
{{- end }}
{{- if eq .Values.podAntiAffinity "hard" }}
affinity:
podAntiAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
- topologyKey: "kubernetes.io/hostname"
labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- {{ include "vernemq.name" . }}
- key: "release"
operator: In
values:
- {{ .Release.Name }}
{{- else if eq .Values.podAntiAffinity "soft" }}
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
topologyKey: "kubernetes.io/hostname"
labelSelector:
matchExpressions:
- key: "app"
operator: In
values:
- {{ include "vernemq.name" . }}
- key: "release"
operator: In
values:
- {{ .Release.Name }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{ toYaml . | nindent 8 }}
{{- end }}
securityContext:
{{ toYaml .Values.securityContext | nindent 8 }}
volumes:
- name: logs
emptyDir: {}
{{- range .Values.secretMounts }}
- name: {{ .name }}
secret:
secretName: {{ .secretName }}
{{- end }}
{{- range .Values.configMounts }}
- name: {{ .name }}
configMap:
name: {{ .configMapName }}
{{- end }}
{{- range .Values.hostMounts }}
- name: {{ .name }}
hostPath:
# name: {{ .configMapName }}
path: {{ .path }}
type: {{ .type }}
{{- end }}
{{- if .Values.persistentVolume.enabled }}
volumeClaimTemplates:
- metadata:
name: data
annotations:
{{- range $key, $value := .Values.persistentVolume.annotations }}
{{ $key }}: {{ $value }}
{{- end }}
spec:
accessModes:
{{- range .Values.persistentVolume.accessModes }}
- {{ . | quote }}
{{- end }}
resources:
requests:
storage: {{ .Values.persistentVolume.size }}
{{- if .Values.persistentVolume.storageClass }}
{{- if (eq "-" .Values.persistentVolume.storageClass) }}
storageClassName: ""
{{- else }}
storageClassName: "{{ .Values.persistentVolume.storageClass }}"
{{- end }}
{{- end }}
{{- else }}
- name: data
  emptyDir: {}
{{- end }}
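For clarity, the effective changes relative to the upstream statefulset.yaml boil down to roughly the following (summary only, assuming the upstream layout; the rest of the template is unchanged):

```diff
-kind: StatefulSet
+kind: Deployment
 ...
-  serviceName: {{ include "vernemq.fullname" . }}-headless
+  # serviceName: {{ include "vernemq.fullname" . }}-headless
   replicas: {{ .Values.replicaCount }}
-  podManagementPolicy: {{ .Values.statefulset.podManagementPolicy }}
-  updateStrategy:
+  # podManagementPolicy: {{ .Values.statefulset.podManagementPolicy }}
+  strategy:
     type: {{ .Values.statefulset.updateStrategy }}
```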
I have highlighted the changes above and turned the chart into a Deployment kind, but when I run the vmq-admin cluster show command it gives this error:
node '[email protected]' not responding to pings.
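For reference, this is how I invoke the command (pod name abbreviated; the namespace ray-prod is taken from the DNS name below):

```sh
kubectl -n ray-prod exec -it <vernemq-pod> -- vmq-admin cluster show
```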
I have observed that when I run with the StatefulSet kind it works fine, and the node name / service URL looks like this:
'VerneMQ@ray-mqtt-prod-0.ray-mqtt-prod-headless.ray-prod.svc.cluster.local'
With the Deployment kind, however, it is
node '[email protected]'
i.e. the headless service is missing from the URL/DNS name of the Deployment pod.
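One way to see the difference (purely illustrative; the exact records depend on the cluster's DNS setup) is to resolve the two names from another pod in the cluster:

```sh
# StatefulSet pod: the headless service gives each pod a stable, resolvable per-pod record
nslookup ray-mqtt-prod-0.ray-mqtt-prod-headless.ray-prod.svc.cluster.local

# Deployment pods get no such stable per-pod hostname, so an Erlang node name derived
# from the pod alone has nothing stable to resolve to after a restart or rescheduling.
```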
Is it possible to run the VerneMQ Helm chart with the Deployment kind if I don't require any persistent state?
How can I run the same Helm chart with the Deployment kind?
What could be the reason that the vmq-admin command fails?
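In case it clarifies what I am after: something like the override below is the kind of thing I was hoping might work. Whether DOCKER_VERNEMQ_NODENAME is honoured together with the Kubernetes discovery is only my assumption, and I have not verified it.

```yaml
# Hypothetical sketch only: pin the Erlang node name to the pod IP so it does not depend
# on a per-pod headless-service DNS record. IP-based node names still change whenever a
# pod is rescheduled, so this does not give truly stable cluster membership.
- name: DOCKER_VERNEMQ_NODENAME
  value: "$(MY_POD_IP)"
```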
@ioolkos Can you help me with this?
@nrvmodi I can't really help in depth, sorry. Stable node names are certainly one of the reasons to use a StatefulSet (it's not only about persistent volumes). In a Deployment, every Pod restart would produce a different random node name, which is not very useful for a Verne cluster.
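Roughly speaking (illustrative only, not an exact transcript of what the start script writes into vm.args):

```
# StatefulSet pod: node name built from the stable pod name + headless service, identical after restarts
-name VerneMQ@ray-mqtt-prod-0.ray-mqtt-prod-headless.ray-prod.svc.cluster.local

# Deployment pod: the pod name carries a random suffix (e.g. ray-mqtt-prod-7f9c6d5b4-x2kqp),
# so any node name derived from it is different after every restart.
```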
Yes, I know about stable node names, but we have another issue (https://github.com/vernemq/vernemq/issues/1363) with the Kubernetes StatefulSet, so I was trying to deploy it with the Deployment kind.
If I try the vernemq-operator instead, will the https://github.com/vernemq/vernemq/issues/1363 issue still persist?