Add Helm chart for Postal
This PR adds a Kubernetes Helm chart for Postal.
Initialize the database and create an admin user:
kubectl exec -it POSTAL-PODNAME-WEB -- postal initialize
kubectl exec -it POSTAL-PODNAME-WEB -- postal make-user
Tested with:
- postal 3.3.4
- kubernetes v1.30
- helm 3.15.4
The web interface is exposed via an Ingress, and SMTP is exposed on port 25 via a LoadBalancer-type Service.
MySQL installation is not part of the chart, so the database should be installed before this chart.
I will not be able to spend much time polishing this chart, but I still think it might be helpful for the community.
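For reference, a hypothetical values.yaml fragment for pointing the chart at an externally managed MySQL/MariaDB; the key names below are illustrative and may not match the chart's actual values schema:

postal:
  main_db:
    host: mysql.postal.svc.cluster.local   # externally installed MySQL/MariaDB service
    username: postal
    password: ""                           # better supplied via an existing secret
    database: postal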
Looks good, might be worth adding a HorizontalPodAutoscaler for the web/smtp or maybe a config option for the number of replicas of them?
I added replica counts to the values.
Can the worker use more than one replica? Can you specify which components of Postal must run with a single replica and which can scale out? BTW, looks good!
JFYI, stuff like:
kubectl exec -it POSTAL-PODNAME-WEB -- postal initialize
kubectl exec -it POSTAL-PODNAME-WEB -- postal make-user
doesn't have to be a manual dependency.
Both init and make-user can be wrapped in a check for whether we have already initialized and whether a default admin user exists. Honestly, this is a question of the entrypoint.
There are many ways to handle it, e.g. read env vars containing the initial user credentials, set either in the Helm values or by providing an existing secret name and key names.
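A minimal sketch of the existing-secret variant, assuming hypothetical initialAdmin values keys and an entrypoint wrapper (or init Job) that reads these env vars and only runs postal initialize / postal make-user when needed:

# values.yaml (hypothetical keys, not part of the current chart)
initialAdmin:
  existingSecret: postal-initial-admin
  emailKey: email
  passwordKey: password

# deployment/job template fragment consuming the secret
env:
  - name: INITIAL_ADMIN_EMAIL          # env var names are illustrative
    valueFrom:
      secretKeyRef:
        name: {{ .Values.initialAdmin.existingSecret }}
        key: {{ .Values.initialAdmin.emailKey }}
  - name: INITIAL_ADMIN_PASSWORD
    valueFrom:
      secretKeyRef:
        name: {{ .Values.initialAdmin.existingSecret }}
        key: {{ .Values.initialAdmin.passwordKey }}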
Please add something like this to each deployment:
      {{- with .Values.web.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      affinity:
        {{- with .Values.web.affinity }}
        {{- toYaml . | nindent 8 }}
        {{- end }}
        {{- if not .Values.web.affinity.podAntiAffinity }}
        {{- if eq .Values.web.podAntiAffinityPreset "hard" }}
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  {{- include "postal.selectorLabels" . | nindent 18 }}
                  app.kubernetes.io/component: web
              topologyKey: kubernetes.io/hostname
        {{- else if eq .Values.web.podAntiAffinityPreset "soft" }}
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
            - weight: 1
              podAffinityTerm:
                labelSelector:
                  matchLabels:
                    {{- include "postal.selectorLabels" . | nindent 20 }}
                    app.kubernetes.io/component: web
                topologyKey: kubernetes.io/hostname
        {{- end }}
        {{- end }}
      {{- with .Values.web.topologySpreadConstraints }}
      topologySpreadConstraints:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.web.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
P.S.:
- A default of podAntiAffinityPreset: "soft" is a good thing: it asks Kubernetes to place pods of the same Deployment on different nodes if possible, and if not, to put them wherever it can. Out of the box this means you don't have to worry about the service going down when one node goes on a trip :)
- app.kubernetes.io/component is a more correct label name than app.
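A matching values.yaml sketch for the web component, with the key names taken from the template above (the defaults are only suggestions):

web:
  podAntiAffinityPreset: "soft"     # "", "soft" or "hard"
  nodeSelector: {}
  affinity: {}
  tolerations: []
  topologySpreadConstraints: []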
Looks good, might be worth adding a HorizontalPodAutoscaler for the web/smtp or maybe a config option for the number of replicas of them?
{{- if .Values.web.autoscaling.enabled }}
apiVersion: {{ include "postal.hpa.apiVersion" . }}
kind: HorizontalPodAutoscaler
metadata:
  name: {{ include "postal.fullname" . }}-web
  namespace: {{ .Release.Namespace }}
  labels:
    {{- include "postal.labels" . | nindent 4 }}
    app.kubernetes.io/component: {{ include "postal.fullname" . }}-web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: {{ include "postal.fullname" . }}-web
  minReplicas: {{ .Values.web.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.web.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.web.autoscaling.targetCPU }}
    - type: Resource
      resource:
        name: cpu
        {{- if eq (include "postal.hpa.apiVersion" .) "autoscaling/v2beta1" }}
        targetAverageUtilization: {{ .Values.web.autoscaling.targetCPU }}
        {{- else }}
        target:
          type: Utilization
          averageUtilization: {{ .Values.web.autoscaling.targetCPU }}
        {{- end }}
    {{- end }}
    {{- if .Values.web.autoscaling.targetMemory }}
    - type: Resource
      resource:
        name: memory
        {{- if eq (include "postal.hpa.apiVersion" .) "autoscaling/v2beta1" }}
        targetAverageUtilization: {{ .Values.web.autoscaling.targetMemory }}
        {{- else }}
        target:
          type: Utilization
          averageUtilization: {{ .Values.web.autoscaling.targetMemory }}
        {{- end }}
    {{- end }}
  {{- if .Values.web.autoscaling.behavior }}
  behavior: {{ toYaml .Values.web.autoscaling.behavior | nindent 4 }}
  {{- end }}
{{- end }}
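The template above assumes a postal.hpa.apiVersion helper and a web.autoscaling values block that are not shown here; a possible sketch of both (helper logic and defaults are illustrative):

{{/* _helpers.tpl */}}
{{- define "postal.hpa.apiVersion" -}}
{{- if .Capabilities.APIVersions.Has "autoscaling/v2" -}}
autoscaling/v2
{{- else -}}
autoscaling/v2beta1
{{- end -}}
{{- end -}}

# values.yaml
web:
  autoscaling:
    enabled: false
    minReplicas: 2
    maxReplicas: 5
    targetCPU: 80
    targetMemory: ""
    behavior: {}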
@dragoangel thank you for improving this. Can you please submit these changes as a PR to my repo, https://github.com/dmitryzykov/postal-install, from which this PR was created? I will merge it there so it gets updated here.
Any other improvements to the Helm chart are also welcome.
Hi @dmitryzykov, unfortunately I don't have much time for that; I'm not using Postal at the moment. But you can collect my feedback and improve your PR yourself. I copied the samples I had from other places and adjusted them a bit; you can "polish" them a little and it will be cool.
@dragoangel can you just attach your Helm chart folder as a zip archive? I will add it to the PR.
No, you got me wrong, I don't have Postal at all :) I'm just reviewing your PR.
@dragoangel got it now :) I'm not actively working with Postal either. I might add these changes later when I work with the Postal chart again.
Anyway, if someone finds this helpful and has tested it, any changes in PR form are welcome.
Also, we need livenessProbe and readinessProbe. @willpower232 does Postal have a /healthy page that returns 200 OK when it's okay and 5xx when it's not? Then we could add stuff like:
livenessProbe:
  httpGet:
    path: /healthy
    port: web
  periodSeconds: 3
  timeoutSeconds: 2
  failureThreshold: 10
readinessProbe:
  httpGet:
    path: /healthy
    port: web
  periodSeconds: 3
  timeoutSeconds: 2
  successThreshold: 2
  failureThreshold: 2
And for SMTP it can be the same, but with tcpSocket instead of httpGet:
tcpSocket:
  port: smtp
For the worker I don't know; we could exec a command in the container and check whether the process is running, or something like that.
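For example, a hedged sketch of an exec-based liveness check for the worker, assuming pgrep is available in the image and that the process matches the pattern below (both are assumptions to verify against the actual container):

livenessProbe:
  exec:
    command:
      - /bin/sh
      - -c
      - pgrep -f 'postal worker' > /dev/null   # assumed process pattern
  periodSeconds: 30
  timeoutSeconds: 5
  failureThreshold: 3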
@dmitryzykov how is the Postal runner handled? I see smtp, web, and worker, but not the runner. Is it not needed? (It was removed, so this is solved.) I still wonder why this has not been merged; by now it could have been improved by community members. @adamcooke can you consider approving this PR and merging it?
Hello all! What is the status here?
@willpower232 any ideas on how we can handle the health-check probes? I am really willing to give some time to improving this Helm chart. First it would be nice to approve it and allow us to improve it.
It looks like there is a /health endpoint on port 9090 of the web container and 9091 of the smtp container
https://github.com/postalserver/postal/commit/a2eb70e
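If that health server is enabled in the Postal config, the earlier probe sketch could target it instead of the web port; the ports come from the comment above, the other numbers are illustrative:

# web container
livenessProbe:
  httpGet:
    path: /health
    port: 9090
  periodSeconds: 10
  timeoutSeconds: 2
  failureThreshold: 5
readinessProbe:
  httpGet:
    path: /health
    port: 9090
  periodSeconds: 10
  timeoutSeconds: 2
  failureThreshold: 3
# smtp container: same, but with port 9091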
I would love to be able to merge but I don't have that permission.
@adamcooke any plans to merge?
I also made a Postal chart, but adapted for kubee: https://github.com/EraldyHq/kubee/blob/main/charts/postal/README.md
I took a look at the PR, but there was too much to do (I don't remember the details, but the diff in the values file should make it explicit).
Technically, Postal uses the same binary (i.e. the same manifest) but with a different command per service.
The env is in a YAML file and is the same for all 3 services.
I tried to replicate that.
The whole Postal configuration is in the values file except the secret. The per-service manifests are generated from a template.
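To illustrate the same-binary/different-command point, a sketch of a shared per-service container spec; the image reference, command names, and config mount path are assumptions based on Postal's Docker setup, not taken from either chart:

containers:
  - name: {{ .component }}                      # web, smtp or worker
    image: ghcr.io/postalserver/postal:3.3.4    # image reference assumed
    command: ["postal", "{{ .command }}"]       # e.g. web-server, smtp-server, worker
    volumeMounts:
      - name: config
        mountPath: /config                      # shared postal.yml for all services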
For installation or upgrade: Postal does not have idempotent CLI commands, so only the initial installation is handled.
Before accepting any PR, I vote that the whole configuration should live in the values file.
My 2 cents.