
Configure SMTP Relay Hostname

nlowe opened this issue 5 years ago · 8 comments

I'm trying to deploy a copy of asciinema-server in a Kubernetes cluster. I already have a functioning SMTP relay outside of Kubernetes that I want to use, but I can't seem to change the relay host. When I try to sign up, I get the following error (note the :nxdomain failure when resolving the default hostname smtp):

22:54:53.597 [error] Error in process #PID<0.2618.0> on node :"[email protected]" with exit value:
{%Bamboo.SMTPAdapter.SMTPError{message: "There was a problem sending the email through SMTP.\n\nThe error is :retries_exceeded\n\nMore detail below:\n\n{:network_failure, 'smtp', {:error, :nxdomain}}\n", raw: {:retries_exceeded, {:network_failure, 'smtp', {:error, :nxdomain}}}}, [{Bamboo.SMTPAdapter, :handle_response, 1, [file: 'lib/bamboo/adapters/smtp_adapter.ex', line: 92]}, {Exq.Worker.Server, :"-dispatch_work/3-fun-0-", 4, [file: 'lib/exq/worker/server.ex', line: 141]}]}

22:54:53.599 [info] {%Bamboo.SMTPAdapter.SMTPError{
   message: "There was a problem sending the email through SMTP.\n\nThe error is :retries_exceeded\n\nMore detail below:\n\n{:network_failure, 'smtp', {:error, :nxdomain}}\n",
   raw: {:retries_exceeded, {:network_failure, 'smtp', {:error, :nxdomain}}}
 },
 [
   {Bamboo.SMTPAdapter, :handle_response, 1,
    [file: 'lib/bamboo/adapters/smtp_adapter.ex', line: 92]},
   {Exq.Worker.Server, :"-dispatch_work/3-fun-0-", 4,
    [file: 'lib/exq/worker/server.ex', line: 141]}
 ]}

I've tried setting MAILNAME to the hostname of my relay, and I've tried mounting /opt/app/etc/custom.exs with the following contents:

use Mix.Config

config :asciinema, Asciinema.Mailer,
   deliver_later_strategy: Asciinema.BambooExqStrategy,
   adapter: Bamboo.SMTPAdapter,
   server: "relay.mydomain.net",
   port: 25

I've verified that relay.mydomain.net is resolvable from inside the container. Is there a setting I'm missing? I'm trying to avoid deploying a relay for the relay, and even if I had to, I don't like hard-coding the Kubernetes Service name to smtp, which appears to be required judging by the docker-compose file.
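
For reference, mounting that file from a ConfigMap would look something like the sketch below; the ConfigMap name and container name are illustrative, not part of any chart:

# Pod-spec fragment (illustrative): overlay a single file into the container
volumes:
  - name: custom-config
    configMap:
      name: asciinema-custom-config   # hypothetical ConfigMap holding custom.exs
containers:
  - name: asciinema
    # ...
    volumeMounts:
      - name: custom-config
        mountPath: /opt/app/etc/custom.exs
        subPath: custom.exs           # mount just the file, not a directory over /opt/app/etc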

nlowe commented Mar 19 '19

I was able to work around this by deploying namshi/smtp as a sidecar in the same pod as the server and adding a hostAlias to the pod, so that the hard-coded smtp hostname resolves to the sidecar:

hostAliases:
  - hostnames:
      - "smtp"
    ip: 127.0.0.1

It's still not ideal but it does work.
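
For readers following along, the sidecar itself is just an additional container in the pod spec. A minimal sketch, with the relay address and image tag assumed (the environment variable names are the ones documented by namshi/smtp):

# Illustrative sidecar container; listens on port 25 inside the pod
- name: smtp
  image: namshi/smtp:latest            # tag assumed
  env:
    - name: RELAY_NETWORKS             # networks allowed to relay through this container
      value: ":127.0.0.0/8"
    - name: SMARTHOST_ADDRESS          # the real relay outside the cluster (assumed)
      value: "relay.mydomain.net"
    - name: SMARTHOST_PORT
      value: "25"
  ports:
    - containerPort: 25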

nlowe commented Mar 20 '19

@nlowe - is it possible for you to share the YAML or a link to the Helm chart for the Kubernetes install of the asciinema server?

vikramkhatri commented Oct 11 '19

Here's the deployment template. Our values file includes some company-specific defaults that I'll have to clean up, but I'll try to get the full chart on GitHub later today:

templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "asciinema-server.fullname" . }}
  labels:
    app.kubernetes.io/name: {{ include "asciinema-server.name" . }}
    helm.sh/chart: {{ include "asciinema-server.chart" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: {{ include "asciinema-server.name" . }}
      app.kubernetes.io/instance: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ include "asciinema-server.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      restartPolicy: Always
      initContainers:
        - name: init-redis-ping
          image: redis:latest
          command: ["redis-cli"]
          args: ["-h", "{{ .Release.Name }}-redis-master", "-p", "6379", "ping"]
          envFrom:
            - configMapRef:
                name: {{ template "asciinema-server.fullname" . }}
        - name: init-postgres-setup
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          command: ["setup"]
          envFrom:
            - configMapRef:
                name: {{ template "asciinema-server.fullname" . }}
            - configMapRef:
                name: {{ template "asciinema-server.fullname" . }}-smtp
            - secretRef:
                name: {{ template "asciinema-server.fullname" . }}
      hostAliases:
        - hostnames:
            - "smtp"
          ip: 127.0.0.1
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          envFrom:
            - configMapRef:
                name: {{ template "asciinema-server.fullname" . }}
            - configMapRef:
                name: {{ template "asciinema-server.fullname" . }}-smtp
            - secretRef:
                name: {{ template "asciinema-server.fullname" . }}
          ports:
            - name: http
              containerPort: 4000
              protocol: TCP
          livenessProbe:
            httpGet:
              path: /
              port: http
          readinessProbe:
            httpGet:
              path: /
              port: http
          volumeMounts:
            - mountPath: /opt/app/uploads
              subPath: uploads
              name: app-data
            - mountPath: /opt/app/cache
              subPath: cache
              name: app-data
          resources:
{{ toYaml .Values.resources | indent 12 }}
        - name: smtp
          image: "{{ .Values.email.image.repository }}:{{ .Values.email.image.tag }}"
          imagePullPolicy: {{ .Values.email.image.pullPolicy }}
          envFrom:
            - configMapRef:
                name: {{ template "asciinema-server.fullname" . }}-smtp
          ports:
            - name: smtp
              containerPort: 25
              protocol: TCP
          livenessProbe:
            tcpSocket:
              port: 25
          readinessProbe:
            tcpSocket:
              port: 25
          resources:
{{ toYaml .Values.email.resources | indent 12 }}
      volumes:
        - name: app-data
          persistentVolumeClaim:
{{- if and .Values.persistence.enabled .Values.persistence.existingClaim }}
            claimName: {{ .Values.persistence.existingClaim | quote }}
{{- else }}
            claimName: {{ template "asciinema-server.fullname" . }}
{{- end }}
    {{- with .Values.nodeSelector }}
      nodeSelector:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.affinity }}
      affinity:
{{ toYaml . | indent 8 }}
    {{- end }}
    {{- with .Values.tolerations }}
      tolerations:
{{ toYaml . | indent 8 }}
    {{- end }}
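
For context, a minimal values.yaml consistent with the references in the template above might look like the following; every value is a placeholder to adapt:

values.yaml
image:
  repository: asciinema/asciinema-server   # placeholder image
  tag: latest
  pullPolicy: IfNotPresent
resources: {}

email:
  image:
    repository: namshi/smtp                # placeholder relay image
    tag: latest
    pullPolicy: IfNotPresent
  resources: {}

persistence:
  enabled: true
  existingClaim: ""                        # set to reuse an existing PVC

nodeSelector: {}
tolerations: []
affinity: {}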

nlowe commented Oct 11 '19

@nlowe - Thank you. I will go through the above. I will look for your chart.

vikramkhatri commented Oct 11 '19

I've published the chart here: https://github.com/nlowe/asciinema-server-helm

Note that the default values are likely not suitable for your environment and will need to be changed. The chart also makes a lot of assumptions (that your cluster has persistent storage set up, and that you want to use ClusterIP Services and Ingresses).

nlowe commented Oct 11 '19

@nlowe - Thank you. I will try this out. I really appreciate it.

vikramkhatri commented Oct 11 '19

@nlowe I installed your helm chart on self-hosted OpenShift and it worked flawlessly! The only problems I had were with anyuid, the usual securityContext issues on OpenShift.
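
For anyone hitting the same thing, the usual fixes are either granting the pod's service account the anyuid SCC (oc adm policy add-scc-to-user anyuid -z <serviceaccount>) or pinning an explicit securityContext in the pod spec. A sketch, where the UID/GID values are assumptions and must match what the image expects:

# Illustrative pod-level securityContext for clusters that enforce non-root UIDs
securityContext:
  runAsUser: 1000
  runAsGroup: 1000
  fsGroup: 1000      # lets the pod write to the mounted volume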

asciinema should consider making this available in their main repo and in the stable/ Helm charts repository.

gfvirga commented Mar 30 '20

I've created a wiki page about SMTP configuration, showing the various ways this can be set up: https://github.com/asciinema/asciinema-server/wiki/SMTP-configuration

ku1ik commented Jan 03 '21