openproject-deploy
Kubernetes Deploy not compatible with many Cloud Providers (DOKS)
I attempted to run OpenProject on DigitalOcean Kubernetes yesterday, and was already a bit concerned about the opdata PVC you create and mount to multiple containers.
At first glance it might seem like this would work, and some cloud providers might support it, but from what I could gather GKE and DOKS do not. While containers scheduled on the same node can all mount the volume, a scaled cluster with multiple nodes, or even node pools, will attempt to attach the volume to different droplets, which isn't allowed.
I might be going back to a single-container deploy for now, but that doesn't seem to be recommended for production, and the environment configuration is limited with it. Scaling would also work a lot better with the multi-container solution, so either OpenProject should support S3 for its data, or the one-container solution needs to be adopted again, IMHO.
Hello @Dan6erbond, OpenProject does support S3 attachments so you can configure that for a multi-container setup and remove the opdata PVC altogether.
Will update the README accordingly.
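For reference, a minimal sketch of what that could look like as a Kubernetes Secret. The bucket name, region, and credentials below are placeholders, and it's worth checking the current attachments documentation for the exact variable names:
apiVersion: v1
kind: Secret
metadata:
  name: openproject-s3-config
stringData:
  # Switch attachments from the local opdata volume to fog/S3.
  OPENPROJECT_ATTACHMENTS__STORAGE: "fog"
  OPENPROJECT_FOG_CREDENTIALS_PROVIDER: "AWS"
  OPENPROJECT_FOG_CREDENTIALS_REGION: "eu-central-1"            # placeholder
  OPENPROJECT_FOG_CREDENTIALS_AWS__ACCESS__KEY__ID: "CHANGEME"  # placeholder
  OPENPROJECT_FOG_CREDENTIALS_AWS__SECRET__ACCESS__KEY: "CHANGEME"
  OPENPROJECT_FOG_DIRECTORY: "my-openproject-attachments"       # placeholder bucket name
You would reference it from the web and worker containers via envFrom, the same way the env config secret is referenced further down in this thread.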
@machisuji thank you for the response. I saw that and recently tried to deploy OpenProject with S3 and the multi-container setup, but ran into issues with the proxy: both Apache and Nginx give me a 504 response and I'm not sure why. I followed the setup pretty much 1:1, but am using Terraform for the configuration.
Hi, I'm also getting a 504, but using Traefik for ingress. All my pods are running in their own namespaces.
There are many issues with these manifests.
- There is no reason for the apache pod. The ingress is already doing reverse proxying.
- The PVC should be ReadWriteMany (see the sketch after this list).
- The containers shouldn't run as root, for a variety of reasons (incompatible with most shared storage solutions, and bad practice security-wise).
- The cron container should be run as a CronJob in k8s (also sketched below).
- The seeder should be run as an init container or a k8s Job.
- Stuff in the entrypoint should probably also be done in the same place the seeder is run.
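A minimal sketch of those two points, assuming a storage class that actually supports ReadWriteMany (e.g. an NFS provisioner) and assuming the compose cron entrypoint can be reused as a one-shot command; that last bit and the schedule are assumptions worth verifying:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: opdata
spec:
  accessModes:
    - ReadWriteMany              # needs a storage class that supports RWX (NFS, CephFS, ...)
  storageClassName: nfs-client   # assumption: adjust to your cluster
  resources:
    requests:
      storage: 10Gi
---
apiVersion: batch/v1
kind: CronJob
metadata:
  name: cron
spec:
  schedule: "*/15 * * * *"       # assumption: pick an interval that suits you
  concurrencyPolicy: Forbid      # don't stack runs if one is still going
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: cron
              image: openproject/community:12.3.0
              args:
                - ./docker/prod/cron   # same script the compose setup runs; verify it exits on its own
              envFrom:
                - secretRef:
                    name: openproject-env-config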
A helm chart or jsonnet library to construct the config would probably make it easier for new users. If the above issues are fixed, you could probably add some HPAs for the web and worker pods to make it autoscale.
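For example, a sketch of an HPA for the web deployment, assuming metrics-server is installed in the cluster; the utilization target and replica bounds are placeholders:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 1                 # placeholder bounds
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70 # placeholder target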
The 504 is probably because the container tries to talk https on an http connection (tries to upgrade) by default. Explicitly disabling https and hsts fixed it for me.
@mackaybe I may have this error about https. Where should I disable https? In the service?
For testing you need to set the env variables to disable https and hsts with the container they provide. If you run it behind a properly configured reverse proxy which terminates SSL, it works fine though. A k8s example deployment is below, but if you're using an https ingress it's not needed. This example only shows the init container (I do the seeding there), but it is the same for the web and worker.
apiVersion: v1
kind: Secret
metadata:
  name: openproject-env-config
stringData:
  OPENPROJECT_HTTPS: "false"
  OPENPROJECT_HSTS: "false"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    openproject.service: web
  name: web
spec:
  replicas: 1
  selector:
    matchLabels:
      openproject.service: seeder-web
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        openproject.service: seeder-web
    spec:
      initContainers:
        - name: seeder
          args:
            - ./docker/prod/seeder
          image: openproject/community:12.3.0
          imagePullPolicy: IfNotPresent
          envFrom:
            - secretRef:
                name: openproject-env-config
Thank you for your help :)
I can now access OpenProject with port forwarding on the proxy pod :)
Checking the ingress now.
@mackaybe I'm still in trouble with the ingress.
To avoid any DNS problem, I tried a wget from the ingress pod.
For one of my custom projects,
wget http://CUSTOMPROJECT-SERVICE-IP:80/
or
wget http://CUSTOMPROJECT-ENDPOINT-IP:80/
works well.
But
wget http://OPENPROJECT-SERVICE-IP:80/
or
wget http://OPENPROJECT-ENDPOINT-IP:80/
hangs at "Connecting to IP".
I've tried adding this to the proxy deployment, with no change:
env:
  - name: OPENPROJECT_HTTPS
    value: "false"
  - name: OPENPROJECT_HSTS
    value: "false"
The proxy pod is totally unnecessary if you're using an ingress. This example works for me without disabling https or hsts, using ingress-nginx (it also has annotations to use oauth2-proxy for authentication, and exposes the API without requiring OIDC; in my opinion their OIDC implementation, even in the "enterprise" version, is totally unusable in a modern configuration):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    nginx.ingress.kubernetes.io/auth-url: https://oauth2-proxy.example.com/oauth2/auth
    nginx.ingress.kubernetes.io/auth-signin: https://oauth2-proxy.example.com/oauth2/start?rd=https://$host
    nginx.ingress.kubernetes.io/auth-response-headers: X-Auth-Request-User,X-Auth-Request-Groups,X-Auth-Request-Email,X-Auth-Request-Preferred-Username
spec:
  tls:
    - hosts:
        - op.example.com
      secretName: wildcard
  rules:
    - host: op.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 8080
          - path: /api/docs
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: api-ingress
spec:
  tls:
    - hosts:
        - op.example.com
      secretName: wildcard
  rules:
    - host: op.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web
  labels:
    openproject.service: web
spec:
  type: ClusterIP
  selector:
    openproject.service: seeder-web
  ports:
    - name: web
      protocol: TCP
      port: 8080
      targetPort: 8080
I have not used their proxy pod, but perhaps it is configured to use https.
Thank you for your help.
I still can't access OpenProject through the ingress! curl http://SERVICE_IP:8080 does not work from other pods, but works from the OpenProject pods (cron, for example).
I can't see any log on the web pod when curl does not work.
Note that in the service definition, I changed this:
selector:
  openproject.service: seeder-web
to
selector:
  openproject.service: web
(there is no pod with the label seeder-web).
If I try an https curl from the cron pod, I immediately get this error:
curl: (35) error:1408F10B:SSL routines:ssl3_get_record:wrong version number
an error that I don't get from the ingress pod, for example.
My guess is that the service is misconfigured. My config:
apiVersion: v1
kind: Service
metadata:
  name: web
  namespace: openproject
  uid: d49fee0e-5a74-43cc-989b-f9edb597dd7f
  resourceVersion: '6150915440'
  creationTimestamp: '2022-11-23T13:12:43Z'
  labels:
    openproject.service: web
  selfLink: /api/v1/namespaces/openproject/services/web
status:
  loadBalancer: {}
spec:
  ports:
    - name: web
      protocol: TCP
      port: 8080
      targetPort: 8080
  selector:
    openproject.service: web
  clusterIP: 10.3.144.27
  clusterIPs:
    - 10.3.144.27
  type: ClusterIP
  sessionAffinity: None
  ipFamilies:
    - IPv4
  ipFamilyPolicy: SingleStack
  internalTrafficPolicy: Cluster
Do you see anything wrong?
Problem resolved! I've deleted the NetworkPolicy, and all went fine :)
Yes, I mentioned earlier that I put the seeder in an init container for the web deployment, hence the change in name. I forgot about the deletion of the network policy.
Same here. I guess that's because when you use an external ingress controller, the controller pod should also be labeled to match the 'frontend' network policy for it to be able to forward traffic to OpenProject's 'proxy' (apache2) pod.
I didn't test that though, just deleted the NetworkPolicy for simplicity.
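If you'd rather keep a policy than delete it, something along these lines should let the controller through. This is only a sketch: the ingress-nginx namespace label, the pod labels, and the port are assumptions to adjust for your cluster (e.g. target the proxy pod on port 80 instead, if you keep it):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-ingress-controller
  namespace: openproject
spec:
  podSelector:
    matchLabels:
      openproject.service: web     # assumption: adjust to match your OpenProject pod labels
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx   # assumption: your controller's namespace
      ports:
        - protocol: TCP
          port: 8080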
Since the deployment for Kubernetes has moved to the helm charts repository, I'm closing this issue. Please open issues there: https://github.com/opf/helm-charts