[bitnami/harbor] Setting `internalTLS.enabled: true` results in all harbor pods going into a crash loop.
Name and Version
bitnami/harbor 21.4.4
What architecture are you using?
amd64
What steps will reproduce the bug?
Set `internalTLS.enabled: true` and deploy the chart.
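For reference, a minimal reproduction along these lines (the release name and repo setup are assumptions on my part; the chart version matches the one above):

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install harbor bitnami/harbor --version 21.4.4 --set internalTLS.enabled=true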
Are you using any custom parameters or values?
A redacted values file follows:
adminPassword: 'REDACTED'
core:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-role.kubernetes.io/spot-worker
                operator: DoesNotExist
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/component: core
              app.kubernetes.io/instance: harbor
              app.kubernetes.io/name: harbor
          topologyKey: kubernetes.io/hostname
  csrfKey: 'REDACTED'
  priorityClassName: system-cluster-critical
  replicaCount: 2
  resources:
    limits:
      memory: 128Mi
    requests:
      cpu: 10m
      memory: 80Mi
  secret: 'REDACTED'
  secretKey: 'REDACTED'
  serviceAccountName: harbor
  tolerations:
    - key: nidhogg.uswitch.com/fluent-bit.fluent-bit
      operator: Exists
      effect: NoSchedule
exporter:
  replicaCount: 2
  resources:
    limits:
      memory: 32Mi
    requests:
      cpu: 10m
      memory: 16Mi
  serviceAccountName: harbor
exposureType: ingress
externalDatabase:
  coreDatabase: harbor
  host: 'REDACTED'
  password: 'REDACTED'
  sslmode: require
  user: 'REDACTED'
externalRedis:
  host: 'REDACTED'
externalURL: 'https://harbor.REDACTED'
ingress:
  core:
    hostname: 'harbor.REDACTED'
    pathType: Prefix
internalTLS:
  enabled: true
jobservice:
  jobLogger: database
  replicaCount: 2
  resources:
    limits:
      memory: 48Mi
    requests:
      cpu: 3m
      memory: 32Mi
  secret: 'REDACTED'
  serviceAccountName: harbor
logLevel: info
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
persistence:
  enabled: false
  imageChartStorage:
    disableredirect: true
    s3:
      accesskey: REDACTED
      bucket: REDACTED
      region: ap-southeast-2
      secretkey: REDACTED
    type: s3
portal:
  replicaCount: 2
  resources:
    limits:
      memory: 16Mi
    requests:
      cpu: 2m
      memory: 8Mi
  serviceAccountName: harbor
postgresql:
  enabled: false
redis:
  enabled: false
registry:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-role.kubernetes.io/spot-worker
                operator: DoesNotExist
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/component: registry
              app.kubernetes.io/instance: harbor
              app.kubernetes.io/name: harbor
          topologyKey: kubernetes.io/hostname
  controller:
    resources:
      limits:
        memory: 32Mi
      requests:
        cpu: 2m
        memory: 24Mi
  credentials:
    htpasswd: 'REDACTED'
    password: 'REDACTED'
    username: 'REDACTED'
  replicaCount: 2
  secret: 'REDACTED'
  server:
    resources:
      limits:
        memory: 128Mi
      requests:
        cpu: 100m
        memory: 32Mi
  serviceAccountName: harbor
trivy:
  enabled: false
What is the expected behavior?
Pods should not go into a crash loop.
What do you see instead?
All pods are in a crash loop. If you look at the logs, all of them show the following before exiting:
INFO ==> Appending internalTLS trust CA cert...
/opt/bitnami/scripts/libharbor.sh: line 102: /etc/ssl/certs/ca-certificates.crt: Permission denied
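For context on what that step appears to be doing: the message suggests the entrypoint appends Harbor's internal CA to the system trust bundle, roughly like the sketch below (the source path is a placeholder of mine; the destination is the path from the error). An append like that needs write access to the destination file.

cat /path/to/internal-tls-ca.crt >> /etc/ssl/certs/ca-certificates.crt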
Additional information
We had the above working with chart version 19.6.0. Between then and when we upgraded, container SecurityContexts were defined in the chart to lock things down: all the containers now have a read-only root filesystem and no longer run as root.
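This can be seen on the rendered workloads, e.g. (the deployment name and container index are assumptions based on our release name, harbor):

kubectl get deployment harbor-core -o jsonpath='{.spec.template.spec.containers[0].securityContext}'
# shows readOnlyRootFilesystem: true and a non-root runAsUser/runAsGroup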
The file /etc/ssl/certs/ca-certificates.crt is owned by root:root and has mode 664.
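A quick check from inside one of the containers confirms this (sketch; how you get a shell will depend on your setup):

ls -l /etc/ssl/certs/ca-certificates.crt
# expect something like: -rw-rw-r-- 1 root root ... /etc/ssl/certs/ca-certificates.crt

So with the containers running as a non-root user and group on a read-only root filesystem, the append above is denied.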
I was able to get things working by relaxing the container SecurityContexts, setting the following for all of the pods:
containerSecurityContext:
  readOnlyRootFilesystem: false
  runAsGroup: 0
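Since the chart nests these settings per component (the key layout below is assumed from the per-component structure of the values file above), that block ends up repeated under each one, along the lines of (only two components shown):

core:
  containerSecurityContext:
    readOnlyRootFilesystem: false
    runAsGroup: 0
portal:
  containerSecurityContext:
    readOnlyRootFilesystem: false
    runAsGroup: 0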
This is not ideal, though, as it undoes a good deal of the security hardening you added.