Gitea Chart External DB Secret Improperly Created
Name and Version
bitnami/gitea 2.3.6
What architecture are you using?
amd64
What steps will reproduce the bug?
- Deploy the chart on Digital Ocean Managed Kubernetes (DOKS) using the values file provided below (a reproduction sketch follows this list).
- The deployment should not create an externalDb secret when an existing secret is specified.
- Instead, an empty externalDb secret is created. The externalDb secret specified in the values.yaml file is still the one the pod actually uses; see the manifests below:
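For reference, a minimal reproduction sketch. I actually deploy via Argo CD (hence the argocd.argoproj.io labels below), but plain Helm along these lines should behave the same; the release name and namespace (both gitea) match the manifests in this report:
# reproduce.sh (hypothetical helper; the commands themselves are standard Helm/kubectl)
helm repo add bitnami https://charts.bitnami.com/bitnami
helm upgrade --install gitea bitnami/gitea \
  --version 2.3.6 \
  --namespace gitea --create-namespace \
  -f custom-values.yaml
# The empty gitea-externaldb secret shows up alongside the release:
kubectl get secrets -n gitea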
Empty Secret Manifest:
# gitea-externaldb.yaml
apiVersion: v1
data:
  db-password: ""
kind: Secret
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"db-password":""},"kind":"Secret","metadata":{"annotations":{},"labels":{"app.kubernetes.io/instance":"gitea","app.kubernetes.io/managed-by":"Helm","app.kubernetes.io/name":"gitea","app.kubernetes.io/version":"1.22.0","argocd.argoproj.io/instance":"gitea","helm.sh/chart":"gitea-2.3.6"},"name":"gitea-externaldb","namespace":"gitea"},"type":"Opaque"}
  creationTimestamp: "2024-06-29T02:33:00Z"
  labels:
    app.kubernetes.io/instance: gitea
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: gitea
    app.kubernetes.io/version: 1.22.0
    argocd.argoproj.io/instance: gitea
    helm.sh/chart: gitea-2.3.6
  name: gitea-externaldb
  namespace: gitea
  resourceVersion: "1466339"
  uid: 3d653193-dcff-44da-9726-41593f96e8e3
type: Opaque
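The empty db-password value can also be confirmed directly; this prints an empty string:
kubectl get secret gitea-externaldb -n gitea \
  -o jsonpath='{.data.db-password}'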
Pod Manifest:
# gitea-pod-manifest
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2024-06-29T02:46:59Z"
  generateName: gitea-6bcff9fd74-
  labels:
    app.kubernetes.io/instance: gitea
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: gitea
    app.kubernetes.io/version: 1.22.0
    helm.sh/chart: gitea-2.3.6
    pod-template-hash: 6bcff9fd74
  name: gitea-6bcff9fd74-xjj9r
  namespace: gitea
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: gitea-6bcff9fd74
    uid: a64df29a-d56b-4a3b-98c6-d727eb078e4c
  resourceVersion: "1626637"
  uid: 4926d894-9055-448f-9773-3dd099210486
spec:
  affinity:
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchLabels:
              app.kubernetes.io/instance: gitea
              app.kubernetes.io/name: gitea
          topologyKey: kubernetes.io/hostname
        weight: 1
  automountServiceAccountToken: false
  containers:
  - env:
    - name: BITNAMI_DEBUG
      value: "true"
    - name: GITEA_HTTP_PORT
      value: "3000"
    - name: GITEA_SSH_LISTEN_PORT
      value: "2222"
    - name: GITEA_SSH_PORT
      value: "22"
    - name: GITEA_DATABASE_HOST
      value: postgresql-ha-pgpool.postgresql-ha.svc.cluster.local
    - name: GITEA_DATABASE_PORT_NUMBER
      value: "5432"
    - name: GITEA_DATABASE_NAME
      value: gitea
    - name: GITEA_DATABASE_USERNAME
      value: postgres
    - name: GITEA_DATABASE_PASSWORD
      valueFrom:
        secretKeyRef:
          key: db-password
          name: gitea-passwords
    - name: GITEA_ADMIN_USER
      value: speedythesnail-adm
    - name: GITEA_ADMIN_PASSWORD
      valueFrom:
        secretKeyRef:
          key: admin-password
          name: gitea-passwords
    - name: GITEA_ADMIN_EMAIL
      value: [email protected]
    - name: GITEA_APP_NAME
      value: Gitea
    - name: GITEA_RUN_MODE
      value: prod
    - name: GITEA_ROOT_URL
      value: http://gitea.somedomain.com
    - name: GITEA_ENABLE_OPENID_SIGNIN
      value: "false"
    - name: GITEA_ENABLE_OPENID_SIGNUP
      value: "false"
    image: docker.io/bitnami/gitea:1.22.0-debian-12-r1
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 5
      initialDelaySeconds: 600
      periodSeconds: 10
      successThreshold: 1
      tcpSocket:
        port: http
      timeoutSeconds: 5
    name: gitea
    ports:
    - containerPort: 3000
      name: http
      protocol: TCP
    - containerPort: 2222
      name: ssh
      protocol: TCP
    readinessProbe:
      failureThreshold: 5
      httpGet:
        path: /
        port: http
        scheme: HTTP
      initialDelaySeconds: 30
      periodSeconds: 5
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      limits:
        cpu: 375m
        ephemeral-storage: 1Gi
        memory: 384Mi
      requests:
        cpu: 250m
        ephemeral-storage: 50Mi
        memory: 256Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: true
      runAsGroup: 1001
      runAsNonRoot: true
      runAsUser: 1001
      seLinuxOptions: {}
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /bitnami/gitea
      name: gitea-data
    - mountPath: /opt/bitnami/gitea
      name: empty-dir
      subPath: app-base-dir
    - mountPath: /tmp
      name: empty-dir
      subPath: tmp-dir
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  initContainers:
  - args:
    - -ec
    - |
      #!/bin/bash
      . /opt/bitnami/scripts/liblog.sh
      info "Copying base dir to empty dir"
      # In order to not break the application functionality (such as upgrades or plugins) we need
      # to make the base directory writable, so we need to copy it to an empty dir volume
      cp -r --preserve=mode /opt/bitnami/gitea /emptydir/app-base-dir
      info "Copy operation completed"
    command:
    - /bin/bash
    image: docker.io/bitnami/gitea:1.22.0-debian-12-r1
    imagePullPolicy: IfNotPresent
    name: prepare-base-dir
    resources:
      limits:
        cpu: 375m
        ephemeral-storage: 1Gi
        memory: 384Mi
      requests:
        cpu: 250m
        ephemeral-storage: 50Mi
        memory: 256Mi
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
      privileged: false
      readOnlyRootFilesystem: true
      runAsGroup: 1001
      runAsNonRoot: true
      runAsUser: 1001
      seLinuxOptions: {}
      seccompProfile:
        type: RuntimeDefault
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /emptydir
      name: empty-dir
  nodeName: lowend-pool-rso7h
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext:
    fsGroup: 1001
    fsGroupChangePolicy: Always
  serviceAccount: gitea
  serviceAccountName: gitea
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: empty-dir
  - name: gitea-data
    persistentVolumeClaim:
      claimName: gitea
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2024-06-29T02:47:22Z"
    status: "True"
    type: PodReadyToStartContainers
  - lastProbeTime: null
    lastTransitionTime: "2024-06-29T02:47:24Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2024-06-29T02:46:59Z"
    message: 'containers with unready status: [gitea]'
    reason: ContainersNotReady
    status: "False"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2024-06-29T02:46:59Z"
    message: 'containers with unready status: [gitea]'
    reason: ContainersNotReady
    status: "False"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2024-06-29T02:46:59Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: containerd://b3da880225ed9aa2e05f4d22d60287db769f7792edb0d663c5c1ed9dfd8cc27e
    image: docker.io/bitnami/gitea:1.22.0-debian-12-r1
    imageID: docker.io/bitnami/gitea@sha256:e36ba645397276bb3462cabd673b5dcc06117149c3563f0a731fdc73bbfe58d6
    lastState:
      terminated:
        containerID: containerd://b3da880225ed9aa2e05f4d22d60287db769f7792edb0d663c5c1ed9dfd8cc27e
        exitCode: 1
        finishedAt: "2024-06-29T14:00:29Z"
        reason: Error
        startedAt: "2024-06-29T14:00:21Z"
    name: gitea
    ready: false
    restartCount: 133
    started: false
    state:
      waiting:
        message: back-off 5m0s restarting failed container=gitea pod=gitea-6bcff9fd74-xjj9r_gitea(4926d894-9055-448f-9773-3dd099210486)
        reason: CrashLoopBackOff
  hostIP: 10.1.0.2
  hostIPs:
  - ip: 10.1.0.2
  initContainerStatuses:
  - containerID: containerd://2110d86c39df225b123177a07195b83fe1669135b381ce5fe83d44f8b0093b95
    image: docker.io/bitnami/gitea:1.22.0-debian-12-r1
    imageID: docker.io/bitnami/gitea@sha256:e36ba645397276bb3462cabd673b5dcc06117149c3563f0a731fdc73bbfe58d6
    lastState: {}
    name: prepare-base-dir
    ready: true
    restartCount: 0
    started: false
    state:
      terminated:
        containerID: containerd://2110d86c39df225b123177a07195b83fe1669135b381ce5fe83d44f8b0093b95
        exitCode: 0
        finishedAt: "2024-06-29T02:47:22Z"
        reason: Completed
        startedAt: "2024-06-29T02:47:22Z"
  phase: Running
  podIP: 10.244.0.50
  podIPs:
  - ip: 10.244.0.50
  qosClass: Burstable
  startTime: "2024-06-29T02:46:59Z"
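As the status shows, the gitea container itself is in CrashLoopBackOff; the output of the failing container can be pulled for debugging with, e.g.:
kubectl logs -n gitea gitea-6bcff9fd74-xjj9r --previous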
Are you using any custom parameters or values?
# custom-values.yaml
adminUsername: some-adm
adminEmail: [email protected]
appName: Gitea
runMode: prod
exposeSSH: true
existingSecret: 'gitea-passwords'
existingSecretKey: 'admin-password'
image:
  debug: true
persistence:
  enabled: true
  accessModes:
    - ReadWriteOnce
  size: 8Gi
  existingClaim: ''
resourcesPreset: 'micro'
service:
  type: ClusterIP
  ports:
    http: 8080
    ssh: 22
  loadBalancerSourceRanges: []
  loadBalancerIP: 'gitea.somewebsite.net'
  ## @param service.annotations Additional custom annotations for Gitea service
  ##
  annotations: {}
  sessionAffinity: None
  sessionAffinityConfig: {}
ingress:
  enabled: true
  pathType: ImplementationSpecific
  ingressClassName: 'nginx'
  hostname: 'gitea.somewebsite.net'
  path: /
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
    nginx.ingress.kubernetes.io/force-ssl-redirect: 'true'
    nginx.ingress.kubernetes.io/ssl-passthrough: 'true'
  tls: true
  selfSigned: false
postgresql:
  enabled: false
externalDatabase:
  host: 'postgresql-ha-pgpool.postgresql-ha.svc.cluster.local'
  port: 5432
  user: postgres
  database: gitea
  existingSecret: 'gitea-passwords'
  existingSecretPasswordKey: 'db-password'
What is the expected behavior?
The externalDb secret should not be created when an existing secret is specified.
What do you see instead?
An empty externalDb secret is created anyway.
Additional information
I am working on a PR that fixes this bug by guarding the externaldb secret template with the same existing-secret check that is already applied to the main secrets creation. I am creating this issue as a reference for that PR; a sketch of the intended fix is below.
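Roughly, the idea is to skip rendering the secret entirely when externalDatabase.existingSecret is set. A minimal sketch only; the actual template file name and helper usage in the chart may differ (common.names.fullname and common.labels.standard are the Bitnami common-library helpers):
{{- /* Sketch: only render the externaldb secret when no existing secret is supplied */}}
{{- if not .Values.externalDatabase.existingSecret }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ printf "%s-externaldb" (include "common.names.fullname" .) }}
  namespace: {{ .Release.Namespace | quote }}
  labels: {{- include "common.labels.standard" . | nindent 4 }}
type: Opaque
data:
  db-password: {{ .Values.externalDatabase.password | b64enc | quote }}
{{- end }}
With a guard like this, the values file above would produce no gitea-externaldb secret at all, and the pod would keep referencing gitea-passwords as it already does.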