[bitnami/keycloak] Keycloak crashes with restricted POD Security Context
Name and Version
bitnami/keycloak 21.0.3
What architecture are you using?
None
What steps will reproduce the bug?
Install Keycloak with the pod security context enabled (restricted). Kubernetes is 1.25 and the namespace carries the pod-security.kubernetes.io/enforce: restricted label. If I disable the pod security context, the pod starts without issues.
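For reference, the namespace is labeled roughly like this (a minimal sketch; the namespace name is an assumption):

```yaml
# Sketch: namespace enforcing the "restricted" Pod Security Standard (K8s 1.25+).
# The name "keycloak" is only an example.
apiVersion: v1
kind: Namespace
metadata:
  name: keycloak
  labels:
    pod-security.kubernetes.io/enforce: restricted
```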
Are you using any custom parameters or values?
```yaml
global:
  compatibility:
    openshift:
      adaptSecurityContext: force
```
What is the expected behavior?
The Keycloak pod runs successfully.
What do you see instead?
The Keycloak pod crashes with the following error:

```
Error: Could not find or load main class io.quarkus.bootstrap.runner.QuarkusEntryPoint
Caused by: java.lang.ClassNotFoundException: io.quarkus.bootstrap.runner.QuarkusEntryPoint
```
Additional information
No response
I can already say that the last image tag that works with pod security is 23.0.5; every tag after that crashes!
Hi!
Are you changing the keycloak container image? It seems to me that this quarkus issue occurs when using non-bitnami keycloak images. Could you check it?
Hi @javsalgar, thanks for the support. No, I did not. First I checked it with the latest image (bitnami/keycloak:24.0.3); it crashed. Then I kept downgrading the tag until I reached bitnami/keycloak:23.0.5, which was the tag that did not crash.
I can replicate this behaviour on Kubernetes 1.25 even without the reporter's custom values.
Setting both:

```yaml
podSecurityContext:
  enabled: false
containerSecurityContext:
  enabled: false
```

allows the pod to start normally, but isn't ideal.
I'm using chart version 21.0.3.
Edit: After further digging, you can also simply set these values, which will also allow the pods to start:

```yaml
containerSecurityContext:
  privileged: true
  allowPrivilegeEscalation: true
```
Based on this fix, I would guess that the file ownership is incorrect in combination with fsGroup and similar settings.
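For context, the "restricted" Pod Security Standard roughly requires a container security context shaped like the sketch below (based on the Kubernetes Pod Security Standards documentation, not on this chart's exact defaults):

```yaml
# What "restricted" enforces, expressed as chart values (sketch):
containerSecurityContext:
  enabled: true
  runAsNonRoot: true                # must not run as root
  allowPrivilegeEscalation: false   # privilege escalation must be disabled
  capabilities:
    drop: ["ALL"]                   # only NET_BIND_SERVICE may be re-added
  seccompProfile:
    type: RuntimeDefault            # RuntimeDefault or Localhost
```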
@krankkkk exactly. But my issue is that I am running my deployment in a restricted environment, which means allowPrivilegeEscalation: true is prohibited (the namespace is labeled pod-security.kubernetes.io/enforce: restricted).
Hi @abalfazlmeshki
Could you please share the exact values you use to reproduce the issue?
Hi @dgomezleon, sure, here are my custom values:
```yaml
global:
  compatibility:
    openshift:
      adaptSecurityContext: force
```
Hi @abalfazlmeshki ,
Are you using it on OpenShift? Note that this option is not expected to work in other environments. For example, I could obtain this error on GKE due to fsGroup:

```
mkdir: cannot create directory '/bitnami/postgresql/data': Permission denied
```

since permissions were:
```
$ ls -larth /bitnami/postgresql/
total 24K
drwxr-xr-x 3 root root 4.0K May 1 13:12 ..
drwx------ 2 root root  16K May 8 10:14 lost+found
drwxr-xr-x 3 root root 4.0K May 8 10:14 .
```
However, it worked properly on Openshift.
I hope it helps.
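As a side note: if the failure is purely a volume-ownership problem, enabling the bundled PostgreSQL subchart's volume-permissions init container is a common workaround for Bitnami charts (a sketch; only relevant when persistence is in use):

```yaml
# Sketch: run an init container that chowns the data volume before
# PostgreSQL starts. Note that this init container runs as root, so it
# may itself be rejected by a namespace enforcing "restricted".
postgresql:
  volumePermissions:
    enabled: true
```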
Hey @dgomezleon, I know exactly what you are saying. I am not using OpenShift, but I can guarantee that the issue happens when I add this pod security context:

```yaml
allowPrivilegeEscalation: false
privileged: false
```

As a hint, I can say the issue is about Quarkus. I even tried Keycloak's own upstream image and faced the same issue. I think there have been some updates in Quarkus that need more privileges. FYI: https://github.com/keycloak/keycloak/discussions/28910
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
@dgomezleon Any Progress on this? The Helm chart does not work with its default settings.
Hi @krankkkk ,
I was not able to reproduce the issue. Note that we tested the chart on Openshift with the default values, so it seems to be something related to the environment.
Let's see also if Keycloak team can shed some light.
@dgomezleon Well yes, for OpenShift you explicitly disable the offending parameters; see the renderSecurityContext helper in the common chart's _compatibility.tpl:
```yaml
{{- define "common.compatibility.renderSecurityContext" -}}
{{- $adaptedContext := .secContext -}}
{{- if (((.context.Values.global).compatibility).openshift) -}}
  {{- if or (eq .context.Values.global.compatibility.openshift.adaptSecurityContext "force") (and (eq .context.Values.global.compatibility.openshift.adaptSecurityContext "auto") (include "common.compatibility.isOpenshift" .context)) -}}
    {{/* Remove incompatible user/group values that do not work in Openshift out of the box */}}
    {{- $adaptedContext = omit $adaptedContext "fsGroup" "runAsUser" "runAsGroup" -}} <-------- This one
    {{- if not .secContext.seLinuxOptions -}}
      {{/* If it is an empty object, we remove it from the resulting context because it causes validation issues */}}
      {{- $adaptedContext = omit $adaptedContext "seLinuxOptions" -}}
    {{- end -}}
  {{- end -}}
{{- end -}}
{{- omit $adaptedContext "enabled" | toYaml -}}
{{- end -}}
```
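In other words, with adaptSecurityContext: force the user/group fields are stripped before rendering. Roughly (a sketch; the 1001 values are the usual Bitnami defaults and are assumed here, not verified against this chart version):

```yaml
# Without the OpenShift adaptation (assumed chart defaults):
podSecurityContext:
  fsGroup: 1001
containerSecurityContext:
  runAsUser: 1001
  runAsNonRoot: true
---
# With adaptSecurityContext: force, fsGroup/runAsUser/runAsGroup are omitted,
# so on plain Kubernetes the data volume is no longer chowned to the
# container user, matching the "Permission denied" symptom above:
podSecurityContext: {}
containerSecurityContext:
  runAsNonRoot: true
```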
Hi @krankkkk ,
I have successfully tested the chart with the default values on Minikube and GKE (after adding the pod-security.kubernetes.io/enforce: restricted label to the namespace). Unfortunately, I was not able to reproduce the issue.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.
Any updates on this? I ran into this issue today with the default values.