[bitnami/kafka] Not able to set up externalClientProtocol: sasl_tls in kafka 19.1.3
Name and Version
bitnami/kafka 19.1.3
What architecture are you using?
None
What steps will reproduce the bug?
I am trying to set up externalClientProtocol as sasl_tls in Bitnami Kafka running in a GKE cluster. I have already changed some parameters and attached my SSL certificate as a secret, but I am facing issues.
Are you using any custom parameters or values?
```yaml
advertisedListeners: []
affinity: {}
allowEveryoneIfNoAclFound: true
allowPlaintextListener: true
args: []
auth:
  clientProtocol: plaintext
  externalClientProtocol: sasl_tls
  interBrokerProtocol: plaintext
  sasl:
    interBrokerMechanism: plain
    jaas:
      clientPasswords: []
      clientUsers:
        - user
      existingSecret: ""
      interBrokerPassword: ""
      interBrokerUser: admin
      zookeeperPassword: ""
      zookeeperUser: ""
    mechanisms: plain,scram-sha-256,scram-sha-512
  tls:
    autoGenerated: false
    endpointIdentificationAlgorithm: https
    existingSecret: ""
    existingSecrets:
      - kafka-ssl-0
    jksKeystoreSAN: ""
    jksTruststore: ""
    jksTruststoreSecret: ""
    password: test@#987
    pemChainIncluded: true
    type: jks
  zookeeper:
    tls:
      enabled: false
      existingSecret: ""
      existingSecretKeystoreKey: zookeeper.keystore.jks
      existingSecretTruststoreKey: zookeeper.truststore.jks
      passwordsSecret: ""
      passwordsSecretKeystoreKey: keystore-password
      passwordsSecretTruststoreKey: truststore-password
      type: jks
      verifyHostname: true
authorizerClassName: ""
autoCreateTopicsEnable: true
brokerRackAssignment: ""
clusterDomain: cluster.local
command:
  - /scripts/setup.sh
common:
  exampleValue: common-chart
  global:
    imagePullSecrets: []
    imageRegistry: ""
    storageClass: ""
commonAnnotations: {}
commonLabels: {}
config: ""
containerPorts:
  client: 9092
  external: 9094
  internal: 9093
containerSecurityContext:
  allowPrivilegeEscalation: false
  enabled: true
  runAsNonRoot: true
  runAsUser: 1001
customLivenessProbe: {}
customReadinessProbe: {}
customStartupProbe: {}
defaultReplicationFactor: 1
deleteTopicEnable: false
diagnosticMode:
  args:
    - infinity
  command:
    - sleep
  enabled: false
existingConfigmap: ""
existingLog4jConfigMap: ""
externalAccess:
  autoDiscovery:
    enabled: true
    image:
      digest: ""
      pullPolicy: IfNotPresent
      pullSecrets: []
      registry: docker.io
      repository: bitnami/kubectl
      tag: 1.25.3-debian-11-r8
    resources:
      limits: {}
      requests: {}
  enabled: true
  service:
    annotations: {}
    domain: ""
    extraPorts: []
    labels: {}
    loadBalancerAnnotations: []
    loadBalancerIPs: []
    loadBalancerNames: []
    loadBalancerSourceRanges: []
    nodePorts: []
    ports:
      external: 9094
    type: LoadBalancer
    useHostIPs: false
    usePodIPs: false
externalZookeeper:
  servers: []
extraDeploy: []
extraEnvVars: []
extraEnvVarsCM: ""
extraEnvVarsSecret: ""
extraVolumeMounts: []
extraVolumes: []
fullnameOverride: ""
global:
  imagePullSecrets: []
  imageRegistry: ""
  storageClass: ""
heapOpts: -Xmx1024m -Xms1024m
hostAliases: []
hostIPC: false
hostNetwork: false
image:
  debug: true
  digest: ""
  pullPolicy: IfNotPresent
  pullSecrets: []
  registry: docker.io
  repository: bitnami/kafka
  tag: 3.3.1-debian-11-r11
initContainers: []
interBrokerListenerName: INTERNAL
kubeVersion: ""
lifecycleHooks: {}
listenerSecurityProtocolMap: ""
listeners: []
livenessProbe:
  enabled: true
  failureThreshold: 3
  initialDelaySeconds: 10
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
log4j: ""
logFlushIntervalMessages: _10000
logFlushIntervalMs: 1000
logPersistence:
  accessModes:
    - ReadWriteOnce
  annotations: {}
  enabled: false
  existingClaim: ""
  mountPath: /opt/bitnami/kafka/logs
  selector: {}
  size: 2Gi
  storageClass: ""
logRetentionBytes: _1073741824
logRetentionCheckIntervalMs: 300000
logRetentionHours: 168
logSegmentBytes: _1073741824
logsDirs: /bitnami/kafka/data
maxMessageBytes: _1000012
metrics:
  jmx:
    config: |-
      jmxUrl: service:jmx:rmi:///jndi/rmi://127.0.0.1:5555/jmxrmi
      lowercaseOutputName: true
      lowercaseOutputLabelNames: true
      ssl: false
      {{- if .Values.metrics.jmx.whitelistObjectNames }}
      whitelistObjectNames: ["{{ join "\",\"" .Values.metrics.jmx.whitelistObjectNames }}"]
      {{- end }}
    containerPorts:
      metrics: 5556
    containerSecurityContext:
      enabled: true
      runAsNonRoot: true
      runAsUser: 1001
    enabled: false
    existingConfigmap: ""
    extraRules: ""
    image:
      digest: ""
      pullPolicy: IfNotPresent
      pullSecrets: []
      registry: docker.io
      repository: bitnami/jmx-exporter
      tag: 0.17.2-debian-11-r15
    resources:
      limits: {}
      requests: {}
    service:
      annotations:
        prometheus.io/path: /
        prometheus.io/port: '{{ .Values.metrics.jmx.service.ports.metrics }}'
        prometheus.io/scrape: "true"
      clusterIP: ""
      ports:
        metrics: 5556
      sessionAffinity: None
    whitelistObjectNames:
      - kafka.controller:*
      - kafka.server:*
      - java.lang:*
      - kafka.network:*
      - kafka.log:*
  kafka:
    affinity: {}
    args: []
    certificatesSecret: ""
    command: []
    containerPorts:
      metrics: 9308
    containerSecurityContext:
      enabled: true
      runAsNonRoot: true
      runAsUser: 1001
    enabled: false
    extraFlags: {}
    extraVolumeMounts: []
    extraVolumes: []
    hostAliases: []
    image:
      digest: ""
      pullPolicy: IfNotPresent
      pullSecrets: []
      registry: docker.io
      repository: bitnami/kafka-exporter
      tag: 1.6.0-debian-11-r25
    initContainers: []
    nodeAffinityPreset:
      key: ""
      type: ""
      values: []
    nodeSelector: {}
    podAffinityPreset: ""
    podAnnotations: {}
    podAntiAffinityPreset: soft
    podLabels: {}
    podSecurityContext:
      enabled: true
      fsGroup: 1001
    priorityClassName: ""
    resources:
      limits: {}
      requests: {}
    schedulerName: ""
    service:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: '{{ .Values.metrics.kafka.service.ports.metrics }}'
        prometheus.io/scrape: "true"
      clusterIP: ""
      ports:
        metrics: 9308
      sessionAffinity: None
    serviceAccount:
      automountServiceAccountToken: true
      create: true
      name: ""
    sidecars: []
    tlsCaCert: ca-file
    tlsCaSecret: ""
    tlsCert: cert-file
    tlsKey: key-file
    tolerations: []
    topologySpreadConstraints: []
  prometheusRule:
    enabled: false
    groups: []
    labels: {}
    namespace: ""
  serviceMonitor:
    enabled: false
    honorLabels: false
    interval: ""
    jobLabel: ""
    labels: {}
    metricRelabelings: []
    namespace: ""
    relabelings: []
    scrapeTimeout: ""
    selector: {}
minBrokerId: 0
nameOverride: ""
networkPolicy:
  allowExternal: true
  egressRules:
    customRules: []
  enabled: false
  explicitNamespacesSelector: {}
  externalAccess:
    from: []
nodeAffinityPreset:
  key: ""
  type: ""
  values: []
nodeSelector: {}
numIoThreads: 8
numNetworkThreads: 3
numPartitions: 1
numRecoveryThreadsPerDataDir: 1
offsetsTopicReplicationFactor: 1
pdb:
  create: false
  maxUnavailable: 1
  minAvailable: ""
persistence:
  accessModes:
    - ReadWriteOnce
  annotations: {}
  enabled: true
  existingClaim: ""
  labels: {}
  mountPath: /bitnami/kafka
  selector: {}
  size: 2Gi
  storageClass: ""
podAffinityPreset: ""
podAnnotations: {}
podAntiAffinityPreset: soft
podLabels: {}
podManagementPolicy: Parallel
podSecurityContext:
  enabled: true
  fsGroup: 1001
priorityClassName: ""
provisioning:
  args: []
  auth:
    tls:
      caCert: ca.crt
      cert: tls.crt
      certificatesSecret: ""
      key: tls.key
      keyPassword: ""
      keyPasswordSecretKey: key-password
      keystore: keystore.jks
      keystorePassword: ""
      keystorePasswordSecretKey: keystore-password
      passwordsSecret: ""
      truststore: truststore.jks
      truststorePassword: ""
      truststorePasswordSecretKey: truststore-password
      type: jks
  command: []
  containerSecurityContext:
    enabled: true
    runAsNonRoot: true
    runAsUser: 1001
  enabled: false
  extraEnvVars: []
  extraEnvVarsCM: ""
  extraEnvVarsSecret: ""
  extraProvisioningCommands: []
  extraVolumeMounts: []
  extraVolumes: []
  initContainers: []
  nodeSelector: {}
  numPartitions: 1
  parallel: 1
  podAnnotations: {}
  podLabels: {}
  podSecurityContext:
    enabled: true
    fsGroup: 1001
  postScript: ""
  preScript: ""
  replicationFactor: 1
  resources:
    limits: {}
    requests: {}
  schedulerName: ""
  serviceAccount:
    automountServiceAccountToken: true
    create: false
    name: ""
  sidecars: []
  tolerations: []
  topics: []
  waitForKafka: true
rbac:
  create: true
readinessProbe:
  enabled: true
  failureThreshold: 6
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 5
replicaCount: 1
resources:
  limits: {}
  requests: {}
schedulerName: ""
service:
  annotations: {}
  clusterIP: ""
  externalTrafficPolicy: Cluster
  extraPorts: []
  headless:
    annotations: {}
    labels: {}
  loadBalancerIP: ""
  loadBalancerSourceRanges: []
  nodePorts:
    client: ""
    external: ""
  ports:
    client: 9092
    external: 9094
    internal: 9093
  sessionAffinity: None
  sessionAffinityConfig: {}
  type: ClusterIP
serviceAccount:
  annotations: {}
  automountServiceAccountToken: true
  create: true
  name: ""
sidecars: []
socketReceiveBufferBytes: 102400
socketRequestMaxBytes: _104857600
socketSendBufferBytes: 102400
startupProbe:
  enabled: false
  failureThreshold: 15
  initialDelaySeconds: 30
  periodSeconds: 10
  successThreshold: 1
  timeoutSeconds: 1
superUsers: User:admin
terminationGracePeriodSeconds: ""
tolerations: []
topologySpreadConstraints: []
transactionStateLogMinIsr: 1
transactionStateLogReplicationFactor: 1
updateStrategy:
  type: RollingUpdate
volumePermissions:
  containerSecurityContext:
    runAsUser: 0
  enabled: false
  image:
    digest: ""
    pullPolicy: IfNotPresent
    pullSecrets: []
    registry: docker.io
    repository: bitnami/bitnami-shell
    tag: 11-debian-11-r49
  resources:
    limits: {}
    requests: {}
zookeeper:
  affinity: {}
  args: []
  auth:
    client:
      clientPassword: ""
      clientUser: ""
      enabled: false
      existingSecret: ""
      serverPasswords: ""
      serverUsers: ""
    quorum:
      enabled: false
      existingSecret: ""
      learnerPassword: ""
      learnerUser: ""
      serverPasswords: ""
      serverUsers: ""
  autopurge:
    purgeInterval: 0
    snapRetainCount: 3
  clusterDomain: cluster.local
  command:
    - /scripts/setup.sh
  common:
    exampleValue: common-chart
    global:
      imagePullSecrets: []
      imageRegistry: ""
      storageClass: ""
  commonAnnotations: {}
  commonLabels: {}
  configuration: ""
  containerPorts:
    client: 2181
    election: 3888
    follower: 2888
    tls: 3181
  containerSecurityContext:
    allowPrivilegeEscalation: false
    enabled: true
    runAsNonRoot: true
    runAsUser: 1001
  customLivenessProbe: {}
  customReadinessProbe: {}
  customStartupProbe: {}
  dataLogDir: ""
  diagnosticMode:
    args:
      - infinity
    command:
      - sleep
    enabled: false
  enabled: true
  existingConfigmap: ""
  extraDeploy: []
  extraEnvVars: []
  extraEnvVarsCM: ""
  extraEnvVarsSecret: ""
  extraVolumeMounts: []
  extraVolumes: []
  fourlwCommandsWhitelist: srvr, mntr, ruok
  fullnameOverride: ""
  global:
    imagePullSecrets: []
    imageRegistry: ""
    storageClass: ""
  heapSize: 1024
  hostAliases: []
  image:
    debug: false
    digest: ""
    pullPolicy: IfNotPresent
    pullSecrets: []
    registry: docker.io
    repository: bitnami/zookeeper
    tag: 3.8.0-debian-11-r47
  initContainers: []
  initLimit: 10
  jvmFlags: ""
  kubeVersion: ""
  lifecycleHooks: {}
  listenOnAllIPs: false
  livenessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 30
    periodSeconds: 10
    probeCommandTimeout: 2
    successThreshold: 1
    timeoutSeconds: 5
  logLevel: ERROR
  maxClientCnxns: 60
  maxSessionTimeout: 40000
  metrics:
    containerPort: 9141
    enabled: false
    prometheusRule:
      additionalLabels: {}
      enabled: false
      namespace: ""
      rules: []
    service:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: '{{ .Values.metrics.service.port }}'
        prometheus.io/scrape: "true"
      port: 9141
      type: ClusterIP
    serviceMonitor:
      additionalLabels: {}
      enabled: false
      honorLabels: false
      interval: ""
      jobLabel: ""
      metricRelabelings: []
      namespace: ""
      relabelings: []
      scrapeTimeout: ""
      selector: {}
  minServerId: 1
  nameOverride: ""
  namespaceOverride: ""
  networkPolicy:
    allowExternal: true
    enabled: false
  nodeAffinityPreset:
    key: ""
    type: ""
    values: []
  nodeSelector: {}
  pdb:
    create: false
    maxUnavailable: 1
    minAvailable: ""
  persistence:
    accessModes:
      - ReadWriteOnce
    annotations: {}
    dataLogDir:
      existingClaim: ""
      selector: {}
      size: 8Gi
    enabled: true
    existingClaim: ""
    selector: {}
    size: 8Gi
    storageClass: ""
  podAffinityPreset: ""
  podAnnotations: {}
  podAntiAffinityPreset: soft
  podLabels: {}
  podManagementPolicy: Parallel
  podSecurityContext:
    enabled: true
    fsGroup: 1001
  preAllocSize: 65536
  priorityClassName: ""
  readinessProbe:
    enabled: true
    failureThreshold: 6
    initialDelaySeconds: 5
    periodSeconds: 10
    probeCommandTimeout: 2
    successThreshold: 1
    timeoutSeconds: 5
  replicaCount: 1
  resources:
    limits: {}
    requests:
      cpu: 250m
      memory: 256Mi
  schedulerName: ""
  service:
    annotations: {}
    clusterIP: ""
    disableBaseClientPort: false
    externalTrafficPolicy: Cluster
    extraPorts: []
    headless:
      annotations: {}
      publishNotReadyAddresses: true
    loadBalancerIP: ""
    loadBalancerSourceRanges: []
    nodePorts:
      client: ""
      tls: ""
    ports:
      client: 2181
      election: 3888
      follower: 2888
      tls: 3181
    sessionAffinity: None
    sessionAffinityConfig: {}
    type: ClusterIP
  serviceAccount:
    annotations: {}
    automountServiceAccountToken: true
    create: false
    name: ""
  sidecars: []
  snapCount: 100000
  startupProbe:
    enabled: false
    failureThreshold: 15
    initialDelaySeconds: 30
    periodSeconds: 10
    successThreshold: 1
    timeoutSeconds: 1
  syncLimit: 5
  tickTime: 2000
  tls:
    client:
      auth: none
      autoGenerated: false
      enabled: false
      existingSecret: ""
      existingSecretKeystoreKey: ""
      existingSecretTruststoreKey: ""
      keystorePassword: ""
      keystorePath: /opt/bitnami/zookeeper/config/certs/client/zookeeper.keystore.jks
      passwordsSecretKeystoreKey: ""
      passwordsSecretName: ""
      passwordsSecretTruststoreKey: ""
      truststorePassword: ""
      truststorePath: /opt/bitnami/zookeeper/config/certs/client/zookeeper.truststore.jks
    quorum:
      auth: none
      autoGenerated: false
      enabled: false
      existingSecret: ""
      existingSecretKeystoreKey: ""
      existingSecretTruststoreKey: ""
      keystorePassword: ""
      keystorePath: /opt/bitnami/zookeeper/config/certs/quorum/zookeeper.keystore.jks
      passwordsSecretKeystoreKey: ""
      passwordsSecretName: ""
      passwordsSecretTruststoreKey: ""
      truststorePassword: ""
      truststorePath: /opt/bitnami/zookeeper/config/certs/quorum/zookeeper.truststore.jks
    resources:
      limits: {}
      requests: {}
  tolerations: []
  topologySpreadConstraints: []
  updateStrategy:
    rollingUpdate: {}
    type: RollingUpdate
  volumePermissions:
    containerSecurityContext:
      enabled: true
      runAsUser: 0
    enabled: false
    image:
      digest: ""
      pullPolicy: IfNotPresent
      pullSecrets: []
      registry: docker.io
      repository: bitnami/bitnami-shell
      tag: 11-debian-11-r42
    resources:
      limits: {}
      requests: {}
zookeeperChrootPath: ""
zookeeperConnectionTimeoutMs: 6000
```
What is the expected behavior?
No response
What do you see instead?
The error I am facing is:

```
Use --bootstrap-server instead to specify a broker to connect to.
Error while executing config command with args '--zookeeper kafka-zookeeper --alter --add-config SCRAM-SHA-256=[iterations=8192,password=bitnami],SCRAM-SHA-512=[password=bitnami] --entity-type users --entity-name user'
org.apache.kafka.common.KafkaException: Exception while loading Zookeeper JAAS login context [java.security.auth.login.config=/opt/bitnami/kafka/config/kafka_jaas.conf, zookeeper.sasl.client=default:true, zookeeper.sasl.clientconfig=default:Client]
	at org.apache.kafka.common.security.JaasUtils.isZkSaslEnabled(JaasUtils.java:67)
	at kafka.admin.ConfigCommand$.processCommandWithZk(ConfigCommand.scala:116)
	at kafka.admin.ConfigCommand$.main(ConfigCommand.scala:95)
	at kafka.admin.ConfigCommand.main(ConfigCommand.scala)
Caused by: java.lang.SecurityException: java.io.IOException: /opt/bitnami/kafka/config/kafka_jaas.conf (No such file or directory)
```
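For context, the trace shows `kafka-configs` taking the ZooKeeper code path and trying to load a JAAS file that was never rendered into the container. As a sketch only (this is the conventional Kafka JAAS layout, not output copied from the chart; the usernames and passwords are placeholders), the file it expects at that path normally looks like:

```
// /opt/bitnami/kafka/config/kafka_jaas.conf (hypothetical sketch)
// 'Client' is the login context Kafka uses for its ZooKeeper connection
Client {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  username="zookeeperUser"
  password="zookeeperPassword";
};
// 'KafkaClient' is the login context used by Kafka CLI clients
KafkaClient {
  org.apache.kafka.common.security.plain.PlainLoginModule required
  username="user"
  password="password";
};
```

One plausible reading of the failure is that the chart only renders this file when a SASL-enabled protocol is set for the client or inter-broker listener, so with `clientProtocol: plaintext` the setup script still attempts to create SCRAM users but finds no JAAS file.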
Additional information
No response
Hello, can anyone update me on this? This is required on an urgent basis.
Hi @vipul-06,
I'm sorry, but we cannot provide support for a chart version that was released more than two years ago. Even if we found a bug in that specific version, the issue may already have been fixed, and it is not possible to release new patches for older majors.
Are you able to reproduce the issue using the latest version of the chart?
Since you would be upgrading several majors, I would recommend checking the Upgrading notes in addition to both upstream Zookeeper and Kafka breaking changes.
Hi @migruiz4, can you confirm whether, in chart version 19.1.3, I am able to connect to an external cloud service through this configuration? My internal connections are also getting stopped, and I want to keep all internal connections as plaintext.
Yes, it should be possible. The Kafka chart is configured with 3 listeners: `INTERNAL`, using port 9093 for inter-broker communications; `CLIENT`, using port 9092 for internal client connections; and `EXTERNAL`, using port 9094 for external client connections.
```yaml
# Enables the 'EXTERNAL' listener
externalAccess:
  enabled: true
# Security protocols for each listener
auth:
  # CLIENT
  clientProtocol: plaintext
  # EXTERNAL CLIENTS
  externalClientProtocol: sasl_tls
  # INTERNAL (inter-broker)
  interBrokerProtocol: plaintext
```
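For reference, with those three protocols the chart should render broker listener settings roughly equivalent to the following `server.properties` fragment (a sketch based on the chart's default listener names and the `sasl_tls` → `SASL_SSL` mapping, not output copied from a running broker):

```properties
listeners=INTERNAL://:9093,CLIENT://:9092,EXTERNAL://:9094
listener.security.protocol.map=INTERNAL:PLAINTEXT,CLIENT:PLAINTEXT,EXTERNAL:SASL_SSL
inter.broker.listener.name=INTERNAL
```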
You can find the README for version 19.1.3 here, including examples for external access configuration.
If I set it as you mentioned above, I get the ZooKeeper error mentioned earlier. If I set it like this:

```yaml
auth:
  clientProtocol: sasl_tls
  externalClientProtocol: sasl_tls
  interBrokerProtocol: plaintext
```
Then my external connection is successful, but my internal connections also require sasl_tls. This is the same issue as mine:
https://stackoverflow.com/questions/74046781/passing-jaas-config-to-bitnami-kafka-helm-chart-for-sasl-plain
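For reference, a client connecting to the `EXTERNAL` listener with that configuration would typically need properties along these lines (a sketch only; the mechanism, credentials, and truststore path are placeholders, not values taken from this chart):

```properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-256
ssl.truststore.location=/path/to/kafka.truststore.jks
ssl.truststore.password=<truststore-password>
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="user" password="<password>";
```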
As previously mentioned, there is no point in us troubleshooting issues with older versions of the chart: the issue may already have been fixed, and even if we found the root cause, we cannot release patches for older versions.
I'm sorry, but you may need to try the latest version of the chart; otherwise I may not be able to help you further.
This Issue has been automatically marked as "stale" because it has not had recent activity (for 15 days). It will be closed if no further activity occurs. Thanks for the feedback.
Due to the lack of activity in the last 5 days since it was marked as "stale", we proceed to close this Issue. Do not hesitate to reopen it later if necessary.