KAFKA-17821: the set of configs displayed by `logAll` could be invalid
Jira: https://issues.apache.org/jira/browse/KAFKA-17821
Users can choose between group protocols (`group.protocol`) in the consumer config. `logAll` currently prints every config, but some configs only apply to one of the protocols; they still appear with their default values even when the selected protocol does not support them, which can mislead users into thinking they take effect. This PR improves `logAll` so that unsupported configs are no longer shown in the log.
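A minimal sketch of the idea (not necessarily how the patch implements it; the helper name and the hard-coded config sets below are illustrative assumptions): partition the configs by which group protocol supports them, and drop the unsupported entries before logging.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Set;

public class ConfigLogFilter {
    // Illustrative assumption: configs that only apply to the new CONSUMER protocol.
    private static final Set<String> CONSUMER_PROTOCOL_ONLY = Set.of("group.remote.assignor");
    // Illustrative assumption: configs that only apply to the CLASSIC protocol.
    private static final Set<String> CLASSIC_PROTOCOL_ONLY = Set.of(
            "partition.assignment.strategy", "session.timeout.ms", "heartbeat.interval.ms");

    /** Returns a copy of the configs without the entries the selected protocol does not support. */
    static Map<String, Object> supportedConfigs(Map<String, Object> all, String groupProtocol) {
        Set<String> unsupported = "classic".equalsIgnoreCase(groupProtocol)
                ? CONSUMER_PROTOCOL_ONLY   // CLASSIC cannot use consumer-protocol-only configs
                : CLASSIC_PROTOCOL_ONLY;   // and vice versa for the new protocol
        Map<String, Object> filtered = new HashMap<>(all); // copy; config values may be null
        filtered.keySet().removeAll(unsupported);
        return filtered;
    }
}
```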
Tested locally: with the CLASSIC protocol, `group.remote.assignor` is no longer shown:

```
[2024-11-30 19:25:10,460] INFO ConsumerConfig values:
metric.reporters = [org.apache.kafka.common.metrics.JmxReporter]
sasl.oauthbearer.token.endpoint.url = null
sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
retry.backoff.max.ms = 1000
reconnect.backoff.max.ms = 1000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor, class org.apache.kafka.clients.consumer.CooperativeStickyAssignor]
ssl.engine.factory.class = null
sasl.oauthbearer.expected.audience = null
ssl.keystore.type = JKS
enable.auto.commit = false
sasl.oauthbearer.header.urlencode = false
interceptor.classes = []
exclude.internal.topics = true
ssl.truststore.password = null
default.api.timeout.ms = 60000
ssl.endpoint.identification.algorithm = https
max.poll.records = 500
check.crcs = true
sasl.login.refresh.buffer.seconds = 300
receive.buffer.bytes = 65536
ssl.truststore.type = JKS
sasl.oauthbearer.clock.skew.seconds = 30
client.dns.lookup = use_all_dns_ips
fetch.min.bytes = 1
send.buffer.bytes = 131072
sasl.oauthbearer.jwks.endpoint.url = null
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
enable.metrics.push = true
sasl.login.retry.backoff.ms = 100
metadata.recovery.rebootstrap.trigger.ms = 300000
ssl.secure.random.implementation = null
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
sasl.jaas.config = null
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
session.timeout.ms = 45000
internal.leave.group.on.close = true
ssl.keystore.certificate.chain = null
socket.connection.setup.timeout.ms = 10000
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
ssl.cipher.suites = null
security.protocol = PLAINTEXT
allow.auto.create.topics = true
ssl.keymanager.algorithm = SunX509
sasl.login.callback.handler.class = null
auto.offset.reset = latest
metadata.max.age.ms = 300000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
max.partition.fetch.bytes = 1048576
bootstrap.servers = []
metrics.recording.level = INFO
ssl.truststore.certificates = null
security.providers = null
sasl.mechanism = GSSAPI
client.id = consumer-null-1
request.timeout.ms = 30000
sasl.login.retry.backoff.max.ms = 10000
heartbeat.interval.ms = 3000
auto.commit.interval.ms = 5000
sasl.login.class = null
ssl.truststore.location = null
ssl.keystore.password = null
fetch.max.bytes = 52428800
max.poll.interval.ms = 300000
group.protocol = classic
sasl.login.connect.timeout.ms = null
socket.connection.setup.timeout.max.ms = 30000
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.min.period.seconds = 60
sasl.oauthbearer.scope.claim.name = scope
group.id = null
sasl.oauthbearer.expected.issuer = null
sasl.login.read.timeout.ms = null
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
internal.throw.on.fetch.stable.offset.unsupported = false
metadata.recovery.strategy = rebootstrap
ssl.key.password = null
fetch.max.wait.ms = 500
ssl.keystore.key = null
sasl.client.callback.handler.class = null
metrics.num.samples = 2
key.deserializer = class org.apache.kafka.common.serialization.ByteArrayDeserializer
ssl.protocol = TLSv1.3
group.instance.id = null
client.rack =
ssl.keystore.location = null
sasl.oauthbearer.sub.claim.name = sub
sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
metrics.sample.window.ms = 30000
isolation.level = read_uncommitted
sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
sasl.login.refresh.window.jitter = 0.05
(org.apache.kafka.common.config.AbstractConfig:380)
```
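For reference, a log like the one above is emitted when the consumer is constructed; a minimal reproduction with the CLASSIC protocol looks like this (the bootstrap address is a placeholder):

```java
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LogAllRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_PROTOCOL_CONFIG, "classic");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Constructing the consumer triggers AbstractConfig#logAll at INFO level.
        try (KafkaConsumer<byte[], String> consumer = new KafkaConsumer<>(props)) {
            // No subscription needed; the config log is emitted during construction.
        }
    }
}
```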
Committer Checklist (excluded from commit message)
- [ ] Verify design and implementation
- [ ] Verify test coverage and CI build status
- [ ] Verify documentation (including upgrade notes)
Thanks @kirktrue for the review; I have addressed all comments.