terraform-provider-kafka
kafka: client has run out of available brokers to talk to: EOF on version v0.5.3
I upgraded the provider to v0.5.3 and it stopped working. The error happens whenever I run terraform plan or terraform apply.
logs:
Error: kafka: client has run out of available brokers to talk to: EOF
But when I downgrade to v0.5.1, everything works fine.
My config:
terraform {
  required_version = ">= 1.0"

  backend "consul" {
    scheme = "http"
    path   = "terraform/kafka"
  }

  required_providers {
    consul = {
      source  = "hashicorp/consul"
      version = "~> 2.15.1"
    }
    kafka = {
      source  = "Mongey/kafka"
      version = "~> 0.5.1"
    }
  }
}
provider "kafka" {
bootstrap_servers = [
"${var.instance}:9093"
]
tls_enabled = false
}
resource "kafka_topic" "test-topic" {
name = "test-topic"
replication_factor = 1
partitions = 1
config = {
"cleanup.policy" = "compact"
}
}
Terraform environment:
Terraform v1.4.5
on linux_amd64
+ provider registry.terraform.io/hashicorp/consul v2.15.1
+ provider registry.terraform.io/mongey/kafka v0.5.3
Thanks @timurkhisamov, I'll investigate now.
@timurkhisamov Can you provide more details (TF_LOG=1 terraform apply), what platform and Terraform version you're using? I am unable to reproduce with this setup:
Terraform v1.4.5
on darwin_arm64
+ provider registry.terraform.io/mongey/kafka v0.5.3
main.tf
terraform {
  required_providers {
    kafka = {
      source  = "Mongey/kafka"
      version = "0.5.3"
    }
  }
}
provider "kafka" {
bootstrap_servers = ["localhost:9092"]
tls_enabled = false
}
resource "kafka_topic" "syslog" {
name = "syslog"
replication_factor = 1
partitions = 1
config = {
"segment.ms" = "4000"
}
}
docker-compose.yml
---
version: '3.2'
services:
  kafka:
    image: bashj79/kafka-kraft
    ports:
      - "9092:9092"
Applies correctly:
Terraform will perform the following actions:

  # kafka_topic.syslog will be created
  + resource "kafka_topic" "syslog" {
      + config             = {
          + "segment.ms" = "4000"
        }
      + id                 = (known after apply)
      + name               = "syslog"
      + partitions         = 1
      + replication_factor = 1
    }

Plan: 1 to add, 0 to change, 0 to destroy.
Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes
kafka_topic.syslog: Creating...
kafka_topic.syslog: Creation complete after 1s [id=syslog]
Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Here is the debug log. Terraform version and client are the same:
Terraform v1.4.5
on linux_amd64
+ provider registry.terraform.io/hashicorp/consul v2.15.1
+ provider registry.terraform.io/mongey/kafka v0.5.3
Hi there, any update on resolving this issue?
Same issue happening here with versions 0.5.4 and 0.5.1; all of a sudden everything stopped working with the same error:
╷
│ Error: kafka: client has run out of available brokers to talk to: EOF
│
│ with module.redpanda.kafka_topic.redpanda_topics["XXXX"],
│ on modules/redpanda/main.tf line 28, in resource "kafka_topic" "redpanda_topics":
│ 28: resource "kafka_topic" "redpanda_topics" {
│ Error: kafka: client has run out of available brokers to talk to: EOF
│
│ with module.redpanda.kafka_acl.redpanda_write_acls["XXXX"],
│ on modules/redpanda/main.tf line 63, in resource "kafka_acl" "redpanda_write_acls":
│ 63: resource "kafka_acl" "redpanda_write_acls" {```
@Mongey hi ✋
I have the same errors on version 0.5.4 too (0.5.1 works fine).
Here are the Kafka logs (Kafka version 1.0) from when I try to run terraform plan.
There is no ACL configured in this environment.
[2023-12-18 10:39:33,498] ERROR Closing socket for 172.17.0.40:9093-X.X.X.X:52245-7 because of error (kafka.network.Processor)
org.apache.kafka.common.errors.InvalidRequestException: Error getting request for apiKey: METADATA, apiVersion: 7, connectionId: 172.17.0.40:9093-X.X.X.X:52245-7, listenerName: ListenerName(EXTERNAL), principal: User:ANONYMOUS
Caused by: java.lang.IllegalArgumentException: Invalid version for API key METADATA: 7
at org.apache.kafka.common.protocol.ApiKeys.schemaFor(ApiKeys.java:297)
at org.apache.kafka.common.protocol.ApiKeys.requestSchema(ApiKeys.java:267)
at org.apache.kafka.common.protocol.ApiKeys.parseRequest(ApiKeys.java:275)
at org.apache.kafka.common.requests.RequestContext.parseRequest(RequestContext.java:63)
at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:85)
at kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:591)
at kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:585)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.network.Processor.processCompletedReceives(SocketServer.scala:585)
at kafka.network.Processor.run(SocketServer.scala:493)
at java.lang.Thread.run(Thread.java:745)
Greetings 👋
I might have more information on this: using Redpanda v23.3.2, I see the following warnings when encountering the EOF error:
WARN 2024-01-15 16:21:57,001 [shard 0:main] kafka - connection_context.cc:310 - Error while processing request from 172.17.0.1:44278 - Unsupported version 9 for metadata API
WARN 2024-01-15 16:21:57,276 [shard 0:main] kafka - connection_context.cc:310 - Error while processing request from 172.17.0.1:44284 - Unsupported version 9 for metadata API
WARN 2024-01-15 16:21:57,546 [shard 0:main] kafka - connection_context.cc:310 - Error while processing request from 172.17.0.1:44300 - Unsupported version 9 for metadata API
And if you follow the rabbit hole:
https://github.com/Mongey/terraform-provider-kafka/blob/74c5627eab020f26937bb0c2d7da271a5aee3207/kafka/client.go#L56-L60
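In rough terms, those provider lines hand the broker list and a pre-built Sarama config to sarama.NewClient. A minimal sketch, assuming the function and variable names (they are illustrative, not the provider's exact source):

package kafka

import "github.com/IBM/sarama"

// newSaramaClient paraphrases what kafka/client.go does at the lines linked
// above: sarama.NewClient immediately performs the metadata handshake that
// is quoted below.
func newSaramaClient(bootstrapServers []string, cfg *sarama.Config) (sarama.Client, error) {
	return sarama.NewClient(bootstrapServers, cfg)
}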
That call into Sarama ends up here:
https://github.com/IBM/sarama/blob/c067a7f4e5653479b039a09f9978a28a54584e54/client.go#L215-L217
if conf.Metadata.Full {
	// do an initial fetch of all cluster metadata by specifying an empty list of topics
	err := client.RefreshMetadata()
	// ...
}
conf.Metadata.Full is set here: https://github.com/Mongey/terraform-provider-kafka/blob/74c5627eab020f26937bb0c2d7da271a5aee3207/kafka/config.go#L35
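A minimal sketch of what that config setup amounts to; V2_7_0_0 is an assumption for illustration, the real hardcoded constant is at the config.go line linked further down:

package kafka

import "github.com/IBM/sarama"

// newSaramaConfig sketches the provider's Sarama config. The exact pinned
// version lives at kafka/config.go#L32 (linked below); V2_7_0_0 is only an
// illustration — per NewMetadataRequest quoted below, anything in
// [V2_4_0_0, V2_8_0_0) yields Metadata v9.
func newSaramaConfig() *sarama.Config {
	cfg := sarama.NewConfig()
	cfg.Version = sarama.V2_7_0_0
	cfg.Metadata.Full = true // triggers the full metadata fetch in NewClient above
	return cfg
}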
Inside RefreshMetadata, there is a call to broker.GetMetadata(req), and that req is created by NewMetadataRequest like so:
req := NewMetadataRequest(client.conf.Version, topics)
Which ends up passing this:
func NewMetadataRequest(version KafkaVersion, topics []string) *MetadataRequest {
	m := &MetadataRequest{Topics: topics}
	if version.IsAtLeast(V2_8_0_0) {
		m.Version = 10
	} else if version.IsAtLeast(V2_4_0_0) {
		m.Version = 9
	} else if version.IsAtLeast(V2_4_0_0) { // duplicated check in Sarama; this v8 branch is unreachable
		m.Version = 8
	} else if version.IsAtLeast(V2_1_0_0) {
		m.Version = 7
	} // ... older branches elided
And so we get our Metadata version 9, which Redpanda does not support at the moment, and the Kafka version that selects it is hardcoded here:
https://github.com/Mongey/terraform-provider-kafka/blob/74c5627eab020f26937bb0c2d7da271a5aee3207/kafka/config.go#L32
Sooo, how to solve that? I've temporarily forked the provider and downgraded the version to V2_1_0_0, which got me through the initial connection check. Any idea how we could improve this? I'm up for a contribution if we can find something 👍
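For reference, the temporary fork amounts to a one-line version pin along these lines (a sketch, not the exact diff):

package kafka

import "github.com/IBM/sarama"

// workaroundConfig pins the protocol version below V2_4_0_0 so that
// NewMetadataRequest (quoted above) selects Metadata v7, which this
// Redpanda release still accepts.
func workaroundConfig() *sarama.Config {
	cfg := sarama.NewConfig()
	cfg.Version = sarama.V2_1_0_0
	return cfg
}

The trade-off is that pinning the version this low also caps every other request type at older protocol versions, so newer broker features become unavailable.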
@adrien-f would it be worth asking Redpanda if they can implement the missing Metadata API versions to achieve parity with Kafka?
Hey @mattfysh ! It might be worth reaching out to them in their Slack community: https://redpanda.com/slack
Hello again, the fix is merged but not yet released; I think it just missed the v0.7.0 tag by a hair. Any chance there will be a v0.7.1 with this fix included? Thank you 🙏
@mattfysh I released v0.7.1 with the change.