terraform-provider-kafka
import issues
Import is unable to use list generated from locals
provider "consul" {
  address    = var.consul_address
  datacenter = "aws-ue1"
}

data "consul_service" "kafka_cluster" {
  name       = "kafka-cluster"
  datacenter = "aws-ue1"
}

locals {
  bootstrap_servers = [
    for i in data.consul_service.kafka_cluster.service : {
      bootstrap = "${i.node_address}:${i.port}"
    }
  ]
}

provider "kafka" {
  bootstrap_servers = formatlist("%s", [for node in local.bootstrap_servers : node.bootstrap])
  tls_enabled       = false
  skip_tls_verify   = true
}
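As a side note, the formatlist("%s", ...) wrapper is redundant here: the for expression already yields a list of strings, so an equivalent provider block (a sketch of the same config) would be:

```hcl
provider "kafka" {
  # formatlist("%s", xs) is a no-op on a list of strings;
  # the for expression alone produces the same value.
  bootstrap_servers = [for node in local.bootstrap_servers : node.bootstrap]
  tls_enabled       = false
  skip_tls_verify   = true
}
```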
When I attempt to run the import, I get bootstrap_servers was not set:
2020-02-04T10:19:32.625-0600 [INFO] plugin.terraform-provider-kafka_v0.2.3: configuring server automatic mTLS: timestamp=2020-02-04T10:19:32.625-0600
2020-02-04T10:19:32.653-0600 [DEBUG] plugin: using plugin: version=5
2020-02-04T10:19:32.653-0600 [DEBUG] plugin.terraform-provider-kafka_v0.2.3: plugin address: address=/var/folders/3n/kcphbjs52992xcll2ywhjjb00000gp/T/plugin381724773 network=unix timestamp=2020-02-04T10:19:32.653-0600
2020-02-04T10:19:32.708-0600 [DEBUG] plugin.terraform-provider-kafka_v0.2.3: 2020/02/04 10:19:32 [DEBUG] configuring provider with Brokers @ <nil>
2020/02/04 10:19:32 [ERROR] <root>: eval: *terraform.EvalConfigProvider, err: bootstrap_servers was not set
2020/02/04 10:19:32 [ERROR] <root>: eval: *terraform.EvalSequence, err: bootstrap_servers was not set
2020/02/04 10:19:32 [ERROR] <root>: eval: *terraform.EvalOpFilter, err: bootstrap_servers was not set
2020/02/04 10:19:32 [ERROR] <root>: eval: *terraform.EvalSequence, err: bootstrap_servers was not set
Error: bootstrap_servers was not set
Perhaps we're doing the provider validation too early in the process... I'll need to investigate how other providers handle this.
Is there any update on this issue?
I am having the same issue as https://github.com/Mongey/terraform-provider-confluentcloud/issues/13, whereby the provider throws an error if my confluentcloud kafka cluster hasn't been created yet.
I have tried adding depends_on = [confluentcloud_kafka_cluster.myinstance] to my kafka_topic, which should only initialize my kafka provider once the resource exists, but this still does not work.
@jonhoare Have you tried with v0.2.4 ?
@Mongey Yes I am using v0.2.4.
I've only just set this up today and so I am using the latest versions.
- terraform-provider-kafka (v0.2.4)
- terraform-provider-confluent-cloud (v0.0.1)
provider "confluentcloud" {}

resource "confluentcloud_kafka_cluster" "myinstance" {
  name             = "myinstance"
  service_provider = "azure"
  region           = "westeurope"
  availability     = "LOW"
  environment_id   = "env-id"
}

resource "confluentcloud_api_key" "management" {
  cluster_id     = confluentcloud_kafka_cluster.myinstance.id
  environment_id = "env-id"
}

locals {
  bootstrap_servers = [replace(confluentcloud_kafka_cluster.myinstance.bootstrap_servers, "SASL_SSL://", "")]
}

provider "kafka" {
  bootstrap_servers = local.bootstrap_servers
  tls_enabled       = true
  sasl_username     = confluentcloud_api_key.management.key
  sasl_password     = confluentcloud_api_key.management.secret
  sasl_mechanism    = "plain"
}
resource "kafka_topic" "mytopic" {
  depends_on = [confluentcloud_kafka_cluster.myinstance]

  name               = "mytopic"
  replication_factor = 3
  partitions         = 1

  config = {
    "cleanup.policy"                      = "delete"
    "compression.type"                    = "producer"
    "delete.retention.ms"                 = "86400000"
    "file.delete.delay.ms"                = "60000"
    "flush.messages"                      = "9223372036854775807"
    "flush.ms"                            = "9223372036854775807"
    "index.interval.bytes"                = "4096"
    "max.message.bytes"                   = "2097164"
    "message.format.version"              = "1.0-IV0"
    "message.timestamp.difference.max.ms" = "9223372036854775807"
    "message.timestamp.type"              = "CreateTime"
    "min.cleanable.dirty.ratio"           = "0.5"
    "min.insync.replicas"                 = "2"
    "preallocate"                         = "false"
    "retention.bytes"                     = "1000000000"
    "retention.ms"                        = "43200000"
    "segment.bytes"                       = "1073741824"
    "segment.index.bytes"                 = "10485760"
    "segment.jitter.ms"                   = "0"
    "segment.ms"                          = "604800000"
    "unclean.leader.election.enable"      = "false"
  }
}
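A workaround commonly used for this class of problem (a provider configuration that depends on a not-yet-created resource) is a two-stage apply: create the cluster first, then everything that needs the configured provider. A sketch, using the resource addresses from the example above:

```
# Stage 1: create only the cluster and its API key, so the kafka
# provider has real bootstrap servers and credentials to work with.
terraform apply -target=confluentcloud_kafka_cluster.myinstance \
                -target=confluentcloud_api_key.management

# Stage 2: apply the rest, including kafka_topic.mytopic.
terraform apply
```

Terraform will warn that -target applies are incomplete, so this is a bootstrap step rather than a permanent arrangement.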
This should be fixed... This works for me.
provider "consul" {}

data "consul_keys" "kafka_servers" {
  datacenter = "dc1"

  # Read the Kafka bootstrap address from Consul
  key {
    name = "kafka"
    path = "kafka"
  }
}

provider "kafka" {
  bootstrap_servers = [data.consul_keys.kafka_servers.var.kafka]
  ca_cert           = file("../secrets/snakeoil-ca-1.crt")
  client_cert       = file("../secrets/kafkacat-ca1-signed.pem")
  client_key        = file("../secrets/kafkacat-raw-private-key.pem")
  tls_enabled       = true
}
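If the kafka key in Consul ever holds a comma-separated list rather than a single address, the value would need splitting before being handed to the provider. A sketch, where the key's format is an assumption:

```hcl
provider "kafka" {
  # Assumes the consul key "kafka" contains "host1:9092,host2:9092".
  bootstrap_servers = split(",", data.consul_keys.kafka_servers.var.kafka)
  tls_enabled       = true
}
```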
# Make sure we don't lock ourselves out on the first run of terraform.
# First grant ourselves admin permissions, then add ACLs per topic.
resource "kafka_acl" "global" {
  resource_name       = "*"
  resource_type       = "Topic"
  acl_principal       = "User:*"
  acl_host            = "*"
  acl_operation       = "All"
  acl_permission_type = "Allow"
}

resource "kafka_topic" "syslog" {
  name               = "syslog"
  replication_factor = 1
  partitions         = 4

  config = {
    "segment.ms"   = "4000"
    "retention.ms" = "86400000"
  }

  depends_on = [kafka_acl.global]
}

resource "kafka_acl" "test" {
  resource_name       = "syslog"
  resource_type       = "Topic"
  acl_principal       = "User:Alice"
  acl_host            = "*"
  acl_operation       = "Write"
  acl_permission_type = "Deny"

  depends_on = [kafka_acl.global]
}
@Mongey It may work for you because you are using Consul as a source for your bootstrap_servers, whereas @jonhoare is using the sample code that gets the bootstrap_servers from the newly created Kafka cluster.
I mention this because I am having the same issue - bootstrap_servers is not set when creating both a Kafka cluster and Kafka topics from within Terraform.
@jonhoare - were you able to resolve your issue?
The same problem exists when using SASL, like this:
provider "kafka" {
  bootstrap_servers = split(",", aws_msk_cluster.msk_cluster.bootstrap_brokers_sasl_scram)
  sasl_username     = var.kafka_admin_user
  sasl_password     = random_password.scram_password.result
  sasl_mechanism    = "scram-sha512"
}
Error: No bootstrap_servers provided
│
│ with kafka_topic.kafka_topics["test-topic"],
│ on topics.tf line 18, in resource "kafka_topic" "kafka_topics":
│ 18: resource "kafka_topic" "kafka_topics" {
@jonhoare were you able to solve your issue? I'm experiencing the exact same issue on the latest version. This issue should be reopened, as it has not been resolved yet. As @xcjs mentioned, the scenario @Mongey described is a different one and assumes that the bootstrap_servers already exist. This is a core issue that must be resolved, as it prevents creating a new cluster from scratch (given only the Confluent service credentials, username + password).
Are there any updates on this issue? I am experiencing the same problem as @VipulZopSmart, where an aws_msk_cluster is created in the same terraform definition with the Mongey/kafka provider.
Is this something that can even be resolved, or is it a won't-fix? Thanks in advance :)
@dahooligan can you provide the full example?
Hi @Mongey, thanks for your reply. I'll provide you with a (non-)working minimal example asap.
@Mongey, in my example, the error occurs when the provider block is placed in the referenced module.
# source dir
my-tf/
|__msk/v1/
| |__files/
| | |__topics.yaml
| |__main.tf
| |__msk_topic.tf
|__core/
| |__kafka-topic/v1/
| |__main.tf
| |__variables.tf
# topics.yaml
hello_from_cis:
  partitions: 1
hello_from_bob:
  partitions: 1
# msk_topic.tf
locals {
  topics = yamldecode(file("${path.module}/files/topics.yaml"))
}

module "msk_topics" {
  source                = "../../../../../modules/aws/msk-topic/v1"
  msk_bootstrap_servers = split(",", module.msk.bootstrap_brokers[0])
  topics                = local.topics
}
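For reference, yamldecode turns the topics.yaml above into a plain map of objects, so local.topics is equivalent to this literal (a sketch of the decoded value):

```hcl
locals {
  # Equivalent literal value of yamldecode(file("${path.module}/files/topics.yaml")):
  topics = {
    hello_from_cis = { partitions = 1 }
    hello_from_bob = { partitions = 1 }
  }
}
```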
# kafka-topic/v1/main.tf
terraform {
  required_providers {
    kafka = {
      source  = "Mongey/kafka"
      version = "~> 0.5"
    }
  }
}

provider "kafka" {
  bootstrap_servers = var.msk_bootstrap_servers
  tls_enabled       = false
}

resource "kafka_topic" "topics" {
  for_each = var.topics

  name               = each.key
  partitions         = lookup(each.value, "partitions", 1)
  replication_factor = lookup(each.value, "replication_factor", 2)

  config = merge({
    "retention.ms"    = 3600000
    "retention.bytes" = 250000000
  }, lookup(each.value, "config", {}))
}
terraform init; terraform plan
...
Plan: 2 to add, 0 to change, 0 to destroy.
╷
│ Error: Missing required argument
│
│ The argument "bootstrap_servers" is required, but was not set.
╵
Other than that, I've found that moving the provider block from kafka-topic/v1/main.tf to msk_topic.tf makes it work.
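That matches Terraform's general guidance that provider blocks belong in the root module, with reusable modules receiving an already-configured provider. A sketch of that arrangement, using the module path from the example above (the providers argument is standard Terraform, not specific to this provider):

```hcl
# msk_topic.tf (root module): configure kafka here, once the MSK
# cluster outputs are known, and hand it to the child module.
provider "kafka" {
  bootstrap_servers = split(",", module.msk.bootstrap_brokers[0])
  tls_enabled       = false
}

module "msk_topics" {
  source = "../../../../../modules/aws/msk-topic/v1"
  topics = local.topics

  providers = {
    kafka = kafka
  }
}
```

The child module would then drop its own provider "kafka" block and keep only the required_providers entry.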
After examining the error following the plan result, my hypothesis is that the provider needs time to initialize. However, msk_topic.tf invokes kafka-topic/v1/main.tf, which causes resource "kafka_topic" "topics" to initialize before the provider does, resulting in the error.