terraform 1.10 causes "Error: Unsupported argument" on module source that worked with terraform 1.9
Terraform Version
1.10
Terraform Configuration Files
I haven't been able to put together an easily distributable reproduction; the problem only occurs in our private repos. We have multiple occurrences of it, though, and they all refer to the same module source, so I'm providing that source here in case it helps (see also the note on the source address format after the file listings). Note that the problem occurs when running terraform validate as well as terraform plan.
A module block that uses the module source is:
module "product-authoring-private-topic" {
source = "github.com/our-org/kafka-automation?depth=1//terraform-modules/private-topic"
name = "product-authoring.product.v1beta1"
partitions = 3
replication_factor = 2
cleanup_policy = "delete"
retention_ms = 1209600000 # 14 days
compression_type = "snappy"
min_cleanable_dirty_ratio = 0.5
min_insync_replicas = 2
}
and the module source (which is where we think the problem is occurring) is spread over the following four files:
main.tf:
resource "kafka_topic" "this" {
name = "${data.external.github_repo.result.name}.${var.name}"
replication_factor = var.replication_factor
partitions = var.partitions
config = {
"cleanup.policy" = "${var.cleanup_policy}"
"compression.type" = "${var.compression_type}"
"delete.retention.ms" = "${var.delete_retention_ms}"
"file.delete.delay.ms" = "${var.file_delete_delay_ms}"
"flush.messages" = "${var.flush_messages}"
"flush.ms" = "${var.flush_ms}"
"follower.replication.throttled.replicas" = "${var.follower_replication_throttled_replicas}"
"index.interval.bytes" = "${var.index_interval_bytes}"
"leader.replication.throttled.replicas" = "${var.leader_replication_throttled_replicas}"
"max.compaction.lag.ms" = "${var.max_compaction_lag_ms}"
"max.message.bytes" = "${var.max_message_bytes}"
"message.timestamp.difference.max.ms" = "${var.message_timestamp_difference_max_ms}"
"message.timestamp.type" = "${var.message_timestamp_type}"
"min.cleanable.dirty.ratio" = "${var.min_cleanable_dirty_ratio}"
"min.compaction.lag.ms" = "${var.min_compaction_lag_ms}"
"min.insync.replicas" = "${var.min_insync_replicas}"
"preallocate" = "${var.preallocate}"
"retention.bytes" = "${var.retention_bytes}"
"retention.ms" = "${var.retention_ms}"
"segment.bytes" = "${var.segment_bytes}"
"segment.index.bytes" = "${var.segment_index_bytes}"
"segment.jitter.ms" = "${var.segment_jitter_ms}"
"segment.ms" = "${var.segment_ms}"
"unclean.leader.election.enable" = "${var.unclean_leader_election_enable}"
"message.downconversion.enable" = "${var.message_downconversion_enable}"
}
}
data "external" "github_repo" {
program = ["bash", "${path.module}/external/get-github-repo.sh"]
}
outputs.tf:
output "name" {
description = "The final name for the topic"
value = kafka_topic.this.name
}
terraform.tf:
terraform {
  required_providers {
    kafka = {
      source = "Mongey/kafka"
      version = "~> 0.5.2"
    }
  }
}
vars.tf:
variable "name" {
type = string
description = "The name of the topic. The final topic name would be prepended with the GitHub repository name"
}
variable "partitions" {
type = number
description = "The number of partitions for the topic"
}
variable "replication_factor" {
type = number
description = "The number of replicas for the topic"
validation {
condition = var.replication_factor >= 2
error_message = "Replication factor should not be less than 2."
}
}
# https://kafka.apache.org/documentation/#topicconfigs_cleanup.policy
variable "cleanup_policy" {
type = string
description = "This config designates the retention policy to use on log segments"
default = "delete"
validation {
condition = contains(["compact", "delete", "compact,delete"], var.cleanup_policy)
error_message = "Cleanup policy should either be 'compact', 'delete' or 'compact,delete'."
}
}
# https://kafka.apache.org/documentation/#topicconfigs_compression.type
variable "compression_type" {
type = string
description = "Specify the final compression type for a given topic"
default = "producer"
validation {
condition = contains(["uncompressed", "zstd", "lz4", "snappy", "gzip", "producer"], var.compression_type)
error_message = "Compression type should be one of: uncompressed, zstd, lz4, snappy, gzip, producer."
}
}
# https://kafka.apache.org/documentation/#topicconfigs_delete.retention.ms
variable "delete_retention_ms" {
type = number
description = "The amount of time to retain delete tombstone markers for log compacted topics"
# 1 day
default = 86400000
validation {
condition = var.delete_retention_ms >= 0
error_message = "Value of delete_retention_ms should not be less than 0."
}
}
# https://kafka.apache.org/documentation/#topicconfigs_file.delete.delay.ms
variable "file_delete_delay_ms" {
type = number
description = "The time to wait before deleting a file from the filesystem"
# 1 minute
default = 60000
validation {
condition = var.file_delete_delay_ms >= 0
error_message = "Value of file_delete_delay_ms should not be less than 0."
}
}
# https://kafka.apache.org/documentation/#topicconfigs_flush.messages
variable "flush_messages" {
type = number
description = "This setting allows specifying an interval at which we will force an fsync of data written to the log"
default = 9223372036854775807
validation {
condition = var.flush_messages >= 1
error_message = "Value of flush_messages should not be less than 1."
}
}
# https://kafka.apache.org/documentation/#topicconfigs_flush.ms
variable "flush_ms" {
type = number
description = "This setting allows specifying a time interval at which we will force an fsync of data written to the log"
default = 9223372036854775807
validation {
condition = var.flush_ms >= 0
error_message = "Value of flush_ms should not be less than 0."
}
}
# https://kafka.apache.org/documentation/#topicconfigs_follower.replication.throttled.replicas
variable "follower_replication_throttled_replicas" {
type = string
description = "A list of replicas for which log replication should be throttled on the follower side"
default = ""
}
# https://kafka.apache.org/documentation/#topicconfigs_index.interval.bytes
variable "index_interval_bytes" {
type = number
description = "This setting controls how frequently Kafka adds an index entry to its offset index"
default = 4096
validation {
condition = var.index_interval_bytes >= 0
error_message = "Value of index_interval_bytes should not be less than 0."
}
}
# https://kafka.apache.org/documentation/#topicconfigs_leader.replication.throttled.replicas
variable "leader_replication_throttled_replicas" {
type = string
description = "A list of replicas for which log replication should be throttled on the leader side"
default = ""
}
# https://kafka.apache.org/documentation/#topicconfigs_max.compaction.lag.ms
variable "max_compaction_lag_ms" {
type = number
description = "The maximum time a message will remain ineligible for compaction in the log"
default = 9223372036854775807
validation {
condition = var.max_compaction_lag_ms >= 1
error_message = "Value of max_compaction_lag_ms should not be less than 1."
}
}
# https://kafka.apache.org/documentation/#topicconfigs_max.message.bytes
variable "max_message_bytes" {
type = number
description = "The largest record batch size allowed by Kafka (after compression if compression is enabled)"
default = 1048588
validation {
condition = var.max_message_bytes >= 0
error_message = "Value of max_message_bytes should not be less than 0."
}
}
# https://kafka.apache.org/documentation/#topicconfigs_message.timestamp.difference.max.ms
variable "message_timestamp_difference_max_ms" {
type = number
description = "The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message"
default = 9223372036854775807
validation {
condition = var.message_timestamp_difference_max_ms >= 0
error_message = "Value of message_timestamp_difference_max_ms should not be less than 0."
}
}
# https://kafka.apache.org/documentation/#topicconfigs_message.timestamp.type
variable "message_timestamp_type" {
type = string
description = "Define whether the timestamp in the message is message create time or log append time"
default = "CreateTime"
validation {
condition = contains(["CreateTime", "LogAppendTime"], var.message_timestamp_type)
error_message = "Message timestamp type should be CreateTime or LogAppendTime."
}
}
# https://kafka.apache.org/documentation/#topicconfigs_min.cleanable.dirty.ratio
variable "min_cleanable_dirty_ratio" {
type = number
description = "This configuration controls how frequently the log compactor will attempt to clean the log (assuming log compaction is enabled)"
default = 0.5
validation {
condition = var.min_cleanable_dirty_ratio >= 0 && var.min_cleanable_dirty_ratio <= 1
error_message = "Value of min_cleanable_dirty_ratio should be between 0 and 1, 0 and 1 inclusive."
}
}
# https://kafka.apache.org/documentation/#topicconfigs_min.compaction.lag.ms
variable "min_compaction_lag_ms" {
  type = number
  description = "The minimum time a message will remain uncompacted in the log"
  default = 0
  validation {
    condition = var.min_compaction_lag_ms >= 0
    error_message = "Value of min_compaction_lag_ms should not be less than 0."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_min.insync.replicas
variable "min_insync_replicas" {
  type = number
  description = "When a producer sets acks to 'all' (or '-1'), this configuration specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful"
  default = 1
  validation {
    condition = var.min_insync_replicas >= 1
    error_message = "Value of min_insync_replicas should not be less than 1."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_preallocate
variable "preallocate" {
  type = bool
  description = "True if we should preallocate the file on disk when creating a new log segment"
  default = false
}

# https://kafka.apache.org/documentation/#topicconfigs_retention.bytes
variable "retention_bytes" {
  type = number
  description = "This configuration controls the maximum size a partition (which consists of log segments) can grow to before we will discard old log segments to free up space if we are using the 'delete' retention policy"
  default = -1
}

# https://kafka.apache.org/documentation/#topicconfigs_retention.ms
variable "retention_ms" {
  type = number
  description = "This configuration controls the maximum time we will retain a log before we will discard old log segments to free up space if we are using the 'delete' retention policy"
  # 7 days
  default = 604800000
  validation {
    condition = var.retention_ms >= -1
    error_message = "Value of retention_ms should not be less than -1."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_segment.bytes
variable "segment_bytes" {
  type = number
  description = "This configuration controls the segment file size for the log"
  # 1 gibibyte
  default = 1073741824
  validation {
    condition = var.segment_bytes >= 14
    error_message = "Value of segment_bytes should not be less than 14."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_segment.index.bytes
variable "segment_index_bytes" {
  type = number
  description = "This configuration controls the size of the index that maps offsets to file positions"
  # 10 mebibytes
  default = 10485760
  validation {
    condition = var.segment_index_bytes >= 4
    error_message = "Value of segment_index_bytes should not be less than 4."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_segment.jitter.ms
variable "segment_jitter_ms" {
  type = number
  description = "The maximum random jitter subtracted from the scheduled segment roll time to avoid thundering herds of segment rolling"
  default = 0
  validation {
    condition = var.segment_jitter_ms >= 0
    error_message = "Value of segment_jitter_ms should not be less than 0."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_segment.ms
variable "segment_ms" {
  type = number
  description = "This configuration controls the period of time after which Kafka will force the log to roll even if the segment file isn't full to ensure that retention can delete or compact old data."
  # 7 days
  default = 604800000
  validation {
    condition = var.segment_ms >= 1
    error_message = "Value of segment_ms should not be less than 1."
  }
}

# https://kafka.apache.org/documentation/#topicconfigs_unclean.leader.election.enable
variable "unclean_leader_election_enable" {
  type = bool
  description = "Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss"
  default = false
}

# https://kafka.apache.org/documentation/#topicconfigs_message.downconversion.enable
variable "message_downconversion_enable" {
  type = bool
  description = "This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests"
  default = true
}
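One thing we noticed while preparing this report: the Terraform documentation on module sources shows the //subdirectory portion of a source address before the query string (for example //modules/vpc?ref=v1.2.0), whereas our source puts ?depth=1 before the //. That ordering worked under 1.9; we don't yet know whether 1.10 parses it differently. A hypothetical rewrite of our module call in the documented form would be:
module "product-authoring-private-topic" {
  # Subdirectory placed before the query string, per the module sources docs.
  source = "github.com/our-org/kafka-automation//terraform-modules/private-topic?depth=1"
  # ... remaining arguments as above ...
}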
Debug Output
https://gist.github.com/jamiekt/f8e9c59ce3605e72446b238e1cb49e82
Expected Behavior
When using Terraform 1.9, terraform validate ran successfully. We expect the same to happen with Terraform 1.10.
Actual Behavior
When using Terraform 1.10, the same command fails with:
╷
│ Error: Unsupported argument
│
│ on main.tf line 36, in module "product-authoring-private-topic":
│ 36: name = "product-authoring.product.v1beta1"
│
│ An argument named "name" is not expected here.
╵
╷
│ Error: Unsupported argument
│
│ on main.tf line 37, in module "product-authoring-private-topic":
│ 37: partitions = 3
│
│ An argument named "partitions" is not expected here.
╵
╷
│ Error: Unsupported argument
│
│ on main.tf line 38, in module "product-authoring-private-topic":
│ 38: replication_factor = 2
│
│ An argument named "replication_factor" is not expected here.
╵
╷
│ Error: Unsupported argument
│
│ on main.tf line 39, in module "product-authoring-private-topic":
│ 39: cleanup_policy = "delete"
│
│ An argument named "cleanup_policy" is not expected here.
╵
╷
│ Error: Unsupported argument
│
│ on main.tf line 40, in module "product-authoring-private-topic":
│ 40: retention_ms = 1209600000 # 14 days
│
│ An argument named "retention_ms" is not expected here.
╵
╷
│ Error: Unsupported argument
│
│ on main.tf line 41, in module "product-authoring-private-topic":
│ 41: compression_type = "snappy"
│
│ An argument named "compression_type" is not expected here.
╵
╷
│ Error: Unsupported argument
│
│ on main.tf line 42, in module "product-authoring-private-topic":
│ 42: min_cleanable_dirty_ratio = 0.5
│
│ An argument named "min_cleanable_dirty_ratio" is not expected here.
╵
╷
│ Error: Unsupported argument
│
│ on main.tf line 43, in module "product-authoring-private-topic":
│ 43: min_insync_replicas = 2
│
│ An argument named "min_insync_replicas" is not expected here.
╵
Error: Terraform exited with code 1.
Error: Process completed with exit code 1.
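For what it's worth, the shape of these diagnostics (every single argument rejected) matches what Terraform produces when the resolved module directory declares no variables at all, which is why we suspect the //terraform-modules/private-topic part of the source address is no longer being resolved to the intended subdirectory. A minimal, hypothetical illustration (not our actual code):
# Assume ./empty-module is a directory that exists but declares no variables;
# every argument passed to it is then rejected with "Unsupported argument".
module "empty" {
  source = "./empty-module"
  name   = "anything" # => Error: Unsupported argument
}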
Steps to Reproduce
- terraform init
- terraform validate
Additional Context
The error occurs when running in a GitHub Actions workflow.
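A possible stopgap for anyone hitting the same thing is to pin the root module to the 1.9 series until the behaviour change is understood; a minimal sketch (assuming a required_version constraint is acceptable in your setup):
terraform {
  # Hypothetical stopgap: stay on the last known-good minor release.
  required_version = "~> 1.9.0"
}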