terraform-aws-dynamodb-table
updating Amazon DynamoDB Table: updating replicas, while creating: updating replica point in time recovery: updating PITR: ValidationException: 1 validation error detected: Invalid AWS region
Hi, I really hope you can help me with this issue, as I've already spent three days troubleshooting and trying different things. Thanks in advance.
Description
We have several DynamoDB tables in our main region (us-east-1) and want to create replica (global) tables in another region (us-east-2). We have PITR enabled in the main region, but we do not want to enable it for the tables in the replica region. So I have a Terraform configuration which in theory should work fine, and it does create all the replica tables, but the process completes with a very odd error:
```
Error: updating Amazon DynamoDB Table (arn:aws:dynamodb:us-east-1:<aws_account_id>:table/<table_name>): updating replicas, while creating: updating replica (us-east-2) point in time recovery: updating PITR: ValidationException: 1 validation error detected: Invalid AWS region in 'arn:aws:dynamodb:us-east-1:<aws_account_id>:table/<table_name>'
```
This very same error shows for each table created in the replica region, though, as I mentioned, the global tables were created successfully. It seems to somehow try to configure PITR for the replica region as well, but I don't understand why.
I first tried not passing the point_in_time_recovery parameter in the replica_regions block at all, since I see that your module sets it to null here, and the default value for the aws_dynamodb_table resource is false in the AWS provider according to the documentation.
Then I tried updating the configuration to pass the point_in_time_recovery parameter as false explicitly in the replica_regions block, but I got the same error. I cannot find anything related on the Internet. I understand that it is a ValidationException returned from the AWS API, but I don't understand what I am missing.
- [x] I have searched the open/closed issues and my issue is not listed.
Versions
- Module version [Required]: 4.0.1
- Terraform version: 1.7.4
- Provider version(s): 5.46.0
Reproduction Code [Required]
main.tf
```hcl
module "dynamodb_table" {
  source  = "terraform-aws-modules/dynamodb-table/aws"
  version = "4.0.1"

  name      = "table_name"
  hash_key  = "HK"
  range_key = "SK"

  point_in_time_recovery_enabled     = true
  ttl_enabled                        = true
  ttl_attribute_name                 = "expire_ttl"
  server_side_encryption_enabled     = true
  server_side_encryption_kms_key_arn = data.aws_kms_key.general_key.arn
  table_class                        = "STANDARD"
  deletion_protection_enabled        = true
  stream_enabled                     = true

  attributes               = local.dynamodb_attributes
  global_secondary_indexes = local.dynamodb_global_secondary_indexes
  replica_regions          = local.dynamodb_replica_regions

  tags = local.tags
}
```
local.tf
```hcl
locals {
  environment    = "prod"
  region         = "us-east-1"
  replica_region = "us-east-2"

  tags = {
    Terraform   = "true"
    Environment = local.environment
  }

  dynamodb_attributes = [
    {
      name = "HK"
      type = "S"
    },
    {
      name = "SK"
      type = "S"
    },
    {
      name = "GSI1_HK"
      type = "S"
    },
    {
      name = "GSI1_SK"
      type = "S"
    },
    {
      name = "GSI2_HK"
      type = "S"
    },
    {
      name = "GSI2_SK"
      type = "S"
    }
  ]

  dynamodb_global_secondary_indexes = [
    {
      name            = "GSI1"
      hash_key        = "GSI1_HK"
      range_key       = "GSI1_SK"
      projection_type = "ALL"
    },
    {
      name            = "GSI2"
      hash_key        = "GSI2_HK"
      range_key       = "GSI2_SK"
      projection_type = "ALL"
    },
  ]

  dynamodb_replica_regions = [{
    region_name            = local.replica_region
    kms_key_arn            = data.aws_kms_key.dynamodb_replica_cmk.arn
    propagate_tags         = true
    point_in_time_recovery = false
  }]
}
```
data.tf
```hcl
data "aws_kms_key" "dynamodb_replica_cmk" {
  provider = aws.replica
  key_id   = "alias/replica-cmk"
}

data "aws_kms_key" "general_key" {
  key_id = "alias/general-key"
}
```
provider.tf
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 5.21"
    }
  }
}

provider "aws" {
  region = local.replica_region
  alias  = "replica"
}
```
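The default (primary-region) provider block is not shown above; presumably it looks something like the following sketch, an assumption based on `local.region` being "us-east-1" in local.tf:

```hcl
# Assumed default provider for the primary region (not shown in the original config);
# local.region resolves to "us-east-1" per local.tf.
provider "aws" {
  region = local.region
}
```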
Steps to reproduce the behavior:
- Are you using workspaces? No, we are not using workspaces.
- Have you cleared the local cache (see Notice section above)? Yes, I have cleared the local cache.
List steps in order that led up to the issue you encountered:
- Just run terraform apply, that's it. The plan looks fine, but the apply generates the error mentioned above.
Expected behavior
This should create replica (global) tables in the replica region without PITR enabled.
Actual behavior
It does create the replica tables, but the process completes with the following error (for each table described in the Terraform configuration), already mentioned above:
```
Error: updating Amazon DynamoDB Table (arn:aws:dynamodb:us-east-1:<aws_account_id>:table/<table_name>): updating replicas, while creating: updating replica (us-east-2) point in time recovery: updating PITR: ValidationException: 1 validation error detected: Invalid AWS region in 'arn:aws:dynamodb:us-east-1:<aws_account_id>:table/<table_name>'
```
Terminal Output Screenshot(s)
Additional context
After the global tables were created (despite terraform apply failing), running it one more time causes even terraform plan to fail with this error:
```
Error: reading Amazon DynamoDB Table (arn:aws:dynamodb:us-east-1:<aws_account_id>:table/<table_name>): describing Continuous Backups: ValidationException: 1 validation error detected: Invalid AWS region in 'arn:aws:dynamodb:us-east-1:<aws_account_id>:table/<table_name>' status code: 400, request id: AND2DAN15FOAP7KSA3LK5VCDDRVV4KQNSO5AEMVJF66Q9ASUAAJG
```
That's why I think it tries to configure PITR for the replica tables as well, even though I set it to false explicitly.
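If that reading is right, the PITR call for the replica is being made against the us-east-2 endpoint while still referencing the primary table's ARN, whose embedded region is us-east-1, and the endpoint rejects the mismatched region. A minimal HCL illustration of the mismatch, using a hypothetical account ID and table name:

```hcl
locals {
  # Hypothetical primary-table ARN; the region is the 4th colon-separated field.
  primary_table_arn = "arn:aws:dynamodb:us-east-1:111111111111:table/example"
  arn_region        = split(":", local.primary_table_arn)[3] # "us-east-1"

  # A us-east-2 endpoint receiving this ARN sees a region mismatch,
  # consistent with the "Invalid AWS region" ValidationException above.
  region_mismatch = local.arn_region != "us-east-2" # true
}
```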