terraform-provider-mongodbatlas
Modifying the top-level num_shards doesn't take effect - the value is not passed into the regions_config sent to the Atlas API
Terraform CLI and Terraform MongoDB Atlas Provider Version
terraform version: 0.13.0
We haven't pinned a version for the provider, so it should be the latest. We are using Terraform Enterprise.
provider "mongodbatlas" {
public_key = var.public_key
private_key = var.private_key
}
Terraform Configuration File
resource "mongodbatlas_cluster" "xxx_xxx_cluster2" {
project_id = mongodbatlas_project.xxx.id
name = "xxxx-mongodb-cluster"
cluster_type = "SHARDED"
num_shards = 3
replication_factor = 1
provider_backup_enabled = false
auto_scaling_disk_gb_enabled = false
mongo_db_major_version = "4.2"
//Provider Settings "block"
provider_name = "AWS"
disk_size_gb = 40
provider_volume_type = "STANDARD"
provider_encrypt_ebs_volume = true
provider_instance_size_name = "M30"
provider_region_name = "AP_NORTHEAST_1"
}
Steps to Reproduce
1. Run terraform apply to create a one-shard cluster.
2. Change the shard number to 3 and apply again.
Expected Behavior
MongoDB Atlas should end up with 3 shards.
Actual Behavior
terraform plan and terraform apply run without errors, but MongoDB Atlas decides no change is needed.
Re-running terraform plan shows that the shard number still needs to be changed from 1 to 3.
Debug Output
Crash Output
Additional Context
References
@iwasnobody thank you for the report. The provider reads num_shards the first time and creates a single shard correctly, but when you change that value to 2 it sees the diff and then does not send it to the right place in Atlas. I recreated your environment and saw the following in the plan (I removed all the unrelated output, but this is why capturing the plan helps us speed up replying to issues; it took a bit of time to recreate):
~ num_shards = 1 -> 2
....
replication_specs {
    id         = "5f52bb0bbc47752bb303c552"
    num_shards = 1
    zone_name  = "Zone 1"
    regions_config {
        analytics_nodes = 0
        electable_nodes = 3
        priority        = 7
        read_only_nodes = 0
        region_name     = "US_EAST_1"
    }
}
The num_shards = 1 inside replication_specs is the important part: that value also has to be updated with the correct shard number. In fact I think this is a bit of a bug, and a needed improvement to make sharding easier to implement in Terraform.
However, I also found a workaround for now: you can either start with a config like this, or change yours to set num_shards in two places (as in our GEOSHARDED cluster example):
resource "mongodbatlas_cluster" "xxx_xxx_cluster2" { project_id = mongodbatlas_project.xxx.id name = "xxxx-mongodb-cluster" cluster_type = "SHARDED" num_shards = 1
provider_backup_enabled = false auto_scaling_disk_gb_enabled = false mongo_db_major_version = "4.2"
//Provider Settings "block" provider_name = "AWS" disk_size_gb = 40 provider_volume_type = "STANDARD" provider_encrypt_ebs_volume = true provider_instance_size_name = "M30" //used a closer region to my area provider_region_name = "US_EAST_1"
replication_specs { num_shards = 2 } }
In my test this correctly updated the cluster, created with a config like yours, to a 2-shard configuration.
Also note that
replication_factor = 1
is not valid; replication_factor defaults to 3 because Atlas always provides a 3-node replica set.
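For context, the per-shard node layout is expressed through regions_config inside replication_specs rather than through replication_factor. Here is a minimal sketch, mirroring the values from the plan output above (the region name and node counts come from that output, not from the reporter's config, so adjust them to your own cluster):
replication_specs {
  num_shards = 2

  regions_config {
    region_name     = "US_EAST_1"
    electable_nodes = 3
    priority        = 7
    read_only_nodes = 0
    analytics_nodes = 0
  }
}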
Ran into this just now too. It seems you also have to set it in replication_specs:
replication_specs {
num_shards = var.shard_count
}
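If you drive the shard count from a variable as in the snippet above, a declaration along these lines would pair with it (a sketch: the name shard_count comes from the snippet, and the default here is illustrative). The same variable can also be used for the top-level num_shards so the two settings stay in sync:
variable "shard_count" {
  type        = number
  description = "Number of shards; used for both num_shards and replication_specs.num_shards"
  default     = 3
}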
I'm hitting this as well with v0.9.1 while testing the migration of a replica set to a sharded cluster, in preparation for doing this on a production cluster. There is constant config drift saying it will update the shards:
# mongodbatlas_cluster.shard-test will be updated in-place
~ resource "mongodbatlas_cluster" "shard-test" {
id = "Y2x1c3Rlcl9pZA==:NjBkNmE1MjQ5YjhiNWI0NjM1NDRmYmYx-Y2x1c3Rlcl9uYW1l:c2hhcmQtdGVzdA==-cHJvamVjdF9pZA==:NWY4xxxxxxxxxxxxxxxxx-cHJvdmlkZXJfbmFtZQ==:R0NQ"
name = "shard-test"
~ num_shards = 1 -> 3
When adding the replication_specs block, I also got this drift on the zone name, which was unexpected:
~ replication_specs {
id = "60d6a5249b8b5b463544fbe9"
~ num_shards = 1 -> 3
~ zone_name = "Zone 1" -> "ZoneName managed by Terraform"
My resulting change was to add this config block in the mongodbatlas_cluster resource:
replication_specs {
num_shards = 3
zone_name = "Zone 1"
}
Here is the relevant output from my TF plan before adding replication_specs:
{
"address": "mongodbatlas_cluster.shard-test",
"mode": "managed",
"type": "mongodbatlas_cluster",
"name": "shard-test",
"provider_name": "registry.terraform.io/mongodb/mongodbatlas",
"schema_version": 1,
"values": {
"advanced_configuration": [
{
"fail_index_key_too_long": false,
"javascript_enabled": true,
"minimum_enabled_tls_protocol": "TLS1_2",
"no_table_scan": false,
"oplog_size_mb": 0,
"sample_refresh_interval_bi_connector": 0,
"sample_size_bi_connector": 0
}
],
"auto_scaling_compute_enabled": false,
"auto_scaling_compute_scale_down_enabled": false,
"auto_scaling_disk_gb_enabled": true,
"backing_provider_name": "",
"backup_enabled": false,
"bi_connector": null,
"bi_connector_config": [
{
"enabled": false,
"read_preference": "secondary"
}
],
"cluster_id": "60d6a5249b8b5b463544fbf1",
"cluster_type": "SHARDED",
"connection_strings": [
{
"aws_private_link": {},
"aws_private_link_srv": {},
"private": "",
"private_endpoint": [],
"private_srv": "",
"standard": "mongodb://shard-test-shard-00-00.xxxxx.mongodb.net:27016,shard-test-shard-00-01.xxxxx.mongodb.net:27016,shard-test-shard-00-02.xxxxx.mongodb.net:27016/?ssl=true\u0026authSource=admin",
"standard_srv": "mongodb+srv://shard-test.xxxxx.mongodb.net"
}
],
"container_id": "60d6axxxxxxxxxxxxxxxxxxxxx",
"disk_size_gb": 40,
"encryption_at_rest_provider": "NONE",
"id": "Y2x1c3Rlcl9pZA==:NjBkNmE1MjQ5YjhiNWI0NjM1NDRmYmYx-Y2x1c3Rlcl9uYW1l:c2hhcmQtdGVzdA==-cHJvamVjdF9pZA==:NWY4OWM1OTAwZjY3NDE3OTViMjk1NGU4-cHJvdmlkZXJfbmFtZQ==:R0NQ",
"labels": [],
"mongo_db_major_version": "4.4",
"mongo_db_version": "4.4.6",
"mongo_uri": "mongodb://shard-test-shard-00-00.xxxxx.mongodb.net:27016,shard-test-shard-00-01.xxxxx.mongodb.net:27016,shard-test-shard-00-02.xxxxx.mongodb.net:27016",
"mongo_uri_updated": "2021-06-26T12:58:39Z",
"mongo_uri_with_options": "mongodb://shard-test-shard-00-00.xxxxx.mongodb.net:27016,shard-test-shard-00-01.xxxxx.mongodb.net:27016,shard-test-shard-00-02.xxxxx.mongodb.net:27016/?ssl=true\u0026authSource=admin",
"name": "shard-test",
"num_shards": 3,
"paused": false,
"pit_enabled": false,
"project_id": "5f89cxxxxxxxxxxxxxxxxxxxx",
"provider_auto_scaling_compute_max_instance_size": "",
"provider_auto_scaling_compute_min_instance_size": "",
"provider_backup_enabled": true,
"provider_disk_iops": null,
"provider_disk_type_name": "",
"provider_encrypt_ebs_volume": null,
"provider_encrypt_ebs_volume_flag": null,
"provider_instance_size_name": "M30",
"provider_name": "GCP",
"provider_region_name": "CENTRAL_US",
"provider_volume_type": "",
"replication_factor": 3,
"replication_specs": [
{
"id": "60d6a5249b8b5b463544fbe9",
"num_shards": 1,
"regions_config": [
{
"analytics_nodes": 0,
"electable_nodes": 3,
"priority": 7,
"read_only_nodes": 0,
"region_name": "CENTRAL_US"
}
],
"zone_name": "Zone 1"
}
],
"snapshot_backup_policy": [
{
"cluster_id": "60d6a5249b8b5b463544fbf1",
"cluster_name": "shard-test",
"next_snapshot": "2021-06-26T16:06:55Z",
"policies": [
{
"id": "60d6a7dc14449b145a655c75",
"policy_item": [
{
"frequency_interval": 6,
"frequency_type": "hourly",
"id": "60d6a7dc14449b145a655c76",
"retention_unit": "days",
"retention_value": 2
},
{
"frequency_interval": 1,
"frequency_type": "daily",
"id": "60d6a7dc14449b145a655c77",
"retention_unit": "days",
"retention_value": 7
},
{
"frequency_interval": 6,
"frequency_type": "weekly",
"id": "60d6a7dc14449b145a655c78",
"retention_unit": "weeks",
"retention_value": 4
},
{
"frequency_interval": 40,
"frequency_type": "monthly",
"id": "60d6a7dc14449b145a655c79",
"retention_unit": "months",
"retention_value": 12
}
]
}
],
"reference_hour_of_day": 4,
"reference_minute_of_hour": 6,
"restore_window_days": 7,
"update_snapshots": false
}
],
"srv_address": "mongodb+srv://shard-test.xxxxx.mongodb.net",
"state_name": "IDLE"
}
}
While this config and workaround seem valid, and I tested it successfully on an M30, I was unable to apply it to an M50 (VPC-attached in GCP) and convert it to a 3-shard cluster. I think this is related, but it unexpectedly failed.
module.gcp_us_central1.mongodbatlas_cluster.detection: Modifying... [id=Y2x1c3Rlcl9pZA==:NWY5Yxxxxxxxxxxxxxxxxxxxxxxx-Y2x1c3Rlcl9uYW1l:xxxxxxxxxxx-cHJvamVjdF9pZA==:NWY5Yjxxxxxxxxxxxxxxxx-cHJvdmlkZXJfbmFtZQ==:R0NQ]
Error: error updating MongoDB Cluster (detection): PATCH https://cloud.mongodb.com/api/atlas/v1.0/groups/5f9b1xxxxxxxxxxxxx/clusters/xxxxxxxxxx: 400 (request "INVALID_CLUSTER_CONFIGURATION") The specified cluster configuration is not valid.
I made the same change in the UI, and now Terraform says:
No changes. Infrastructure is up-to-date.
@mbrancato can you provide the entire cluster config for the one above?
Sure @themantissa - only the name was changed below.
resource "mongodbatlas_cluster" "mydb" {
project_id = data.mongodbatlas_project.default.id
name = "mydb"
num_shards = 3
replication_factor = 3
provider_backup_enabled = true
auto_scaling_disk_gb_enabled = true
mongo_db_major_version = "4.4"
provider_name = "GCP"
provider_instance_size_name = "M50"
provider_region_name = "CENTRAL_US"
replication_specs {
num_shards = 3
zone_name = "Zone 1"
}
}
The previous replicaset config was:
resource "mongodbatlas_cluster" "mydb" {
project_id = data.mongodbatlas_project.default.id
name = "mydb"
num_shards = 1
replication_factor = 3
provider_backup_enabled = true
auto_scaling_disk_gb_enabled = true
mongo_db_major_version = "4.4"
provider_name = "GCP"
provider_instance_size_name = "M50"
provider_region_name = "CENTRAL_US"
}
The root issue is covered by internal ticket INTMDB-432. It doesn't look like a bug per se, but it is something to look into further, especially if we see a more recent issue filed. cc @Zuhairahmed
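For readers landing on this issue later, a minimal consolidated sketch of the workaround discussed above: set num_shards both at the top level and inside replication_specs, and keep zone_name explicit to avoid the zone-name drift noted earlier. This worked in the M30 tests in this thread, though not for the M50 VPC-attached case; the resource name, project reference, and sizes below are illustrative, based on the configs above:
resource "mongodbatlas_cluster" "sharded" {
  project_id = mongodbatlas_project.xxx.id
  name = "xxxx-mongodb-cluster"
  cluster_type = "SHARDED"
  num_shards = 3
  mongo_db_major_version = "4.2"
  provider_name = "AWS"
  provider_instance_size_name = "M30"
  provider_region_name = "AP_NORTHEAST_1"

  replication_specs {
    num_shards = 3
    zone_name  = "Zone 1"
  }
}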