terraform-provider-oci
Network Security Group not added to containerengine node_pool
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Terraform Version and Provider Version
Terraform v0.12.31
provider.oci v4.56.0
Affected Resource(s)
oci_containerengine_node_pool
Terraform Configuration Files
resource "oci_containerengine_node_pool" "node_pool1_cp" {
#Required
cluster_id = oci_containerengine_cluster.oke_cp.id
compartment_id = data.terraform_remote_state.common.outputs.onetooldev_compartment
kubernetes_version = var.oke_cp_node_pool_kubernetes_version
name = var.oke_cp_node_pool_name
node_shape = var.oke_cp_node_pool_node_shape
node_shape_config {
memory_in_gbs = var.oke_cp_node_pool_node_shape_memingb
ocpus = var.oke_cp_node_pool_node_shape_ocpus
}
node_source_details {
#Required
image_id = var.oke_cp_node_pool_node_image_id
source_type = "image"
#Optional
boot_volume_size_in_gbs = var.oke_cp_node_pool_node_boot_volume_size_in_gb
}
node_metadata = {
ssh_authorized_keys = var.ssh_public_key_oke_cp
hostclass = var.oke_node_pool_node_hostclass
user_data = filebase64("./oke-init/cloud-init-prod.yaml")
}
node_config_details {
placement_configs {
availability_domain = lookup(data.oci_identity_availability_domains.ashburn.availability_domains[0],"name")
subnet_id = oci_core_subnet.oke-cp-subnet-worker.id
}
placement_configs {
availability_domain = lookup(data.oci_identity_availability_domains.ashburn.availability_domains[1],"name")
subnet_id = oci_core_subnet.oke-cp-subnet-worker.id
}
placement_configs {
availability_domain = lookup(data.oci_identity_availability_domains.ashburn.availability_domains[2],"name")
subnet_id = oci_core_subnet.oke-cp-subnet-worker.id
}
size = var.oke_cp_cluster_number_of_nodes
nsg_ids = ["ocid1.networksecuritygroup.oc1.iad.aaaaaaaa5vlrsicpioij5muehc7pqcxfyrbkh3npfeyym6byl64ljrpj2q6q"]
}
#quantity_per_subnet = "var.node_pool_quantity_per_subnet
ssh_public_key = var.ssh_public_key_oke_cp
}
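For illustration, the hard-coded NSG OCID above could instead reference an NSG managed in the same configuration; a minimal sketch, assuming a hypothetical oci_core_network_security_group resource named oke_cp_workers and a VCN reference oci_core_vcn.oke_cp_vcn that would depend on the rest of the configuration:

resource "oci_core_network_security_group" "oke_cp_workers" {
  #Hypothetical: compartment and VCN references depend on the rest of the configuration
  compartment_id = data.terraform_remote_state.common.outputs.onetooldev_compartment
  vcn_id         = oci_core_vcn.oke_cp_vcn.id
  display_name   = "oke-cp-workers-nsg"
}

With that in place, the assignment in node_config_details becomes nsg_ids = [oci_core_network_security_group.oke_cp_workers.id], which also lets Terraform track the dependency between the two resources.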
Debug Output
https://gist.github.com/anggol/088c07a56762b0fda787ab5ad8bdcce0
Panic Output
N/A
Expected Behavior
As indicated in the plan, the existing NSG with OCID ocid1.networksecuritygroup.oc1.iad.aaaaaaaa5vlrsicpioij5muehc7pqcxfyrbkh3npfeyym6byl64ljrpj2q6q should have been associated with the node pool.
Actual Behavior
The apply completed without errors, but the NSG was not associated with the node pool.
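One way to confirm this from Terraform itself is to read the node pool back through the singular node pool data source and output the effective NSG list; a minimal sketch, assuming the data source attributes mirror the resource schema:

data "oci_containerengine_node_pool" "node_pool1_cp_check" {
  node_pool_id = oci_containerengine_node_pool.node_pool1_cp.id
}

output "effective_nsg_ids" {
  #Assumed attribute path, mirroring the resource's node_config_details block
  value = data.oci_containerengine_node_pool.node_pool1_cp_check.node_config_details[0].nsg_ids
}

If the bug is present, this output should come back empty even though the plan showed the NSG being set.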
Steps to Reproduce
- terraform apply
Important Factoids
N/A
References
N/A
Any update? The NSG can be updated in the UI, so I think it should be updatable in Terraform as well.
Thank you for reporting the issue. We have raised an internal ticket to track this. Our service engineers will get back to you.
Thank you for the update!
Thanks for reporting this issue. This is a known bug that has already been fixed in version 4.101.0. Please update your provider to 4.101.0 or later using the configuration below and retry. Thanks!
provider "oci" {
region = var.region
tenancy_ocid = var.tenancy_ocid
user_ocid = var.user_ocid
fingerprint = var.fingerprint
private_key_path = var.private_key_path
version = "4.101.0"
}
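One note on the pin above: it works on Terraform 0.12 (the reporter's version), but on Terraform 0.13 or later the version argument inside a provider block is deprecated, and the constraint belongs in a required_providers block instead; a minimal sketch:

terraform {
  required_providers {
    oci = {
      source  = "oracle/oci"
      version = ">= 4.101.0"
    }
  }
}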