CKV2_AWS_1 is failing even though the NACLs are already attached
**Describe the bug**

```
Check: CKV2_AWS_1: "Ensure that all NACL are attached to subnets"
	FAILED for resource: aws_network_acl.elasticache
	File: /tfplan.json:2623-2683
	Guide: https://docs.bridgecrew.io/docs/ensure-that-all-nacl-are-attached-to-subnets
```

Excerpt from the flagged range of tfplan.json (plan file line numbers shown):

```
2624 | "values": {
2625 |   "arn": "arn:aws:ec2:us-east-1:907320361432:network-acl/acl-0ed5xxxx42a675e",
2626 |   "egress": [
2627 |     {
2628 |       "action": "allow",
2629 |       "cidr_block": "10.0.0.0/8",
2630 |       "from_port": 1,
2631 |       "icmp_code": 0,
2632 |       "icmp_type": 0,
2633 |       "ipv6_cidr_block": "",
2634 |       "protocol": "6",
2635 |       "rule_no": 100,
2636 |       "to_port": 65535
2637 |     }
2638 |   ],
2639 |   "id": "acl-xxxxxxxxxxx",
2640 |   "ingress": [
2641 |     {
2642 |       "action": "allow",
2643 |       "cidr_block": "10.0.0.0/8",
2644 |       "from_port": 1024,
2645 |       "icmp_code": 0,
2646 |       "icmp_type": 0,
2647 |       "ipv6_cidr_block": "",
2648 |       "protocol": "6",
2649 |       "rule_no": 100,
2650 |       "to_port": 65535
2651 |     }
2652 |   ],
2653 |   "owner_id": "9073xxx432",
2654 |   "subnet_ids": [
2655 |     "subnet-051dedb53xxxx1545",
2656 |     "subnet-06debexxxxa5e",
2657 |     "subnet-0de458b4b3xxx8bd"
2658 |   ],
2659 |
2683 |   "vpc_id": "vpc-07f5a95xxxxac099"
```
**To Reproduce**

Create a VPC with the Terraform VPC module (sample code below):
`module "vpc" {
refference link https://github.com/terraform-aws-modules/terraform-aws-vpc
source = "terraform-aws-modules/vpc/aws"
version = "2.70.0"
cidr = var.vpc_cidr
name = upper(local.name)
azs = var.vpc_azs
public_subnets = var.public_subnets
private_subnets = var.private_subnets
database_subnets = var.database_subnets
elasticache_subnets = var.elasticache_subnets
enable_nat_gateway = true
map_public_ip_on_launch = false
one_nat_gateway_per_az = var.single_nat ? false: true
single_nat_gateway = var.single_nat
enable_dns_hostnames = true
enable_dns_support = true
tags = local.tags
manage_default_security_group = true
default_security_group_egress = []
default_security_group_ingress = []
enable_flow_log = true
flow_log_destination_arn = data.terraform_remote_state.security.outputs.flowlogs_bucket.arn
flow_log_destination_type = "s3"
flow_log_max_aggregation_interval = 60
flow_log_traffic_type = "ALL"
elasticache_subnet_suffix = "eventstore"
vpc_tags = merge(local.tags,local.eks_tags)
database_dedicated_network_acl = true
public_dedicated_network_acl = true
private_dedicated_network_acl = true
elasticache_dedicated_network_acl = true
enable_s3_endpoint = true
enable_kms_endpoint = true kms_endpoint_private_dns_enabled = true kms_endpoint_security_group_ids = [module.private_sg_endpoints.this_security_group_id] kms_endpoint_subnet_ids = module.vpc.private_subnets
enable_ssm_endpoint = true ssm_endpoint_private_dns_enabled = true ssm_endpoint_security_group_ids = [module.private_sg_endpoints.this_security_group_id] ssm_endpoint_subnet_ids = module.vpc.private_subnets
enable_ssmmessages_endpoint = true ssmmessages_endpoint_private_dns_enabled = true ssmmessages_endpoint_security_group_ids = [module.private_sg_endpoints.this_security_group_id] ssmmessages_endpoint_subnet_ids = module.vpc.private_subnets
enable_ec2_endpoint = true ec2_endpoint_private_dns_enabled = true ec2_endpoint_security_group_ids = [module.private_sg_endpoints.this_security_group_id] ec2_endpoint_subnet_ids = module.vpc.private_subnets
enable_ec2messages_endpoint = true ec2messages_endpoint_private_dns_enabled = true ec2messages_endpoint_security_group_ids = [module.private_sg_endpoints.this_security_group_id] ec2messages_endpoint_subnet_ids = module.vpc.private_subnets
enable_lambda_endpoint = true lambda_endpoint_private_dns_enabled = true lambda_endpoint_security_group_ids = [module.private_sg_endpoints.this_security_group_id] lambda_endpoint_subnet_ids = data.aws_subnet_ids.private_subnets_lambda.ids
enable_logs_endpoint = true logs_endpoint_private_dns_enabled = true logs_endpoint_security_group_ids = [module.private_sg_endpoints.this_security_group_id] logs_endpoint_subnet_ids = module.vpc.private_subnets
enable_monitoring_endpoint = true monitoring_endpoint_private_dns_enabled = true monitoring_endpoint_security_group_ids = [module.private_sg_endpoints.this_security_group_id] monitoring_endpoint_subnet_ids = module.vpc.private_subnets public_inbound_acl_rules = [{ cidr_block = var.public_incoming_cidr, from_port = 1024, protocol = "tcp", rule_action = "allow", rule_number = 100 to_port = 65535 }, { cidr_block = var.vpc_cidr, from_port = 1, protocol = "-1", rule_action = "allow", rule_number = 200 to_port = 65535 }] public_outbound_acl_rules = [{ cidr_block = var.vpc_cidr, from_port = 1, protocol = "-1", rule_action = "allow", rule_number = 100, to_port = 65535 }, { cidr_block = "0.0.0.0/0", from_port = 80, protocol = "tcp", rule_action = "allow", rule_number = 200, to_port = 80 }, { cidr_block = "0.0.0.0/0", from_port = 443, protocol = "tcp", rule_action = "allow", rule_number = 300, to_port = 443 }] private_inbound_acl_rules = [{ cidr_block = var.private_incoming_cidr, from_port = 1, protocol = "-1", rule_action = "allow", rule_number = 100 to_port = 65535 }, { cidr_block = var.public_incoming_cidr, from_port = 1024, protocol = "tcp", rule_action = "allow", rule_number = 200 to_port = 65535 }] private_outbound_acl_rules = [{ cidr_block = var.private_incoming_cidr, from_port = 1, protocol = "-1", rule_action = "allow", rule_number = 100, to_port = 65535 }, { cidr_block = var.public_incoming_cidr, from_port = 80, protocol = "tcp", rule_action = "allow", rule_number = 200, to_port = 80 }, { cidr_block = var.public_incoming_cidr, from_port = 443, protocol = "tcp", rule_action = "allow", rule_number = 300, to_port = 443 }] database_inbound_acl_rules = [ for port_num in var.database_ports: { cidr_block = var.private_incoming_cidr, from_port = port_num[0], protocol = "tcp", rule_action = "allow", rule_number = 100*(index(var.database_ports,port_num)+1), to_port = port_num[1] }] database_outbound_acl_rules = [{ cidr_block = var.private_incoming_cidr, from_port = 1, protocol = "tcp", rule_action = "allow", rule_number = 100+100*length(var.database_ports), to_port = 65535 } ] elasticache_inbound_acl_rules = [ { cidr_block = var.private_incoming_cidr, from_port = 1024, protocol = "tcp", rule_action = "allow", rule_number = 100, to_port = 65535 }] elasticache_outbound_acl_rules = [{ cidr_block = var.private_incoming_cidr, from_port = 1, protocol = "tcp", rule_action = "allow", rule_number = 100 to_port = 65535 }] }`
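The flagged resource, `aws_network_acl.elasticache`, is the dedicated NACL the module creates when `elasticache_dedicated_network_acl = true`, so a much smaller module call should be enough to reproduce the finding. This is an untested sketch with placeholder values, not the configuration above:

```hcl
# Untested minimal reproduction sketch (placeholder CIDRs/AZs); keeps only the inputs
# that should be needed for the module to create aws_network_acl.elasticache.
module "vpc_repro" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.70.0"

  name = "ckv2-aws-1-repro"
  cidr = "10.0.0.0/16"
  azs  = ["us-east-1a", "us-east-1b", "us-east-1c"]

  elasticache_subnets               = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  elasticache_dedicated_network_acl = true
}
```

Running `terraform plan -out tfplan`, `terraform show -json tfplan > tfplan.json`, and then `checkov -f tfplan.json` should surface the same CKV2_AWS_1 failure.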
**Expected behavior**

The NACLs are already attached to subnets, so the Checkov check CKV2_AWS_1 should not fail.
Had a quick look at this, and according to the AWS provider documentation (https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/network_acl#argument-reference) `subnet_ids` is an optional argument. Happy to put together a PR to remove the check from https://github.com/bridgecrewio/checkov/blob/master/checkov/terraform/checks/graph_checks/aws/SubnetHasACL.yaml, but in reality I think the check may need renaming, as it only needs to verify that a VPC is attached.
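To illustrate the schema point: per the linked provider documentation only `vpc_id` is required on `aws_network_acl`, so both of the hypothetical resources below are valid, and only the second explicitly lists subnets:

```hcl
# Hypothetical resources, only to illustrate that subnet_ids is optional.
resource "aws_network_acl" "vpc_only" {
  vpc_id = aws_vpc.example.id
}

resource "aws_network_acl" "with_subnets" {
  vpc_id     = aws_vpc.example.id
  subnet_ids = [aws_subnet.example.id]
}
```

Hence the suggestion to drop or rename the check, since the only attachment the resource arguments themselves guarantee is the VPC one.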
Thanks for contributing to Checkov! We've automatically marked this issue as stale to keep our issues list tidy, because it has not had any activity for 6 months. It will be closed in 14 days if no further activity occurs. Commenting on this issue will remove the stale tag. If you want to talk through the issue or help us understand the priority and context, feel free to add a comment or join us in the Checkov slack channel at https://slack.bridgecrew.io Thanks!
Closing issue due to inactivity. If you feel this is in error, please re-open, or reach out to the community via slack: https://slack.bridgecrew.io Thanks!
This is still a valid issue.
Please reopen, the issue still persists.