terraform-provider-aws
[Bug]: Root resource was present, but now absent
Terraform Core Version
0.12.0
AWS Provider Version
5.27.0
Affected Resource(s)
aws_spot_instance_request, aws_iam_policy_attachment.policy_attachment2
Expected Behavior
Attach the Managed IAM Policy to the spot instance
Actual Behavior
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to aws_iam_policy_attachment.policy_attachment2,
│ provider "provider["registry.terraform.io/hashicorp/aws"]" produced an
│ unexpected new value: Root resource was present, but now absent.
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
╵
╷
│ Error: Provider produced inconsistent result after apply
│
│ When applying changes to aws_iam_policy_attachment.policy_attachment,
│ provider "provider["registry.terraform.io/hashicorp/aws"]" produced an
│ unexpected new value: Root resource was present, but now absent.
│
│ This is a bug in the provider, which should be reported in the provider's
│ own issue tracker.
Relevant Error/Panic Output Snippet
No response
Terraform Configuration Files
resource "aws_spot_instance_request" "node" {
  for_each = { for vms in var.nodes : "${vms.name}" => vms if vms.spot_instance == true }

  ami                    = data.aws_ami.ubuntu.id
  instance_type          = each.value["instance_type"]
  vpc_security_group_ids = [aws_security_group.internal.id]
  subnet_id              = aws_subnet.private[each.value.availability_zone].id
  user_data              = data.cloudinit_config.cloudinit[0].rendered

  root_block_device {
    volume_size           = each.value["volume_size"]
    delete_on_termination = "true"
  }

  spot_price           = each.value["spot_price"]
  wait_for_fulfillment = true
  spot_type            = "one-time"

  tags = {
    "Name"                                          = "${var.k8s_cluster_name}-${each.value["name"]}",
    "kubernetes.io/cluster/${var.k8s_cluster_name}" = "member",
    "Role"                                          = "node"
  }

  iam_instance_profile = aws_iam_instance_profile.kube-node.id
}
Steps to Reproduce
Create an AWS aws_spot_instance_request with the provided code
Debug Output
No response
Panic Output
No response
Important Factoids
No response
References
No response
Would you like to implement a fix?
No, I wouldn't like to implement a fix.
Community Note
Voting for Prioritization
- Please vote on this issue by adding a 👍 reaction to the original post to help the community and maintainers prioritize this request.
- Please see our prioritization guide for information on how we prioritize.
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request.
Volunteering to Work on This Issue
- If you are interested in working on this issue, please leave a comment.
- If this would be your first contribution, please review the contribution guide.
Hello!
Is there any movement on this?
I'm having the same issue.
Error: Provider produced inconsistent result after apply
When applying changes to
module.configure_lambda["devices"].aws_iam_policy_attachment.s3_policy_attachment,
provider "provider[\"registry.terraform.io/hashicorp/aws\"]" produced an
unexpected new value: Root object was present, but now absent.
This is a bug in the provider, which should be reported in the provider's own
issue tracker.
The relevant resources look like this:
resource "aws_iam_role" "iam_for_lambda" {
name = "${var.lambda_name}-${var.env}"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
resource "aws_iam_policy_attachment" "s3_policy_attachment" {
name = "s3-policy-attachment"
policy_arn = var.s3_policy_arn
roles = [aws_iam_role.iam_for_lambda.name]
}
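A commonly suggested alternative shape for the attachment above, assuming the same names as the config shown, is the non-exclusive aws_iam_role_policy_attachment resource, which manages a single role/policy pair and therefore tolerates the same policy ARN being attached to other roles by other resources. A sketch:

```hcl
# Sketch: one role/policy pair per resource instead of an exclusive
# aws_iam_policy_attachment; names are taken from the config above.
resource "aws_iam_role_policy_attachment" "s3_policy_attachment" {
  role       = aws_iam_role.iam_for_lambda.name
  policy_arn = var.s3_policy_arn
}
```

Whether this sidesteps the inconsistent-result error here is untested, but it avoids having a single resource claim exclusive ownership of all of a policy's attachments.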
Hello!
It's been nearly a year since this was first reported. Is there any news on this issue?
Is there a workaround that doesn't involve running the apply twice?
Hello!
@ewbankkit Maybe it's time to fix this one? Almost 2 years since this issue was created.
I also got:
Error: Provider produced inconsistent result after apply
When applying changes to aws_iam_policy_attachment.eks_node_role_route53_policy, provider "provider[\"registry.terraform.io/hashicorp/aws\"]" produced an unexpected new value: Root object was present, but now absent.
This is a bug in the provider, which should be reported in the provider's own issue tracker.
This is also blocking us, and this afternoon I did a bit of a deep dive in hopes that it helps someone else deliver a patch.
I have found the matching error statement here: https://github.com/hashicorp/terraform-provider-aws/blob/0d00a2f407cba2c80b0d20dc8ee2fa13345b1c7a/internal/service/iam/policy_attachment.go#L109
Working backwards, I can see my stack is hitting the resourcePolicyAttachmentUpdate lifecycle hook: https://github.com/hashicorp/terraform-provider-aws/blob/main/internal/service/iam/policy_attachment.go#L125-L144
This in turn is running the updateRoles function: https://github.com/hashicorp/terraform-provider-aws/blob/0d00a2f407cba2c80b0d20dc8ee2fa13345b1c7a/internal/service/iam/policy_attachment.go#L215-L229
In my traces, I can see a role detachment:
MASKED [DEBUG] provider.terraform-provider-aws_v5.89.0_x5: HTTP Response Received: http.response_content_length=212 rpc.method=DetachRolePolicy tf_mux_provider=*schema.GRPCProviderServer tf_provider_addr=registry.terraform.io/hashicorp/aws tf_resource_type=aws_iam_policy_attachment tf_rpc=ApplyResourceChange @caller=github.com/hashicorp/aws-sdk-go-base/[email protected]/logging/tf_logger.go:45 http.status_code=200 tf_aws.sdk=aws-sdk-go-v2 tf_aws.signing_region= tf_req_id=MASKED aws.region=MASKED http.response.body="<DetachRolePolicyResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/">
<ResponseMetadata>
<RequestId>MASKED</RequestId>
</ResponseMetadata>
</DetachRolePolicyResponse>
This leads me to suspect that the logic within the updateRoles function is incorrectly detaching a role. That function then returns an error, which later matches the condition of there being no roles left attached to the policy, hence the observed error: https://github.com/hashicorp/terraform-provider-aws/blob/0d00a2f407cba2c80b0d20dc8ee2fa13345b1c7a/internal/service/iam/policy_attachment.go#L108
Following some research into Terraform providers, I wonder if this reddit post is relevant: https://www.reddit.com/r/Terraform/comments/m5nv14/comment/gr29zct/?utm_source=share&utm_medium=web2x&context=3
I am leaning towards there being missing logic for catching unchanged attachments on resource updates. This doesn't explain why other policies within our stack are also re-provisioned, however, as the error appears to occur with only a select few role attachments. I don't have enough time at the moment to investigate further, but I hope this is a helpful push in the right direction!
Another workaround that we just discovered and verified relates to reusing a policy in two or more attachments. We duplicated the policy so there is one copy per attachment, and the underlying issue disappeared.
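For illustration, the reuse pattern described above looks roughly like this (hypothetical resource and variable names): two aws_iam_policy_attachment resources sharing one policy ARN, which would be consistent with one resource's apply detaching the roles managed by the other, as seen in the DetachRolePolicy traces earlier in the thread.

```hcl
# Hypothetical reproduction sketch: both attachments reference the
# same policy ARN, so each believes it exclusively manages that
# policy's attachments and may detach the other's role on update.
resource "aws_iam_policy_attachment" "attach_a" {
  name       = "attach-a"
  policy_arn = var.shared_policy_arn # same ARN in both resources
  roles      = [aws_iam_role.role_a.name]
}

resource "aws_iam_policy_attachment" "attach_b" {
  name       = "attach-b"
  policy_arn = var.shared_policy_arn
  roles      = [aws_iam_role.role_b.name]
}
```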
A third workaround we found was to use one attachment with all roles that require attaching, rather than duplicating the policy and attaching it independently.
> A third workaround we found was to use one attachment with all roles that require attaching, rather than duplicating the policy and attaching it independently.
Following further testing we observed that the stack would deploy without error but the underlying roles would sometimes lose their policy attachments.
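The single-attachment shape described above would look roughly like this (hypothetical names); note that, per the testing reported here, it deployed without error but sometimes silently dropped attachments, so it was abandoned.

```hcl
# Sketch of the consolidated workaround: one exclusive attachment
# listing every role, so only one resource manages the policy's
# attachments. Reported above as unreliable in practice.
resource "aws_iam_policy_attachment" "shared" {
  name       = "shared-attachment"
  policy_arn = var.shared_policy_arn
  roles = [
    aws_iam_role.role_a.name,
    aws_iam_role.role_b.name,
  ]
}
```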
> Another workaround that we just discovered and verified relates to reusing a policy in two or more attachments. We duplicated the policy so there is one copy per attachment, and the underlying issue disappeared.
We reverted to this and everything works as expected.
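The duplicate-policy workaround that was reverted to would look roughly like this (hypothetical names and a hypothetical var.s3_policy_json input): each attachment gets its own dedicated copy of the policy, so no two aws_iam_policy_attachment resources ever share a policy ARN.

```hcl
# Sketch: a dedicated copy of the policy for this one attachment,
# so the exclusive aws_iam_policy_attachment never conflicts with
# another attachment of the same ARN.
resource "aws_iam_policy" "s3_for_lambda" {
  name   = "s3-for-lambda"
  policy = var.s3_policy_json # assumed JSON policy document input
}

resource "aws_iam_policy_attachment" "s3_lambda_only" {
  name       = "s3-lambda-attachment"
  policy_arn = aws_iam_policy.s3_for_lambda.arn
  roles      = [aws_iam_role.iam_for_lambda.name]
}
```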