terraform-provider-newrelic
Error: expected entity tag team to have been updated but was not found
Please include the following with your bug report
:warning: Important: Failure to include the following, such as omitting the Terraform configuration in question, may delay resolving the issue.
- [x] Your New Relic provider configuration (sensitive details redacted)
- [x] A list of affected resources and/or data sources
- [x] The configuration of the resources and/or data sources related to the bug report (i.e. from the list mentioned above)
- [x] Description of the current behavior (the bug)
- [x] Description of the expected behavior
- [x] Any related log output
Terraform Version
Terraform v1.0.11
on linux_amd64
+ provider registry.terraform.io/newrelic/newrelic v2.34.0
Affected Resource(s)
- newrelic_entity_tags.nr_dashboard_tags (issue seen in v2.30.0, v2.30.2, and v2.34.0)
Terraform Configuration
https://gist.github.com/thedebugger/2e26fbb8ca484d464a0701f98f8a0d27
Actual Behavior
Terraform errored with "expected entity tag team to have been updated but was not found"
Expected Behavior
Terraform should update nr_dashboard_tags accordingly
Steps to Reproduce
Please list the steps required to reproduce the issue, for example:
- Create nr_one_dashboard and nr_dashboard_tags
- Delete the dashboard resource manually from the state file and from New Relic (or manually change nr_dashboard_tags to point at something non-existent)
- Run terraform apply
Debug Output
I can provide debug output if required, since we have hundreds of resources. For now, here are the INFO logs: https://gist.github.com/thedebugger/2e26fbb8ca484d464a0701f98f8a0d27
Important Factoids
None
References
- Most likely it is failing here https://github.com/newrelic/terraform-provider-newrelic/blob/aa6ceb1e2ef89f0c369f3c4f059646615afb2faa/newrelic/resource_newrelic_entity_tags.go#L185
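To illustrate why the message reads "was not found": at the linked point, the provider (written in Go) verifies an update by reading the entity's tags back and retrying until the expected key appears or the retry window expires; the error fires when the read-back never catches up with an eventually-consistent API. A minimal Python sketch of such a read-after-write check, with hypothetical names that are not the provider's actual API:

```python
import time


def verify_tags_updated(read_tags, expected_keys, timeout=10.0, interval=0.5):
    """Poll read_tags() until every expected key appears, or raise on timeout.

    Illustrative only: the real check lives in Go in
    resource_newrelic_entity_tags.go; this just models the retry shape.
    """
    deadline = time.monotonic() + timeout
    while True:
        current = read_tags()  # simulated "read the entity's tags back"
        missing = [k for k in expected_keys if k not in current]
        if not missing:
            return True
        if time.monotonic() >= deadline:
            # Mirrors the error text reported in this issue.
            raise RuntimeError(
                f"expected entity tag {missing[0]} to have been updated "
                "but was not found"
            )
        time.sleep(interval)
```

With an eventually-consistent backend, the call succeeds once the store settles; with one that never settles inside the window, it raises the familiar error.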
I am experiencing the same error in newrelic provider version 2.46.2
It's still happening in version 2.48.0
Still not working in 2.49.1. Worse, I can no longer use my module, because each apply reintroduces the same changes due to the tainted tag resources.
Thanks all. We will take a look.
Just to add some context to this: we're seeing this issue using v4.19.0 of the Pulumi New Relic provider, which uses v2.49.0 of terraform-provider under the hood. This error has been happening intermittently over the past couple of months.
Just an update on this. This happens only when you use count or for_each on the resource (creating multiple newrelic_entity_tags resources). Apparently, creating multiple tags within a single newrelic_entity_tags resource does not produce the described behavior. One way to circumvent this is to create a separate newrelic_entity_tags resource for each entity and to use Terraform's dynamic keyword on the tag block.
EXAMPLE:
locals {
  tags = [{
    key    = "team",
    values = [var.team]
    }, {
    key    = "service",
    values = [var.service]
  }]
}

resource "newrelic_entity_tags" "this" {
  guid = newrelic_nrql_alert_condition.this.entity_guid

  dynamic "tag" {
    for_each = local.tags

    content {
      key    = tag.value["key"]
      values = tag.value["values"]
    }
  }
}
For what it's worth, to add to what @bkalcho posted: I'm not sure there's a way to implement the dynamic tagging using the Pulumi plugin, so I can't validate that possible solution on my end. That being said, we're running into this issue creating 3 tags under a single entityTag resource, so we're seeing the error occur in a scenario that @bkalcho said he isn't.
I can confirm that we are still seeing this issue in Terraform provider version 3.0.2:
s: Still creating... [20s elapsed]
╷
│ Error: expected entity tag type to have been updated but was not found
│
│   with module.cluster_alerts.module.synthetic_slash_alert.newrelic_entity_tags.this_tags,
│   on ../modules/newrelic_alert/main.tf line 51, in resource "newrelic_entity_tags" "this_tags":
│   51: resource "newrelic_entity_tags" "this_tags" {
resource "newrelic_entity_tags" "this_tags" {
  guid = newrelic_nrql_alert_condition.this.entity_guid

  tag {
    key    = "Environment"
    values = [var.environment]
  }

  dynamic "tag" {
    for_each = var.tags

    content {
      key = tag.key
      values = try(
        [tostring(tag.value)],
        tolist(tag.value),
      )
    }
  }
}

tags = {
  type = "synthetic"
}
Thanks, all, for the extra info. I've raised the retry mechanism's timeout to 60 seconds and raised a case with our API team to take a closer look. This seems to be getting worse over time.
This was raised with the alerting team internally, as there is nothing we can do on the Terraform side to fix this issue. As mentioned, the retry mechanism's timeout has been raised to 60 seconds, which will hopefully avoid the error we've been seeing.
As there is no action we can take on the Terraform side we will close the ticket. Feel free to continue the discussion if needed.
I regret to inform you that this didn't work.
Is there an issue tracking this internally we can follow instead?
module.cluster_alerts.module.app_endpoint_alert.newrelic_entity_tags.this_tags: Still modifying... [id=MzUwMTkzMHxBSU9QU3xDT05ESVRJT058Mjc1NTg4NDI, 10s elapsed]
module.cluster_alerts.module.app_endpoint_alert.newrelic_entity_tags.this_tags: Still modifying... [id=MzUwMTkzMHxBSU9QU3xDT05ESVRJT058Mjc1NTg4NDI, 20s elapsed]
module.cluster_alerts.module.app_endpoint_alert.newrelic_entity_tags.this_tags: Still modifying... [id=MzUwMTkzMHxBSU9QU3xDT05ESVRJT058Mjc1NTg4NDI, 30s elapsed]
module.cluster_alerts.module.app_endpoint_alert.newrelic_entity_tags.this_tags: Still modifying... [id=MzUwMTkzMHxBSU9QU3xDT05ESVRJT058Mjc1NTg4NDI, 40s elapsed]
module.cluster_alerts.module.app_endpoint_alert.newrelic_entity_tags.this_tags: Still modifying... [id=MzUwMTkzMHxBSU9QU3xDT05ESVRJT058Mjc1NTg4NDI, 50s elapsed]
module.cluster_alerts.module.app_endpoint_alert.newrelic_entity_tags.this_tags: Still modifying... [id=MzUwMTkzMHxBSU9QU3xDT05ESVRJT058Mjc1NTg4NDI, 1m0s elapsed]
╷
│ Error: expected entity tag type to have been updated but was not found
│
│ with module.cluster_alerts.module.app_endpoint_alert.newrelic_entity_tags.this_tags,
│ on ../modules/newrelic_alert/main.tf line 51, in resource "newrelic_entity_tags" "this_tags":
│ 51: resource "newrelic_entity_tags" "this_tags" {
│
╵
Is this on the latest version @meyerkev? Would it be possible to get a debug log?
Yes, this was on the latest version. I set up autopatching for a reason.
I'm quitting this job Friday, so how would I generate a debug log?
You can find the instructions here: https://www.terraform.io/internals/debugging
Since this can contain sensitive data, you can encrypt it, or send it directly to [email protected]
I'm having the same issue here. newrelic v2.48.2
@otoru, I am no longer having this issue and I don't know why.
If you can replicate, can you follow up with @kidk?
This is not a joke. I turned on debug logs to share the information here on the thread and everything worked perfectly :clown:
This is not the first time; the debug log slows everything down, which gives the API more time to settle. 😄
If you can get an INFO log, that will also help.
I'm still chasing this internally, but it seems this is an issue that might take some time.
An improvement in the API has been deployed recently. Anyone still experiencing this?
I am still getting it. Using newrelic provider version 3.6.1.
I am quite new to Terraform, so it's possible I'm doing something wrong. However, I note that my .tf files are designed to apply tags to 37 APMs, and I do use the count construct. It would be really unwieldy to have to declare a separate resource for each one.
Earlier, when I only had three APMs to be touched, I didn't get the error.
Below is a sanitized version that shows how I'm doing it for one environment. In the end I have three environments and three "appgroups" (X represents an appgroup), resulting in a total of (currently) 37 APMs.
terraform {
  required_providers {
    newrelic = {
      source = "newrelic/newrelic"
    }
  }
}

provider "newrelic" {}

variable "apmnames_X" {
  description = "create APM resources with these names"
  type        = list(string)
  default     = ["appone", "apptwo", "appthree"]
}

data "newrelic_entity" "apm_X" {
  count  = length(var.apmnames_X)
  name   = "foo.bar.X.${var.apmnames_X[count.index]}"
  type   = "APPLICATION"
  domain = "APM"
}

resource "newrelic_entity_tags" "apmtags_X" {
  count = length(var.apmnames_X)
  guid  = data.newrelic_entity.apm_X[count.index].guid

  tag {
    key    = "Appname"
    values = ["X.PROD"]
  }

  tag {
    key    = "AppGroup"
    values = ["X"]
  }

  tag {
    key    = "WorkloadGroup"
    values = ["X.PROD"]
  }
}
Brief excerpt of the error section of the output:
newrelic_entity_tags.apmtags_Y[1]: Still creating... [50s elapsed]
newrelic_entity_tags.apmtags_Y[2]: Still creating... [50s elapsed]
newrelic_entity_tags.apmtags_Y[4]: Still creating... [50s elapsed]
╷
│ Error: expected entity tag AppGroup to have been created but was not found
│
│ with newrelic_entity_tags.apmtags_X[1],
│ on apm-X.tf line 22, in resource "newrelic_entity_tags" "apmtags_X":
│ 22: resource "newrelic_entity_tags" "apmtags_X" {
│
╵
╷
│ Error: expected entity tag AppGroup to have been created but was not found
│
│ with newrelic_entity_tags.apmtags_X[5],
│ on apm-X.tf line 22, in resource "newrelic_entity_tags" "apmtags_X":
│ 22: resource "newrelic_entity_tags" "apmtags_X" {
│
╵
╷
│ Error: expected entity tag AppGroup to have been created but was not found
│
│ with newrelic_entity_tags.apmtags_X[7],
│ on apm-X.tf line 22, in resource "newrelic_entity_tags" "apmtags_X":
│ 22: resource "newrelic_entity_tags" "apmtags_X" {
│
Also seeing this error for the first time this afternoon. We are applying 4 tags to 3 conditions. Which items fail seems to be random each time we re-apply.
Provider version 3.6.1
Relevant TF snippets:
database_alert_condition_tags_infra = {
  "conditionOwner"     = "InfraTeam"
  "conditionCategory"  = "Infrastructure"
  "conditionResponder" = "SupportDesk"
  "notifyWindow"       = "Continuous"
}

resource "newrelic_entity_tags" "its-infra-database-cpu1" {
  for_each = local.database_alert_condition_tags_infra

  guid = newrelic_nrql_alert_condition.database_cpu1.entity_guid

  tag {
    key    = each.key
    values = [each.value]
  }
}

Error: expected entity tag conditionOwner to have been updated but was not found

  on its-infra-database.tf line 241, in resource "newrelic_entity_tags" "its-infra-database-cpu1":
  241: resource "newrelic_entity_tags" "its-infra-database-cpu1" {
For anyone still experiencing this issue: do you still see it if you run the terraform command with the -parallelism option set to a lower number than the default 10?
e.g. terraform apply -parallelism=2
I had tried it with -parallelism=2 back when I encountered this issue early last month, and doing that did not remove the error (i.e. yes, I still saw the issue then). I haven't come back to it since. We did work around it by using Python to generate a Terraform file with no looping, just a separate structure for each item.
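The unrolling workaround described above can be sketched as a small generator script. This is a hypothetical illustration, not the commenter's actual script; the resource and data-source names (apmtags_*, data.newrelic_entity.apm_*) are made up for the example:

```python
# Sketch of the workaround: instead of count/for_each, generate one
# standalone newrelic_entity_tags resource per APM, so no looping
# construct appears in the resulting .tf file.

APPS = ["appone", "apptwo", "appthree"]


def render_entity_tags(app: str) -> str:
    """Render a standalone newrelic_entity_tags block for a single APM."""
    return (
        f'resource "newrelic_entity_tags" "apmtags_{app}" {{\n'
        f'  guid = data.newrelic_entity.apm_{app}.guid\n'
        f'\n'
        f'  tag {{\n'
        f'    key    = "Appname"\n'
        f'    values = ["{app}.PROD"]\n'
        f'  }}\n'
        f'}}\n'
    )


# One flat file body with a separate resource per app.
generated = "\n".join(render_entity_tags(app) for app in APPS)
```

Writing `generated` to something like `apm_tags.generated.tf` yields one independent resource per entity, which sidesteps the looping constructs associated with the error.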
After further investigation, it looks like the problem has to do with how the HCL is being constructed in the reported scenarios. @trustthewhiterabbit's HCL is actually looping over the same resource repeatedly, due to the scope of the resource's for_each loop and its not referencing an index for the guid attribute, hence the repeated Still creating... [50s elapsed] messages.
@bkalcho was on the right track with using a dynamic block, but applying it to multiple entities is where it got tricky for others, I think.
Since this is a relatively common scenario, we've updated the docs for newrelic_entity_tags with a more advanced example of how to apply a set of tags to multiple entities.
Here is the example we've put in the docs. I've tested it with many entities and many tags with success.
locals {
  apps = toset([
    "Example App Name 1",
    "Example App Name 2",
  ])

  custom_tags = {
    "tag-key-1" = "tag-value-1"
    "tag-key-2" = "tag-value-2"
    "tag-key-3" = "tag-value-3"
  }
}

data "newrelic_entity" "foo" {
  for_each = local.apps

  name   = each.key # Note: each.key and each.value are the same for a set
  type   = "APPLICATION"
  domain = "APM"
}

resource "newrelic_entity_tags" "foo" {
  for_each = local.apps

  guid = data.newrelic_entity.foo[each.key].guid

  dynamic "tag" {
    for_each = local.custom_tags

    content {
      key    = tag.key
      values = [tag.value]
    }
  }
}
Hope this helps and please let us know if you continue to run into the reported errors.
Note this can also be accomplished using count.
e.g.
locals {
  apps = [
    "Example App Name 1",
    "Example App Name 2",
  ]

  custom_tags = {
    "tag-key-1" = "tag-value-1"
    "tag-key-2" = "tag-value-2"
    "tag-key-3" = "tag-value-3"
  }
}

data "newrelic_entity" "foo" {
  count = length(local.apps)

  name   = local.apps[count.index]
  type   = "APPLICATION"
  domain = "APM"
}

resource "newrelic_entity_tags" "foo" {
  count = length(local.apps)

  guid = data.newrelic_entity.foo[count.index].guid

  dynamic "tag" {
    for_each = local.custom_tags

    content {
      key    = tag.key
      values = [tag.value]
    }
  }
}