terraform-provider-newrelic
Intermittent error when applying a channel using Terraform
Hi there,
Thank you for opening an issue. In order to better assist you with your issue, we kindly ask you to follow the template format and instructions. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests only. General usage questions submitted as issues will be closed and redirected to New Relic's Explorers Hub: https://discuss.newrelic.com/c/build-on-new-relic/developer-toolkit.
Please include the following with your bug report
:warning: Important: Failure to include the following, such as omitting the Terraform configuration in question, may delay resolving the issue.
- [x] Your New Relic provider configuration (sensitive details redacted): see below
- [x] A list of affected resources and/or data sources: channel and policy-channel
- [x] The configuration of the resources and/or data sources related to the bug report (i.e. from the list mentioned above): see below
- [x] Description of the current behavior (the bug)
- [x] Description of the expected behavior
- [ ] Any related log output
The Issue
The issue is that we get an intermittent error like the one below when applying: sometimes it succeeds, but sometimes it fails, and we don't know why. This happens with the same Terraform manifest, when we create a channel.

Background tech stack
Currently we are using Terraform with a terraform-modules repository that stores the Terraform manifest templates. We also use AWS S3 and DynamoDB as the backend for storing the Terraform state.
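The backend block in each folder's variables.tf below is left empty and filled in at init; a fully written-out equivalent looks roughly like this (a sketch only; the bucket, key, region, and DynamoDB table names are placeholders, not our real values):
# Sketch of the S3 backend with DynamoDB state locking; all names are placeholders.
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "newrelic/alerts/channel/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-terraform-locks"
    encrypt        = true
  }
}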
terraform-modules repository
The terraform-modules repository consists of several folders, each representing a New Relic alert item; for example, there are folders for channel, condition, policy, etc.
.
├── $FOLDER
│   ├── main.tf
│   ├── output.tf
│   └── variables.tf
└── $FOLDER
    ├── main.tf
    ├── output.tf
    └── variables.tf
Inside the newrelic-channel template folder:
- main.tf
# https://registry.terraform.io/providers/newrelic/newrelic/latest/docs/resources/alert_channel
//$POLICY_NAME
locals {
  load = <<EOF
{
  "account_id": "$ACCOUNT_ID",
  "account_name": "$ACCOUNT_NAME",
  "closed_violations_count_critical": "$CLOSED_VIOLATIONS_COUNT_CRITICAL",
  "closed_violations_count_warning": "$CLOSED_VIOLATIONS_COUNT_WARNING",
  "condition_description": "$DESCRIPTION",
  "condition_family_id": "$CONDITION_FAMILY_ID",
  "condition_name": "$CONDITION_NAME",
  "current_state": "$EVENT_STATE",
  "details": "$EVENT_DETAILS",
  "duration": "$DURATION",
  "event_type": "$EVENT_TYPE",
  "incident_acknowledge_url": "$INCIDENT_ACKNOWLEDGE_URL",
  "incident_id": "$INCIDENT_ID",
  "incident_url": "$INCIDENT_URL",
  "metadata": "$METADATA",
  "open_violations_count_critical": "$OPEN_VIOLATIONS_COUNT_CRITICAL",
  "open_violations_count_warning": "$OPEN_VIOLATIONS_COUNT_WARNING",
  "owner": "$EVENT_OWNER",
  "policy_name": "$CONDITION_NAME",
  "policy_url": "$POLICY_URL",
  "runbook_url": "$RUNBOOK_URL",
  "severity": "$SEVERITY",
  "targets": "$TARGETS",
  "timestamp": "$TIMESTAMP",
  "violation_callback_url": "$VIOLATION_CALLBACK_URL",
  "violation_chart_url": "$VIOLATION_CHART_URL",
  "condition_id": "$CONDITION_ID",
  "channel": "%s",
  "multi_alert": %v,
  "bypass_warn": %v
}
EOF
}

resource "newrelic_alert_channel" "slack" {
  name = var.alert_channel_name
  type = "webhook"

  config {
    base_url = "http://xxxxx/xxxx/xxxxx/newrelic"
    payload_string = format(
      local.load,
      var.alert_channel_slack_channel_id,
      var.multi_alert_title,
      var.alert_channel_slack_bypass_warn,
    )
    payload_type = "application/json"
  }
}
- output.tf
output "id" {
value = newrelic_alert_channel.slack.id
}
- variables.tf
terraform {
  backend "s3" {}
}

variable "alert_channel_name" {
  type        = string
  description = "The name of the channel"
}

variable "alert_channel_slack_channel_id" {
  type        = string
  description = "Slack channel ID"
}

variable "alert_channel_slack_bypass_warn" {
  type        = bool
  description = "Bypass warning"
  default     = false
}

variable "multi_alert_title" {
  type        = bool
  description = "Also show the title based on the facet"
  default     = true
}
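The policy-channel folder follows the same layout; a rough sketch of its template (based on the newrelic_alert_policy_channel docs, with illustrative variable names, not our exact file):
# Sketch of the policy-channel template; variable names are illustrative.
# https://registry.terraform.io/providers/newrelic/newrelic/latest/docs/resources/alert_policy_channel
resource "newrelic_alert_policy_channel" "this" {
  policy_id   = var.policy_id
  channel_ids = var.channel_ids
}

variable "policy_id" {
  type        = number
  description = "ID of the alert policy to attach the channels to"
}

variable "channel_ids" {
  type        = list(number)
  description = "IDs of the channels (e.g. the id output of the channel module) to attach to the policy"
}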
Terraform Version
Run terraform -v to show the version. If you are not running the latest version of Terraform, please upgrade because your issue may have already been fixed.
Terraform v0.12.29
Affected Resource(s)
Please list the resources as a list:
- newrelic_alert_channel
If this issue appears to affect multiple resources, it may be an issue with Terraform's core, so please mention this.
Terraform Configuration
Please include your provider configuration (sensitive details redacted) as well as the configuration of the resources and/or data sources related to the bug report.
Please see above.
Actual Behavior
What actually happened?
We got an intermittent error like this when applying channel and policy-channel:
Error: Provider produced inconsistent result after apply
When applying changes to newrelic_alert_channel.slack, provider
"registry.terraform.io/-/newrelic" produced an unexpected new value for was
present, but now absent.
This is a bug in the provider, which should be reported in the provider's own
issue tracker.

Expected Behavior
What should have happened? The apply should succeed.
Steps to Reproduce
Please list the steps required to reproduce the issue:
- terraform apply
Debug Output
Please provide a link to a GitHub Gist containing the complete debug output: https://www.terraform.io/docs/internals/debugging.html. Please do NOT paste the debug output in the issue; just paste a link to the Gist. It is difficult for us to capture the debug output because the error is intermittent.
Panic Output
If Terraform produced a panic, please provide a link to a GitHub Gist containing the output of the crash.log.
Important Factoids
Is there anything atypical about your accounts that we should know? For example: Running in EC2 Classic? Custom version of OpenStack? Tight ACLs?
References
Are there any other GitHub issues (open or closed) or Pull Requests that should be linked here? For example:
Hi @bachrilq ,
I'm looking this over on our side, and while I do that I have a couple questions for you:
After you get this error, if you log in to your account and check for the channel that you were trying to create does it exist in the system? The "produced an unexpected new value for was present, but now absent." error can sometimes come from a resource being created, but when Terraform tries to read the created resource it doesn't get returned yet. If this is what is happening here then knowing that will help us get this corrected.
Does this happen for multiple different channel types or only 1 specific type? We are in the process of migrating to a new notification system (More details here: https://discuss.newrelic.com/t/plan-to-upgrade-alert-notification-channels-to-workflows-and-destinations/188205), and for accounts currently being migrated we are blocking new types of notification channels from being added to the account because these need to be created in the new system instead.
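For the first question, one quick way to check from Terraform itself is a data source lookup by name, roughly like this (the channel name below is a placeholder):
# Sketch: look the channel up by name to confirm whether it actually exists.
data "newrelic_alert_channel" "check" {
  name = "example-slack-channel"
}

output "existing_channel_id" {
  value = data.newrelic_alert_channel.check.id
}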
I will let you know if I find anything else on our side.
Hi @emetcalf9,
Thank you for your response. When it happens, the channel is not created in New Relic. Yesterday I only tried the webhook channel type; I have not tried creating another type yet.
Today things look more stable compared with yesterday, and the errors have been reduced significantly. Could there be an issue on your side?
Hi @bachrilq - it's been a long time, but we've recently tried reproducing this issue with the latest version of the New Relic Terraform Provider, and we find it's working fine - the alert_channel resource is successfully applied from Terraform, and is seen in the New Relic UI too.
Can you please try and let us know if the issue still persists? Also, for your information - as described in the previous comment, the resources newrelic_alert_channel and newrelic_alert_policy_channel are deprecated - please consider using newrelic_notification_channel instead. Thanks!
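For anyone migrating, a rough sketch of a webhook equivalent with the newer resources (names, URL, and payload below are placeholders; please check the provider docs for the exact destination/channel schema, e.g. whether your webhook needs an auth block):
# Sketch of the newer destination + channel pair for a webhook; values are placeholders.
resource "newrelic_notification_destination" "webhook" {
  name = "example-webhook-destination"
  type = "WEBHOOK"

  property {
    key   = "url"
    value = "https://example.com/hooks/newrelic"
  }
}

resource "newrelic_notification_channel" "webhook" {
  name           = "example-webhook-channel"
  type           = "WEBHOOK"
  destination_id = newrelic_notification_destination.webhook.id
  product        = "IINT"

  property {
    key   = "payload"
    value = "{ \"channel\": \"C0123456789\" }"
    label = "Payload Template"
  }
}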
We haven’t heard back from you in a long time so we will close the ticket. If you feel this is still a valid request or bug, feel free to create a new issue.