terraform-provider-aws
Updates to aws_appflow_flow: Plugin crashed / Plugin did not respond
Community Note
- Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
- Please do not leave "+1" or other comments that do not add relevant new information or questions, they generate extra noise for issue followers and do not help prioritize the request
- If you are interested in working on this issue or have submitted a pull request, please leave a comment
Terraform CLI and Terraform AWS Provider Version
$ terraform -v
Terraform v1.2.8
on darwin_arm64
+ provider registry.terraform.io/hashicorp/archive v2.2.0
+ provider registry.terraform.io/hashicorp/aws v4.27.0
+ provider registry.terraform.io/hashicorp/awscc v0.30.0
+ provider registry.terraform.io/honeycombio/honeycombio v0.6.0
Affected Resource(s)
- aws_appflow_flow
Terraform Configuration Files
Please include all Terraform configurations required to reproduce the bug. Bug reports without a functional reproduction may be closed without investigation.
resource "aws_appflow_flow" "test_flow" {
name = "test-flow"
description = "new description"
source_flow_config {
connector_profile_name = "salesforce-production"
connector_type = "Salesforce"
source_connector_properties {
salesforce {
enable_dynamic_field_update = false
include_deleted_records = true
object = "User"
}
}
}
destination_flow_config {
connector_type = "S3"
destination_connector_properties {
s3 {
bucket_name = aws_s3_bucket.this.bucket
bucket_prefix = "test-delete-me"
s3_output_format_config {
file_type = "JSON"
aggregation_config {
aggregation_type = "None"
}
prefix_config {}
}
}
}
}
trigger_config {
trigger_type = "OnDemand"
}
task {
source_fields = [
"Email",
]
task_properties = {}
task_type = "Filter"
connector_operator {
salesforce = "PROJECTION"
}
}
task {
destination_field = "Email"
source_fields = ["Email"]
task_properties = {
"DESTINATION_DATA_TYPE" = "string"
"SOURCE_DATA_TYPE" = "string"
}
task_type = "Map"
connector_operator {
salesforce = "NO_OP"
}
}
}
Debug Output
https://gist.github.com/tjefferson08/9df138d70049d359c6c7c4734e918f4e#file-debug-log
Panic Output
See the crash output in the debug log gist above.
Expected Behavior
The update to the flow (changed description) applies successfully.
Actual Behavior
The provider plugin crashes during apply ("Plugin did not respond").
Steps to Reproduce
- terraform apply
Important Factoids
N/A
References
N/A
What happens if you remove prefix_config {}?
@camro With prefix_config removed, I can still create the flow. Attempting to update it yields this plan:
  # module.salesforce_export.aws_appflow_flow.test_flow will be updated in-place
  ~ resource "aws_appflow_flow" "test_flow" {
        id   = "arn:aws:appflow:us-east-2:065302916061:flow/test-flow"
        name = "test-flow"
        tags = {}
        # (4 unchanged attributes hidden)

      - destination_flow_config {
          - connector_type = "S3" -> null

          - destination_connector_properties {
              - s3 {
                  - bucket_name   = "meritamerica-salesforce-export-production" -> null
                  - bucket_prefix = "test-delete-me" -> null

                  - s3_output_format_config {
                      - file_type = "JSON" -> null

                      - aggregation_config {
                          - aggregation_type = "None" -> null
                        }
                      - prefix_config {}
                    }
                }
            }
        }
      + destination_flow_config {
          + connector_type = "S3"

          + destination_connector_properties {
              + s3 {
                  + bucket_name   = "meritamerica-salesforce-export-production"
                  + bucket_prefix = "test-delete-me"

                  + s3_output_format_config {
                      + file_type = "JSON"

                      + aggregation_config {
                          + aggregation_type = "None"
                        }
                    }
                }
            }
        }
      + destination_flow_config {
        }

        # (4 unchanged blocks hidden)
    }
and applying that plan yields this error:
│ Error: updating AppFlow Flow (arn:aws:appflow:us-east-2:065302916061:flow/test-flow): InvalidParameter: 2 validation error(s) found.
│ - missing required field, UpdateFlowInput.DestinationFlowConfigList[1].ConnectorType.
│ - missing required field, UpdateFlowInput.DestinationFlowConfigList[1].DestinationConnectorProperties.
│
│
│ with module.salesforce_export.aws_appflow_flow.test_flow,
│ on ../../modules/salesforce_export/test_flow.tf line 1, in resource "aws_appflow_flow" "test_flow":
│ 1: resource "aws_appflow_flow" "test_flow" {
Looks like you've got a second destination_flow_config. Can you paste the appflow part of your .tf file?
I'm using the same appflow resource/config that was posted in the original issue description. The only modification I made here was to remove the prefix config, as you suggested.
I am also running into this error. It doesn't matter if I build the Terraform from scratch or import it; when I modify and apply, I get the same errors:
│ Error: updating AppFlow Flow (arn:aws:appflow:us-east-1:277360468408:flow/sf_asset_flow): InvalidParameter: 2 validation error(s) found.
│ - missing required field, UpdateFlowInput.DestinationFlowConfigList[0].ConnectorType.
│ - missing required field, UpdateFlowInput.DestinationFlowConfigList[0].DestinationConnectorProperties.
Did you ever get a fix to this issue?
No, unfortunately not. I've worked around it by switching to the AppFlow flow resource from the awscc provider. The awscc version works well for me; you might try that if you're in a bind.
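For anyone trying that route, here is a rough, untested sketch of what the same flow might look like with awscc_appflow_flow. The attribute names are my assumption based on the Cloud Control mapping of the AWS::AppFlow::Flow CloudFormation schema (awscc resources use nested attributes assigned with =, not blocks), so verify them against the awscc provider docs:

resource "awscc_appflow_flow" "test_flow" {
  flow_name = "test-flow"

  source_flow_config = {
    connector_type         = "Salesforce"
    connector_profile_name = "salesforce-production"
    source_connector_properties = {
      salesforce = {
        object = "User"
      }
    }
  }

  # Destinations are a single list attribute here, unlike the aws
  # provider's repeated destination_flow_config blocks.
  destination_flow_config_list = [{
    connector_type = "S3"
    destination_connector_properties = {
      s3 = {
        bucket_name   = aws_s3_bucket.this.bucket
        bucket_prefix = "test-delete-me"
        s3_output_format_config = {
          file_type = "JSON"
        }
      }
    }
  }]

  trigger_config = {
    trigger_type = "OnDemand"
  }

  tasks = [{
    task_type         = "Map"
    source_fields     = ["Email"]
    destination_field = "Email"
    connector_operator = {
      salesforce = "NO_OP"
    }
  }]
}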
Thanks. This is unfortunate.
Hi all, not a solution but a two-step workaround (until it's resolved).
Apparently Terraform expects a definition with attributes in a specific order. So, in order (and for the "Salesforce" connector type), an aws_appflow_flow resource has to look like:
resource "aws_appflow_flow" "flow" {
name = var.flow_name
kms_arn = var.kms_arn
destination_flow_config {
connector_type = var.definition_connector_type
destination_connector_properties {
// here relevant configuration
}
}
source_flow_config {
connector_type = "Salesforce"
connector_profile_name = var.source_connector_profile_name
source_connector_properties {
// here relevant configuration
}
}
task {
source_fields = [
for name, mapping in var.mappings : name
]
task_properties = {
"DESTINATION_DATA_TYPE" = "string"
}
task_type = "Filter"
connector_operator {
salesforce = "PROJECTION"
}
}
dynamic "task" {
for_each = var.mappings
content {
destination_field = task.key
source_fields = [
task.key
]
task_properties = {
"DESTINATION_DATA_TYPE" : task.value.dstType,
"SOURCE_DATA_TYPE" : task.value.srcType
}
task_type = "Map"
connector_operator {
salesforce = "NO_OP"
}
}
}
trigger_config {
// here relevant configuration
}
}
with the mappings variable defined as:
variable "mappings" {
type = map(object({
srcType = string
dstType = string
}))
}
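For illustration, a hypothetical mappings value (the field names are made up) could look like:

mappings = {
  # Map keys iterate in lexicographic order, which is what keeps
  # the generated Map tasks in a stable order (see below).
  "Email"     = { srcType = "string", dstType = "string" }
  "FirstName" = { srcType = "string", dstType = "string" }
}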
Terraform iterates over maps in lexicographic key order by default, so the dynamic block above keeps the task order right. Somehow this ordering is required; I don't know why. I also needed to add the "DESTINATION_DATA_TYPE" = "string" line to the PROJECTION task to stop terraform plan from reporting updates for the resource. Also, don't use the Map_all task type unless you really need it: if you do, Terraform will retain in its state the full set of mapping tasks autogenerated by AppFlow. If so, you'd need to use:
lifecycle {
  ignore_changes = [task]
}
When you deploy successfully and Terraform then fails on an update, read through the errors, add the missing fields to your definition (the ones Terraform saw as missing and therefore different from its state), and put them in the expected order before retrying.
Okay, so this works when the resource definition is well defined for creation from the start. Since Terraform kept failing for me on every update, I used this suggestion to create a module with a null_resource whose triggers cover all the elements I know might change, and then I use replace_triggered_by on the flow.
resource "null_resource" "flow_change_notifier" {
triggers = {
flowtype = var.flowtype
field_names = join(",", [for n, d in var.mappings : n])
field_src_types = join(",", [for n, d in var.mappings : d.srcType])
field_dst_types = join(",", [for n, d in var.mappings : d.dstType])
kms_arn = var.kms_arn
}
}
So effectively I'm recreating the flow on every modification, with this entry in the flow definition:
lifecycle {
  replace_triggered_by = [null_resource.flow_change_notifier]
}
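For clarity, that lifecycle block goes inside the aws_appflow_flow resource itself, next to the rest of the flow definition (replace_triggered_by requires Terraform 1.2 or later). A minimal sketch:

resource "aws_appflow_flow" "flow" {
  # ... full flow definition as shown earlier ...

  lifecycle {
    # Replace the flow whenever any of the notifier's triggers change.
    replace_triggered_by = [null_resource.flow_change_notifier]
  }
}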
This is not nice, but it worked for me.
I also had issues with Terraform "attempting to fix" changes made to the AppFlow flow resource outside of Terraform, both with the task (when using the Map_all task type) and with the destination (in my case, EventBridge).
Until there's a better solution, I'm using the lifecycle meta-argument to get around these problems. As described in that doc:
The ignore_changes feature is intended to be used when a resource is created with references to data that may change in the future, but should not affect said resource after its creation. In some rare cases, settings of a remote object are modified by processes outside of Terraform, which Terraform would then attempt to "fix" on the next run.
In this case, I've added the meta-argument:
lifecycle {
  ignore_changes = [
    task,
    destination_flow_config
  ]
}
I am facing this issue as well.
Terraform v1.5.7
on darwin_amd64
+ provider registry.terraform.io/hashicorp/aws v5.26.0
+ provider registry.terraform.io/hashicorp/template v2.2.0
This functionality has been released in v5.27.0 of the Terraform AWS Provider. Please see the Terraform documentation on provider versioning or reach out if you need any assistance upgrading.
For further feature requests or bug reports with this functionality, please create a new GitHub issue following the template. Thank you!
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.