Only one resolver is allowed per field
AppSync does not appear to detach and re-attach resolvers when they are removed or renamed in CloudFormation, resulting in edge cases that require manual intervention.
Reproduction Steps
The simplest way to reproduce this is to change the name of an AppSync resolver, which results in the old resolver remaining attached while the new one fails to attach with the error "Only one resolver is allowed per field".
Using CDK for conciseness:
api = aws_appsync.GraphQLApi(
    self,
    "test_api",
    name="test_api",
    schema_definition=aws_appsync.SchemaDefinition.FILE,
    log_config=aws_appsync.LogConfig(
        exclude_verbose_content=False,
        field_log_level=aws_appsync.FieldLogLevel.ALL,
    ),
    schema_definition_file="resources/schema.graphql",
    xray_enabled=True,
)

api.add_none_data_source("ping", "Ping").create_resolver(
    type_name="Query",
    field_name="ping",
    request_mapping_template=aws_appsync.MappingTemplate.from_string(
        '{"version": "2018-05-29"}'
    ),
    response_mapping_template=aws_appsync.MappingTemplate.from_string(
        '$util.toJson("pong")'
    ),
)
Schema:
type Query {
  ping: String
}
The above code deploys an AppSync API and attaches a resolver that echoes "pong" when the "ping" field is queried. Once deployed, simply changing the name of the data source reproduces this error: the line api.add_none_data_source("ping", "Ping").create_resolver( can be changed to api.add_none_data_source("ping2", "Ping").create_resolver( (note "ping2"), as in the sketch below. This results in the old data source being deleted (but its resolver not being detached) and the new data source being created, with its resolver failing to attach because it clashes with the old one.
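For clarity, a minimal sketch of the modified block: the same resolver definition as above, with only the data source construct id renamed from "ping" to "ping2".

# Sketch only: the data source id is renamed from "ping" to "ping2";
# the resolver definition itself is unchanged.
api.add_none_data_source("ping2", "Ping").create_resolver(
    type_name="Query",
    field_name="ping",
    request_mapping_template=aws_appsync.MappingTemplate.from_string(
        '{"version": "2018-05-29"}'
    ),
    response_mapping_template=aws_appsync.MappingTemplate.from_string(
        '$util.toJson("pong")'
    ),
)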
Other
This issue has also been raised on the Amplify community - https://github.com/aws-amplify/amplify-cli/issues/682
I also see this problem when using Terraform to deploy AppSync.
Update: I deleted the whole AppSync API from the console, re-ran the TF scripts, and got the same error!
Hey, there are a lot of similar issues (e.g. in the cdk repository: https://github.com/aws/aws-cdk/issues/13269#issuecomment-1022239503). Is there any progress on this issue?
I encountered this issue when I renamed a table (@model) that had an explicitly named secondary index and relation (@hasMany). It seemed to be a chicken-and-egg type issue, one that wasn't encountered in local testing. I resolved this by renaming my secondary index and query field, and then redeploying with amplify push. Demo below for illustration. Presumably, after you've completed this once, you could revert back to the original names and won't need to change the rest of your system to support the new naming. Hope this workaround for this particular circumstance helps those who don't want to wait for a bug fix.
Child object property:
example: ID! @index(name: "exampleName", sortKeyFields: ["from"], queryField: "exampleQueryField")
Parent object relation:
exampleObjects: [Example] @hasMany(indexName: "exampleName", fields: ["id"])
Changed to:
Child object property:
example: ID! @index(name: "exampleNameNew", sortKeyFields: ["from"], queryField: "exampleQueryFieldNew")
Parent object relation:
exampleObjects: [Example] @hasMany(indexName: "exampleNameNew", fields: ["id"])```
Since this update in CDK, this is an even bigger issue: CDK has standardised naming on resolvers, but that causes all existing logical IDs to change. The suggestion there is to hardcode all of the IDs to the old versions, but this isn't practical on any reasonably sized project. Any chance this could get some attention?
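For reference, a minimal sketch of that "hardcode the IDs" workaround in CDK Python, assuming the Resolver construct exposes its underlying CfnResolver as the default child; "QuerypingResolver1A2B3C4D" is a hypothetical placeholder for the old logical ID you would copy from your currently deployed template:

# Sketch: pin the resolver's CloudFormation logical ID to the value it had
# before the CDK naming change, so CloudFormation does not try to recreate it.
resolver = api.add_none_data_source("ping", "Ping").create_resolver(
    type_name="Query",
    field_name="ping",
    request_mapping_template=aws_appsync.MappingTemplate.from_string(
        '{"version": "2018-05-29"}'
    ),
    response_mapping_template=aws_appsync.MappingTemplate.from_string(
        '$util.toJson("pong")'
    ),
)

# Hypothetical old logical ID, taken from the deployed stack's template.
cfn_resolver = resolver.node.default_child
cfn_resolver.override_logical_id("QuerypingResolver1A2B3C4D")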
:wave: We are experiencing this issue for our customer use case, which can be described as follows: we have several AppSync resolvers that are part of, say, "TestStack", which is already deployed successfully. Our use case is to move some resolvers out to a new stack, say "CustomTestStack". We currently do this as suggested in this thread, by keeping the logical IDs of the resolvers and attached pipeline functions the same in CustomTestStack (I have verified this from the generated CFN).
However, the deployment fails with the "only one resolver is allowed per field" error.
The new stack also has a DependsOn relation to the original stack, so we would expect the resolvers to be deleted from the original stack before being re-attached from the new stack (a sketch of this setup is below).
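A minimal sketch of the cross-stack wiring described above (CDK Python; the stack names are the ones used in this comment, everything else is illustrative):

# Sketch of the setup described above: CustomTestStack depends on TestStack,
# so it is deployed only after TestStack has finished updating.
from aws_cdk import core

app = core.App()

test_stack = core.Stack(app, "TestStack")          # original stack, already deployed
custom_stack = core.Stack(app, "CustomTestStack")  # new stack that should own some resolvers

# Explicit DependsOn between the stacks.
custom_stack.add_dependency(test_stack)

app.synth()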
I have attached the CFN logs below:
[Screenshot of CloudFormation events for the original "Test" stack (pic2)]
From the logs for the original "Test" stack (pic2), we see that it is waiting in the UPDATE_COMPLETE_CLEANUP_IN_PROGRESS phase, where I would expect the old resolvers to be deleted and detached; if that happened, the subsequent new stack would not fail with the error mentioned above.
Please let me know if you need any other information.
I ran into this issue yesterday. This is how I resolved it:
- List the resolvers for the model in question
aws appsync list-resolvers --api-id yourappsyncapiid --type-name YourModelTypeName
This will give you a JSON list of resolvers; the first one will likely have a source of DYNAMODB, which is the main model resolver. You should also see a second resolver with a fieldName that matches the field you are getting the "only one resolver is allowed per field" error for:
{
    "resolvers": [
        {
            "typeName": "YourModelTypeName",
            "fieldName": "foo",
            "resolverArn": "arn:aws:appsync:us-east-1:1234456789:apis/someid/types/YourModelTypeName/resolvers/foo",
            "requestMappingTemplate": "$util.qr......",
            "responseMappingTemplate": "$util.toJson($ctx.prev.result)",
            "kind": "PIPELINE",
            "pipelineConfig": {
                "functions": [
                    "....",
                    "..."
                ]
            }
        },
        {
            "typeName": "YourModelTypeName",
            "fieldName": "bar",
            "resolverArn": "arn:aws:appsync:us-east-1:1234456789:apis/someid/types/YourModelTypeName/resolvers/bar",
            "requestMappingTemplate": "...",
            "responseMappingTemplate": "$util.toJson($ctx.prev.result)",
            "kind": "PIPELINE",
            "pipelineConfig": {
                "functions": [
                    "ksdflsjfklsdfjsdfjlksdfj"
                ]
            }
        }
    ]
}
- Delete the existing field resolver
aws appsync delete-resolver --api-id yourappsyncapiid --type-name YourModelTypeName --field-name bar
- Run the list command from step one again to confirm the field resolver has been deleted.
- Redeploy the same build and pray
In my case the build hit the error again but for a different model & field. I had to run the same procedure above for that model & field.
But then!!! My build succeeded
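For anyone who has to repeat this cleanup across several models, a rough boto3 equivalent of the CLI steps above; the API id, type name, and field name are the same placeholders used in the commands:

# Sketch of the manual cleanup above using boto3 (placeholders as before).
import boto3

appsync = boto3.client("appsync")
api_id = "yourappsyncapiid"        # placeholder, same as in the CLI commands
type_name = "YourModelTypeName"    # placeholder
field_name = "bar"                 # the field hitting "Only one resolver is allowed per field"

# Step 1: list the resolvers for the type and inspect them.
resolvers = appsync.list_resolvers(apiId=api_id, typeName=type_name)["resolvers"]
for r in resolvers:
    print(r["typeName"], r["fieldName"], r["resolverArn"])

# Step 2: delete the conflicting field resolver, then redeploy the build.
appsync.delete_resolver(apiId=api_id, typeName=type_name, fieldName=field_name)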