Append custom resolvers to auto generated DDB resolvers
Is this feature request related to a new or existing Amplify category?
function
Is this related to another service?
No
Describe the feature you'd like to request
GIVEN a Todo model, Amplify auto-generates a createTodo mutation attached to a DynamoDB resolver. Allow side effects on auto-generated mutations to perform custom operations. Example: send an email (via a Lambda function) after the createTodo mutation. Currently there is no option to perform side effects on auto-generated operations.
Describe the solution you'd like
Since all resolvers are pipeline resolvers, provide a way to append or prepend user-defined resolvers during schema design.
Auto-generated mutation:

```graphql
type Mutation {
  createTodo(input: CreateTodoInput): Todo
}
```
Proposal

- Append to the auto-generated resolver — executes after the generated resolver:

```graphql
# @appendResolvers appends any user-defined @function resolvers onto the Amplify-generated resolver.
# pipelineResolvers = [originalDDBCreateTodo, FunctionName-${env}]
# Some kind of @ts-ignore is needed to suppress the compile-time error on the
# not-yet-generated input type, i.e. CreateTodoInput.
type Mutation {
  @appendResolvers
  createTodo(@ts-ignore input: CreateTodoInput): Todo @function(name: "FunctionName-${env}")
}
```
- Prepend to the auto-generated resolver — executes before the generated resolver:

```graphql
# @prependResolvers prepends any user-defined @function resolvers onto the Amplify-generated resolver.
# pipelineResolvers = [FunctionName-${env}, originalDDBCreateTodo]
# Some kind of @ts-ignore is needed to suppress the compile-time error on the
# not-yet-generated input type, i.e. CreateTodoInput.
type Mutation {
  @prependResolvers
  createTodo(@ts-ignore input: CreateTodoInput): Todo @function(name: "FunctionName-${env}")
}
```
Describe alternatives you've considered
- The described behavior can be achieved manually in the AppSync console — but that doesn't work in a CI/CD environment, where manual changes are overwritten on deployments.
- Move the DDB resolvers to a Lambda function — this is more manual work, slows down development, and defeats the purpose of Amplify.
Additional context
Having side effects generated from the schema is very important to our requirements: it simplifies the developer workflow and makes it more intuitive.
Is this something that you'd be interested in working on?
- [ ] 👋 I may be able to implement this feature request
- [ ] ⚠️ This feature might incur a breaking change
Please take a look at https://docs.amplify.aws/cli/graphql/custom-business-logic/. Amplify supports a few ways of extending, overriding, or otherwise customizing resolvers.
I reviewed the existing ways of extending resolvers; unfortunately, none of them address the problem I described. I will try to restate the problem in simple terms:
Requirement: Insert a record into the Todo DDB table and send an email after the insertion completes, while still taking advantage of Amplify's code generation.

Manual steps:
- Modify the auto-generated createTodo DDB pipeline resolver in the AppSync console to add a new function to the pipeline that invokes the sendEmail Lambda function.

Result: Creates the Todo record and invokes the sendEmail function.
It sounds like you want to extend an Amplify-generated resolver. That sounds like the use case covered here. Could you invoke your Lambda from a postDataLoad slot?
@cjihrig Thanks for your comment. postDataLoad is a VTL slot — can it invoke a Lambda? Please let me know if you have an example.
AppSync supports Lambda resolvers, so it should work. Check out this tutorial.
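For context, invoking a Lambda from a pipeline step comes down to attaching a Lambda data source to that pipeline function and giving it an `Invoke` request mapping template. A minimal sketch — the data source name `SendEmailLambdaDataSource` and the payload shape are assumptions, not Amplify-generated names:

```vtl
## Hypothetical request mapping template for a pipeline function backed by a
## Lambda data source (assumed name: SendEmailLambdaDataSource).
## "Invoke" is the standard AppSync operation for Lambda data sources.
{
  "version": "2018-05-29",
  "operation": "Invoke",
  "payload": {
    "fieldName": "createTodo",
    "arguments": $util.toJson($ctx.args),
    ## Result of the previous pipeline step (e.g. the generated DDB resolver).
    "prev": $util.toJson($ctx.prev.result)
  }
}
```

The matching response mapping template can simply be `$util.toJson($ctx.result)`.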
@cjihrig I am aware of Lambda resolvers, but our requirement is a single generated mutation (createTodo) attached to a pipeline resolver that executes the auto-generated DDB step followed by a custom Lambda invocation. Achieving this declaratively is an important requirement for us to improve time to market.
@cliren you should be able to use a postDataLoad slot to invoke Lambda.
@cjihrig That will be awesome, can you help me with an example?
Please check out the conversation in https://github.com/aws-amplify/amplify-cli/issues/9623. That should cover most of what you're trying to accomplish (adding a resolver slot and configuring the resource). If you still have implementation questions, I encourage you to check out the Amplify Discord server or Stack Overflow.
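For anyone else landing here: if I'm reading the custom-business-logic docs correctly, the Gen 1 / GraphQL Transformer v2 way to add a slot is to drop VTL files following the slot naming convention into the API's `resolvers/` directory, for example:

```
amplify/backend/api/<api-name>/resolvers/
├── Mutation.createTodo.postDataLoad.1.req.vtl
└── Mutation.createTodo.postDataLoad.1.res.vtl
```

Amplify picks these up and inserts them into the generated pipeline. The unresolved part in this thread is pointing that slot at a Lambda data source, which appears to require overriding the slot's resource (the topic of #687) — the file naming alone doesn't provide it.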
The example is not clear enough; I didn't find any references on how to invoke a Lambda from a postDataLoad slot. Reopening to clarify with an example or, failing that, to explore a recommended solution.
Hi @cliren ! I am also trying to make this work too! I did create a separate issue in hopes of getting some traction, you might want to follow it. https://github.com/aws-amplify/amplify-category-api/issues/687
I really think this feature is important and of great value; it's unfortunate that there is no documentation supporting it.
Reopening based on offline conversation with @cliren. The provided steps didn't solve the original request.
Any update on this?
Does anyone know if there is any workable way to do this? It seems like a common pattern and I'm surprised that this is so difficult with Amplify.
Two simple examples of use cases:
- We have a `RewardsMember` model that represents a member in a loyalty program, and the loyalty accounts are backed by a third-party API. We want to extend the default `createRewardsMember` resolver to first check whether the account exists in the third-party system, and create it if necessary, before creating the record in our DynamoDB table.
- We have models subject to auditing, like `Ticket`, where we need a logging audit trail of who changed what and when. We want to insert a Lambda function before and/or after the default generated CRUD operations, like `createTicket`, so that we can check the API request to see who sent it and log information about what's changing.
In both of those examples, we want to use the Amplify code generation — that's part of why we're using Amplify. We want to extend the generated code, not replace it. If we chain @function directives together in the schema, we get a pipeline resolver that can call more than one function — but we would have to reinvent the wheel and reimplement the CRUD operations ourselves as Lambda functions. We do not want to waste our time on that, and we don't want to risk introducing a bug related to `_version`, `_deleted`, or other DataStore attributes.
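To make that trade-off concrete, chaining @function directives looks something like the sketch below (the function names are placeholders). Each function in the chain is a Lambda you write yourself, which is exactly why the DDB CRUD logic would no longer be generated:

```graphql
# Hypothetical schema: chained @function directives form a pipeline of
# Lambda invocations, executed left to right. Both Lambdas here are
# user-written -- neither is generated by Amplify.
type Mutation {
  createTicket(input: CreateTicketInput!): Ticket
    @function(name: "auditLog-${env}")
    @function(name: "createTicketFn-${env}")
}
```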
If we instead do it by extending the existing VTL templates that Amplify generated, then we can't call our Lambda functions. Because nobody seems to know how to do that.
I tried copying the generated VTL code from a @function directive into my own VTL templates, and used that to try to insert a "HELLO, WORLD!" Lambda function into the pipeline alongside the generated VTL resolver from Amplify. I failed.
Can anyone provide any working example?
There are some hints here: https://github.com/aws-amplify/amplify-category-api/issues/687
But that's still not a working example, since nobody seems to know how to get the function ID.
We're also looking to do this, and our use case is very similar to originally reported request, but possibly even simpler:
When creating a User model managed by Amplify and persisted in DynamoDB, we want to add a precondition check that queries DynamoDB by the email address passed into the createUser mutation, to verify it isn't already in use.
We were planning to do that with an earlier PutItem with an attribute_not_exists condition against a separate UserEmail table whose primary key is the email. We don't want the primary key of the User itself to be the email; it's a UUID instead, so it's immutable (as we allow the user to change their email address).
This would allow us to prevent creating users with the same email address.
So basically we want to add an additional DDB resolver into the createUser pipeline with a separate DDB table datasource.
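For what it's worth, the precondition step itself is plain DynamoDB: a conditional PutItem against the UserEmail table. A sketch of what that extra pipeline function's request template could look like — the table and field names follow the description above, and this assumes the function is attached to a UserEmail-table data source:

```vtl
## Hypothetical request template for an extra pipeline function targeting
## the UserEmail table, where "email" is the table's primary key.
{
  "version": "2018-05-29",
  "operation": "PutItem",
  "key": {
    "email": $util.dynamodb.toDynamoDBJson($ctx.args.input.email)
  },
  "condition": {
    ## Rejects the PutItem (and thus fails the pipeline) when a record
    ## with this email already exists.
    "expression": "attribute_not_exists(email)"
  }
}
```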
I'm going to try the suggestions in #687 but I'm not convinced it will work, and it would be great for Amplify to support my use case natively without a lot of complicated custom coding as preventing duplicate emails or other business identifiers seems like a fairly common requirement.
I'm happy to create a separate ticket (as related but different to this) but based on this ticket and #687 I'm not sure it's worthwhile?
Since you are targeting a DDB data source, it will work, at least in v1. I think v2 has zero support for this unfortunately. If you are using v1 and need help adding that step in the generated pipeline, I can provide an example.
That's very encouraging, thanks Alex. When you say v1 and v2, I assume you're referring to "Gen 1" vs "Gen 2". If so, we're still using Gen 1, so that should work.
If you're able to provide a basic example, that would be amazing, and I'd definitely appreciate it. But if that's too time consuming for you currently, I will try and give it a go based on the comments here and in #687 and see how I get on.