aws-appsync-community
Unit testing for AWS AppSync resolvers
Hi, are you planning to add support for Pipeline Resolvers in unit testing soon? Expressions like previous.result are not working right now.
I'm having the same issue. I'm trying to test the handling of previous results in a VTL template using the `evaluate_mapping_template` method of the boto3 SDK. Simple cases work, but after passing `"prev": {"result": "foo"}` into my context I just get back an exception:

```
Cannot construct instance of `com.amazonaws.deepdish.transform.model.PreviousResult` (no Creators, like default constructor, exist): cannot deserialize from Object value (no delegate- or property-based Creator)
```
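For reference, the failure can be reproduced with a minimal boto3 call along these lines (the template and context values here are made up for illustration, not taken from a real resolver):

```python
import json


def build_context(prev_result):
    """Build the JSON context document passed to evaluate_mapping_template.

    Including a "prev" key is what currently triggers the
    PreviousResult deserialization error described above.
    """
    return json.dumps({
        "arguments": {"id": "123"},
        "prev": {"result": prev_result},
    })


# Hypothetical request template that reads the previous function's result.
TEMPLATE = '{"value": "$ctx.prev.result.foo"}'


def evaluate(template, context_json):
    # boto3 is imported here so the helper above works without it installed;
    # evaluate_mapping_template needs boto3 >= 1.21.
    import boto3
    client = boto3.client("appsync")
    return client.evaluate_mapping_template(template=template, context=context_json)


# Example (requires AWS credentials; currently fails with the PreviousResult error):
#   evaluate(TEMPLATE, build_context({"foo": "bar"}))
```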
That's right. In addition, support for `outErrors` would also be very welcome.
I think the majority of people's gripes come from VTL, with its upfront learning curve, untestability, and high maintenance cost. I'm a huge fan of AppSync: it's fast, it scales, and GraphQL is amazing for separating backend and frontend concerns. However, while there are other niggles, VTL is definitely one of the biggest. There was an RFC over a year ago asking for input on a JS resolver. It would be really cool to see this, as it would eliminate a lot of the headache!
I am facing the exact same error.
I created a function that is used in a pipeline resolver and consumes the result produced by the prior function. To test the function's request template, I defined the context as follows:

```js
context = {
  arguments: {
    ...
  },
  identity: {
    ...
  },
  prev: {
    result: {
      ...
    },
  },
};
```

And I get the following error when calling `appSync.evaluateMappingTemplate`:

```
message: 'Cannot construct instance of `com.amazonaws.deepdish.transform.model.PreviousResult` (no Creators, like default constructor, exist): no String-argument constructor/factory method to deserialize from String value
```
Providing JavaScript resolvers made testing a lot easier, but occasionally you'd still be stuck with the API calls `evaluateMappingTemplate` and `evaluateCode`.

In addition to the lack of support for `PreviousResult`, you will eventually run into the lack of support for the `context.info` object, which, if provided, returns `The info object is not supported when testing the mapping template`.

Also, I miss `context.stash` in the mapping results a lot. We make heavy use of it for passing results across functions in pipelines. Not being able to check whether my function reliably added data to the stash limits the use cases for those two SDK calls considerably.
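Until the stash is exposed in the evaluation result, one possible workaround (a sketch, not an official feature) is to append a probe to the template under test that serializes `$ctx.stash` into the rendered output, then parse it back out in the test. Note the marker deliberately avoids `##`, which would start a VTL comment:

```python
import json

# Hypothetical template under test that writes to the stash; in a real test
# this would be read from the resolver's .vtl file.
TEMPLATE = '$util.qr($ctx.stash.put("userId", $ctx.arguments.id)){}'

MARKER = "<<STASH>>"


def with_stash_probe(template):
    """Append a probe that renders $ctx.stash as JSON after a marker,
    so a test can assert on what the template put into the stash."""
    return template + "\n" + MARKER + "$util.toJson($ctx.stash)"


def extract_stash(rendered):
    """Split the evaluation result back into the template's own output
    and the captured stash contents."""
    body, _, stash_json = rendered.partition(MARKER)
    return body, json.loads(stash_json)


# A test would send with_stash_probe(TEMPLATE) to evaluate_mapping_template,
# then call extract_stash on evaluationResult to assert on the stash.
```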
Agreed here. Running into limitations being able to access `context.info` and `context.stash` in my unit tests.
Having access to the `context` in the result would be very helpful. I've also had other issues with the `identity` in the input context. It seems to only accept the Lambda Auth identity:

```
'error': {'message': 'Unrecognized field "username" (class com.amazonaws.deepdish.common.identity.LambdaAuthIdentity), not marked as ignorable (one known property: "resolverContext"])\n at [Source: UNKNOWN; line: -1, column: -1] (through reference chain: com.amazonaws.deepdish.transform.model.MappingTemplateContext$MappingTemplateContextBuilder["identity"]->com.amazonaws.deepdish.common.identity.LambdaAuthIdentity["username"])'}
```
I am also running into limitations with the `evaluateCode` testing approach:

- Like other people in this issue, I would like to be able to test what my resolvers place in the stash.
- I would also like to be able to test my resolvers' use of `extensions.setSubscriptionFilter`. As far as I can tell, there is no way to do that at present? (Please let me know if I have missed one.)
I tried the following context and it doesn't seem to work:

```json
{
  "identity": { "username": "test" }
}
```

It results in:

```
{
  "error": {
    "message": "Unrecognized field \"username\" (class com.amazonaws.deepdish.common.identity.LambdaAuthIdentity), not marked as ignorable (one known property: \"resolverContext\"])\n at [Source: UNKNOWN; line: -1, column: -1] (through reference chain: com.amazonaws.deepdish.transform.model.MappingTemplateContext$MappingTemplateContextBuilder[\"identity\"]->com.amazonaws.deepdish.common.identity.LambdaAuthIdentity[\"username\"])"
  }
}
```
@oliverschenk you need to place your data inside the `resolverContext`:

```json
{
  "identity": { "resolverContext": { "username": "test" } }
}
```
@oliverschenk you need to add all the expected `identity` parameters:

```js
identity: {
  sub: "uuid",
  issuer: "https://cognito-idp.{region}.amazonaws.com/{userPoolId}",
  username: "Nadia",
  claims: {},
  sourceIp: ["x.x.x.x"],
  defaultAuthStrategy: "ALLOW",
}
```

See the answer on the Stack Overflow post I made about it.
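Putting the two answers above together, a full Cognito-style test context for boto3's `evaluate_mapping_template` might be assembled like this (field values are placeholders, and the actual call requires AWS credentials):

```python
import json


def cognito_test_context(username, arguments=None):
    """Build a context JSON carrying the full set of Cognito identity
    fields the evaluation API expects (values here are placeholders)."""
    return json.dumps({
        "arguments": arguments or {},
        "identity": {
            "sub": "uuid",
            "issuer": "https://cognito-idp.{region}.amazonaws.com/{userPoolId}",
            "username": username,
            "claims": {},
            "sourceIp": ["x.x.x.x"],
            "defaultAuthStrategy": "ALLOW",
        },
    })


def evaluate(template, context_json):
    # boto3 is imported here so the helper above works without it installed.
    import boto3
    client = boto3.client("appsync")
    return client.evaluate_mapping_template(template=template, context=context_json)


# Example (requires AWS credentials):
#   evaluate('{"user": "$ctx.identity.username"}', cognito_test_context("Nadia"))
```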
Thanks @perryn and @chenz0OO - I believe both methods worked.
I am using Cognito User Pools, and I think `resolverContext` is for Lambda auth, but in any case, for testing it seems to accept either.