
Appsync resolver to have access to ENV variable

Open mattiLeBlanc opened this issue 6 years ago • 52 comments

Is your feature request related to a problem? Please describe. When creating a BatchPutItem template, I have to provide the table name for the batch operation. Since I create my table via CloudFormation with an ENV value attached to its name, I cannot use BatchPutItem: I don't have access to the current environment value in the resolver template.

A workaround I am using right now is to first call a Lambda in a pipeline resolver, passing the environment value from the first function on to the second one that does the BatchPutItem. However, this is unnecessary and requires an extra Lambda call, while the ENV value should be available at runtime. It looks like it is just not exposed via $ctx.
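The first function in that pipeline workaround can be as small as the following sketch (the handler shape and the `ENV` variable name are assumptions based on the Amplify convention, not code from the actual setup):

```typescript
// Sketch of the first pipeline function: a Lambda that does nothing but
// return its own environment variable, so a later pipeline function can
// read it from $ctx.prev.result and put it into $ctx.stash.
// The ENV variable name is an assumption (Amplify's usual convention).
export const handler = async (): Promise<{ env: string }> => {
  return { env: process.env.ENV ?? "dev" };
};
```

The second pipeline function then stashes the value, e.g. `$util.qr($ctx.stash.put("env", $ctx.prev.result.env))`, so the BatchPutItem template can build the table name from `$ctx.stash.env`.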

Describe the solution you'd like: expose the environment value in the $ctx object.

mattiLeBlanc avatar Jul 03 '19 07:07 mattiLeBlanc

@mattiLeBlanc Are you referring to a Lambda environment variable or a parameter? Currently in AppSync you can pass the table name as a field in the schema; otherwise you'll need to specify the table name in the request mapping template. This seems similar to the following feature request: aws-amplify/amplify-category-api#439. I'll review this issue with the team as well.

SwaySway avatar Jul 03 '19 23:07 SwaySway

My function resolver (AppSync pipeline) uses a BatchPutItem:

#set($postsdata = [])
#foreach($id in ${ctx.args.groups})
    #set($item = {
        "pk": $id,
        "sk": "POST:POST_ID=$ctx.stash.postId",
        "type": "POST_IN_GROUP",
        "title": $ctx.args.title
    })
    $util.qr($postsdata.add($util.dynamodb.toMapValues($item)))
#end

{
    "version" : "2018-05-29",
    "operation" : "BatchPutItem",
    "tables" : {
        "coralconsole_$ctx.stash.env": $util.toJson($postsdata)
    }
}

and as you can see, I am using a stashed env variable in the table name in an attempt to make this work.

However, in CloudFormation we already have an ENV variable available, so it should be possible to expose it in the $ctx object so that we don't have to call a Lambda function in a pipeline just to specify the environment-specific table name.

mattiLeBlanc avatar Jul 04 '19 00:07 mattiLeBlanc

I also have the same issue. On top of that, how do I set the table name when I am testing the API locally?

apoorvmote avatar Aug 10 '19 10:08 apoorvmote

👍 Also having issues with Batch*Item operations: the table name differs per environment. The resolver pipeline already has an association with the DataSource, so why is the table name implicit for Query, PutItem, and GetItem, but required for Batch*Item operations? I'd rather not shell out to a Lambda to evaluate it.

cianclarke avatar Oct 07 '19 20:10 cianclarke

The workaround of a pipeline where you use a Lambda to import the environment variables into the stash will impact performance in bigger AppSync APIs.

With BatchGetItem you really see that AppSync is still in its early stages.

I really hope that AWS will add environment variables for mapping resolvers. Adding the option to get the type name and field name in the resolver would also be great.

And remove the option that makes you set the table name again, even with a data source that already has that table attached; that is just a straight-up bug in AWS AppSync.

robboerman2 avatar Dec 19 '19 15:12 robboerman2

One way we resolved this is by using the AWS CDK to provision our cloud resources. When we build our dist, we read the resolver templates and inject the table name, and that is what gets deployed. Works pretty well.
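A minimal sketch of such a build step might look like this; the `{{TABLE_NAME}}` placeholder, the directory layout, and the `coralconsole_` prefix are assumptions for illustration, not mattiLeBlanc's actual implementation (which is shown further down this thread):

```typescript
import * as fs from "fs";
import * as path from "path";

// Hypothetical build step: read raw resolver templates and replace a
// placeholder with the environment-specific table name before deploying.
// ENV and the {{TABLE_NAME}} placeholder are assumptions for this sketch.
const env = process.env.ENV ?? "dev";
const tableName = `coralconsole_${env}`;

function injectTableName(templateDir: string, outDir: string): void {
  fs.mkdirSync(outDir, { recursive: true });
  for (const file of fs.readdirSync(templateDir)) {
    const raw = fs.readFileSync(path.join(templateDir, file), "utf8");
    // Replace every occurrence of the placeholder with the real table name.
    const resolved = raw.replace(/\{\{TABLE_NAME\}\}/g, tableName);
    fs.writeFileSync(path.join(outDir, file), resolved);
  }
}
```

The deployed VTL then contains a concrete table name per environment, at the cost of having to rebuild the templates whenever the environment changes.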

mattiLeBlanc avatar Dec 20 '19 02:12 mattiLeBlanc

@mattiLeBlanc hmm that could be a good workaround. could you share some of that code that you made with AWS CDK to accomplish that?

robboerman2 avatar Dec 20 '19 08:12 robboerman2

I’ll see if I can make an extract of our setup. To be continued


mattiLeBlanc avatar Dec 20 '19 12:12 mattiLeBlanc

@mattiLeBlanc ok, waiting patiently on your response :)

robboerman2 avatar Dec 23 '19 12:12 robboerman2

Hi Rob,

Well, it is a bit hard to give you our full CDK implementation because we haven't open-sourced it (yet); it is still in development.

But the bit where we do the injection is where we define the template for a resolver:

/**
   * Add a Resolver to the API
   */
  public addResolver(config: ResolverConfig) {

    const options: any = {
      apiId: this.api.attrApiId,
      typeName: config.type,
      fieldName: config.name,
      requestMappingTemplate: this.addEnv(ResolverService.Instance.resolvers[ config.type ][ `${config.template}-req` ]),
      responseMappingTemplate: ResolverService.Instance.resolvers[ config.type ][ `${config.template}-res` ]
    };

    if (config.kind === ResolverKind.PIPELINE && config.pipelineFunctions && Array.isArray(config.pipelineFunctions)) {
      options.kind = ResolverKind.PIPELINE;
      options.pipelineConfig = {
        functions: []
      };
      config.pipelineFunctions.forEach(name => {
        options.pipelineConfig.functions.push(this.pipelineFunctions[ `${name}` ].attrFunctionId);
      });
    } else {
      options.dataSourceName = config.dataSourceName;
    }
    return new CfnResolver(this, `Resolver_${config.name}`, options);
  }

The important bit here is this.addEnv, which is used at the requestMappingTemplate property. This function is nothing more than a concatenation:

  protected addEnv(template: string) {
    return `#set($env=${JSON.stringify(this.resolverEnvironment)})\n${template}`;
  }

The resolverEnvironment is a property of an AppSync construct (class) that creates an AWS resource using Constructs (check the CDK examples for TypeScript).

So when you deploy your API for an environment (local, dev or staging etc..) it will automatically inject the $env variable in your template.
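To make the effect of that concatenation concrete, here is a self-contained sketch of the same pattern; `resolverEnvironment` is reduced to a plain object here as an assumption, whereas in the real setup it is a property of the construct:

```typescript
// Standalone sketch of the addEnv pattern described above.
// resolverEnvironment is simplified to a plain object for illustration;
// in the actual construct it is populated when the stack is synthesized.
const resolverEnvironment = { env: process.env.ENV ?? "dev" };

function addEnv(template: string): string {
  // Prepend a VTL #set directive so $env is available inside the
  // request mapping template at resolver evaluation time.
  return `#set($env=${JSON.stringify(resolverEnvironment)})\n${template}`;
}

// The resulting request mapping template then starts with, e.g.:
//   #set($env={"env":"dev"})
// followed by the original template body.
```

Inside the VTL template you can then reference `$env.env`, for example to build an environment-specific table name for a Batch*Item operation.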

mattiLeBlanc avatar Dec 23 '19 22:12 mattiLeBlanc

@mattiLeBlanc thanks for the example, this will help.

robboerman2 avatar Dec 24 '19 09:12 robboerman2

@mattiLeBlanc thanks for the example, this will help.

I hope it does. We found implementing the CDK pretty cumbersome at the start, especially with a bigger project with 3 stacks and one root stack. But I hope you will figure it out. Otherwise, just ask me in this thread.

mattiLeBlanc avatar Dec 24 '19 21:12 mattiLeBlanc

@mattiLeBlanc I was unable to find resolverEnvironment anywhere in https://github.com/aws-samples/aws-cdk-examples, or any of the docs, are you referring to another git repo/cdk constructs example code?

alimeerutech avatar Jul 11 '20 01:07 alimeerutech

Adding an update here: as of now, AppSync does not support adding environment variables into resolver functions. We are looking at other ways to address this, and we welcome any PRs or discussions on potential solutions.

SwaySway avatar Aug 12 '20 19:08 SwaySway

+1 for this feature

Recently I was using Amplify + AppSync and had to create custom resolvers for BatchPutItem. It works well when deployed, but because the table name is different when I develop locally with amplify mock api, the resolver becomes essentially useless.

Dizzzmas avatar Sep 25 '20 16:09 Dizzzmas

@mattiLeBlanc I was unable to find resolverEnvironment anywhere in https://github.com/aws-samples/aws-cdk-examples, or any of the docs, are you referring to another git repo/cdk constructs example code?

Sorry for the late reply:

resolverEnvironment is something we added to our own stack, so it is not a standard property you would find like region or account. We get our environment from process.env.ENV, and we set it in Bitbucket (deploy variables) or in our local terminal environment variables.

Does that make sense?

mattiLeBlanc avatar Sep 26 '20 07:09 mattiLeBlanc

+1 for that feature. Any news on that?

Thx and all the best!

beerth avatar Nov 23 '20 07:11 beerth

Yes, this is already supported, through substitutions.

robboerman2 avatar Nov 23 '20 13:11 robboerman2

Yes, this is already supported, through substitutions.

Could you please provide some more details? An example would also be great. I really appreciate your support!

beerth avatar Nov 23 '20 14:11 beerth

I'm also vouching for this. Having to manage these table names in resolvers is a real pain. People don't think about it and push the resolver code to git with the table names changed all the time.

There should be an easy way to get the table names for the given environment; that would be a life saver.

And don't tell me to just ask people not to push these modified files. You know people: they forget as soon as you let them go. That's human nature...

maroy1986 avatar Nov 23 '20 19:11 maroy1986

+1 :+1:

Pretty much a deal-breaker feature that's missing at the moment.

alexchumak avatar Dec 08 '20 05:12 alexchumak

Hey, any news on this? Has anyone been successful in putting the APPSYNC_ID and the ENV into the VTL as params?

Much Thanks!

idobleicher avatar Jan 06 '21 16:01 idobleicher

My very easy and very unsophisticated workaround is to create multiple fields for each env with hard-coded values.

kldeb avatar Jan 19 '21 21:01 kldeb

Hey, any news on this? Has anyone been successful in putting the APPSYNC_ID and the ENV into the VTL as params?

Much Thanks!

@jonperryxlm came up with a fix in aws-amplify/amplify-cli#1946 where you specify an additional function that feeds the API ID and env into the stash, which worked for me. The relevant bits 👇 (it requires setting up a pipeline resolver, though).

In CustomResources.json

"addEnvVariablesToStash": {
  "Type": "AWS::AppSync::FunctionConfiguration",
  "Properties": {
    "ApiId": {
      "Ref": "AppSyncApiId"
    },
    "DataSourceName": "NONE",
    "Description": "Sets $ctx.stash.env to the Amplify environment and $ctx.stash.apiId to the Amplify API ID",
    "FunctionVersion": "2018-05-29",
    "Name": "addEnvVariablesToStash",
    "RequestMappingTemplate": "{\n  \"version\": \"2017-02-28\",\n  \"payload\": {}\n}",
    "ResponseMappingTemplate": {
      "Fn::Join": [
        "",
        [
          "$util.qr($context.stash.put(\"env\", \"",
          { "Ref": "env" },
          "\"))\n$util.qr($context.stash.put(\"apiId\", \"",
          { "Ref": "AppSyncApiId" },
          "\"))\n$util.toJson($ctx.prev.result)"
        ]
      ]
    }
  }
},
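For reference, once CloudFormation resolves the Fn::Join above, the ResponseMappingTemplate deploys as plain VTL along these lines (with `<env>` and `<apiId>` standing in for the resolved Ref values):

```vtl
$util.qr($context.stash.put("env", "<env>"))
$util.qr($context.stash.put("apiId", "<apiId>"))
$util.toJson($ctx.prev.result)
```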

And then in your resolver that needs it

{                                                                                                                 
  "version" : "2018-05-29",
  "operation" : "BatchGetItem",
  "tables" : {
    "MyTablename-${ctx.stash.apiId}-${ctx.stash.env}": {
      "keys": $util.toJson($ids),
      "consistentRead": true
    }
  }
}

wai-chuen avatar Jan 20 '21 14:01 wai-chuen

I don't think it's good practice to use stage variables as part of your table naming convention. Rather, it is recommended to use a separate AWS account for each stage, with AWS Organizations.

joekendal avatar Jul 07 '21 04:07 joekendal

Thanks for the workaround @wai-chuen. I want to add a +1 for this functionality. We also have tables with the API ID and env name in their name.

LATER EDIT: $context needs to be replaced with $ctx.

@aws As someone said before, for Batch*Item operations you need the table name. This means that in order to reuse a VTL template for BatchDeleteItem, for example, you need to have the table name available in $ctx.stash or somewhere. Currently I am creating separate pipeline resolvers and separate templates for each entity type, but if I had the table name available in the context I could use only one template.

eciuca avatar Jul 26 '21 08:07 eciuca

This solution is ridiculous. We still don't have support for passing these variables directly to a resolver?

thejasonfisher avatar Aug 18 '21 22:08 thejasonfisher

It would be nice to have this feature

PatrykMilewski avatar Oct 04 '21 13:10 PatrykMilewski

A nice-to-have feature ×2. Meanwhile, another workaround is to create a Lambda function that performs the batch operation and call it from the API.

WiL-dev avatar Oct 08 '21 16:10 WiL-dev

Without this feature it is immensely complicated, and not scalable, to build batch operations or any other custom resolvers in AppSync. The table names need to be hardcoded, which is an absolute no-go.

maziarzamani avatar Oct 10 '21 09:10 maziarzamani