
Support deploying lambda functions with custom CDK category

Open shishkin opened this issue 3 years ago • 32 comments

Before opening, please confirm:

  • [X] I have installed the latest version of the Amplify CLI (see above), and confirmed that the issue still persists.
  • [X] I have searched for duplicate or closed issues.
  • [X] I have read the guide for submitting bug reports.
  • [X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.

How did you install the Amplify CLI?

No response

If applicable, what version of Node.js are you using?

No response

Amplify CLI Version

7.4.5

What operating system are you using?

Mac

Amplify Categories

Not applicable

Amplify Commands

push

Describe the bug

Unable to deploy a lambda function via custom CDK stack through amplify. Same example works in a standalone CDK project. Error seems to imply that code assets are not built. I also don't see my function code zipped anywhere as CDK usually does.

Expected behavior

Lambda function deployed.

Reproduction steps

  1. Add custom CDK category
  2. Define lambda in CDK stack:
    const fn = new lambda.Function(this, "fn", {
      runtime: lambda.Runtime.NODEJS_14_X,
      code: lambda.Code.fromAsset(`${cdkPath}/functions`),
      handler: "hello",
    });
  3. amplify push

GraphQL schema(s)

# Put schemas below this line


Log output

# Put your logs below this line

UPDATE_FAILED               customcdk                    AWS::CloudFormation::Stack Tue Nov 23 2021 19:39:22 GMT+0100 (Central European Standard Time) Parameters: [AssetParametersc2d8a94782daaaf9412bb34f1a08f5e68ab8d9b98a7073fe0aac5317505259dcS3Bucket59FE5BCE, AssetParametersc2d8a94782daaaf9412bb34f1a08f5e68ab8d9b98a7073fe0aac5317505259dcS3VersionKeyC246F305, AssetParametersc2d8a94782daaaf9412bb34f1a08f5e68ab8d9b98a7073fe0aac5317505259dcArtifactHash2AAFE25B] must have values

Additional information

No response

shishkin avatar Nov 23 '21 19:11 shishkin

@shishkin Yes, the Amplify CLI does not package and upload custom Lambda function source code for you. You can use amplify add function to add Lambdas, where the CLI takes care of the packaging for you.

kaustavghosh06 avatar Nov 23 '21 19:11 kaustavghosh06

Thanks for the clarification @kaustavghosh06. Maybe I don't understand how Amplify integrates CDK, but this seems unfortunate, since CDK can package assets itself. Is there a way to use custom build steps or hooks to get this CDK behavior integrated into Amplify?

shishkin avatar Nov 23 '21 19:11 shishkin

Yeah, at this moment we don't support packaging assets out of the box. But you can try using a pre-push hook (https://docs.amplify.aws/cli/project/command-hooks/) to package the Lambda assets, though I haven't personally tried it.

Also, is there any specific reason you don't want to lean on amplify add function to manage your Lambda functions?

kaustavghosh06 avatar Nov 23 '21 19:11 kaustavghosh06

Because I need custom resources provisioned and my Lambda configured with env vars pointing to them, and I couldn't find a way to get ARN outputs from CDK into an Amplify Lambda.

shishkin avatar Nov 23 '21 20:11 shishkin

Another issue I ran into with Amplify functions is that S3 triggers from Amplify Storage set up object key filters that I can't figure out how to configure.

shishkin avatar Nov 23 '21 20:11 shishkin

I have run into this issue as well when trying to use Amplify+CDK to deploy Lambda functions from Docker images. CDK performs the docker build on the host machine and uploads the image to ECR, but this does not work through Amplify+CDK. I use this CDK approach because I am deploying Lambda functions in Swift, which amplify add function does not support.

Is asset packaging something that is on the roadmap?

dave-moser avatar Nov 29 '21 19:11 dave-moser

Found this thread after spending an afternoon trying to package up my custom Lambda created in CDK. According to this thread, that doesn't work. However, if we use amplify add function and try to reference the function from an SNS topic (via new LambdaSubscription) created in CDK, I get TypeScript errors. I've referenced the function via cdk.Fn.ref(dependencies.function.<functionName>.arn), but I can't use it in the SNS subscription because that just returns a string (the ARN). How can I reference this function in my CDK?

Edit: I actually solved this. You can construct a reference to a Lambda function from an ARN; I used the ARN I got from the cdk.Fn.ref call.
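A minimal sketch of that workaround, using the CDK v1 imports that match the rest of this thread. The parameter name 'functionArnParam' and the construct IDs are hypothetical placeholders for whatever cdk.Fn.ref resolves in your stack:

```typescript
import * as cdk from '@aws-cdk/core';
import * as lambda from '@aws-cdk/aws-lambda';
import * as sns from '@aws-cdk/aws-sns';
import * as subs from '@aws-cdk/aws-sns-subscriptions';

export class cdkStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // The dependency ref resolves to a plain ARN string at synth time;
    // 'functionArnParam' stands in for the parameter produced by
    // AmplifyHelpers.addResourceDependency in a real project.
    const fnArn = cdk.Fn.ref('functionArnParam');

    // Wrap the ARN string in an IFunction so SNS constructs can accept it.
    const fn = lambda.Function.fromFunctionArn(this, 'ImportedFn', fnArn);

    const topic = new sns.Topic(this, 'Topic');
    topic.addSubscription(new subs.LambdaSubscription(fn));
  }
}
```

The trick is that fromFunctionArn wraps the plain ARN string in an IFunction, which constructs like LambdaSubscription accept. Note that CDK cannot add resource policies to a function imported this way, so the SNS invoke permission may need to be granted separately.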

andre347 avatar Nov 30 '21 16:11 andre347

I have been using Amplify extensively, and before the ability to use custom CDK code existed, all my backend was typically done with AWS SAM. I recently started migrating some of my CloudFormation backend containing IoT resources, queues, layers, and Lambdas to CDK. The reason is that when you have multiple Lambda functions, it is much more convenient to use CDK than amplify add function. It would be tremendously beneficial to fully support CDK deployments in Amplify's custom CDK category, as it would let Amplify control the entire front-end/back-end deployment.

arturlr avatar Dec 07 '21 17:12 arturlr

I would also like to voice my support for using code: lambda.Code.fromAsset(... in custom cdk resources. I have a similar use case as @shishkin for adding additional s3 trigger functions and configuring prefixes, however, being able to use local function assets seems generally useful.

oste avatar Dec 17 '21 05:12 oste

I haven't tried it with Amplify myself, but this CDK module might do asset bundling as part of resource synthesis and thus may not rely on the CDK CLI for bundling: https://docs.aws.amazon.com/cdk/api/latest/docs/aws-lambda-nodejs-readme.html. There is also this community construct for esbuild: https://github.com/mrgrain/cdk-esbuild.
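For reference, the intended usage of that construct looks roughly like this (a sketch; the entry path and handler name are assumptions):

```typescript
import * as path from 'path';
import * as cdk from '@aws-cdk/core';
import { NodejsFunction } from '@aws-cdk/aws-lambda-nodejs';

export class cdkStack extends cdk.Stack {
  constructor(scope: cdk.Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // NodejsFunction bundles the entry file with esbuild at synth time;
    // 'functions/hello.ts' exporting a 'handler' function is an assumption.
    new NodejsFunction(this, 'fn', {
      entry: path.join(__dirname, 'functions/hello.ts'),
      handler: 'handler',
    });
  }
}
```

Be aware that the bundled output is still staged as a CDK asset under the hood, which is consistent with the report further down this thread that it hits the same "Parameters must have values" error under Amplify.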

I just switched to using plain CDK instead of the Amplify CLI, as I found it more productive in the long run to learn what Amplify is doing under the hood and replicate it with CDK than to waste day after day troubleshooting Amplify's confusing error messages and working around its idiosyncrasies. Some resources I've found for that: https://serverless-nextjs.com/docs/cdkconstruct/ to replicate Hosting, and https://github.com/bobbyhadz/cdk-identity-pool-example to replicate Auth. The remaining building blocks are just plain DynamoDB and S3 constructs from CDK.

shishkin avatar Dec 17 '21 09:12 shishkin

I plan to use more and more CDK as well, but figured I could still use it within Amplify. If I continue to run into hurdles like this, I guess completely breaking free makes sense. Thanks for the suggestion, I'll try it out.

oste avatar Dec 17 '21 20:12 oste

I can report that @aws-cdk/aws-lambda-nodejs has the same "Parameters must have values" issue, unfortunately.

oste avatar Dec 18 '21 03:12 oste

I raised this issue (https://github.com/aws/aws-cdk/issues/18090) with aws-cdk, since the promising-looking addEventNotification function didn't seem to work with imported resources.

oste avatar Dec 20 '21 04:12 oste

We need this as well, or at least the ability to output variables from custom CDK code to the rest of Amplify. Our use case: we have a complex video-encoding pipeline set up in CDK, and we need to bundle customized Lambda functions to work with it. We need role and bucket ARNs that are created in CDK, and we can't get them over to the Amplify functions.

DylanBruzenak avatar Jan 19 '22 23:01 DylanBruzenak

@kaustavghosh06 as @DylanBruzenak said:

or at least the ability to output variables from custom CDK code to the rest of amplify.

We also need this ability.

nathanagez avatar Jan 21 '22 16:01 nathanagez

Any update on this issue?

chakch avatar Feb 24 '22 10:02 chakch

I tried to create an AwsCustomResource like this:

const ConfigurationSetName = 'defaultConfigSet';
const configSet = new AwsCustomResource(this, ConfigurationSetName, {
  onUpdate: {
    service: 'SESV2',
    action: 'createConfigurationSet',
    parameters: {
      ConfigurationSetName,
      SendingOptions: { SendingEnabled: true },
    },
    physicalResourceId: {},
  },
  onDelete: {
    service: 'SESV2',
    action: 'deleteConfigurationSet',
    parameters: {
      ConfigurationSetName,
    },
  },
  policy: AwsCustomResourcePolicy.fromStatements([sesPolicy]),
  logRetention: 7,
});

CDK would normally create the backing Lambda function automatically and upload its code to an S3 bucket. Checking the generated CloudFormation template, I can only see the reference to the S3 bucket, with no creation or upload of the asset, and I get this error on amplify push:

  • Error occurred while GetObject. S3 Error Code: NoSuchBucket. S3 Error Message: The specified bucket does not exist (Service: Lambda, Status Code: 400

Punith13 avatar Apr 07 '22 21:04 Punith13

Any update?

ezalorsara avatar Aug 01 '22 06:08 ezalorsara

+1

andreav avatar Oct 21 '22 19:10 andreav

@ykethan With the recent announcement of Amplify CLI Beta supporting CDK v2, does that come with proper support for deploying the full Cloud Assembly, including the assets?

mmoulton avatar Dec 02 '22 18:12 mmoulton

It would be awesome if the documentation here: https://docs.amplify.aws/cli/custom/cdk/ could reflect the limitations described in this issue.

renschler avatar Jan 09 '23 18:01 renschler

It would be awesome if the documentation here: https://docs.amplify.aws/cli/custom/cdk/ could reflect the limitations described in this issue.

Ditto. A disappointing waste of time thinking you can use CDK as usual with a custom resource, only to run into this issue.

csmcallister avatar Feb 01 '23 00:02 csmcallister

I ran into the same problem with code: lambda.Code.fromAsset(.... It's kind of a pain, because we're using Amplify for some things, but when we want to add extra policies or more memory to our functions, we have to do it manually in CloudFormation. That means more custom code in different places, which is not really what we're looking for.

We've got two big projects that started out with Amplify, but then we added some extra stuff with CDK (like EventBridge and Step Functions as workflows). I thought it would be awesome to combine everything into one big repo with Amplify, CDK, and our frontend (which is SvelteKit, obviously!), but it's not quite doable yet.

Amplify is great for getting Auth, Storage and AppSync with DynamoDB up and running quickly, but we end up doing a lot of custom coding after that. If CDK and Amplify played nicely together, it would be an amazing tool for rapid development.

asmajlovicmars avatar Feb 12 '23 15:02 asmajlovicmars

I thought it would be awesome to combine everything into one big repo with Amplify, CDK, and our frontend (which is SvelteKit, obviously!), but it's not quite doable yet.

Amplify is great for getting Auth, Storage and AppSync with DynamoDB up and running quickly, but we end up doing a lot of custom coding after that. If CDK and Amplify played nicely together, it would be an amazing tool for rapid development.

Precisely! I ended up ditching Amplify for everything except Auth, Hosting, & Appsync.

Everything else I'm deploying w/ CDK in a separate repo.

renschler avatar Feb 14 '23 20:02 renschler

We're doing everything with pure CDK these days. Works pretty well once you get everything rolling.

DylanBruzenak avatar Feb 14 '23 22:02 DylanBruzenak

Yeah, at this moment we don't support packaging assets out of the box. But you can try using a pre-push hook (https://docs.amplify.aws/cli/project/command-hooks/) to package the Lambda assets, though I haven't personally tried it.

Also, is there any specific reason you don't want to lean on amplify add function to manage your Lambda functions?

Really sad to have to go that way, but here is a working solution using a hook for a Flink application (the same principle applies to Lambda or other assets):

amplify/hooks/pre-push.js

const fs = require('fs');
const path = require('path');
const AWS = require('aws-sdk');
const zip = require('adm-zip');
const crypto = require('crypto');

try {
  const parameters = JSON.parse(fs.readFileSync(0, { encoding: 'utf8' }));
  // console.log('Parameters: ', JSON.stringify(parameters));

  // Retrieve amplify env
  const { envName } = parameters.data.amplify.environment;
  // console.log('Amplify envName: ', envName);

  // Retrieve the S3 bucket name from the amplify/team-provider-info.json
  const teamProviderInfo = JSON.parse(
    fs.readFileSync(path.join(__dirname, '../team-provider-info.json'), {
      encoding: 'utf8',
    })
  );
  // console.log('teamProviderInfo: ', JSON.stringify(teamProviderInfo));
  const s3BucketName =
    teamProviderInfo[envName].awscloudformation.DeploymentBucketName;

  // Load profile used by amplify
  const localInfo = JSON.parse(
    fs.readFileSync(path.join(__dirname, '../.config/local-aws-info.json'), {
      encoding: 'utf8',
    })
  );

  const profile = localInfo[envName].profileName;

  // console.log('Profile: ', profile);

  // TODO: Add envName to the zip file name
  // Zip content of amplify/backend/custom/measurementAggregator/flink as flink-{hash}.zip
  const flinkCodePath = path.join(
    __dirname,
    '../backend/custom/measurementAggregator/flink'
  );

  // Calculate hash of flink/tumbling-windows.py file
  const hash = crypto.createHash('sha256');
  hash.update(fs.readFileSync(path.join(flinkCodePath, 'tumbling-windows.py')));
  const hashValue = hash.digest('hex');

  const zipDestinationPath = path.join(__dirname, `flink-${hashValue}.zip`);

  // Check if zip file already exists. if it does that means content has not changed and we can skip the upload
  if (!fs.existsSync(zipDestinationPath)) {
    // eslint-disable-next-line new-cap
    const zipFile = new zip();
    zipFile.addLocalFolder(flinkCodePath);
    zipFile.writeZip(zipDestinationPath);

    // Upload zipDestinationPath to the S3 bucket
    const credentials = new AWS.SharedIniFileCredentials({ profile });
    AWS.config.credentials = credentials;
    const s3 = new AWS.S3();

    const uploadParams = {
      Bucket: s3BucketName,
      Key: zipDestinationPath.split('/').pop(),
      Body: fs.readFileSync(zipDestinationPath),
    };

    s3.upload(uploadParams, (err, data) => {
      if (err) {
        console.error('Error', err);
        throw err;
      }
      if (data) {
        console.log('Upload Success', data.Location);
        process.exit(0);
      }
    });
  } else {
    console.log('No changes detected. Skipping upload');
    process.exit(0);
  }
} catch (error) {
  console.log(error);
  process.exit(1);
}

amplify/backend/custom/measurementAggregator/cdk-stack.ts

import * as cdk from '@aws-cdk/core';
import * as AmplifyHelpers from '@aws-amplify/cli-extensibility-helper';
import { AmplifyDependentResourcesAttributes } from '../../types/amplify-dependent-resources-ref';
import * as iam from '@aws-cdk/aws-iam';
import * as lambda from '@aws-cdk/aws-lambda';
import * as kinesis from '@aws-cdk/aws-kinesis';
import * as flink from '@aws-cdk/aws-kinesisanalytics-flink';
import { KinesisEventSource } from '@aws-cdk/aws-lambda-event-sources';
import * as s3 from '@aws-cdk/aws-s3';
import * as crypto from 'crypto';
import * as fs from 'fs';
import * as path from 'path';

export class cdkStack extends cdk.Stack {
  constructor(
    scope: cdk.Construct,
    id: string,
    props?: cdk.StackProps,
    amplifyResourceProps?: AmplifyHelpers.AmplifyResourceProps
  ) {
    super(scope, id, props);
    /* Do not remove - Amplify CLI automatically injects the current deployment environment in this input parameter */
    new cdk.CfnParameter(this, 'env', {
      type: 'String',
      description: 'Current Amplify CLI env name',
    });

    const beforeAgregate = new kinesis.Stream(this, 'BeforeAgregate', {
      streamName: 'AldoBeforeAggregate',
    });

    // Calculate hash of flink/tumbling-windows.py file
    const hash = crypto.createHash('sha256');
    hash.update(fs.readFileSync(path.join(__dirname, '../flink/tumbling-windows.py')));
    const hashValue = hash.digest('hex');
    const fileKey = `flink-${hashValue}.zip`;

    // Get the deployment bucket name from the amplify meta file
    const amplifyProjectInfo = AmplifyHelpers.getProjectInfo();
    console.log('amplifyProjectInfo', JSON.stringify(amplifyProjectInfo));
    const envName = amplifyProjectInfo.envName;
    const teamProviderInfo = JSON.parse(
      fs.readFileSync(path.join(__dirname, '../../../../team-provider-info.json'), {
        encoding: 'utf8',
      })
    );
    const s3BucketName = teamProviderInfo[envName].awscloudformation.DeploymentBucketName;
    console.log('s3BucketName', s3BucketName);
    
    const bucket = s3.Bucket.fromBucketName(this, 'FlinkAppCodeBucket', s3BucketName);

    const afterAgreagte = new kinesis.Stream(this, 'AfterAgreagte', {
      streamName: 'AldoAfterAggregate',
    });

    const propertyGroups: flink.PropertyGroups = {
      'consumer.config.0': {
        'aws.region': cdk.Aws.REGION,
        'input.stream.name': beforeAgregate.streamName,
        'scan.stream.initpos': 'LATEST',
      },
      'kinesis.analytics.flink.run.options': {
        jarfile: 'flink-sql-connector-kinesis-1.15.2.jar',
        python: 'tumbling-windows.py',
      },
      'producer.config.0': {
        'aws.region': cdk.Aws.REGION,
        'output.stream.name': afterAgreagte.streamName,
        'shard.count': '4',
      },
    };

    const agregateStreams = new flink.Application(this, 'App', {
      code: flink.ApplicationCode.fromBucket(bucket, fileKey),
      runtime: flink.Runtime.of('FLINK-1_15'),
      propertyGroups: propertyGroups,
      role: new iam.Role(this, 'Role', {
        assumedBy: new iam.ServicePrincipal('kinesisanalytics.amazonaws.com'),
        inlinePolicies: {
          FlinkPolicy: new iam.PolicyDocument({
            statements: [
              new iam.PolicyStatement({
                actions: ['s3:GetObject*', 's3:GetBucket*', 's3:List*'],
                resources: [bucket.bucketArn, bucket.bucketArn + '/*'],
              }),
            ],
          }),
        },
      }),
    });

    beforeAgregate.grantRead(agregateStreams);
    afterAgreagte.grantWrite(agregateStreams);

    // Load the lambda function that will produce and consume the streams
    // TODO: fix ... not working at all
    const retVal: AmplifyDependentResourcesAttributes = AmplifyHelpers.addResourceDependency(
      this,
      amplifyResourceProps?.category ?? 'custom',
      amplifyResourceProps?.resourceName ?? 'SimulationEngine',
      [
        {
          category: 'function',
          resourceName: 'SmartDashShelfMeasurementIngestion',
        },
      ]
    );

    const consumerProducerFunction = lambda.Function.fromFunctionArn(
      this,
      'SmartDashShelfMeasurementIngestion',
      cdk.Fn.ref(retVal.function.SmartDashShelfMeasurementIngestion.Arn)
    );

    beforeAgregate.grantWrite(consumerProducerFunction);
    afterAgreagte.grantRead(consumerProducerFunction);

    // Trigger consumerProducerFunction when a new record is added to the afterAgregate stream
    consumerProducerFunction.addEventSource(
      new KinesisEventSource(afterAgreagte, {
        startingPosition: lambda.StartingPosition.LATEST,
      })
    );
  }
}

Long story short:

  • In the pre-push hook: calculate a hash (to avoid uploading and updating the stack when nothing changed), zip the asset, and upload it to the deployment bucket.
  • In CDK: calculate the same hash and use it as the key in your fromBucket asset reference. The key is making sure the same file name/S3 key is used in both the pre-push hook and the CDK stack.

There is some interesting logic in there for getting the profile and deployment bucket, too ;)

flochaz avatar Mar 16 '23 08:03 flochaz

Noting here that the reason Amplify custom resource stacks don't work well beyond L1 resources is that the build step for custom resources runs tsc, versus whatever the CDK CLI does; the CDK likely reconciles role dependencies after compilation, or something similar. Not to mention the CDK has a bootstrapped bucket to push assets to, not unlike Amplify's deployment bucket. I feel like the two could be blended.

joekiller avatar May 26 '23 21:05 joekiller

I've been kicking around a way to make CDK and Amplify more Reese's cup and less salad dressing. Using a combination of amplify init, the Amplify export-to-CDK feature, and an enhanced CDK construct, I'm able to have a reasonable workflow that utilizes the best of the Amplify CLI and CDK extensibility.

The main change is to always amplify export and never amplify push. Just npm run deploy, which runs the following:

amplify export --out cdk/lib --yes && cd cdk && npm install && cdk deploy --require-approval never exported-amplify-backend-stack && cd -
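Wired into package.json, that workflow might look like this (a sketch; only the deploy script is shown):

```json
{
  "scripts": {
    "deploy": "amplify export --out cdk/lib --yes && cd cdk && npm install && cdk deploy --require-approval never exported-amplify-backend-stack && cd -"
  }
}
```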

Example repo here: https://github.com/joekiller/amplify-scratch

joekiller avatar May 30 '23 01:05 joekiller

Is there any update on fixing that issue?

I would like to deploy static files to an S3 bucket with CDK's https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_s3_assets-readme.html or https://docs.aws.amazon.com/cdk/api/v2/docs/aws-cdk-lib.aws_s3_deployment-readme.html, but it doesn't work with Amplify: nothing gets uploaded to the CDK assets buckets.

mateuszboryn avatar Dec 28 '23 12:12 mateuszboryn

This ticket has been open for 2 years now and many people are complaining. Could you please raise the priority of this issue? I don't think Firebase has hidden issues like this.

mateuszboryn avatar Dec 28 '23 13:12 mateuszboryn