(aws-lambda-nodejs): Uploaded file must be a non-empty zip
What is the problem?
I updated the Lambda function's dependencies and deployed the function, but it fails with the following error.

I also have another API under the same project; I updated its Lambda function's dependencies and it deployed successfully. Both APIs and their Lambda functions are almost identical to each other, yet only one deploys and the other doesn't.
I deleted the cdk.out folder and tried to deploy again, and it fails with the same error each time.
Reproduction Steps
I have a simple Lambda function that I am trying to deploy, as follows:
import { NodejsFunction } from 'aws-cdk-lib/aws-lambda-nodejs';
import { Architecture, Code, Function, Runtime } from 'aws-cdk-lib/aws-lambda';

const authFn = new NodejsFunction(this, 'authNodeJs', {
  runtime: Runtime.NODEJS_14_X,
  entry: `${__dirname}/../auth/index.ts`,
  handler: 'auth',
  architecture: Architecture.ARM_64,
  memorySize: 1024,
  environment: {
    CLIENT_ID: appClientID
  }
})

const auth1Fn = new Function(this, 'authGolang', {
  runtime: Runtime.GO_1_X,
  code: Code.fromAsset(`${__dirname}/../auth-1/`, {
    bundling: {
      image: Runtime.GO_1_X.bundlingImage,
      user: 'root',
      command: [
        'bash', '-c', [
          'cd /asset-input',
          'go build -o main main.go',
          'mv /asset-input/main /asset-output/'
        ].join(' && ')
      ]
    }
  }),
  handler: 'main',
  memorySize: 512,
  environment: {
    CLIENT_ID: appClientID
  }
})
What did you expect to happen?
I expected it to deploy all of my lambda functions.
What actually happened?
It failed with the error: Uploaded file must be a non-empty zip
CDK CLI Version
2.8.0
Framework Version
No response
Node.js Version
v16.13.2
OS
Ubuntu 20.04 on WSL 2
Language
Typescript
Language Version
~3.9.7
Other information
No response
Looks like for some (unknown) reason an empty zip file landed on S3 for this asset.
It should be fixed if you manually remove the asset file from the bootstrap bucket and then retry.
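If you want to script that cleanup, here is a minimal sketch using the AWS SDK for JavaScript v3. The bucket name, key, and region are placeholders to fill in from your own failed publishing output, not values specific to this issue:

  import { S3Client, DeleteObjectCommand } from '@aws-sdk/client-s3';

  // Placeholders: take the bucket name and asset hash from the output of
  // the failed publishing step.
  const BUCKET = 'cdk-hnb659fds-assets-<account>-<region>';
  const ASSET_KEY = '<asset-hash>.zip';

  const s3 = new S3Client({ region: '<region>' });

  s3.send(new DeleteObjectCommand({ Bucket: BUCKET, Key: ASSET_KEY }))
    .then(() => console.log('Deleted stale asset zip; re-run cdk deploy.'))
    .catch(console.error);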
The real issue is that the Docker build for Go is not working. For my use case, Go lambdas are still experimental; I am just keeping an eye on them for future use. I am heavily invested in Node.js lambdas, and those work fine, so I simply deleted the single Go function and everything deployed.
To be clear, the issue itself is not resolved, but Go lambdas are not a priority for me, so I am closing this thread.
⚠️COMMENT VISIBILITY WARNING⚠️
Comments on closed issues are hard for our team to see. If you need more assistance, please either tag a team member or open a new issue that references this one. If you wish to keep having a conversation with other community members under this issue feel free to do so.
In my case it was human error. I had a CDK project in TypeScript with a Lambda written in JavaScript. In .gitignore I had an entry that simply excluded JavaScript files. When I checked out the repository on another computer, the JavaScript file with the Lambda was obviously not there. It took me at least two hours to realise that.
This is my typescript code:
const senderLambda = new lambda.Function(params.scope, params.functionName, {
  runtime: lambda.Runtime.NODEJS_14_X,
  handler: 'sender.handler',
  code: lambda.Code.fromAsset(path.join(__dirname, 'email-sender-lambda')),
  functionName: params.functionName,
  environment: {
    EMAIL_FROM: params.emailFrom,
    EMAIL_TO: params.emailTo,
    EMAIL_BCC: params.emailBcc
  }
});
The lambda was meant to be in email-sender-lambda/sender.js.
I guess it would be good to have a different error/warning message there, or simply to fail the deployment as soon as the file cannot be found during compilation.
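As a stopgap until then, here is a minimal sketch of a synth-time guard, using the names from my snippet above; the explicit check is my own addition, not existing CDK behavior:

  import * as fs from 'fs';
  import * as path from 'path';

  // Fail fast at synth time if the handler file was never checked out
  // (e.g. swallowed by an overly broad .gitignore entry).
  const assetDir = path.join(__dirname, 'email-sender-lambda');
  const handlerFile = path.join(assetDir, 'sender.js');
  if (!fs.existsSync(handlerFile)) {
    throw new Error(`Lambda handler missing: ${handlerFile} - check your .gitignore`);
  }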
I'm seeing this issue when trying to upgrade to CDK v2.
assets.json file:
{
  "version": "16.0.0",
  "files": {
    "4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659": {
      "source": {
        "path": "asset.4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659",
        "packaging": "zip"
      },
      "destinations": {
        "<redacted>-us-west-2": {
          "bucketName": "cdk-hnb659fds-assets-<redacted>-us-west-2",
          "objectKey": "4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659.zip",
          "region": "us-west-2",
          "assumeRoleArn": "arn:${AWS::Partition}:iam::<redacted>:role/cdk-hnb659fds-file-publishing-role-<redacted>-us-west-2"
        }
      }
    },
    "0af5e7a7e0c998e4fa0c980dc1158a921cc5b19392ddc8dc5d92a0a5a62155fc": {
      "source": {
        "path": "ses-validation-stack.template.json",
        "packaging": "file"
      },
      "destinations": {
        "<redacted>-us-west-2": {
          "bucketName": "cdk-hnb659fds-assets-<redacted>-us-west-2",
          "objectKey": "0af5e7a7e0c998e4fa0c980dc1158a921cc5b19392ddc8dc5d92a0a5a62155fc.json",
          "region": "us-west-2",
          "assumeRoleArn": "arn:${AWS::Partition}:iam::<redacted>:role/cdk-hnb659fds-file-publishing-role-<redacted>-us-west-2"
        }
      }
    }
  },
  "dockerImages": {}
}
My cdk.out directory:
cdk.out/tree.json
cdk.out/ses-validation-stack.template.json
cdk.out/asset.4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659/
cdk.out/asset.4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659/aws-sdk-patch/
cdk.out/asset.4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659/aws-sdk-patch/opensearch-2021-01-01.service.json
cdk.out/asset.4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659/aws-sdk-patch/opensearch-2021-01-01.paginators.json
cdk.out/asset.4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659/index.d.ts
cdk.out/asset.4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659/index.js.map
cdk.out/asset.4a575666d1c2c6412590d2a56f328e040a81ad1ef59aecee31ae9b393d05f659/index.js
cdk.out/manifest.json
cdk.out/cdk.out
cdk.out/ses-validation-stack.assets.json
So the asset files are certainly in the cdk.out directory, exactly where they are supposed to be according to the assets.json. I'll try to dig some more into how this differs between CDK v1 and v2...
The assets and directory structure are identical (excluding hashes) with CDK v1, so this seems to be an issue introduced in v2. The error output would imply to me that there's something wrong with how the zip path is being passed somewhere, but the verbose output didn't give much additional insight.
This does appear to be the result of some pathing assumptions. In our setup, we bundle cdk.out and deploy it separately from where it was synthesized. I get this error when doing that, but not if I deploy from the same place where I ran synth.
Reopening issue for visibility and tracking
So I originally thought this issue was exclusive to CDK v2, but it turns out that this is not the case. We hadn't seen it prior to the migration because assets were being cached. This issue is blocking us from doing any deployments to new environments.
Based on my testing, the issue was introduced between v1.133.0 and v1.139.0
Is there a way to disable all asset caching? Is the easiest way to handle this to delete the CDK bootstrap stack as well and recreate it? If I deploy with a good version, destroy my stack, then deploy a bad version, it will still succeed. I can only reproduce the issue with a newly bootstrapped account which is a pain in the ass to debug...
This appears to be a result of the --no-staging flag. I can successfully deploy zip assets when staging is enabled, but not when it is disabled. This is inconsistent with the behavior of other asset types. For example, docker image tarballs upload just fine when staging is disabled. We have staging disabled so that we don't copy our docker image tarballs unnecessarily to the cdk.out directory since they can be quite large. The zip packaging type seems to want to work with --no-staging, but it doesn't.
So I guess the question is: is this a bug or is this intended behavior with poor error messaging?
For reference, when --no-staging is passed asset sources inside of assets.json refer to a non-existent tmp directory like this one: /tmp/jsii-kernel-GqHEA2
This will probably be my last update here. The asset code is abstracted down so many levels that it's head-spinning to try to understand what it is actually doing, and there's little to no documentation on its internals.
Here's what I've gathered: the --no-staging flag makes it so that assets are not copied to cdk.out. The aws-lambda-nodejs package performs a build on Code assets and puts the build output in a folder inside /tmp. It expects these files to be copied into cdk.out and deletes the /tmp folder when it is finished, but when --no-staging is passed, assets.json still points at this now-deleted folder in /tmp, and zipping it fails.
Without any comments from the maintainers, it's hard to say if this behavior is expected or not, whether it should be fixed, or whether to add clearer messaging on the effects. I'd be happy to make any one of these changes, but I need to know that this is something the CDK team is interested in seeing fixed before I dedicate more time to this.
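In case it helps anyone debug, here is a hedged diagnostic sketch (my own, based on the assets.json structure shown earlier in this thread) that checks whether every file asset source in cdk.out actually exists before publishing:

  import * as fs from 'fs';
  import * as path from 'path';

  const outDir = 'cdk.out';
  for (const file of fs.readdirSync(outDir).filter((f) => f.endsWith('.assets.json'))) {
    const manifest = JSON.parse(fs.readFileSync(path.join(outDir, file), 'utf8'));
    for (const [id, asset] of Object.entries<any>(manifest.files ?? {})) {
      // Staged assets use paths relative to cdk.out; with --no-staging the
      // source can be an absolute /tmp path that no longer exists.
      const source = path.resolve(outDir, asset.source.path);
      if (!fs.existsSync(source)) {
        console.error(`Asset ${id} in ${file} points at a missing path: ${source}`);
      }
    }
  }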
I think I know why this is happening in my specific case. I found it occurring (the error appears, but the deploy then succeeds) when I was using the following code:
new s3deploy.BucketDeployment(this, `DeployWithInvalidation1`, {
  sources: [s3deploy.Source.asset('../out', { exclude: ['!*\\.*'] })],
  destinationBucket: rootSiteBucket,
  distribution,
  distributionPaths: ['/*'],
  prune: false,
});
My explanation for why it's occurring is the \\. escape characters, needed to glob on the . character because . is already a special character. It may be that the cdk synth script runs without handling the escape characters, while they work correctly during deployment.
I am not using zip files, just a plain old directory reference to the out dir.
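One hedged data point: to my understanding, CDK's exclude option uses gitignore-style globs, where . is a literal character (unlike regex) and only the leading ! is special as a negation prefix. If that assumption holds, the escape shouldn't be needed and the same deployment could be written as:

  // Sketch assuming gitignore-style glob semantics for `exclude`,
  // where `.` is literal and needs no escape.
  new s3deploy.BucketDeployment(this, `DeployWithInvalidation1`, {
    sources: [s3deploy.Source.asset('../out', { exclude: ['!*.*'] })],
    destinationBucket: rootSiteBucket,
    distribution,
    distributionPaths: ['/*'],
    prune: false,
  });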
Got the fail: 🚨 WARNING: EMPTY ZIP FILE 🚨 message, so I'm heading over here to provide details. This started out of the blue in the last few days: we build the first time, get this warning, build again, and everything works fine. Details below (worth noting we've experienced this with 3 different developers, all of whom have very different setups, OS etc.):
OS: Linux (WSL2) 5.10.102.1-microsoft-standard-WSL2 x86_64 - Ubuntu 20.04.4 LTS
Node: v16.13.0
CDK: 2.18.0 (build 75c90fa)
Package Manager: NPM 8.5.5
We're using the @aws-cdk/aws-lambda-go-alpha construct with the following bundling options:
this.lambda = new goLambda.GoFunction(this, id, {
  entry: props.entry,
  environment: props.environmentVariables,
  architecture: lambda.Architecture.ARM_64,
  vpc: props.vpc,
  timeout: Duration.seconds(300),
  logRetention: props.environment === 'prod' ? logs.RetentionDays.FIVE_YEARS : logs.RetentionDays.ONE_DAY,
  insightsVersion: lambda.LambdaInsightsVersion.VERSION_1_0_119_0,
  tracing: lambda.Tracing.ACTIVE,
  layers: props.layers,
  bundling: {
    cgoEnabled: true,
    goBuildFlags: ['-ldflags "-s -w"', '-trimpath'],
    environment: {
      "GOOS": "linux",
      "GOARCH": "arm64",
      // Pick a cross-compiler per host platform (spreading `false` into an
      // object literal is a no-op, so only the matching entry is added).
      ...(process.platform == "linux") && { "CC": "aarch64-linux-gnu-gcc" },
      ...(process.platform == "darwin") && { "CC": "aarch64-unknown-linux-gnu-gcc" }
    }
  }
})
I was expecting the built bootstrap binary in the zip. I've already deleted the contents of cdk.out, so I'm not sure what the actual contents were.
Finally, I don't think this is reproducible; as I said, it's hit and miss when it happens. Let me know if I can provide any other info or help with troubleshooting.
This just happened with us as well using the aws_lambda_python_alpha.PythonFunction construct.
It didn't happen again after deleting the cdk.out directory and re-synthesizing.
OS: macOS Monterey v12.2.1
Node: v16.14.2
CDK: 2.10.0 (build e5b301f)
Package Manager: pip
I had this issue occurring with aws_lambda_python_alpha.PythonFunction. Somehow I had got into a state where the cdk.out/asset.{hash}/ folder had the correct files, but the corresponding ZIP file uploaded to the CDK S3 artifacts bucket was empty. It's possible the empty zip was uploaded because I cancelled cdk deploy at the wrong time.
I was able to resolve my error by deleting the empty ZIP file from S3 and deleting cdk.out
OS: Ubuntu 20.04.4 LTS
Node: v14.18.3
CDK: 2.8.0 (build 8a5eb49)
Package Manager: pip
Creating lambda layer:
layer = LayerVersion(
    scope=self,
    id='ExampleLayer',
    code=Code.from_docker_build(path=f'{root}/lambda_layer')
)
Which contains this Dockerfile:
FROM node:16.13.2
RUN ls
RUN mkdir -p /asset/bin/
RUN cp -L /usr/local/bin/node /asset/bin/node
RUN npm install --prefix /asset/bin aws-cdk
# The next line (creating the symbolic link) breaks the deployment (not the build):
RUN ln -s /asset/bin/node_modules/aws-cdk/bin/cdk /asset/bin/cdk
RUN /asset/bin/cdk --version
RUN /asset/bin/node --version
The line where it creates a symbolic link (ln -s) breaks AWS CDK, and it always produces an empty zip.
Also, on a fresh deployment (fresh build) I always get this error (when using symlinks):
AwsCdkServerlessStack: deploying...
[0%] start: Publishing 816a3bd516eda114880e099f1dc8b2cc022b7f54f95537aab836507dac214120:current
[0%] start: Publishing 4964d66ada9b47b1aa20846dd0bb38d6614bcdc356fb538b8d1e74a9c8a3d862:current
[50%] success: Published 4964d66ada9b47b1aa20846dd0bb38d6614bcdc356fb538b8d1e74a9c8a3d862:current
(node:1126) UnhandledPromiseRejectionWarning: Error: ENOENT: no such file or directory, stat '/Users/laimonassutkus/Desktop/AwsCdkServerless/SourceCode/cdk.out/asset.816a3bd516eda114880e099f1dc8b2cc022b7f54f95537aab836507dac214120/bin/cdk'
(Use `node --trace-warnings ...` to show where the warning was created)
(node:1126) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:1126) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
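For anyone hitting this, a possible workaround sketch (unverified, and in TypeScript rather than the Python above): either replace the ln -s with a plain cp in the Dockerfile, or stage a pre-built layer directory with Code.fromAsset, which exposes a followSymlinks option for materializing link targets into the zip. layerDir below is a hypothetical local path:

  import { SymlinkFollowMode } from 'aws-cdk-lib';
  import { Code, LayerVersion } from 'aws-cdk-lib/aws-lambda';
  import { Construct } from 'constructs';

  // `layerDir` is a hypothetical local directory containing the pre-built
  // layer contents (including the symlinked cdk binary).
  function makeLayer(scope: Construct, layerDir: string): LayerVersion {
    return new LayerVersion(scope, 'ExampleLayer', {
      code: Code.fromAsset(layerDir, {
        // Copy symlink targets into the asset instead of staging the links.
        followSymlinks: SymlinkFollowMode.ALWAYS,
      }),
    });
  }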
A message in the CodeBuild step of our CDK pipeline told me to come here. It's happening to me now as well. Docker image: lambda.Runtime.NODEJS_14_X.bundlingImage. The asset should contain Lambda code, but it's empty.
Using CDK v1.149.0
Is this file the culprit potentially?
assembly-CORESharedServicesCodePipeline-test-us-east-1-Deploy/.cache/03f68eef42c44051ca20172644612baf89e32096a165aedade31e26caefa45ca.zip
It seems like it may be... I deleted that file and all its versions from S3, and added a command to the Lambda Docker build container to delete the cdk.out folder before running cdk synth again, but to no avail.
I'm sure this isn't very helpful, but I'm starting to see this more and more often as the number of lambdas increases:
OS: OSX 12.3.1 (21E258)
Node.js version: NODEJS_14_X
CLI version: 2.22.0
Package manager: NPM 6.14.16
What is the asset supposed to contain: AppSync Lambda handler
Reproducible: Nope, it's happening randomly.
My issue was resolved... somehow on the initial deploy, Docker didn't have access to the node_modules in Lambda and empty assets were uploaded. I deleted the cdk.out folder locally, tore down the stacks and the pipeline, re-uploaded, and was good to go.
I saw the error simply when I had a LayerVersion pointing at a folder with subfolders but no files.
I had the same symptom today on a large Node 16 project using CDK directly (no Serverless or other framework). Deleting /tmp/cdk.out resolved the issue.
I had this today, on a pretty small project:
OS version: Debian GNU/Linux 11
Nodejs version: 16.15.1
CLI version: 2.26.0 (build a409d63)
package manager: npm
what the asset is supposed to contain: compiled JS from source TypeScript files
reproducible: went away after I deleted my cdk.out directory
Anecdotally, I've been doing a lot of deploys, and had quite a lot of asset folders by the time this happened.
@LeeMartin77 & @thovden was it persistent until you deleted the cdk.out dir? It's always been a one-off when I've had it (deploying again without changing anything will pass).
I wonder if deleting that dir will reduce how often it happens.
For me it was persistent before deleting the cdk.out folder.
I ended up clearing my folder before trying again, but can confirm clearing out the folder did make it work without error.
I also just got this issue - it happened after I ran a cdk destroy, and when I did a deploy again I got this message.
Mac OS 12.1.4, MacBook Pro 14" 2021 M1 Pro
NPM version: 8.12.1
Node version: 18.4
CDK version: 2.31.1
I am building a CDK project entirely in TypeScript; the Lambdas are TypeScript compiled with esbuild.
I removed the cdk.out folder and tried again and that worked fine.