
AWS Error: Function code combined with layers exceeds the maximum allowed size of 262144000 bytes

Open ptimson opened this issue 3 years ago • 19 comments

Issue

Just noting this down for people who hit this problem. Not sure if it's a bug or expected behaviour.

I tried to introduce serverless-layers to my existing project and got:

An error occurred: *** - Function code combined with layers exceeds the maximum allowed size of 262144000 bytes

The zip of the code was now only 40 KB and the layer 40 MB (150 MB decompressed). However, the previous bundle size was pretty large, and I think CloudFormation was trying to apply the layer to the function before updating the function code, which resulted in the error above.

Workaround

I deleted the stack and re-created it. An alternative (although not tested): update the code to something with no imports and deploy. Once that succeeds, try adding layers to the project.

ptimson avatar Sep 13 '20 21:09 ptimson

Thanks for this @ptimson. I just hit this exact issue and I'm curious if anyone else who has seen this has other workarounds. Deleting my stack is not easily done at this time given its usage in a production environment.

joelash avatar Mar 22 '21 14:03 joelash

@joelash Are you able to deploy after removing a large library first? I don't really have another solution; I assume AWS haven't fixed this yet. Like you asked, I'd be keen to know if anyone else has another solution, but I wouldn't hold out too much hope!

ptimson avatar Mar 22 '21 15:03 ptimson

@ptimson I found a different workaround that might be slightly better, but it still requires downtime of the service. I commented out the following lines after each function:

layers: 
  - {Ref: PythonRequirementsLambdaLayer}

for all of my functions. This was my one change and I deployed like this once. Once that deployed successfully I added those lines back and was able to deploy again.

joelash avatar Mar 23 '21 02:03 joelash
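joelash's workaround above can be sketched in `serverless.yml` like this (the function name `process` and handler path are hypothetical, not from the thread):

```yaml
# Step 1: comment out the layer reference on every function and deploy once,
# so the currently deployed function package shrinks below the quota.
functions:
  process:
    handler: handler.main
    # layers:
    #   - {Ref: PythonRequirementsLambdaLayer}
```

Once that deploy succeeds, uncomment the `layers` block and deploy a second time.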

@ptimson @joelash any luck with this? I'm facing the same thing, and both:

  • removing imports
  • removing the layers definition from the lambda functions, deploying, then adding them back and redeploying

didn't solve the problem!

karim-awd avatar Apr 07 '21 01:04 karim-awd

We faced the exact same problem when using the UnitedIncome/serverless-python-requirements. Since our workload is being used in production by many many users we had to slim down our dependencies in a previous deployment (and the requirements layer .zip was very helpful to figure out the large ones) and then push the PR with the Lambda Layer.

I don't think there is any other way of doing it with the current inner workings of CloudFormation. We also think this is the way CF works: first attach the new Lambda Layer to the current Lambda Function, then update the code.

flpStrri avatar Apr 27 '21 16:04 flpStrri

Any solution? Tried a few things and none of them worked:

  1. Removed the existing layers and redeployed - CF got created, but adding the layers back throws the same error
  2. Added the S3 zip location for the layers and referenced them while creating the layers. Still getting the same error. Any help would be highly appreciated

Soni1712 avatar Oct 15 '21 13:10 Soni1712

Did anyone work out how to get around this? Seeing it with Node.js too.

rcoundon avatar Mar 22 '22 19:03 rcoundon

@rcoundon and @Soni1712 did you see my comment above? It worked, but it created a bit of downtime during the deploys

joelash avatar Mar 22 '22 19:03 joelash

@rcoundon and @Soni1712 did you see my comment above? It worked, but it created a bit of downtime during the deploys

Yes, I did, thank you. It was basically what I ended up with but was curious if anyone had a better approach.

rcoundon avatar Mar 22 '22 22:03 rcoundon

@ptimson I found a different workaround that might be slightly better, but it still requires downtime of the service. I commented out the following lines after each function:

layers: 
  - {Ref: PythonRequirementsLambdaLayer}

for all of my functions. This was my one change and I deployed like this once. Once that deployed successfully I added those lines back and was able to deploy again.

This worked! Thanks @joelash, but can you explain why? Technically, in both cases, both zips (layers and functions) get uploaded, so I'm confused about why this works :)

haktan-suren avatar Jun 05 '22 19:06 haktan-suren

@haktan-suren this is my best recollection of why this works, but keep in mind it's been 14ish months since I found this workaround. Let me know if this makes sense or not.

I believe what this did was cause your functions to be deployed without dependencies. Since no function references {Ref: PythonRequirementsLambdaLayer}, the layer does not get deployed. This made the function package much smaller, since all dependencies were missing.

I think the way the deploy works is that the layers get deployed first, so the size is computed as newLayers + currentlyDeployedFunctions. The issue came about because the currently deployed functions also had those dependencies packaged. By deploying the functions without dependencies and without layers, you get a smaller size for the "currentlyDeployedFunctions". Then, when you do the secondary deploy using the layers, the computed size is accurate.

joelash avatar Jun 08 '22 02:06 joelash
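joelash's explanation above can be sketched as simple arithmetic. The byte figures below are illustrative (only the 40 KB code zip, the 150 MB decompressed layer, and the 262144000-byte quota come from the thread; the old bundle size is a hypothetical stand-in for "pretty large"):

```python
# Sketch of the size check CloudFormation appears to apply during an update:
# the NEW layer is validated against the CURRENTLY DEPLOYED function code,
# not against the new (smaller) code that is about to replace it.
LIMIT = 262_144_000  # 250 MB unzipped, code + layers (AWS Lambda quota)

def exceeds_limit(function_unzipped_bytes: int, layer_unzipped_bytes: int) -> bool:
    """True if function code combined with layers exceeds the quota."""
    return function_unzipped_bytes + layer_unzipped_bytes > LIMIT

old_function = 200_000_000  # previously deployed bundle with deps baked in (illustrative)
new_function = 40_000       # 40 KB code-only bundle from the report
new_layer = 150_000_000     # 150 MB decompressed layer from the report

# The new layer is checked against the old, still-deployed code...
print(exceeds_limit(old_function, new_layer))  # True -> deploy fails
# ...even though the intended end state is well under the limit.
print(exceeds_limit(new_function, new_layer))  # False
```

This is why a two-phase deploy works: the intermediate deploy shrinks the "currently deployed" side of the sum before the layer is ever attached.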

Hi @joelash, I got some time to investigate this issue as well; I'm still not 100% sure why it's happening. But I reckon you guys could try using --no-compile to avoid generating *.pyc files. It would considerably reduce the layer size.

custom:
  serverless-layers:
    dependenciesPath: requirements.txt
    packageManagerExtraArgs: '--no-compile --no-color'
    compatibleRuntimes: ["python3.8"]

agutoli avatar Jun 24 '22 16:06 agutoli

For us, deleting the CloudFormation stack and redeploying worked. I think what @ptimson wrote is correct:

I think CloudFormation was trying to first apply the layer to the function before updating the function code which resulted in the error above.

This is very easy to reproduce. (1) Deploy a lambda with an unzipped size close to 250MB (using the serverless-python-requirements plugin). (2) Deploy the same function, but now instead of serverless-python-requirements use serverless-layers.

I think a combination of numpy, pyarrow, pandas, pycountry, and pydantic will be enough to get to 250 MB.

luksfarris avatar Jul 06 '22 16:07 luksfarris
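When reproducing or debugging this, it helps to know the unzipped size of an artifact before deploying. A generic sketch (not part of either plugin; the example paths in the comments are hypothetical):

```python
import zipfile

LIMIT = 262_144_000  # AWS Lambda quota: 250 MB unzipped, code + layers combined

def unzipped_size(zip_path: str) -> int:
    """Sum of the uncompressed file sizes inside a .zip artifact."""
    with zipfile.ZipFile(zip_path) as zf:
        return sum(info.file_size for info in zf.infolist())

# Example usage: compare a function package plus its layer against the quota.
# total = unzipped_size(".serverless/my-service.zip") + unzipped_size("layer.zip")
# print(total, total > LIMIT)
```

This mirrors what `unzip -l` reports, and is also handy for spotting the largest dependencies inside a requirements layer zip.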

+1

Solution that worked for me:

  • Remove/comment lambda layer refs in function declaration
  • Deploy
  • Add/uncomment lambda layer refs in function declaration

blochmat avatar Aug 24 '22 10:08 blochmat

+1

Solution that worked for me:

  • Remove/comment lambda layer refs in function declaration
  • Deploy
  • Add/uncomment lambda layer refs in function declaration

Thank you – this worked for me, too.

pgib avatar May 03 '23 03:05 pgib

Solution that worked for me:

  • Remove/Comment lambda layer refs in function declaration
  • Deploy
  • Add/Uncomment lambda layer refs in function declaration

Just wanted to say Thank You as this solved my problem today as well.

jleven avatar Jul 13 '23 22:07 jleven

Same, this worked for chalice too! <3

spookyuser avatar Oct 02 '23 08:10 spookyuser

+1

Solution that worked for me:

  • Remove/comment lambda layer refs in function declaration
  • Deploy
  • Add/uncomment lambda layer refs in function declaration

It's sad that this still appears to be the only solution. Makes creating new deployments a much more complicated ordeal now that you need to choreograph new layer uploads and function references as two distinct steps.

GeorgeKaraszi avatar Oct 23 '23 18:10 GeorgeKaraszi

Have run in to this a couple of times now and wanted to share the workaround that worked for us.

We are deploying using the Serverless Framework and essentially what we do is rename the Lambda in the config file. This creates a brand new Lambda, instead of trying to modify the existing Lambda. We are running the Lambda in a Step Function so we update that to use the new Lambda name.

This all seems to work, although with minor disruption while the changeover happens, I think due to the way things align in Step Functions. It is preferable to removing and re-adding layers or doing a remove/redeploy, as our deployments take several minutes and would result in considerable downtime.

Anyway, this might be an option for anyone in this situation and might work with other triggers.

jeffski avatar Apr 30 '24 04:04 jeffski
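jeffski's rename approach can be sketched like this (function keys and handler path are hypothetical). Renaming the function key makes CloudFormation create a brand-new Lambda resource, so no existing oversized function is ever updated in place:

```yaml
functions:
  # Before: the key was "processor"; its deployed package sat near the quota,
  # so attaching the new layer to it failed. Renaming the key forces a new
  # function, created fresh with the small code bundle plus the layer.
  processorV2:
    handler: handler.main
    layers:
      - {Ref: PythonRequirementsLambdaLayer}
```

Anything that references the function by name (here, the Step Function) then needs updating to point at the new name.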