amplify-cli
Multi-env layers always update without changes; hardcoded environments make productive work impossible
Before opening, please confirm:
- [X] I have installed the latest version of the Amplify CLI (see above), and confirmed that the issue still persists.
- [X] I have searched for duplicate or closed issues.
- [X] I have read the guide for submitting bug reports.
- [X] I have done my best to include a minimal, self-contained set of instructions for consistently reproducing the issue.
How did you install the Amplify CLI?
npm
If applicable, what version of Node.js are you using?
16.10.0
Amplify CLI Version
7.6.4
What operating system are you using?
Mac
Amplify Categories
function
Amplify Commands
env, pull, push, status, update
Describe the bug
Our plan was to use Lambda layers to improve productivity and share code across multiple functions. We are working in a multi-env project, where we are facing a lot of issues with layers.
- When switching between environments via `amplify env checkout xyz`, all Lambda layer versions get updated to new versions with new descriptions/timestamps, without any change to the layer's source code. So when a team member switches from e.g. dev to the sandbox1 environment, all layer files are modified and need to be committed with git, without any change in the code. Because we are using multiple layers, tracking all the files and checking that everything is fine is a lot of work.
- We are hosting the app on Amplify as well, and every time we deploy a new version triggered by a change on GitHub, the backend is built again in the Amplify deployment workflow, which updates all layers again. So every time we push our code to git and Amplify deploys it, the backend is, due to the automatic layer updates, on a different version than our local Amplify project. This means if we check the status of the Amplify environment with `amplify status`, all functions with layers, and the layers themselves, show the `Update` status, because local and cloud backends are always different.
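The always-dirty diff described above can be sketched in a few lines. The record layout and the timestamp-in-`description` scheme are assumptions inferred from the observed behavior, not the CLI's actual implementation:

```python
# Sketch (assumption): if layer metadata embeds a checkout timestamp in its
# `description`, two checkouts of identical source always differ, so git and
# `amplify status` report a change even though the code is untouched.
import hashlib

def layer_parameters(source_hash, checkout_stamp):
    # Mimics a parameters.json-style record whose description carries a timestamp
    return {"description": f"Updated layer version {checkout_stamp}",
            "sourceHash": source_hash}

code_hash = hashlib.sha256(b"unchanged layer source").hexdigest()
dev = layer_parameters(code_hash, "2022-01-10T09:00:00Z")
sandbox1 = layer_parameters(code_hash, "2022-01-10T09:05:00Z")  # later checkout

print(dev["sourceHash"] == sandbox1["sourceHash"])  # True: code is identical
print(dev == sandbox1)                              # False: metadata still differs
```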
These continuously updating layers and git modifications by the Amplify CLI make working nearly impossible. When a team member wants to merge code from e.g. dev to staging, the following workflow is necessary:

1. `git checkout staging`
2. `git merge dev` -> Sometimes this causes conflicts in git with layer versions, even though we didn't make any change to the layer config.
3. `amplify env checkout staging` -> Now in every layer the `parameters.json` file gets automatically modified with the current timestamp in the `description` property.
4. `amplify push` -> The Amplify CLI detects changes in layers, but we didn't make any changes. The next update of the layer. Now we are out of sync with the local layer versions.
5. `git add . && git commit -m "merged from dev in staging" && git push` -> This fires the CD pipeline in the Amplify console. At the build step Amplify builds the backend again and updates the Lambda layers again. Now local and cloud layers are completely different versions, but we didn't change anything in their code!
6. `amplify pull` -> To get the updated layer versions from the build pipeline back into the local project.
7. `git add . && git commit -m "fetching new layer versions"` -> Committing the new versions.

Now if we pushed again to the cloud, we would begin at step 5 again.
With this incredibly bad developer experience, we have to delete all layers and copy the desired code directly into the different Lambda functions. This is very time consuming, but it causes our team fewer headaches than using layers.
Expected behavior
When working with different environments and team members, the environment shouldn't be hardcoded, causing changes in git. Deploying to the cloud should not update all layers again, which currently leaves the local configuration always wrong after deploying the app. Deploying to the cloud and working in different environments shouldn't trigger any change in git-tracked files. Changes should, of course, only appear when we really change the code of the layer, not when we only switch the environment without touching the layer.
Reproduction steps
- Create a new function with `amplify add function`
- Create a new layer with `amplify add function`
- Attach the layer to the function with `amplify update function`
- Create a new env with `amplify add env test`
- Switch to the env with `amplify env checkout test`. You can see that the environment is hardcoded and was modified automatically by the CLI.
- Add hosting in the console for CI/CD
- Push your changes to the cloud
- Wait for the deployment to finish
- Check the status with `amplify status`. You can see that the function status is `Update`, because the CI/CD pipeline built the backend again, updated the layer, and is now one version ahead of the local one.
GraphQL schema(s)
# Put schemas below this line
Log output
# Put your logs below this line
Additional information
No response
Hey @chris-mds :wave: thanks for raising this! I have been able to reproduce this behavior, where checking out an Amplify environment produces updates to the layer's CloudFormation template and updates the layer's `parameters.json` file as well; however, I am not able to reproduce the `Update` status in the attached functions. To clarify, when attaching a function to the layer, are you selecting the latest Lambda layer version?
Here is a sample diff after switching environments: https://github.com/josefaidt/9386/commit/86a6995a59e179c127470055d59a3c9c15393315
And if we then check out the same environment again, we will see changes to the `team-provider-info.json` file: https://github.com/josefaidt/9386/commit/2c997403ec1ed2deda6e949f2f8bdbbaffd676d6
Hi @josefaidt,
Thank you for your answer. Yes, I am always selecting the latest layer version.
Is it supposed to be updated every time? In my opinion the environment should be declared as a variable, like in every other category, to prevent changes showing up in git.
I also found the cause of the `Update` status: it is due to the CI/CD pipeline from hosting.
When creating/updating a function in my local environment, my machine creates the `package-lock.json` file with lockfile version 2, because we are using npm 7. When the CI/CD pipeline builds the backend again, it uses another npm version, so the `package-lock.json` is created with lockfile version 1. Now the local environment and the cloud environment are different, and we have our `Update` status.
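The lockfile drift described here can be checked mechanically. A minimal sketch, where the trimmed-down JSON bodies are hypothetical examples of what npm 7 (`lockfileVersion: 2`) and npm 6 (`lockfileVersion: 1`) emit:

```python
# Sketch: compare the lockfileVersion of a locally generated package-lock.json
# against one regenerated by a CI build with a different npm version.
import json

local_lock = json.loads('{"name": "my-layer", "lockfileVersion": 2}')  # npm 7
ci_lock = json.loads('{"name": "my-layer", "lockfileVersion": 1}')     # npm 6

drift = local_lock["lockfileVersion"] != ci_lock["lockfileVersion"]
print("lockfile drift:", drift)  # True: the two environments disagree
```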
How could we use layers without having these updating issues?
Hey @chris-mds, thank you for that clarification! Ideally the layer version would not be updated when checking out an existing environment, given that no changes were made to the layer. For the npm issue, I would recommend updating npm as part of your buildspec (`amplify.yml`) to match the npm version used locally. This can be accomplished by adding an additional step to the backend build phase:
```yaml
version: 1
backend:
  phases:
    # IMPORTANT - Please verify your build commands
    build:
      commands:
        - npm i -g npm@7
        - '# Execute Amplify CLI with the helper script'
        - amplifyPush --simple
        # ...
```
Marking as a bug 🙂
Hi @josefaidt , Thank you for your fast reply! I will test your suggestion on a new test environment and will give feedback soon.
related to https://github.com/aws-amplify/amplify-cli/issues/8216
Thanks for catching the Lambda layer increment. We are sorry that it bothers you. Here's what we found about the issue:
- It happens on Amplify Hosting when deploying a new commit from Git
- It happens locally if you pull and push with the following steps:
`amplify pull --appId <app-id> --envName <env>`
`amplify push --force`
Note: The local commands would create a new version and delete all the old versions.
It creates a new layer because the Amplify CLI detected a code change (code). This looks intentional, because the CLI prints "Content changes in Lambda layers detected." (code) when it detects a layer change, but regardless of whether there is any layer change, it always creates a new layer (code).
After more investigation, I found it's because if we create a new folder and `amplify pull` the resource, the `previousHash` would be empty (code, code). `previousHash` (`latestPushedVersionHash`) is created locally by `amplify push`, so a new folder doesn't have it.
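The mechanism described above can be sketched as a simple hash comparison. The function names and the directory-hashing scheme are assumptions for illustration, not the CLI's actual code:

```python
# Sketch (assumption): content-hash based change detection for a layer folder.
# When the previously pushed hash is unknown (e.g. a fresh `amplify pull` into a
# new folder), the comparison cannot prove "unchanged", so a new layer version
# is published even though the source is identical.
import hashlib
import tempfile
from pathlib import Path

def layer_hash(root: Path) -> str:
    # Deterministic hash over relative file paths and their contents
    h = hashlib.sha256()
    for f in sorted(root.rglob("*")):
        if f.is_file():
            h.update(f.relative_to(root).as_posix().encode())
            h.update(f.read_bytes())
    return h.hexdigest()

def needs_new_version(current_hash, previous_hash):
    if previous_hash is None:   # no latestPushedVersionHash recorded locally
        return True             # -> always publishes a new layer version
    return current_hash != previous_hash

with tempfile.TemporaryDirectory() as d:
    root = Path(d)
    (root / "index.js").write_text("module.exports = {};")
    h = layer_hash(root)
    print(needs_new_version(h, None))  # True: fresh pull, previous hash missing
    print(needs_new_version(h, h))     # False: source unchanged
```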
It cannot be resolved by a quick fix and needs a re-design of how we should handle this situation.
It would be very nice if it did not always redeploy the layer(s) when no change has happened to them. It would shave a lot of time off the total deployment time. Right now, all Lambdas that use the layer(s) also redeploy all the time.