serverless-python-requirements
ENOENT: no such file or directory, scandir '.serverless/requirements'
When I run sls deploy, I get the following error multiple times in a row:

Error: ENOENT: no such file or directory, scandir '.serverless/requirements'

Each time, I verify that .serverless/requirements does in fact exist; it is a symlink to a cache folder of pip packages in my home directory. If I run sls deploy 3 or 4 times without changing anything, on the 3rd or 4th try it runs with no error. I am running on Win10/WSL2 but have never had issues with symlinks.
Serverless: Zipping required Python packages...
Error --------------------------------------------------
Error: ENOENT: no such file or directory, scandir '.serverless/requirements'
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: linux
Node Version: 12.18.1
Framework Version: 1.78.1 (standalone)
Plugin Version: 3.7.0
SDK Version: 2.3.1
Components Version: 2.33.4
We're seeing this exact same issue with the following pythonRequirements section in our YML:
pythonRequirements:
  dockerizePip: non-linux
  zip: true
  # Get rid of unnecessary package files
  slim: true
  # But keep necessary binary files to avoid "ELF load command address/offset not properly aligned"
  strip: false
It fails and fails in GitLab, but occasionally (every third try or so) it works. EDIT: The deploy also works consistently when I run it locally.
I had the same exact issue... hope someone can help
Any update on this?
Issue

NOTE: Scroll down for workaround/solution.

Came across this issue in my GitLab CI pipeline when running this command in a GitLab build job:

  - sls package --region $AWS_REGION --stage $CI_ENVIRONMENT_NAME --package .serverless

and passing the created artifact to a GitLab deploy job which runs:

  - sls deploy --region $AWS_REGION --stage $CI_ENVIRONMENT_NAME --package .serverless
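For context, here is a minimal sketch of the two-job .gitlab-ci.yml layout being described. The job names, image, and artifact settings are my assumptions; only the two sls commands come from the report above:

stages:
  - build
  - deploy

package:
  stage: build
  image: node:16  # assumed image with the Serverless CLI installed
  script:
    - sls package --region $AWS_REGION --stage $CI_ENVIRONMENT_NAME --package .serverless
  artifacts:
    paths:
      - .serverless/  # the symlinks inside this folder are what break downstream

deploy:
  stage: deploy
  image: node:16  # assumed
  script:
    - sls deploy --region $AWS_REGION --stage $CI_ENVIRONMENT_NAME --package .serverless

With a layout like this, the deploy job fails with: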
Serverless Error ----------------------------------------
Cannot read file artifact ".serverless/pythonRequirements.zip": ENOENT: no such file or directory, stat '.serverless/pythonRequirements.zip'
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information ---------------------------
Operating System: linux
Node Version: 16.3.0
Framework Version: 2.70.0 (local)
Plugin Version: 5.5.3
SDK Version: 4.3.0
Components Version: 3.18.1
Why this happens

According to the serverless-python-requirements documentation, caching is enabled by default. With caching on, the plugin turns (among other things) pythonRequirements.zip into a symlink pointing into the cache. You can see this if you download the artifact to your local machine; in the GitLab GUI you can't tell which entries are symlinks:
total 96
drwxr-xr-x@ 9 user group 288 Jan 7 10:08 .
drwxr-xr-x@ 3 user group 96 Jan 7 10:17 ..
-rw-r--r--@ 1 user group 4986 Jan 7 10:08 project.zip
-rw-r--r--@ 1 user group 2077 Jan 7 10:08 cloudformation-template-create-stack.json
-rw-r--r--@ 1 user group 10379 Jan 7 10:08 cloudformation-template-update-stack.json
lrwxrwxrwx@ 1 user group 133 Jan 7 10:08 pythonRequirements.zip -> /home/user/.cache/serverless-python-requirements/ff8b23b818b507958ef9b61047f665cde924fc7342f352dc2c6c1487333df041_x86_64_slspyc.zip
lrwxrwxrwx@ 1 user group 129 Jan 7 10:08 requirements -> /home/user/.cache/serverless-python-requirements/ff8b23b818b507958ef9b61047f665cde924fc7342f352dc2c6c1487333df041_x86_64_slspyc
-rw-r--r--@ 1 user group 131 Jan 7 10:08 requirements.txt
-rw-r--r--@ 1 user group 20447 Jan 7 10:08 serverless-state.json
The result is that the symlinks, rather than the files themselves, get passed on as artifacts to the next job. And since it's a different job, the .cache directory doesn't exist there, which results in the file-not-found error.
Solution / Workaround

You can disable caching by setting both useDownloadCache and useStaticCache to false:

custom:
  pythonRequirements:
    useDownloadCache: false
    useStaticCache: false

This fixed it for me.
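If you would rather keep caching for faster builds, one alternative worth trying (my suggestion, not from this thread) is to dereference the symlinks in the build job before the artifact is uploaded, so the real files travel between jobs. A sketch, assuming a POSIX shell on the runner:

package:
  stage: build
  script:
    - sls package --region $AWS_REGION --stage $CI_ENVIRONMENT_NAME --package .serverless
    # cp -RL copies the symlink targets as real files, making the artifact self-contained
    - cp -RL .serverless .serverless-resolved
    - rm -rf .serverless
    - mv .serverless-resolved .serverless
  artifacts:
    paths:
      - .serverless/

This trades a larger artifact for keeping the pip cache on the build runner.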
I did that and I still have this problem.
For what it's worth, in my original report of this issue I was running deploys manually on my local machine. There was no CI involved and it seemed to be purely an issue with the plugin.
I am also experiencing this
╰─ sls deploy
Running "serverless" from node_modules
Deploying ...omitted... to stage dev (us-east-1)
WarmUp: Creating warmer "default" to warm up 1 function
✖ Stack ...omitted... failed to deploy (9s)
Environment: darwin, node 14.17.4, framework 3.21.0 (local) 3.21.0v (global), plugin 6.2.2, SDK 4.3.2
Credentials: Local, "default" profile
Docs: docs.serverless.com
Support: forum.serverless.com
Bugs: github.com/serverless/serverless/issues
Error:
[OperationalError: ENOENT: no such file or directory, scandir '.serverless/requirements'] {
  cause: [Error: ENOENT: no such file or directory, scandir '.serverless/requirements'] {
    errno: -2,
    code: 'ENOENT',
    syscall: 'scandir',
    path: '.serverless/requirements'
  },
  isOperational: true,
  errno: -2,
  code: 'ENOENT',
  syscall: 'scandir',
  path: '.serverless/requirements'
}
For anyone coming across this: it seems that if slimPatternsAppendDefaults is set to false, this error occurs. After hours of trying, I set slimPatternsAppendDefaults: true and commented out my slimPatterns section, and was able to deploy. Very frustrating.
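For reference, the configuration shape being described; the slimPatterns entries below are illustrative placeholders, not the commenter's actual patterns:

custom:
  pythonRequirements:
    slim: true
    slimPatternsAppendDefaults: true  # keep the plugin's default strip patterns
    # Commented out per the workaround above; the pattern here is hypothetical:
    # slimPatterns:
    #   - '**/*.egg-info*'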
What helped me was adding requirements.txt to the service folder (next to serverless.yml). I am deploying via Serverless's CI/CD (by committing to the branch, not from a local machine).

UPDATE: when the solution above did not work, the only setup that worked was this in serverless.yml:
pythonRequirements:
  layer: false
  usePipenv: false
  useStaticCache: false
  useDownloadCache: false
  slim: false
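For completeness, that block lives under custom in serverless.yml, next to the plugin registration. A minimal surrounding sketch; the service name and runtime are placeholders I've assumed, not taken from the comment above:

service: my-service  # assumed name
provider:
  name: aws
  runtime: python3.9  # assumed runtime
plugins:
  - serverless-python-requirements
custom:
  pythonRequirements:
    layer: false
    usePipenv: false
    useStaticCache: false
    useDownloadCache: false
    slim: false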