serverless-python-requirements
Issue with serverless deploy, requirements.txt not found
I have the following serverless.yml:
# Welcome to Serverless!
#
# This file is the main config file for your service.
# It's very minimal at this point and uses default values.
# You can always add more config options for more control.
# We've included some commented out config examples here.
# Just uncomment any of them to get that config option.
#
# For full config options, check the docs:
# docs.serverless.com
#
# Happy Coding!
service: awsTest
# You can pin your service to only deploy with a specific Serverless version
# Check out our docs for more details
# frameworkVersion: "=X.X.X"
plugins:
  - serverless-python-requirements
custom:
  pythonRequirements:
    invalidateCaches: true
    dockerizePip: true
    dockerImage: lambda-python3.6-with-mysql-build-deps
provider:
  name: aws
  runtime: python3.6
  role: arn:aws:iam::443746630310:role/EMR_DefaultRole
# you can overwrite defaults here
# stage: dev
# region: us-east-1
# you can add statements to the Lambda function's IAM Role here
# iamRoleStatements:
# - Effect: "Allow"
# Action:
# - "s3:ListBucket"
# Resource: { "Fn::Join" : ["", ["arn:aws:s3:::", { "Ref" : "ServerlessDeploymentBucket" } ] ] }
# - Effect: "Allow"
# Action:
# - "s3:PutObject"
# Resource:
# Fn::Join:
# - ""
# - - "arn:aws:s3:::"
# - "Ref" : "ServerlessDeploymentBucket"
# - "/*"
# you can define service wide environment variables here
# environment:
# variable1: value1
# you can add packaging information here
#package:
# include:
# - include-me.py
# - include-me-dir/**
# exclude:
# - exclude-me.py
# - exclude-me-dir/**
functions:
  emotion-analysis:
    handler: handler.emotionAnalysis
    events:
      - http:
          path: emotionAnalysis
          method: post
  audio-analysis:
    handler: handler.audioAnalysis
    events:
      - http:
          path: vokaturiAnalysis
          method: post
# The following are a few example events you can configure
# NOTE: Please make sure to change your handler code to work with those events
# Check the event documentation for details
# events:
# - http:
# path: users/create
# method: get
# - s3: ${env:BUCKET}
# - schedule: rate(10 minutes)
# - sns: greeter-topic
# - stream: arn:aws:dynamodb:region:XXXXXX:table/foo/stream/1970-01-01T00:00:00.000
# - alexaSkill
# - iot:
# sql: "SELECT * FROM 'some_topic'"
# - cloudwatchEvent:
# event:
# source:
# - "aws.ec2"
# detail-type:
# - "EC2 Instance State-change Notification"
# detail:
# state:
# - pending
# - cloudwatchLog: '/aws/lambda/hello'
# - cognitoUserPool:
# pool: MyUserPool
# trigger: PreSignUp
# Define function environment variables here
# environment:
# variable2: value2
# you can add CloudFormation resource templates here
#resources:
# Resources:
# NewResource:
# Type: AWS::S3::Bucket
# Properties:
# BucketName: my-new-bucket
# Outputs:
# NewOutput:
# Description: "Description for the output"
# Value: "Some output value"
and the requirements.txt:
cycler==0.10.0
decorator==4.1.2
imutils==0.4.3
Keras==2.1.1
matplotlib==2.1.0
networkx==2.0
numpy==1.13.3
olefile==0.44
opencv-python==3.3.0.10
pandas==0.21.0
Pillow==4.3.0
pyparsing==2.2.0
python-dateutil==2.6.1
pytz==2017.3
PyWavelets==0.5.2
PyYAML==3.12
scikit-image==0.13.1
scikit-learn==0.19.1
scipy==1.0.0
six==1.11.0
sklearn==0.0
dlib==19.7.0
I am using this Dockerfile to compile dlib and boost:
FROM amazonlinux:latest
RUN touch /var/lib/rpm/*
RUN yum install -y yum-plugin-ovl && cd /usr/src
#RUN yum check-update
#RUN rpm --rebuilddb
RUN yum history sync
RUN yum install -y wget
RUN yum install -y sudo
RUN yum install -y sudo && sudo yum install -y yum-utils && sudo yum groupinstall -y development
RUN sudo yum install -y https://centos6.iuscommunity.org/ius-release.rpm && sudo yum install -y python36u && yum install -y python36u-pip && yum install -y python36u-devel
#RUN yum install -y grub2
RUN ln -s /usr/include/python3.6m /usr/include/python3.6
RUN wget --no-check-certificate -P /tmp http://flydata-rpm.s3-website-us-east-1.amazonaws.com/patchelf-0.8.tar.gz
RUN tar xvf /tmp/patchelf-0.8.tar.gz -C /tmp
RUN cd /tmp/patchelf-0.8 && ./configure && make && sudo make install
RUN yum install -y blas-devel boost-devel lapack-devel gcc-c++ cmake git
RUN git clone https://github.com/davisking/dlib.git
RUN cd dlib/python_examples/
RUN mkdir build && cd build
RUN cmake -DPYTHON_INCLUDE_DIR=$(python3.6 -c "from distutils.sysconfig import get_python_inc; print(get_python_inc())") -DPYTHON_LIBRARY=$(python3.6 -c "import distutils.sysconfig as sysconfig; print(sysconfig.get_config_var('LIBDIR'))") -DUSE_SSE4_INSTRUCTIONS:BOOL=ON dlib/tools/python
RUN sed -i 's/\/\/all/all/' Makefile && sed -i 's/\/\/preinstall/preinstall/' Makefile
RUN cmake --build . --config Release --target install
RUN cd ..
RUN mkdir ~/dlib
RUN cp dlib.so ~/dlib/__init__.so
RUN cp /usr/lib64/libboost_python-mt.so.1.53.0 ~/dlib/
RUN touch ~/dlib/__init__.py
RUN patchelf --set-rpath '$ORIGIN' ~/dlib/__init__.so
When I run serverless deploy, I get the following error:
Error --------------------------------------------------
Error: Could not open requirements file: [Errno 2] No such file or directory: '.serverless/requirements.txt'
at ServerlessPythonRequirements.installRequirements (/Users/manavdutta1/Downloads/awsTest/node_modules/serverless-python-requirements/lib/pip.js:80:11)
From previous event:
at PluginManager.invoke (/usr/local/lib/node_modules/serverless/lib/classes/PluginManager.js:366:22)
at PluginManager.spawn (/usr/local/lib/node_modules/serverless/lib/classes/PluginManager.js:384:17)
at Deploy.BbPromise.bind.then.then (/usr/local/lib/node_modules/serverless/lib/plugins/deploy/deploy.js:120:50)
From previous event:
at Object.before:deploy:deploy [as hook] (/usr/local/lib/node_modules/serverless/lib/plugins/deploy/deploy.js:110:10)
at BbPromise.reduce (/usr/local/lib/node_modules/serverless/lib/classes/PluginManager.js:366:55)
From previous event:
at PluginManager.invoke (/usr/local/lib/node_modules/serverless/lib/classes/PluginManager.js:366:22)
at PluginManager.run (/usr/local/lib/node_modules/serverless/lib/classes/PluginManager.js:397:17)
at variables.populateService.then (/usr/local/lib/node_modules/serverless/lib/Serverless.js:104:33)
at runCallback (timers.js:785:20)
at tryOnImmediate (timers.js:747:5)
at processImmediate [as _immediateCallback] (timers.js:718:5)
From previous event:
at Serverless.run (/usr/local/lib/node_modules/serverless/lib/Serverless.js:91:74)
at serverless.init.then (/usr/local/lib/node_modules/serverless/bin/serverless:42:50)
at <anonymous>
I have no idea why this is happening. I have the requirements.txt under .serverless in my local directory and it looks fine. Does anyone know why this is happening?
You should keep your requirements.txt in the root of your service; the plugin creates the file at .serverless/requirements.txt.
Are you running on windows @manav95? I noticed you enabled dockerizePip. See Issue #105
I'm using macOS.
Check your shared drives in the docker for Mac settings.
I'm having the exact same issue as @manav95. I'm using the default docker image on Debian Jessie.
Just thought of something it might be... Add this to your Dockerfile:
RUN mkdir /var/task
WORKDIR /var/task
I'm hitting this too. Runs perfectly locally (OSX) but when using Codeship and the following Dockerfile:
FROM docker:dind
RUN apk add --update \
    nodejs \
    python3 \
    py-pip \
    build-base \
    && pip install virtualenv \
    && rm -rf /var/cache/apk/*
RUN mkdir /var/task
WORKDIR /var/task
COPY . /var/task
RUN npm install -g serverless
RUN sls plugin install -n serverless-python-requirements
Then when I run serverless deploy, I get the same error.
I have the same problem.
Also, when I try to pull an image inside the docker:dind image, I get:
(image: codeship_smr-chatbot) (service: smr-chatbot) Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
(image: codeship_smr-chatbot) (service: smr-chatbot) Removing intermediate container 0f5038935f88
(step: deploy to dev) error ✗
(step: deploy to dev) error loading services during run step: failure to build Image{ name: "smr-chatbot", branch: "dev", dockerfile: "/Users/sitin/Documents/Workspace/Chimplie/smr-chatbot/Dockerfile", cache: true }: The command '/bin/sh -c docker pull lambci/lambda:build-python3.6' returned a non-zero code: 1
I am running via jet and have add_docker: true.
I get this error too when I try using the plugin on CircleCI to automate deployment. I don't use any custom Docker image, just circleci/python:3.6.4. The plugin configuration I use is as follows:
plugins:
  - serverless-python-requirements
custom:
  pythonRequirements:
    dockerizePip: true
And yes, everything runs perfectly on my local machine which runs macOS.
I've been playing with this a bit more and there's definitely something about running the pip install through docker, from within another docker.
I guess one way to get around this would be to run the pip install command without docker, given we're already within a docker container - as long as the host docker is the right kind to build the package for lambda.
If there was an extended version of https://github.com/lambci/docker-lambda/tree/master/python3.6 that we could use to run serverless deploy from, then we could set dockerizePip: false.
@thesmith Yes, this is the current workaround I use. Thank you for posting it – can be useful for other users who are hitting this issue.
So this Dockerfile seems to be working, obviously dockerizePip has to be false:
FROM lambci/lambda:build-nodejs6.10
ENV AWS_DEFAULT_REGION eu-west-1 \
    PYTHONPATH=/var/runtime \
    PKG_CONFIG_PATH=/var/lang/lib/pkgconfig:/usr/lib64/pkgconfig:/usr/share/pkgconfig
RUN curl https://lambci.s3.amazonaws.com/fs/python3.6.tgz | tar -xz -C / && \
    sed -i '/^prefix=/c\prefix=/var/lang' /var/lang/lib/pkgconfig/python-3.6.pc && \
    curl https://www.python.org/ftp/python/3.6.1/Python-3.6.1.tar.xz | tar -xJ && \
    cd Python-3.6.1 && \
    LIBS="$LIBS -lutil -lrt" ./configure --prefix=/var/lang && \
    make -j$(getconf _NPROCESSORS_ONLN) libinstall inclinstall && \
    cd .. && \
    rm -rf Python-3.6.1 && \
    pip3 install awscli virtualenv --no-cache-dir
RUN npm install -g serverless
COPY . .
RUN npm install
Annoyingly this means you have to flip dockerizePip between deploying via CI and locally.
Ah, yeah, I'll have to check docker-in-docker out at some point.
Re this @thesmith:
Annoyingly this means you have to flip dockerizePip between deploying via CI and locally.
You can do something like:
custom:
  pythonRequirements:
    dockerizePip: ${self:custom.isCI.${env:CI}, self:custom.isCI.false}
  isCI:
    true: true
    false: non-linux
(This assumes you have a CI env var set to true in CI. CircleCI does this automatically; I'm not sure how standard it is, but it'd be easy to add the var or adapt this technique to your CI provider.)
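The fallback resolution above can be modeled with a small Python sketch. This is a toy illustration of how `${self:custom.isCI.${env:CI}, self:custom.isCI.false}` behaves; `resolve_dockerize_pip` and `is_ci` are illustrative names, not part of the framework:

```python
# Toy model of the Serverless variable with a fallback default:
# ${self:custom.isCI.${env:CI}, self:custom.isCI.false}
def resolve_dockerize_pip(env, is_ci_map):
    key = env.get("CI")        # inner ${env:CI}
    if key in is_ci_map:       # self:custom.isCI.<CI>
        return is_ci_map[key]
    return is_ci_map["false"]  # fallback: self:custom.isCI.false

is_ci = {"true": True, "false": "non-linux"}
print(resolve_dockerize_pip({"CI": "true"}, is_ci))  # True in CI
print(resolve_dockerize_pip({}, is_ci))              # "non-linux" locally
```

So dockerizePip resolves to true on CI machines and to the usual non-linux behavior on developer laptops, with no flag flipping.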
@thesmith How do you use this Dockerfile to get CircleCI working? I am a bit lost... thanks!
@tgensol FWIW it's Codeship, not CircleCI.
On Codeship Pro, the install / deploy process is itself run in a docker image built using that Dockerfile (with dockerizePip set to false).
We have the same problem. I can package and deploy from Mac but the build fails on AWS.
The custom section of our serverless.yml:
custom:
  pkgPyFuncs:
    buildDir: _build
    requirementsFile: requirements.txt
    cleanup: true
    useDocker: false
Error log on AWS:
Error --------------------------------------------------
ENOENT: no such file or directory, open 'requirements.txt'
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Stack Trace --------------------------------------------
Error: ENOENT: no such file or directory, open 'requirements.txt'
at Object.fs.openSync (fs.js:667:18)
at Object.fs.readFileSync (fs.js:572:33)
at generateRequirementsFile (/codebuild/output/src659347776/src/node_modules/serverless-python-requirements/lib/pip.js:163:6)
at installRequirements (/codebuild/output/src659347776/src/node_modules/serverless-python-requirements/lib/pip.js:34:5)
at values.filter.map.f (/codebuild/output/src659347776/src/node_modules/serverless-python-requirements/lib/pip.js:215:11)
at Array.map (<anonymous>)
at ServerlessPythonRequirements.installAllRequirements (/codebuild/output/src659347776/src/node_modules/serverless-python-requirements/lib/pip.js:210:8)
From previous event:
at PluginManager.invoke (/usr/lib/node_modules/serverless/lib/classes/PluginManager.js:372:22)
at PluginManager.spawn (/usr/lib/node_modules/serverless/lib/classes/PluginManager.js:390:17)
at Deploy.BbPromise.bind.then.then (/usr/lib/node_modules/serverless/lib/plugins/deploy/deploy.js:123:50)
From previous event:
at Object.before:deploy:deploy [as hook] (/usr/lib/node_modules/serverless/lib/plugins/deploy/deploy.js:113:10)
at BbPromise.reduce (/usr/lib/node_modules/serverless/lib/classes/PluginManager.js:372:55)
From previous event:
at PluginManager.invoke (/usr/lib/node_modules/serverless/lib/classes/PluginManager.js:372:22)
at PluginManager.run (/usr/lib/node_modules/serverless/lib/classes/PluginManager.js:403:17)
at variables.populateService.then (/usr/lib/node_modules/serverless/lib/Serverless.js:102:33)
at runCallback (timers.js:763:18)
at tryOnImmediate (timers.js:734:5)
at processImmediate (timers.js:716:5)
at process.topLevelDomainCallback (domain.js:101:23)
From previous event:
at Serverless.run (/usr/lib/node_modules/serverless/lib/Serverless.js:89:74)
at serverless.init.then (/usr/lib/node_modules/serverless/bin/serverless:42:50)
at <anonymous>
Get Support --------------------------------------------
Docs: docs.serverless.com
Bugs: github.com/serverless/serverless/issues
Issues: forum.serverless.com
Your Environment Information -----------------------------
OS: linux
Node Version: 9.9.0
Serverless Version: 1.27.2
[Container] 2018/05/14 09:56:20 Command did not exit successfully SLS_DEBUG=* sls deploy --stage $STAGE exit status 1
[Container] 2018/05/14 09:56:20 Phase complete: BUILD Success: false
[Container] 2018/05/14 09:56:20 Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: SLS_DEBUG=* sls deploy --stage $STAGE. Reason: exit status 1
Our buildspec.yml:
env:
  variables:
    DOCKER_VERSION: "17.09.1"
    STAGE: "dev"
phases:
  pre_build:
    commands:
      - pip install pip --upgrade
      - npm install -g serverless
      - npm install
      - npm -v
      - sls -v
  build:
    commands:
      - SLS_DEBUG=* sls deploy --stage $STAGE
And our dependencies defined in package.json:
{
  "dependencies": {
    "serverless": "^1.27.2",
    "serverless-package-python-functions": "^0.2.5",
    "serverless-pseudo-parameters": "^1.6.0",
    "serverless-python-requirements": "^4.0.3",
    "serverless-step-functions": "^1.4.1"
  }
}
I also tried appending the following config to the custom entry in the serverless.yml, which didn't help:

pythonRequirements:
  dockerizePip: false
Update: Adding a Pipfile and Pipfile.lock to the root of my repository, along with installing pipenv, solved my problem (this consequently creates the desired requirements.txt). I guess adding an empty requirements.txt would also do.
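For anyone else hitting the ENOENT, the failing step boils down to the plugin reading requirements.txt from the service root and copying it into .serverless/. A rough Python sketch of that behavior (the real plugin is JavaScript; `generate_requirements` is an illustrative name, not its actual API):

```python
# Sketch of the step that fails with
# "ENOENT: no such file or directory, open 'requirements.txt'"
import os
import shutil
import tempfile

def generate_requirements(service_root):
    src = os.path.join(service_root, "requirements.txt")
    dst_dir = os.path.join(service_root, ".serverless")
    os.makedirs(dst_dir, exist_ok=True)
    # raises FileNotFoundError (ENOENT) if requirements.txt is missing
    dst = os.path.join(dst_dir, "requirements.txt")
    shutil.copy(src, dst)
    return dst

root = tempfile.mkdtemp()
# even an empty requirements.txt in the service root is enough
open(os.path.join(root, "requirements.txt"), "w").close()
print(os.path.exists(generate_requirements(root)))  # True
```

The key point: the file has to exist in the service root before the deploy starts; the copy in .serverless/ is an output, not an input.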
@mehdisadeghi no clue. but to be honest I wouldn't expect serverless-package-python-functions & serverless-python-requirements to work together. I don't think serverless-package-python-functions is necessary since this plugin gained the same functionality.
@dschep thanks for the tip! I'll give it a try, and if it works without serverless-package-python-functions I'll happily remove it! Less clutter is better.
I had this issue as well - or a similar one, anyway - running inside Gitlab CI (docker-in-docker) with dockerizePip: true:
Serverless: Invoke deploy
Serverless: Invoke package
Serverless: Invoke aws:common:validate
Serverless: Invoke aws:common:cleanupTempDir
Serverless: Generated requirements from /builds/group/project/requirements.txt in /builds/group/project/.serverless/requirements.txt...
Serverless: Installing requirements from /root/.cache/serverless-python-requirements/03e86de11d16dfbe8247f02d3d303294_slspyc/requirements.txt ...
Serverless: Docker Image: lambci/lambda:build-python3.6
Serverless: Using download cache directory /root/.cache/serverless-python-requirements/downloadCacheslspyc
Could not open requirements file: [Errno 2] No such file or directory: '/var/task/requirements.txt'
and I accidentally stumbled upon a workaround.
I was already going to start using download and static caching, and I wanted the cache dir to be inside my .serverless directory so that it would be saved and restored between jobs. So, I ended up with these settings:
custom:
  pythonRequirements:
    dockerizePip: true
    useDownloadCache: true
    useStaticCache: true
    cacheLocation: ./.serverless/.requirements_cache
And, lo and behold, that also fixed the packaging issue.
If you notice above, the plugin is trying to map a requirements.txt file into the container at /var/task/:
Serverless: Installing requirements from /root/.cache/serverless-python-requirements/03e86de11d16dfbe8247f02d3d303294_slspyc/requirements.txt ...
My guess is that the Gitlab CI runner disallows this or interferes with it somehow, because when I set cacheLocation as above, I get this instead:
Serverless: Installing requirements from /builds/group/project/.serverless/.requirements_cache/c1982a9b5b5e665faaa0cd35390c1b60_slspyc/requirements.txt ...
which works perfectly.
This could also be because the `/builds/group/project` directory is already being mapped into the container as a volume, allowing `pip` in the container to find the path to `requirements.txt`. Either way, hopefully this helps someone else with a similar dockerish problem.
@brettdh the cache location shouldn't be inside the .serverless folder, as that folder is temporary in nature and nothing would ever cache there. It might work; I've never actually tested in there, but that's my first gut instinct. Can you maybe pastebin more of your full serverless config and your gitlabci file and I'll take a look?
that folder is temporary in nature and nothing would ever cache there.
What does this mean? Is the folder cleared out before every deploy?
I can just as easily (I think) put it in my project's root dir alongside .serverless/ instead, but I'm not sure I understand what you think will happen if I put it inside .serverless/ as I've done.
It might work
So far so good :) But like I said, I'm curious as to what the danger might be.
I'll try to put together a minimal repro project on gitlab.com when I get a chance.
The serverless framework deletes that folder and recreates it every time you deploy, defeating the purpose of cache completely is what I mean.
Yep, that'd be a good reason to move it out 😅 Thanks for the tip!
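The problem described above - a cache under .serverless/ being wiped on every deploy - can be demonstrated with a toy Python snippet (the paths and the "deploy" step are illustrative; the actual deletion is done by the Serverless framework):

```python
# Toy illustration: .serverless is deleted and recreated on each deploy,
# so a cache placed inside it never survives between runs.
import os
import shutil
import tempfile

project = tempfile.mkdtemp()
cache = os.path.join(project, ".serverless", ".requirements_cache")
os.makedirs(cache)
open(os.path.join(cache, "cached.whl"), "w").close()

# what a deploy effectively does to .serverless:
shutil.rmtree(os.path.join(project, ".serverless"))
os.makedirs(os.path.join(project, ".serverless"))

print(os.path.exists(os.path.join(cache, "cached.whl")))  # False: cache is gone
```

Putting the cache next to .serverless/ (e.g. ./.requirements_cache in the project root) avoids this while still letting the CI runner save and restore it between jobs.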
Update: I've tried to use this method on Gitlab CI while deploying. It works when it uses the cache directory, but many times it doesn't use the cache directory, in which case it fails. Maybe if we could add a parameter to always use the cache directory, it would work?
I think the fix would be an option to use docker cp instead of volumes or binds for dockerizePip. These CI systems generally employ a remote docker daemon as far as the main build can see.
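A sketch of what such a docker cp based flow might look like, expressed as a list of command strings. `docker_cp_commands` is a hypothetical helper, not an existing plugin option, and nothing here talks to a docker daemon - it only builds the commands:

```python
# Instead of bind-mounting the build dir (which a remote daemon can't see),
# create a container, copy requirements in, run pip, and copy results out.
def docker_cp_commands(image, host_dir, container="sls-py-reqs"):
    return [
        f"docker create --name {container} {image} "
        "python3.6 -m pip install -t /var/task/ -r /var/task/requirements.txt",
        f"docker cp {host_dir}/requirements.txt {container}:/var/task/requirements.txt",
        f"docker start -a {container}",          # runs the pip install
        f"docker cp {container}:/var/task/. {host_dir}/",
        f"docker rm {container}",
    ]

cmds = docker_cp_commands("lambci/lambda:build-python3.6", "/builds/group/project")
```

Because docker cp streams file contents through the API rather than relying on a shared filesystem, it works even when the daemon runs on a different host than the build.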
@thesmith or @avli I've got the /var/task problem for a project I'm working on right now. I'm trying to use CircleCI and serverless-python-requirements. Are you setting dockerizePip to false when deploying from CI, or are you using dockerizePip true? If true, did you supply your own docker image for it and add the /var/task folder?
When I try dockerizePip false I run into the lambda limit error, which is not good. Even when I use slimming.
Any clarification would be great here.
Getting the same issue as @chubzor; any news on a fix?
@alexcallow I ended up using this:

custom:
  pythonRequirements:
    layer: true
    slim: true
    slimPatterns:
      - "**/test_*.py"
    strip: false
We abandoned deploying via local dev machines, and this worked for us when deploying with circleCI. Mind you we have numpy, pandas, scikit-learn in requirements.
I think we are only just squeezing inside some size limit, so this is not sustainable, but it could be helpful for you.
If you have pyproject.toml in your project but you don't use poetry, please remember to set usePoetry: false. The config will be:

custom:
  pythonRequirements:
    dockerizePip: true
    usePoetry: false
Related code:
- https://github.com/UnitedIncome/serverless-python-requirements/blob/64e20db2a4acbf95a3d9391797b0c12544234a0c/index.js#L41
- https://github.com/UnitedIncome/serverless-python-requirements/blob/master/lib/pip.js#L65
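Per the linked code, the gist is a simple presence check. A minimal Python sketch (`picks_poetry` is an illustrative name; the plugin's actual logic is in the linked JavaScript):

```python
# If pyproject.toml exists and usePoetry isn't disabled, the plugin takes
# the poetry path and never reads requirements.txt.
import os
import tempfile

def picks_poetry(service_root, use_poetry=True):
    return use_poetry and os.path.exists(
        os.path.join(service_root, "pyproject.toml")
    )

proj = tempfile.mkdtemp()
open(os.path.join(proj, "pyproject.toml"), "w").close()
print(picks_poetry(proj))                    # True: poetry path taken
print(picks_poetry(proj, use_poetry=False))  # False: falls back to requirements.txt
```

So a stray pyproject.toml (e.g. one used only for tool configuration like black or isort) silently switches dependency resolution unless usePoetry: false is set.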
I'm encountering this as well when trying to use CircleCI; my executor is

executor:
  name: node/default
  tag: '10.4'

which I believe means I'm doing docker-in-docker.
It seems this command:
Running docker run --rm -v /home/circleci/.cache/serverless-python-requirements/2b94ea26f9dceaadc347670525eaa71ffd73487d3460b8428f6a406f834f65af_slspyc\:/var/task\:z -v /home/circleci/.cache/serverless-python-requirements/downloadCacheslspyc\:/var/useDownloadCache\:z sls-py-reqs-custom /bin/sh -c 'chown -R 0\\:0 /var/useDownloadCache && python3.6 -m pip install -t /var/task/ -r /var/task/requirements.txt --cache-dir /var/useDownloadCache && chown -R 3434\\:3434 /var/task && cp /usr/lib64/libpq.so.5 /var/task/ && chown -R 3434\\:3434 /var/useDownloadCache'..
is mounting /home/circleci/.cache/serverless-python-requirements/2b94ea26f9dceaadc347670525eaa71ffd73487d3460b8428f6a406f834f65af_slspyc to /var/task,
but I found this in the CircleCI documentation:
https://support.circleci.com/hc/en-us/articles/360007324514-How-can-I-mount-volumes-to-docker-containers-
"It's not possible to use volume mounting with the docker executor, but using the machine executor it's possible to mount local directories to your running Docker containers. "
I switched from docker-in-docker:
jobs:
  build:
    executor:
      name: node/default
      tag: '10.4'
    steps:
      - checkout
      - node/with-cache:
          steps:
            - run: npm install
      - setup_remote_docker
      - run: npx sls package -p ./artifacts/
to Machine:
jobs:
  build:
    machine: true
    steps:
      - checkout
      - node/install-node:
          version: 10.16.3
      - node/install-npm:
          version: 6.12.1
      - node/with-cache:
          steps:
            - run: npm install
      - run: npx sls package -p ./artifacts/
And it ran successfully.
Note: CircleCI warns that machine executors may become a premium feature in the future.