aws-sam-cli
Using SAM Local inside docker-compose
I currently am able to stand up an entire system of REST API microservices using docker-compose. These REST API microservices are written in all kinds of languages and frameworks, none of which are Lambda. This is necessary to run E2E testing for this given large scale application. The inability to use Lambda functions inside this large docker-compose file is the only thing that has stopped me from adding AWS Lambda to our microservices in the past.
SAM Local got super close to fulfilling this goal by at least allowing me to run AWS Lambda functions locally. However, because it depends on docker-lambda, it requires running a Docker container inside another Docker container, which I can't do (or at least really don't want to), so it's kind of a nonstarter.
What my question really boils down to is this: At any point is it planned to be able to run SAM Local inside a docker container, without the major concerns that come along with running docker-in-docker?
You can run Docker inside Docker - SAM Local works in an AWS CodeBuild Docker container. But again, there are the caveats you mentioned.
I took a quick look at Compose. It is like SAM Local, but driven by a compose.yaml config file instead of SAM templates. If Compose supports it, we should be able to write a plugin or extension that uses Compose as the mechanism to spin up a new Docker container and pass invokes to it.
You seem to be well experienced with Compose. Are there plugins or extensions that one can write to extend it?
You don't necessarily need to run Docker-in-Docker for this – you could just run it as a sibling, couldn't you? i.e., with:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
@sanathkr I definitely wouldn't call myself well experienced with compose, but from what I do know, there isn't a plugin system or anything like that. When Docker made it they kept it pretty bare bones. That being said, it works really well for what it does. I realize a ton of people might not use it, so building out a solution for the people who do, frankly, might spend resources that could be more valuable in other areas.
However, here is where I think it might be valuable to do it anyway: Being able to create an image with an instance of a SAM Local application would open up a ton of gates for how projects made with SAM Local can be consumed. Docker compose is just one instance of a toolset that takes docker images and does stuff with them. I am equally as concerned about the ability for someone to run a SAM Local application in an already dockerized CI environment. I'm seeing more and more engineering teams switching up their CI/CD pipelines to go fully dockerized. Tools like Codeship Pro popping up into existence is a great example of companies trying to push this idea across.
The way I see it, the only way to ensure that people can integrate with those kinds of systems is to either a) allow engineers to wrap their SAM Local applications in their own docker applications, or potentially b) add some kind of export that allows sam local to build a docker image that is representative of the application it plans on hosting. Hopefully someone has a better idea, because I have all kinds of worries for each of those options. Who knows, maybe it is just better to create test environments in AWS itself for CI and CD, but depending on the level of effort it takes, I could see this as being a great way to enable docker-heavy engineering teams like mine to start making the move to AWS Lambda.
@mhart I could, totally, and that would work for my use case specifically (I'd still be up shit's creek if I ported over my current Dockerized CI/CD platform, but I can live with hunting another one down or writing some new fancy/ghetto integration layer).
However, I am worried that will reduce the adoptability of SAM Local itself. Not being able to create a plain Docker image for a SAM Local application (without Docker running in Docker) definitely creates a complication, and it is my theory that that complication will stop certain developers from giving SAM Local (and by proxy AWS Lambda) the chance it deserves.
Personally I feel that as more and more tools become containerized, it's actually incumbent on Docker itself to make integration easier – otherwise there are going to be more and more tools that you "can't" use.
And of course by "can't" I mean, you can ☺️. I think in this case that means just documenting that if you want to use SAM Local in Docker, you need to run a container that has (or installs) Docker and start it with -v /var/run/docker.sock:/var/run/docker.sock.
At the moment SAM Local is very much tied to running via Dockerized Lambda runtimes, and while this discussion is good feedback and useful, it's not something that is going to change anytime soon.
For the time being, if you want to run SAM Local inside Docker, you'll need to map through the docker.sock as @mhart mentioned.
Bummer, but fair enough.
@mhart Do you have an example of how to do what you’ve described, using -v /var/run/docker.sock:/var/run/docker.sock? I tried using https://github.com/cnadiminti/docker-aws-sam-local as such a Docker image, but I couldn’t get it to work in docker-compose (I would always get Unable to import module 'index': Error): https://github.com/cnadiminti/docker-aws-sam-local/issues/1
@GeoffreyBooth the docker image you're talking about has such an example: https://github.com/cnadiminti/docker-aws-sam-local#how-to-use-this-image
@mhart yes, but I can't seem to get it to work: https://github.com/cnadiminti/docker-aws-sam-local/issues/1. Have you?
@GeoffreyBooth it seems like Docker is working – you just don't have your Lambda JS files set up in the correct hierarchy?
It's looking for an index.js file with a handler function (by default)
The files in that folder work fine via sam local start-api, so I think that rules out file path or reference issues.
So have you gotten this to work? I'm just looking for a working example to follow.
@GeoffreyBooth yes, it works just fine for me.
$ git clone https://github.com/cnadiminti/docker-aws-sam-local
$ cd docker-aws-sam-local
$ make local-start-api
2017/11/01 15:14:53 Connected to Docker 1.32
2017/11/01 15:14:53 Fetching lambci/lambda:nodejs6.10 image for nodejs6.10 runtime...
nodejs6.10: Pulling from lambci/lambda
5aed7bd8313c: Already exists
d60049111ce7: Already exists
df2c17ad5e5e: Pull complete
93956b6301bb: Pull complete
Digest: sha256:7eb4ced6a15ae3c30effc4ec0cd3aabb2bd57c9a8330b37920c3d5d722d81083
Status: Downloaded newer image for lambci/lambda:nodejs6.10
Mounting index.handler (nodejs6.10) at http://0.0.0.0:3000/ [OPTIONS GET HEAD POST PUT DELETE TRACE CONNECT]
Mounting static files from /Users/michael/github/docker-aws-sam-local/example/public at /
You can now browse to the above endpoints to invoke your functions.
You do not need to restart/reload SAM CLI while working on your functions,
changes will be reflected instantly/automatically. You only need to restart
SAM CLI if you update your AWS SAM template.
2017/11/01 15:15:05 Invoking index.handler (nodejs6.10)
START RequestId: a2d65cf7-aed6-142e-3df2-084097da5f94 Version: $LATEST
2017-11-01T15:15:12.155Z a2d65cf7-aed6-142e-3df2-084097da5f94 LOG: Received an event
END RequestId: a2d65cf7-aed6-142e-3df2-084097da5f94
REPORT RequestId: a2d65cf7-aed6-142e-3df2-084097da5f94 Duration: 11.39 ms Billed Duration: 0 ms Memory Size: 0 MB Max Memory Used: 28 MB
@mhart Thanks for your help. I took another look at that Makefile. The docker-compose.yaml that I got to work looks like this:
version: '3'
services:
  aws-sam-local:
    image: cnadiminti/aws-sam-local
    command: local start-api --docker-volume-basedir "$PWD/example" --host 0.0.0.0
    ports:
      - '3000:3000'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./example:/var/opt
So to follow your example:
git clone https://github.com/cnadiminti/docker-aws-sam-local
cd docker-aws-sam-local
# Save the previous code block as docker-compose.yaml in this folder
docker-compose up
And then curl http://localhost:3000/ from another shell:
Creating network "dockerawssamlocal_default" with the default driver
Creating dockerawssamlocal_aws-sam-local_1 ...
Creating dockerawssamlocal_aws-sam-local_1 ... done
Attaching to dockerawssamlocal_aws-sam-local_1
aws-sam-local_1 | 2017/11/01 17:50:39 Connected to Docker 1.32
aws-sam-local_1 | 2017/11/01 17:50:40 Fetching lambci/lambda:nodejs6.10 image for nodejs6.10 runtime...
aws-sam-local_1 | nodejs6.10: Pulling from lambci/lambda
aws-sam-local_1 | Digest: sha256:7eb4ced6a15ae3c30effc4ec0cd3aabb2bd57c9a8330b37920c3d5d722d81083
aws-sam-local_1 | Status: Image is up to date for lambci/lambda:nodejs6.10
aws-sam-local_1 |
aws-sam-local_1 | Mounting index.handler (nodejs6.10) at http://0.0.0.0:3000/ [OPTIONS GET HEAD POST PUT DELETE TRACE CONNECT]
aws-sam-local_1 | Mounting static files from /Users/Geoffrey/Sites/docker-aws-sam-local/example/public at /
aws-sam-local_1 |
aws-sam-local_1 | You can now browse to the above endpoints to invoke your functions.
aws-sam-local_1 | You do not need to restart/reload SAM CLI while working on your functions,
aws-sam-local_1 | changes will be reflected instantly/automatically. You only need to restart
aws-sam-local_1 | SAM CLI if you update your AWS SAM template.
aws-sam-local_1 |
aws-sam-local_1 | 2017/11/01 17:50:44 Invoking index.handler (nodejs6.10)
aws-sam-local_1 | START RequestId: 9aabc9ce-eb6c-126f-af7c-fc501bb46fe3 Version: $LATEST
aws-sam-local_1 | 2017-11-01T17:51:06.009Z 9aabc9ce-eb6c-126f-af7c-fc501bb46fe3 LOG: Received an event
aws-sam-local_1 | END RequestId: 9aabc9ce-eb6c-126f-af7c-fc501bb46fe3
aws-sam-local_1 | REPORT RequestId: 9aabc9ce-eb6c-126f-af7c-fc501bb46fe3 Duration: 8.97 ms Billed Duration: 0 ms Memory Size: 0 MB Max Memory Used: 28 MB
The part that was stumping me was --docker-volume-basedir "$PWD/example". Apparently the $PWD here is the path on the host, e.g. /Users/Geoffrey/Sites/docker-aws-sam-local on my Mac, not /var/opt inside the container as I expected it to be.
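To illustrate the path resolution, here is a sketch (with a hypothetical helper name; the real SAM CLI internals differ) of why the base dir has to be a host path:

```python
import posixpath

def lambda_mount_source(code_uri: str, volume_basedir: str) -> str:
    """Illustrates why --docker-volume-basedir must be a *host* path.

    SAM Local asks the Docker daemon to bind-mount the function code into
    the Lambda runtime container. The daemon resolves bind-mount sources
    on the host, so the base dir has to be the host-side project location,
    not the path inside the SAM container. (Hypothetical helper, not
    actual SAM CLI code.)
    """
    return posixpath.join(volume_basedir, code_uri)

# The daemon resolves this on the host, where a container path like
# /var/opt would not exist:
print(lambda_mount_source("hello", "/Users/Geoffrey/Sites/docker-aws-sam-local/example"))
# → /Users/Geoffrey/Sites/docker-aws-sam-local/example/hello
```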
@GeoffreyBooth I have followed the sample you detailed here and successfully gotten docker-compose to start an instance of SAM Local as well as a dockerized instance of nats.io for it to hit. However, the SAM Local instance cannot seem to make a connection to the nats instance; it gets an ECONNREFUSED error. Is there anything specific that you must do to get SAM Local to make a connection to other dockerized containers?
I don't think so. This is what I used: https://github.com/cnadiminti/docker-aws-sam-local/pull/2/files
@codemonkey2012 I have the same problem. Did you find a solution?
I think this should be reopened as there is enough demand from users to have sam local properly run in docker-compose
What issues are you having? We are using SAM 100% via docker-compose at work. I could try to prepare some resources on how to set it up if there is interest.
@terlar
I'm also having issues getting sam local running inside docker compose. Would be very interested to hear more.
Particularly struggling with getting other containers to be able to communicate with the aws-sam-local container, when adding depends_on: aws-sam-local to the other container.
When trying to invoke a lambda function from the test container with:
lambda_client = boto3.client(
    'lambda',
    endpoint_url="http://localhost:3000",
    use_ssl=False,
    verify=False,
    config=botocore.config.Config(
        signature_version=botocore.UNSIGNED,
        retries={'max_attempts': 0},
    ),
)
lambda_client.invoke(
    FunctionName="HelloWorld",
)
And a docker-compose of:
services:
  awssamlocal:
    image: cnadiminti/aws-sam-local
    command: local start-api --host 0.0.0.0 --docker-volume-basedir foo
    ports:
      - '3000:3000'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./foo:/var/opt
  test:
    image: bar
    build:
      context: .
    depends_on:
      - awssamlocal
    command: -m unittest discover
I just get botocore.exceptions.EndpointConnectionError: Could not connect to the endpoint URL: "http://localhost:3000/2015-03-31/functions/HelloWorld/invocations"
And it's still not clear to me what the value of --docker-volume-basedir is supposed to be (running on linux).
The --docker-volume-basedir needs to be your host (local) system path; I am using $PWD. You might also need to set --docker-network and have a specific Docker network (created beforehand, e.g. with docker network create my-net) to get the communication with other containers working.
A sample service using dynamodb:
services:
  api:
    image: our-private-tools-image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - .:/var/opt:ro
    ports:
      - 3000
    command: sam local start-api --region eu-west-1 -n .env-sam.json --docker-network my-net -v "$PWD" --host 0.0.0.0 --debug
  dynamodb:
    image: amazon/dynamodb-local:1.11.477
    volumes:
      - dynamodb-data:/data
    ports:
      - 8000
    healthcheck:
      test: ["CMD", "bash", "-c", "cat < /dev/null > /dev/tcp/localhost/8000 || exit 1"]
      interval: 5s
      timeout: 3s
      start_period: 10s
    user: root
    command: -jar DynamoDBLocal.jar -sharedDb -dbPath /data

volumes:
  dynamodb-data:

networks:
  default:
    external:
      name: my-net
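For reference, the healthcheck's bash /dev/tcp trick is just a TCP probe; the equivalent in Python (a sketch, should you want the same readiness check from test code):

```python
import socket

def dynamodb_ready(host: str = "localhost", port: int = 8000, timeout: float = 3.0) -> bool:
    """Same probe as the compose healthcheck above: report healthy
    iff a TCP connection to DynamoDB Local can be opened."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```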
The problem with the current way of setting up sam is that it requires docker in docker. I don't see any good reason why this has to be so, and it's making set up way more complicated than it needs to be with a ton of different moving parts that are poorly documented.
@terlar this image https://hub.docker.com/r/cnadiminti/aws-sam-local/ does not support start-lambda command, just invoke and start-api so it's not really ideal, especially if you use Step Functions Local which needs a local Lambda Endpoint.
That being said I tried building my own image and running sam local docker inside docker, but the lambda execution gets so slow that the lambda times out during execution.
I still think this issue should be reopened, as there is still traction and no official way to do this. Ideally sam local should have an option to not spawn yet another container if you are already running one on your own via compose.
@Ghilteras I do have it working with start-lambda as well, although I maintain my own Docker image for SAM. E.g.
FROM alpine:3.9

ENV AWS_VERSION 1.16.10
ENV SAM_VERSION 0.15.0
ENV CFNLINT_VERSION 0.19.1

RUN apk add --no-cache groff python3 \
    && apk add --no-cache --virtual build-dependencies gcc musl-dev python3-dev \
    && pip3 install aws-sam-cli==${SAM_VERSION} \
    && pip3 install awscli==${AWS_VERSION} \
    && pip3 install cfn-lint==${CFNLINT_VERSION} \
    && pip3 install httpie \
    && apk del build-dependencies

ENV PAGER more
ENV PATH /var/opt/scripts:$PATH

WORKDIR /var/opt
EXPOSE 3000

CMD ["sh", "-c", "while sleep 3600; do :; done"]
With this I was able to run this via docker-compose like:
version: '3.5'
services:
  api:
    image: our-private-tools-image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - .:/var/opt:ro
    ports:
      - 3000
    command: sam local start-api --region eu-west-1 -n .env-sam.json --docker-network my-net -v "$PWD" --host 0.0.0.0 --debug
  lambda:
    image: our-private-tools-image
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - .:/var/opt:ro
    ports:
      - 3001
    command: sam local start-lambda --region eu-west-1 -n .env-sam.json --docker-network my-net -v "$PWD" --host 0.0.0.0 --debug
  dynamodb:
    image: amazon/dynamodb-local:1.11.477
    volumes:
      - dynamodb-data:/data
    ports:
      - 8000
    healthcheck:
      test: ["CMD", "bash", "-c", "cat < /dev/null > /dev/tcp/localhost/8000 || exit 1"]
      interval: 5s
      timeout: 3s
      start_period: 10s
    user: root
    command: -jar DynamoDBLocal.jar -sharedDb -dbPath /data

volumes:
  dynamodb-data:

networks:
  default:
    external:
      name: my-net
It might be slow, but I haven't seen it slow to the brink of timeouts. My use case is to start this lambda container before running the tests utilizing the lambda for example. One caveat is that I do have my lambda endpoint configurable via environment variable, in production this variable is unset.
E.g. with JS:
const lambda = new AWS.Lambda({ endpoint: config.get('lambda.endpoint') })
With this snippet, config.get returns null when the environment variable is not set, so the client falls back to the default AWS endpoint; in the test env via docker I set it to something like http://lambda:3001, matching the docker-compose service above.
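The same fallback pattern sketched in Python (the variable name LAMBDA_ENDPOINT is hypothetical; use whatever your config layer reads):

```python
import os
from typing import Optional

def lambda_endpoint() -> Optional[str]:
    """Resolve the Lambda endpoint the same way as the JS snippet above.

    Unset (production) returns None, so the AWS SDK falls back to the real
    AWS endpoint; in the docker-compose test env, set
    LAMBDA_ENDPOINT=http://lambda:3001 to hit the sam local start-lambda
    service.
    """
    return os.environ.get("LAMBDA_ENDPOINT")
```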
For completeness sake, if you are using SQS as well, you can add:
sqs:
  image: roribio16/alpine-sqs
  ports:
    - 9324
    - 9325
  stdin_open: true
  tty: true
We also configure these by environment variables, such as MY_SQS_QUEUE=http://sqs:9324/queue/default, for production you have something like this in your template.yaml (where MySqsQueue is your SQS queue resource):
Globals:
  Function:
    Environment:
      Variables:
        MY_SQS_QUEUE: !Ref MySqsQueue
I use the HELL out of @mhart's docker images. Basically I use them as base images via docker-compose to lock SAM down to a version, and install Node since Ruby (in dev) needs that too. I'd very much like to see this re-opened. It would be nice if sam local start-api allowed the current container it was run under to be used as the runtime.
If that were done, I could stop doing extra leg-work in my Dockerfile to install Lambda Layer dependencies, which just increases build/test times. I totally get that the non-build containers are there for a purpose, since they closely mimic the runtime, but using them as the dev runtime would be great.
The problem here is that there is no official sam image for docker-compose, the fact that it works with a user image and a lot of hacking is not a good solution IMHO
Using containers for local development is becoming increasingly common and docker in docker is still a hassle (security and performance issues).
It would be great to simply mount my code directory on an API Gateway container and get it working.
I cannot imagine why this was closed, it's a very valid request and it doesn't even kind of work with the most recent version of anything. The amount of hoops one has to jump through to get this to mostly work is really astounding.
This might not yet fully address everything, but we publish build images on the ECR Public Gallery (e.g. public.ecr.aws/sam/build-python3.8) that include the SAM CLI.