Support for `runtime: provided` without requiring `useDocker`
## Description

This PR allows usage of `runtime: provided` in `serverless.yml` when `useDocker` is unset or `false` in `custom.serverless-offline`. Previously this failed:

```
GET /dev/hello (λ: hello)
× Unsupported runtime
× Uncaught exception
```
It now uses the `execa` library to execute the `bootstrap` script locally:

```
GET /dev/hello (λ: hello)
(λ: hello) RequestId: b440ae80-b0ca-4d0d-ab6f-ea3226cb9d1a Duration: 55.80 ms Billed Duration: 56 ms
```
## Motivation and Context

- I have a project where I compile a Go application into a binary
- The compiled binary is so simple that it doesn't need Docker as an abstraction layer for execution
- This allows for direct execution of the `bootstrap` script without launching it inside a Docker container
- This makes for faster local execution of binaries
- This allows people to connect to debug ports for their compiled binaries (such as launching with `delve`)
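For reference, a configuration this PR enables might look roughly like this (service and function names are illustrative):

```yaml
service: my-service

provider:
  name: aws
  runtime: provided

functions:
  hello:
    handler: bootstrap # the compiled binary, e.g. a Go executable

custom:
  serverless-offline: {} # useDocker is simply left unset (or set to false)
```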
How this works:

- When Lambdas run, they end up starting an embedded server called the Runtime Server
- Each invocation gets its own short-lived Runtime Server on a random port
- It has a couple of basic endpoints:
  - https://docs.aws.amazon.com/lambda/latest/dg/runtimes-api.html#runtimes-api-next
  - https://docs.aws.amazon.com/lambda/latest/dg/runtimes-api.html#runtimes-api-response
- Each invocation, when `runtime: provided` and `useDocker: false|undefined`, will create a `RuntimeServer` on a random port and set the `AWS_LAMBDA_RUNTIME_API` environment variable so that the `bootstrap` script can interact with it
- Then, as soon as `/response` is called, the `event` is cleared from the `RuntimeServer` so subsequent invocations of `/next` block indefinitely
## Notes

- Design choice: one `RuntimeServer` per invocation:
  - We could, possibly, create a single runtime server and share it between multiple invocations. I thought this to be a premature optimization, so I opted to make a brand-new `RuntimeServer` for each invocation, for a more Lambda-like experience.
  - A single `RuntimeServer` could be shared by setting the `AWS_LAMBDA_RUNTIME_API` environment variable to `http://localhost:${some-non-random-port}/function-name` instead of `http://localhost:${some-random-port}`. I'd love your feedback on whether you want this or not.
- This doesn't support Lambda Layers yet, although I presume it could:
  - For now it will throw an error if `useDocker: false` or `undefined` and there are `layers` on the function
## How Has This Been Tested?

- I copied `./tests/integration/docker/provided` to `./tests/lambda-run-mode/provided`
- I removed `useDocker: true` and began development until it worked
Hi @dherault and @DorianMazur, this PR might stir up some debate, so let me know what you think!
Long story short, I never liked the requirement to have Docker to run serverless applications that have a compiled binary. It also caused long-ish first-runs of a lambda function as it downloads and runs the base container and layers.
I found myself wishing Docker wasn't involved in the execution of the bootstrap script, so I decided to make this PR.
Hi @dherault and @DorianMazur, it's been about a week. Have you been able to take a look at this?
Thanks for the PR @cnuss, but I'd stick with docker in this case. Let's wait for @dherault's response.
I'm trusting your guts @DorianMazur
Closing. I'm developing a framework outside of serverless and no longer need this