
Support for `runtime: provided` without requiring `useDocker`

Open · cnuss opened this pull request 1 year ago · 3 comments

Description

This PR allows runtime: provided to be used in serverless.yml while useDocker is unset or false in custom.serverless-offline. Previously, this combination failed with:

GET /dev/hello (λ: hello)
× Unsupported runtime
× Uncaught exception
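For context, a minimal serverless.yml that exercises this path might look like the following (illustrative only; the service name, function name, and handler path are made up):

```yaml
service: hello-go            # made-up service name

provider:
  name: aws
  runtime: provided          # previously required useDocker: true offline

custom:
  serverless-offline: {}     # useDocker left unset (or set to false)

functions:
  hello:
    handler: bootstrap       # the compiled binary acting as the bootstrap
    events:
      - http:
          path: hello
          method: get        # matches the GET /dev/hello request above
```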

The plugin now uses the execa library to execute the bootstrap script locally:

GET /dev/hello (λ: hello)

(λ: hello) RequestId: b440ae80-b0ca-4d0d-ab6f-ea3226cb9d1a  Duration: 55.80 ms  Billed Duration: 56 ms

Motivation and Context

  • I have a project where I compile a Go application into a binary.
  • The compiled binary is simple enough that it doesn't need Docker as an abstraction layer for execution.
  • This allows direct execution of the bootstrap script without launching it inside a Docker container.
  • This makes local execution of compiled binaries faster.
  • This lets people connect to debug ports for their compiled binaries (such as launching with delve).

How this works:

  • When Lambdas run, they end up starting an embedded server called the Runtime Server
  • Each invocation gets its own short-lived Runtime Server on a random port
  • It has a couple of basic endpoints:
    • https://docs.aws.amazon.com/lambda/latest/dg/runtimes-api.html#runtimes-api-next
    • https://docs.aws.amazon.com/lambda/latest/dg/runtimes-api.html#runtimes-api-response
  • When runtime: provided and useDocker is false or undefined, each invocation creates a RuntimeServer on a random port and sets the AWS_LAMBDA_RUNTIME_API environment variable so that the bootstrap script can interact with it.
  • Then, as soon as /response is called, the event is cleared from the RuntimeServer, so subsequent calls to /next block indefinitely.

Notes

  • Design choice: one RuntimeServer per invocation:
    • We could, possibly, create a single runtime server and share it between multiple invocations. I considered that a premature optimization, so I opted to create a brand-new RuntimeServer for each invocation, for a more Lambda-like experience.
    • A single RuntimeServer could be shared by setting the AWS_LAMBDA_RUNTIME_API environment variable to be: http://localhost:${some-non-random-port}/function-name instead of http://localhost:${some-random-port}
    • I'd love your feedback on whether you want this or not.
  • This doesn't support Lambda Layers yet, although I presume it could.
    • For now it will throw an error if useDocker is false or undefined and the function has layers.

How Has This Been Tested?

  • I copied ./tests/integration/docker/provided to ./tests/lambda-run-mode/provided
    • I removed useDocker: true and iterated until it worked

cnuss avatar May 27 '24 15:05 cnuss

Hi @dherault and @DorianMazur this PR might stir up some debate, so let me know what you think!

Long story short, I never liked the requirement to have Docker installed to run serverless applications built around a compiled binary. It also caused long-ish first runs of a Lambda function, as it downloads and runs the base container and layers.

I found myself wishing Docker wasn't involved in the execution of the bootstrap script, so I decided to make this PR.

cnuss avatar May 27 '24 15:05 cnuss

Hi @dherault and @DorianMazur, it's been about a week; have you been able to take a look at this?

cnuss avatar Jun 04 '24 14:06 cnuss

Thanks for the PR @cnuss, but I'd stick with docker in this case. Let's wait for @dherault's response.

DorianMazur avatar Jun 18 '24 10:06 DorianMazur

I'm trusting your guts @DorianMazur

dherault avatar Aug 17 '24 19:08 dherault

Closing. I'm developing a framework outside of serverless and no longer need this

cnuss avatar Aug 28 '24 11:08 cnuss