
using a new child process upon each request

Open thkang2 opened this issue 8 years ago • 4 comments

Won't it be an extremely large overhead to boot up a child Haskell process and communicate with it via stdin/stdout? Even if the Haskell code is going to be as simple as putStrLn "Hello World!"

I understand this isn't for production; I'm just asking whether you explored any other options, as I'm interested in running Haskell code on AWS Lambda myself.

Thanks!
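For reference, the pattern being asked about — one child process per invocation, spoken to over stdin/stdout — might look roughly like this on the Haskell side (a hypothetical sketch, not the repository's actual code; `respond` stands in for real handler logic):

```haskell
-- Hypothetical one-shot handler: the wrapper would spawn this binary per
-- request, pipe the event payload to stdin, and read the result from stdout.
respond :: String -> String
respond event = "{\"msg\":\"Hello World!\",\"bytes\":" ++ show (length event) ++ "}"

main :: IO ()
main = do
  event <- getContents    -- entire event payload arrives on stdin
  putStr (respond event)  -- result goes back over stdout, then the process exits
```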

thkang2 avatar Aug 02 '16 10:08 thkang2

Disclaimer: I've only done toy hello world examples so far with Haskell on lambda.

Ideally the executable would be running in the background, frozen from an earlier lambda invocation. (I don't think the current code allows for this, but it should be possible to make it do so.)

I'm not sure what the other options would be, aside from communicating with something other than stdin / stdout. (HTTP, TCP?)

With that said, I've heard Haskell has a pretty low overhead for starting up (see link below). Perhaps even with starting a new executable each time it'll be "fast enough", at least for a proof of concept that can later be optimized with a little more work?

http://roscidus.com/blog/blog/2013/06/09/choosing-a-python-replacement-for-0install/#speed-and-size
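The "keep it running in the background" idea could be sketched like this: the executable starts once, then loops, handling one line-delimited request per invocation (a hypothetical sketch; `handleReq` stands in for real handler logic):

```haskell
import System.IO (BufferMode (LineBuffering), hSetBuffering, stdout)

-- One request per line in, one response per line out.
handleReq :: String -> String
handleReq req = "echo:" ++ req

main :: IO ()
main = do
  hSetBuffering stdout LineBuffering  -- flush each reply immediately
  interact (unlines . map handleReq . lines)
```

The front end would then write a line and read a line per invocation, instead of paying the process-startup cost each time.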


bwbaugh avatar Aug 02 '16 13:08 bwbaugh

Hi Wesley,

What do you mean by "frozen"? A forked process running in the background, with which the front end communicates over TCP? That assumes the container is not immediately garbage-collected and can be reused across invocations. Am I understanding your suggestion correctly?

I think the startup overhead of a Haskell process is actually pretty low, it being native code. There is nothing like the boot time you observe on a JVM.


abailly avatar Aug 02 '16 14:08 abailly

Yes, the Lambda docs indicate that subsequent invocations MAY reuse an existing process (not guaranteed). I don't think you'd need TCP to take advantage of this.

Are you saying JVM boot time is slow? How does the overhead of starting a Haskell executable differ from starting a Python interpreter or a Node process, etc.? For the runtimes Lambda natively supports, you may not notice the startup overhead if you invoke the function frequently, thanks to container reuse and the freeze/thaw process cycle. However, if you invoke a Lambda function infrequently, then no matter the runtime (Haskell included) the first invocation will be relatively slow.

I'd suggest trying it out and measuring the timings/overhead yourself (and putting the numbers in the README, since this will likely be a FAQ). I do think the best option is to start the Haskell executable once when the Lambda container starts and then communicate with it on subsequent invocations, but it has to be done carefully or you end up with issues like #2.
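One crude way to get a lower bound on the spawn overhead is to time how long it takes to fork and reap a trivial child process (a rough sketch using the `process` and `time` packages that ship with GHC; `/bin/true` is assumed to exist, as on Amazon Linux, and stands in for the Haskell child):

```haskell
import Data.Time.Clock (diffUTCTime, getCurrentTime)
import System.Process (callProcess)

-- Render a duration given in seconds as milliseconds.
secondsToMs :: Double -> String
secondsToMs s = show (s * 1000) ++ " ms"

main :: IO ()
main = do
  t0 <- getCurrentTime
  callProcess "/bin/true" []  -- stand-in for spawning the Haskell child
  t1 <- getCurrentTime
  putStrLn ("spawn + exit took " ++ secondsToMs (realToFrac (diffUTCTime t1 t0)))
```

Running the real handler binary in place of `/bin/true` would add the RTS startup and any initialization the program does.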


bwbaugh avatar Aug 02 '16 15:08 bwbaugh

The screenshot already says it, really: "request duration 118ms, billed 200ms". That seems to be about 100x what you'd shoot for in a server-side hello-world responder. Maybe that's just the first "warm-up" request; have you compared after hitting the program with multiple subsequent requests?
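The 118ms/200ms gap is just Lambda's billing model at the time: duration was rounded up to the next 100 ms increment, which is what turns a measured 118 ms into a billed 200 ms:

```haskell
-- Round a measured duration up to Lambda's (then) 100 ms billing increment.
billedMs :: Int -> Int
billedMs measured = ((measured + 99) `div` 100) * 100

main :: IO ()
main = print (map billedMs [118, 1, 100, 201])
```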

metaleap avatar Oct 01 '17 08:10 metaleap