functions
Stress test results locally on a Mac in single-server mode with synchronous function calls
Hey folks,
First of all, I love what you folks are doing here, and I think the approach of starting as a single server and providing a smooth transition to a load-balanced architecture is the way to go.
One thing I wanted to evaluate before diving in further was the performance of the single-server solution. I ran a stress test of the hello world example against a local single-server setup, and it started failing pretty quickly.
Here's the command I'm running:
echo "GET http://localhost:8080/r/myapp/hello" | vegeta attack -duration=20s | tee results.bin | vegeta report
Requests [total, rate] 1000, 50.05
Duration [total, attack, wait] 38.884186888s, 19.979999914s, 18.904186974s
Latencies [mean, 50, 95, 99, max] 13.353667945s, 5.123431ms, 30.007355986s, 30.008892365s, 30.011017934s
Bytes In [total, mean] 0, 0.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 0.00%
Status Codes [code:count] 0:1000
Error Set:
Get http://localhost:8080/r/myapp/hello: EOF
Get http://localhost:8080/r/myapp/hello: read tcp [::1]:49504->[::1]:8080: read: connection reset by peer
Get http://localhost:8080/r/myapp/hello: http: server closed idle connection
Get http://localhost:8080/r/myapp/hello: net/http: timeout awaiting response headers
It was working swimmingly until it reached a point where it halted, then spat out a bunch of warnings:
WARN[1088] docker temporary error, retrying error="context deadline exceeded"
WARN[1088] retrying on docker errors timed out, restart docker or rotate this instance? error="context deadline exceeded"
Followed by errors:
ERRO[1147] Failed to run task action="server.handleRunnerRequest)-fm" app=myapp call_id=4d0decd6-3777-5d36-913f-f74b21e8249c error="context deadline exceeded" image="mattmueller/hello:0.0.1" route="/hello"
I was running this with the synchronous option. I was hoping that requests would be held in the message queue until resources became available, but it seems that only happens with the asynchronous option?
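For reference, the sync/async choice lives on the route itself; an async variant of the hello route might look like this (a sketch mirroring the route JSON shown further down in the thread, not a verified payload):

```json
{
  "route": {
    "app_name": "myapp",
    "path": "/hello",
    "image": "mattmueller/hello:0.0.1",
    "type": "async"
  }
}
```

With `"type": "async"`, requests should be queued and picked up as capacity frees up, rather than holding an HTTP connection open.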
Any ideas? I'd love to see this become a more robust solution in the future. Keep up the good work!
@matthewmueller I ran into something similar.
It seems that executing a Docker container per HTTP/function request isn't a good strategy, since under load it tries to start thousands of Docker containers per second.
There should be a way to keep functions "warm" somehow, or to remove the need for wrapping every function call in a fresh Docker container.
I was hoping to build a REST backend with iron, but I'll be sticking with Lambda for HTTP concurrency.
@mazamats have you tried hot functions? This feature prevents launching N identical containers, one per request, for a single function. A hot function keeps its container up until there are no more requests to process.
See https://github.com/iron-io/functions/blob/master/docs/hot-functions.md
@mazamats @matthewmueller Hey, folks, were you able to try hot functions?
@denismakogon I tried it briefly, but couldn't figure out how to configure it. From the docs:
{
"route":{
"app_name": "myapp",
"path": "/hot",
"image": "USERNAME/hchttp",
"memory": 64,
"type": "sync",
"config": null,
"format": "http",
"max_concurrency": "1"
}
}
is that just a func.yaml in json format?
So, @matthewmueller: in order to turn a function into a hot function, you need to create it with two additional parameters:
- format (two options: http and json). With http, the raw HTTP request is passed into the function via STDIN; with json, the function gets only the data from the HTTP request (basically, the JSON you send when invoking the function).
- max_concurrency, which sets the number of simultaneous hot containers for the function.
So, the term "hot" means the container remains running while requests keep arriving, and is shut down after an idle timeout.
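To connect this back to the JSON above, the same two parameters in func.yaml form might look like the sketch below (field names assumed from the route payload and the hot-functions doc, not verified against the CLI):

```yaml
# Hypothetical func.yaml sketch; field names mirror the route JSON
# in this thread and are an assumption, not a verified schema.
name: USERNAME/hchttp
type: sync
memory: 64
format: http          # or "json" to receive only the request data
max_concurrency: 1    # simultaneous hot containers for this function
```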
Got it to work, results below:
P.S. I initially tried to add format: http in the config section of the Functions UI, since there wasn't a field to change the format. That caused some initial confusion; I eventually got it working by updating format with Postman.
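For anyone else updating an existing route this way: an update body along these lines should do it (a minimal sketch, not the exact payload used; send it to the route's endpoint in the routes API with your HTTP client of choice):

```json
{
  "route": {
    "format": "http",
    "max_concurrency": "1"
  }
}
```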
Normal Function
Running 10s test @ http://10.0.0.10:8080/r/coldapp/cold
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 1.70s 209.48ms 2.00s 75.00%
Req/Sec 4.14 3.25 10.00 64.29%
47 requests in 10.10s, 5.92KB read
Socket errors: connect 0, read 0, write 0, timeout 15
Requests/sec: 4.65
Transfer/sec: 600.36B
Hot Function
Running 10s test @ http://10.0.0.10:8080/r/myapp/hello
2 threads and 10 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 8.55ms 27.40ms 293.43ms 97.55%
Req/Sec 1.10k 164.30 1.40k 75.51%
21407 requests in 10.01s, 7.59MB read
Requests/sec: 2138.85
Transfer/sec: 777.00KB
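To put the two reports side by side, here is a quick back-of-the-envelope comparison of the Requests/sec figures (numbers copied from the wrk output above):

```python
# Throughput numbers copied from the two wrk reports above.
cold_rps = 4.65      # normal (cold) function
hot_rps = 2138.85    # hot function

# Ratio of hot to cold throughput.
print(f"hot is ~{hot_rps / cold_rps:.0f}x the cold throughput")
```

That gap is consistent with container startup dominating the cold path.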