Example should have performance numbers and comparisons
I'm doing some benchmarking, and while this might be apples to oranges:

- Invoking QuickJS as a Wasmtime precompiled module with `--eval "console.log('hello world')"` 1000 times takes approx. 7-8s.
- Invoking the hello-world.js embedding in the example 1000 times takes approx. 80s, i.e. roughly 10x as long.
```
$ time for x in `seq 1 1000`; do wasmtime --allow-precompiled quickcheck.cwasm --eval "console.log('hello world')"; done

real    0m7.849s
user    0m3.077s
sys     0m6.752s

$ time for x in `seq 1 1000`; do ./target/release/wasmtime-test > /dev/null; done

real    1m21.672s
user    0m45.470s
sys     0m38.263s
```
Using https://gitlab.com/api/v4/projects/47807501/packages/generic/js_interpreters/0.0.1/cweb_quickcheck.wasm for the example.
It would be good to understand better what performance we can expect and how to benchmark against other options.
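As a starting point, a per-invocation timing loop can be sketched in plain bash without pulling in a dedicated benchmarking tool. `CMD` and `N` below are placeholders; swap in the actual `wasmtime` invocations from above (the script uses GNU `date`'s `%N` nanosecond format, so it assumes Linux):

```shell
#!/usr/bin/env bash
# Sketch: average wall-clock time per invocation of a command.
# CMD is a stand-in; replace it with e.g.
#   wasmtime --allow-precompiled quickcheck.cwasm --eval "console.log('hello world')"
CMD=${CMD:-"true"}
N=${N:-100}

start=$(date +%s%N)   # nanoseconds since epoch (GNU date)
for _ in $(seq 1 "$N"); do
  $CMD > /dev/null
done
end=$(date +%s%N)

total_ms=$(( (end - start) / 1000000 ))
echo "total: ${total_ms} ms, mean: $(( total_ms / N )) ms/invocation"
```

Note this measures whole-process startup plus execution, which is the dominant cost for a hello-world workload; a tool like hyperfine would additionally give warmup runs and statistical spread.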
Make sure to also check with the `enableAot` option (`--enable-aot` on the CLI) for ComponentizeJS here, which uses https://github.com/bytecodealliance/weval.
Benchmarking for StarlingMonkey in general is tracked in https://github.com/bytecodealliance/StarlingMonkey/issues/102.
Out of curiosity, I tried running a similar script with `enableAOT: true` and `enableAOT: false`, and it seems that `enableAOT: true` actually makes it slower 🤔 (sorry if I'm misunderstanding something).
```sh
# bench.sh
for x in $(seq 1 1000)
do
  ./target/release/wasmtime-test > /dev/null
done
```

```
# on examples/hello-world/host

# enableAOT: true
$ /usr/bin/time ./bench.sh
      818.09 real       760.74 user        43.66 sys

# enableAOT: false
$ /usr/bin/time ./bench.sh
      315.31 real       293.17 user        27.95 sys
```
Whether the AOT flag improves performance depends on the exact workload. It's also worth testing with precompiled binaries, since AOT builds are larger and therefore take longer to compile in Wasmtime.
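The precompilation step mentioned above uses the Wasmtime CLI's `compile` subcommand. A minimal dry-run sketch (the module name is a placeholder, and the commands are echoed rather than executed so the sketch runs anywhere; drop the `echo`s to run for real, assuming `wasmtime` is on `PATH`):

```shell
#!/usr/bin/env bash
# Dry-run sketch of precompiling a module before benchmarking it.
MODULE="hello-world.wasm"        # placeholder module name
CWASM="${MODULE%.wasm}.cwasm"

# 1. Compile once, ahead of time:
echo wasmtime compile "$MODULE" -o "$CWASM"

# 2. Benchmark only the precompiled artifact:
echo wasmtime --allow-precompiled "$CWASM"
```

This way the benchmark loop measures instantiation and execution only, with the Cranelift compile cost paid once up front.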
That makes sense; a hello-world app would be too small to benefit from weval :+1: