Executing JS in a loop benchmark is very slow compared to GraalJS
Here is a poor man's "benchmark":
```kotlin
package org.example

import com.caoccao.javet.interop.V8Host
import com.caoccao.javet.interop.V8Runtime
import org.graalvm.polyglot.Context
import kotlin.system.measureTimeMillis

object App {
    @JvmStatic
    fun main(args: Array<String>) {
        V8Host.getNodeInstance().createV8Runtime<V8Runtime>().use { v8Runtime ->
            v8Runtime.getExecutor("const hello = 10;").executeVoid()
            val execute = v8Runtime.getExecutor("1 + hello")
            val time = measureTimeMillis {
                for (i in 0..1_000_000) {
                    execute.executeInteger()
                }
            }
            println(time)
        }
        Context.create().use { context ->
            context.eval("js", "const hello = 10;")
            // warmup graaljs a little bit
            measureTimeMillis {
                for (i in 0..1_000_000) {
                    context.eval("js", "1 + hello").asLong()
                }
            }
            val time = measureTimeMillis {
                for (i in 0..1_000_000) {
                    context.eval("js", "1 + hello").asLong()
                }
            }
            println(time)
        }
    }
}
```
4521 ms for Javet, 355 ms for GraalJS. If I remove the GraalJS warmup, GraalJS starts showing better results than Javet from approximately 40,000 iterations onward.
OpenJDK 21, macOS, not a GraalVM build.
Anyway, regardless of the reply - many thanks for this wonderful project.
It doesn't prove anything. The Node.js mode is for sure slower because it has a huge overhead. Also, you run Javet before GraalJS, so your CPU might be running on a low-energy E-core at first and only switch to a full-power P-core later.
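One way to factor out that Node.js-mode overhead is to repeat the same loop against Javet's plain V8 mode. A minimal sketch, reusing the setup from the benchmark above (the printed label is mine, not from the thread):

```kotlin
// Same micro-loop, but against Javet's plain V8 mode instead of the Node.js mode,
// so Node.js-specific overhead is taken out of the comparison.
V8Host.getV8Instance().createV8Runtime<V8Runtime>().use { v8Runtime ->
    v8Runtime.getExecutor("const hello = 10;").executeVoid()
    val execute = v8Runtime.getExecutor("1 + hello")
    val time = measureTimeMillis {
        for (i in 0..1_000_000) {
            execute.executeInteger()
        }
    }
    println("Javet (V8 mode): $time ms")
}
```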
You mean the overhead of calling into JNI and so on? I'm pretty sure that crossing the boundary is expensive, but I wonder why GraalJS is faster at this... Anyway, I will try to pin the JVM process to a P-core to make sure that doesn't affect the 'benchmark'.
The main purpose of my research is estimating the boundary-crossing time. I want to use some JS runtime to run a React UI inside a game engine (Minecraft), and I wonder whether the UI computations will be incomparably shorter than the actual calls back and forth between the JS runtime and the Java runtime. I expect there will be a lot of calls, especially from JS to Java, in order to render UI primitives.
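Since the use case is JS-to-Java calls rather than repeated script execution, it might be more representative to time a bound Java callback invoked from JS. Below is a rough sketch using Javet's annotation-based binding; `UiBridge`, `drawRect`, and the iteration count are hypothetical placeholders, not anything from this thread:

```kotlin
import com.caoccao.javet.annotations.V8Function
import com.caoccao.javet.interop.V8Host
import com.caoccao.javet.interop.V8Runtime
import kotlin.system.measureTimeMillis

// Hypothetical receiver standing in for the UI-primitive API the JS side would call.
class UiBridge {
    @V8Function
    fun drawRect(x: Int, y: Int, width: Int, height: Int): Int {
        // Stand-in for real rendering work; returns something so the call isn't a no-op.
        return x + y + width + height
    }
}

fun main() {
    V8Host.getV8Instance().createV8Runtime<V8Runtime>().use { v8Runtime ->
        v8Runtime.createV8ValueObject().use { bridge ->
            // Expose the Kotlin object to JS under a global name and bind its @V8Function methods.
            v8Runtime.globalObject.set("bridge", bridge)
            bridge.bind(UiBridge())
            val executor = v8Runtime.getExecutor("bridge.drawRect(1, 2, 3, 4)")
            val time = measureTimeMillis {
                for (i in 0..100_000) {
                    executor.executeInteger()
                }
            }
            println("JS -> Java calls: $time ms")
        }
    }
}
```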
People usually run JS benchmark scripts inside the engines to measure their performance. Your test doesn't prove the engines' capabilities; it's just JNI vs. something else.
Exactly, that is what I'm trying to measure: JNI vs. whatever happens under the hood, because that is what matters for my use case.
Fine, please ensure your test is fair enough.
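For reference, a sketch of what a "fairer" split might look like: the loop lives inside the JS script, so each engine crosses the Java/JS boundary only once and engine throughput is measured separately from the boundary-crossing cost (the script contents and labels are my own assumptions, not from this thread):

```kotlin
import com.caoccao.javet.interop.V8Host
import com.caoccao.javet.interop.V8Runtime
import org.graalvm.polyglot.Context
import kotlin.system.measureTimeMillis

fun main() {
    // The loop runs inside JS, so only one boundary crossing per engine.
    val script = """
        let sum = 0;
        for (let i = 0; i < 1000000; i++) {
            sum += 1 + 10;
        }
        sum;
    """.trimIndent()

    V8Host.getNodeInstance().createV8Runtime<V8Runtime>().use { v8Runtime ->
        val time = measureTimeMillis {
            println(v8Runtime.getExecutor(script).executeInteger())
        }
        println("Javet (Node.js mode), in-engine loop: $time ms")
    }

    Context.create().use { context ->
        val time = measureTimeMillis {
            println(context.eval("js", script).asLong())
        }
        println("GraalJS, in-engine loop: $time ms")
    }
}
```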