
[Performance] For-of vs For non-cached vs For-cached

MadProbe opened this issue 3 years ago • 15 comments

After running a small benchmark on Windows 10 with the x64_debug config, I got these results:

13996ms (for-of), 90ms (for-non-cached), 63ms (for-cached) (averaged over 10 subsequent runs)
For-of is roughly 155x slower than for-non-cached
and roughly 222x slower than for-cached!

Bench code:

{
    const a = Array.from({ length: 1e7 }, _ => 0);
    let start = (console.log('Start'), Date.now());
    for (const _ of a);
    console.log(Date.now() - start + 'ms (for-of)');
    start = Date.now();
    for (let i = 0; i < a.length; i++) { const _ = a[i]; }
    console.log(Date.now() - start + 'ms (for-non-cached)');
    start = Date.now();
    const length = a.length;
    for (let i = 0; i < length; i++) { const _ = a[i]; }
    console.log(Date.now() - start + 'ms (for-cached)');
}

MadProbe avatar Apr 28 '21 20:04 MadProbe

I can propose some optimizations for the for-of loop:

  1. Array(Iterator)
     1.1. ArrayIterator's builtin functions remain untouched.
     1.2. Cache the length property if (the array is frozen || the array's length is not mutated) && its or Array.prototype's length property is unchanged && the array is not a proxy; otherwise just keep referencing the length property as expected.
     1.3. Do the same as in the for-non-cached or for-cached cases, skipping all the iterator machinery (a rough sketch of this follows below).
  2. String(Iterator)
     2.1. StringIterator's builtin functions remain untouched.
     2.2. Cache the length property if the string's object wrapper is not a proxy; otherwise just keep referencing the length property as expected.
     2.3. Same as 1.3.
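
A rough user-level sketch of the array fast path (hypothetical forOfFast helper; the checks in 1.2 such as "not a proxy" and "length not mutated" are engine-internal and are only approximated here by comparing against references saved at startup):

// References saved up front, standing in for the engine knowing the builtins are untouched.
const kDefaultArrayIterator = Array.prototype[Symbol.iterator];
const kArrayIteratorProto = Object.getPrototypeOf([][Symbol.iterator]());
const kArrayIteratorNext = kArrayIteratorProto.next;

function forOfFast(a, body) {
    // 1.1: the default iterator and ArrayIterator's next are untouched
    const canOpt = Array.isArray(a) &&
        a[Symbol.iterator] === kDefaultArrayIterator &&
        kArrayIteratorProto.next === kArrayIteratorNext;
    if (canOpt) {
        // 1.2/1.3: cache length and skip the iterator machinery entirely
        const length = a.length;
        for (let i = 0; i < length; i++) body(a[i]);
    } else {
        // anything non-standard falls back to the normal for-of protocol
        for (const v of a) body(v);
    }
}

// e.g. forOfFast([1, 2, 3], v => console.log(v));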

MadProbe avatar Apr 28 '21 21:04 MadProbe

There are a few things to think about here:

  1. Doing the benchmark this way, the loop control is almost certainly never jitted, so you're mostly looking at the interpreter's speed; for-of does get better in the JIT (though it doesn't catch up with the others).
  2. Length caching is sort of done in the JIT already.
  3. Roughly, what we're talking about here is introducing an operation at the start of a for...of loop that checks whether the iterator is the "standard" iterator and, if it is, does something different. This could work, particularly since per spec, at the start of a for...of you access the iterator once and then cache the next method, so it isn't accessed again and can't be overwritten during the loop. The downside is that we'd potentially have to emit the bytecode of the loop body twice.

Basically I think we'd have to do something like this:

  1. check iterator type, if it is "standard" jump to LABEL B
  2. do setup for normal for...of
  3. normal for...of loop control, when finished jump to LABEL C
  4. loop body - bytecode for everything inside the loop
  5. jump to step 3 - normal loop control
  6. LABEL B
  7. Do setup for optimised loop control
  8. optimised loop control
  9. loop body
  10. jump to step 7 - optimised loop control
  11. LABEL C

If the loop body had a lot of content, this could be a large increase in bytecode size - which may be non-optimal for some use cases.

Another key part of the performance difference is that a for...of has an embedded try...catch...finally statement (to trigger iterator.return() in certain circumstances) - obviously unneeded when the iterator is the default.
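
For illustration, here's a simplified sketch of roughly what a for...of loop has to do (not an exact spec expansion - IteratorClose handling is more involved - and body is just a hypothetical stand-in for the loop body):

function forOfExpanded(a, body) {
    const iterator = a[Symbol.iterator](); // the iterator is fetched once at loop start...
    const next = iterator.next;            // ...and its next method cached, per spec
    let completedNormally = false;
    try {
        let result;
        while (!(result = next.call(iterator)).done) {
            body(result.value);            // loop body
        }
        completedNormally = true;
    } finally {
        // the embedded try/finally exists to close the iterator on an abrupt exit;
        // the default array iterator has no return method, so this is pure overhead there
        if (!completedNormally && typeof iterator.return === 'function') {
            iterator.return();
        }
    }
}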

rhuanjl avatar Apr 29 '21 17:04 rhuanjl

I know that the previous results are from interpreted mode, but here are the results when the same code is run in dynapogo mode (= with full JIT?):

Start
45390ms (for-of)
12ms (for-cached)
14ms (for-non-cached)

It's pretty bizarre to see the for-of result when you compare it with the result from interpreted mode (a 3x increase in a mode that is meant to be more performant).

MadProbe avatar Apr 29 '21 17:04 MadProbe

Make the code a function and call it - the jit doesn't always work so well on global code; or more specifically, global code not inside a loop isn't jitted at all.

rhuanjl avatar Apr 29 '21 18:04 rhuanjl

Will an IIFE work, or do I need a named function for that?

MadProbe avatar Apr 29 '21 18:04 MadProbe

If you're using dynapogo flags you can just put everything inside one function, though if you're trying to do it with one run (and not using profile info) you probably want -mic:1 -off:simplejit -bgjit- as the flags AND call the function twice, like:

function test() {
  // test code
}
test();
test();

That should give you 1 interpreted run and 1 fully jitted run.

Explanation of flags:
-mic:1 - run each function in the interpreter a maximum of 1 time before trying to jit it
-off:simplejit - skip the intermediate/simple version of the jit
-bgjit- - run the jit on the main thread (so you pause execution whilst jitting) - otherwise the jit is run on a background thread

rhuanjl avatar Apr 29 '21 18:04 rhuanjl

I still get a 3-4x increase in dynapogo mode using the flags.
Code:

{
    const a = Array.from({ length: 1e6 }, _ => 0);
    function test(a) {
        let start = (console.log('Start'), Date.now());
        for (const _ of a);
        console.log(Date.now() - start + 'ms (for-of)');
        start = Date.now();
        const length = a.length;
        for (let i = 0; i < length; i++) { const _ = a[i]; }
        console.log(Date.now() - start + 'ms (for-cached)');
        start = Date.now();
        for (let i = 0; i < a.length; i++) { const _ = a[i]; }
        console.log(Date.now() - start + 'ms (for-non-cached)');
    }
    if (Array.isArray(a))
        test(a);
}

Dynapogo-mode PowerShell code with which I launch the test code:

rm.exe -rf .\profile.dpl
.\ChakraCore\Build\VcBuild\bin\x64_debug\ch.exe .\test.js -maxInterpretCount:1 -maxSimpleJitRunCount:1 -bgjit- -dynamicprofilecache:profile.dpl -collectGarbage > $null
.\ChakraCore\Build\VcBuild\bin\x64_debug\ch.exe .\test.js -WERExceptionSupport -ExtendedErrorStackForTestHost -BaselineMode -forceNative -off:simpleJit -bgJitDelay:0 -dynamicprofileinput:profile.dpl -collectGarbage

MadProbe avatar Apr 29 '21 21:04 MadProbe

There are 3 ways of building CC: Debug, Test and Release. Debug and Test both have the ability to use flags, Release does not. Debug contains lots of extra internal checking so runs significantly slower than the other two. Test and Release have similar performance profiles.

The upshot of that is that you probably need to use a Test build, not a Debug build, to get the most useful results here.

rhuanjl avatar Apr 29 '21 22:04 rhuanjl

Note: testing this on macOS with the new WScript monotonicNow method, and using flags to force the jit etc., I get these results:

RUN ONE - interpreted
testForOf time: 586.1193810105324
testForNonCached time: 10.010149002075195
testForCached time: 11.063928008079528

RUN TWO - fullJit
testForOf time: 153.81329196691513
testForNonCached time: 8.697234034538269
testForCached time: 8.02989399433136

rhuanjl avatar Apr 30 '21 15:04 rhuanjl

I still get a ~2.5x increase in dynapogo mode, tested on an x64_test build:

======================Interpreted======================
754.0545997619629ms (for-of)
48.02579975128174ms (for-cached)
78.80779981613159ms (for-non-cached)
=======================Dynapogo=======================
1784.9175000190735ms (for-of)
12.22700023651123ms (for-cached)
13.011600017547607ms (for-non-cached)

MadProbe avatar Apr 30 '21 15:04 MadProbe

Is it possible that the time you're recording includes time spent jitting a loop body?

pleath avatar Apr 30 '21 15:04 pleath

I see the difference - the way I was testing it, the Array iterator methods get jitted during the interpreter run and can then get inlined in the jitted run; whereas with the dynapogo flags (dynamic profile info and forceNative) your test function gets jitted first and the array iterator methods get jitted after, so they can't be inlined.

Putting a single call to a[Symbol.iterator]().next(); before you call test(); gets rid of much of your discrepancy, and using -off:jitloopbody (per @pleath's suggestion) makes the situation more stable still.
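
A sketch of that warm-up (assuming the a and test from the snippet above):

a[Symbol.iterator]().next(); // touch the default array iterator's next() once, so it gets
                             // profiled/jitted before test() itself is jitted and can be inlined
test(a);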

rhuanjl avatar Apr 30 '21 15:04 rhuanjl

For reference my test case (with results above) was this:

function testForOf(arr)
{
    const start = WScript.monotonicNow();
    for (_ of arr) {
        _;
    }
    return WScript.monotonicNow() - start;
}

function testForNonCached(arr)
{
    const start = WScript.monotonicNow();
    for (let i = 0; i < arr.length; ++i) {
        arr[i];
    }
    return WScript.monotonicNow() - start;
}

function testForCached(arr)
{
    const start = WScript.monotonicNow();
    let len = arr.length
    for (let i = 0; i < len; ++i) {
        arr[i];
    }
    return WScript.monotonicNow() - start;
}

const a = Array.from({ length: 1e7 }, _ => 0);
print("RUN ONE - interpreted");

print("testForOf time: " + testForOf(a));
print("testForNonCached time: " + testForNonCached(a));
print("testForCached time: " + testForCached(a));

print("RUN TWO - fullJit");

print("testForOf time: " + testForOf(a));
print("testForNonCached time: " + testForNonCached(a));
print("testForCached time: " + testForCached(a));

And flags -mic:1 -bgjit- -off:simplejit

rhuanjl avatar Apr 30 '21 15:04 rhuanjl

Thinking more on the proposed optimisation, I'm wondering if we could do something like this (pseudocode):

canOpt = Op_IsDefaultIterator()
if (canOpt)
{
  initFasterLoop
  fastLabel:
  //some kind of op to put the first value into the iterator destination
}
else
{
  initSlowLoop
  slowLabel:
  current logic
}

// loop body here

if (canOpt)
{
  goto fastLabel
}
else
{
  goto slowLabel
}

This does of course mean an extra condition to check on every iteration of the loop, BUT the jit should be able to hoist it - and when it hits, it would be a relatively massive optimisation - though dealing with the try/catch/finally that's needed on the default path but not the fast path would be awkward.

rhuanjl avatar Apr 30 '21 15:04 rhuanjl

Is it possible that the time you're recording includes time spent jitting a loop body?

After adding the -off:jitloopbody flag to the run that executes in interpreted mode, it really does show that the time was being spent jitting a loop body.

MadProbe avatar Apr 30 '21 15:04 MadProbe