
unfair bench comparison

Open jimmywarting opened this issue 3 years ago • 6 comments

You are only testing a very short arguments list.

/** @param {...number} args */
function fn(...args) {
  // rest parameters are also quite fast (and, on the plus side, you can type them with JSDoc)
}
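For illustration, a rest parameter already yields a real Array with no conversion step at all; a minimal sketch (the function name is made up):

```javascript
// Rest parameters collect the call's arguments into a genuine Array,
// so no Array.prototype.slice.call(arguments) conversion is needed.
function collect(...args) {
  return args;
}

const result = collect(1, 2, 3);
console.log(Array.isArray(result)); // true
console.log(result);                // [ 1, 2, 3 ]
```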

This is the result I got when executing the benchmark:

var crypto = require('crypto')
var Benchmark = require('benchmark')
var sliced = require('sliced')

var s = new Benchmark.Suite()
var arr = Array.from(crypto.randomBytes(1000))

s.add('sliced', function () {
  sliced(arr)
}).add('arr.slice()', function () {
  arr.slice()
}).on('cycle', function (evt) {
  console.log(String(evt.target));
}).on('complete', function () {
  console.log('fastest is %s', this.filter('fastest').pluck('name'));
})
.run();

// sliced x 325,245 ops/sec ±1.06% (95 runs sampled)
// arr.slice() x 2,068,647 ops/sec ±1.56% (87 runs sampled)
// fastest is [ 'arr.slice()' ]

Sliced can sometimes hurt performance. I think you should test more slice variants before claiming it is faster than any other slice method:

Uint8Array.slice, buffer.slice, string.slice, array.slice, arguments, Array.from(arguments).slice, ...args slice

in all different sizes: 0, 10, 100, 1000, and 10000 elements.
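A self-contained timing sketch along these lines (no substitute for a proper benchmark.js suite, and the helper name is made up) could cover a few variants at each size:

```javascript
// Rough timing helper: runs fn repeatedly and reports elapsed milliseconds.
// This is only a sketch; a real comparison should use benchmark.js.
function time(label, fn, iterations = 1e4) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < iterations; i++) fn();
  const ms = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`${label}: ${ms.toFixed(2)} ms`);
}

for (const size of [0, 10, 100, 1000, 10000]) {
  const arr = Array.from({ length: size }, (_, i) => i);
  time(`arr.slice()     n=${size}`, () => arr.slice());
  time(`Array.from(arr) n=${size}`, () => Array.from(arr));
  time(`[...arr]        n=${size}`, () => [...arr]);
}
```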

jimmywarting avatar Aug 13 '21 10:08 jimmywarting

Hi jimmy,

I partially agree. slice is mostly about Arrays, so I would not expect Buffer or ArrayBuffer performance to be as relevant.

On the other hand, I tested it extensively and have to say that sliced, in its current implementation, is basically always slower than Array.prototype.slice.call.

I made some improvements which ensure that in some major cases you definitely get a performance benefit from using (my) sliced. But in simple cases, like simply cloning an array with slice(), native slice still beats sliced.

Are you interested in a collaboration?

Uzlopak avatar Jan 16 '22 22:01 Uzlopak

I partially agree. slice is mostly about Arrays, so I would not expect Buffer or ArrayBuffer performance to be as relevant.

Yeah, that's kind of another beast. You would not even get the same type of array back...

Are you interested in a collaboration?

what kind of collaboration?

jimmywarting avatar Jan 16 '22 23:01 jimmywarting

I can fork and push my changes. Maybe you would have some more insight into how to optimize it further. Not a big collaboration...

Uzlopak avatar Jan 17 '22 00:01 Uzlopak

Hmm, nah... I don't see the benefit of pulling in a dependency when a dependency-free solution is much simpler, shorter, and, IMO, doesn't need to be optimized. Importing x from y is a black box that every developer needs to learn; it's much easier to read and understand native code. Native slice and spread are already pretty fast, and if something in your code runs slowly and you start micro-optimizing here and there, that's probably a code smell somewhere outside of the array slicing anyway.

For example, if you really need speed, it would be much faster to just reuse the same array instead of cloning it at all.

I guess sliced was created for dealing with arguments, and if you are passing more than 10 arguments to a function then you are probably doing something wrong; it would be better to pass an array as a single argument instead. Then you wouldn't need a slice method on steroids just to convert fewer than 10 arguments into an array.
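In other words, instead of converting arguments at the callee, the caller can hand over an array directly; a sketch with made-up function names:

```javascript
// Before: a variadic signature forces an arguments-to-array conversion.
function sumVariadic() {
  const args = Array.prototype.slice.call(arguments);
  return args.reduce((a, b) => a + b, 0);
}

// After: accept the array as one parameter; no conversion needed at all.
function sumArray(values) {
  return values.reduce((a, b) => a + b, 0);
}

console.log(sumVariadic(1, 2, 3)); // 6
console.log(sumArray([1, 2, 3]));  // 6
```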

jimmywarting avatar Jan 17 '22 01:01 jimmywarting

The only reason I was looking into it was that it is a dependency of mongoose. And tbh, I was shocked that sliced, in its current implementation, is always slower than native slice. The benchmark used in this project is, as you wrote, just plain unfair, as it basically slices the arguments array with a single entry. You can improve the performance drastically if you do:

/**
 * An Array.prototype.slice.call(arguments) alternative
 *
 * @param {Object} args something with a length
 * @param {Number} slice start index
 * @param {Number} sliceEnd end index (exclusive)
 * @api public
 */

module.exports = function (args, slice, sliceEnd) {
  var len = args.length;

  if (0 === len) return [];

  var start = (slice < 0)
    ? Math.max(0, slice + len)
    : slice || 0;

  if (sliceEnd !== undefined) {
    len = sliceEnd < 0
      ? sliceEnd + len
      : sliceEnd;
  }

  if (len - start < 1) return [];

  var ret = new Array(len - start);

  if (start !== 0) {
    while (len-- > start) {
      ret[len - start] = args[len];
    }
  } else {
    while (len--) {
      ret[len] = args[len];
    }
  }

  return ret;
};

But still... the more entries it has to process, the slower it gets compared to the original slice.
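A quick usage sketch of that variant (condensed restatement here so the demo is self-contained; the two copy loops collapse into one since ret[len - start] equals ret[len] when start is 0):

```javascript
// Condensed restatement of the sliced variant above, for a runnable demo.
function sliced(args, slice, sliceEnd) {
  var len = args.length;
  if (len === 0) return [];
  var start = slice < 0 ? Math.max(0, slice + len) : slice || 0;
  if (sliceEnd !== undefined) {
    len = sliceEnd < 0 ? sliceEnd + len : sliceEnd;
  }
  if (len - start < 1) return [];
  var ret = new Array(len - start);
  while (len-- > start) ret[len - start] = args[len];
  return ret;
}

function demo() {
  return sliced(arguments, 1, -1); // drop the first and last argument
}

console.log(demo('a', 'b', 'c', 'd')); // [ 'b', 'c' ]
console.log(sliced([1, 2, 3], -2));    // [ 2, 3 ]
```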

Uzlopak avatar Jan 17 '22 04:01 Uzlopak

my fork is here

before:

Array.prototype.slice.call x 8,027,444 ops/sec ±2.71% (82 runs sampled)
[].slice.call x 7,282,643 ops/sec ±2.42% (85 runs sampled)
cached slice.call x 6,678,054 ops/sec ±0.58% (98 runs sampled)
sliced x 2,084,892 ops/sec ±0.45% (96 runs sampled)
fastest is [ 'Array.prototype.slice.call' ]
Array.prototype.slice.call(testArray, 1) x 6,666,796 ops/sec ±0.86% (96 runs sampled)
[].slice.call(testArray, 1) x 7,019,334 ops/sec ±0.90% (97 runs sampled)
cached slice.call(testArray, 1) x 7,064,610 ops/sec ±0.20% (100 runs sampled)
sliced(testArray, 1) x 2,136,858 ops/sec ±0.25% (98 runs sampled)
fastest is [ 'cached slice.call(testArray, 1)', '[].slice.call(testArray, 1)' ]
Array.prototype.slice.call(testArray, -1) x 34,943,928 ops/sec ±0.70% (96 runs sampled)
[].slice.call(testArray, -1) x 34,828,541 ops/sec ±0.81% (97 runs sampled)
cached slice.call(testArray, -1) x 36,215,172 ops/sec ±0.46% (99 runs sampled)
sliced(testArray, -1) x 38,468,329 ops/sec ±0.13% (98 runs sampled)
fastest is [ 'sliced(testArray, -1)' ]
Array.prototype.slice.call(testArray, -2, -10) x 43,903,829 ops/sec ±0.31% (95 runs sampled)
[].slice.call(testArray, -2, -10) x 43,797,049 ops/sec ±0.30% (99 runs sampled)
cached slice.call(testArray, -2, -10) x 43,939,006 ops/sec ±0.38% (98 runs sampled)
sliced(testArray, -2, -10) x 136,736,157 ops/sec ±0.35% (99 runs sampled)
fastest is [ 'sliced(testArray, -2, -10)' ]
Array.prototype.slice.call(testArray, -2, -1) x 35,698,130 ops/sec ±0.65% (95 runs sampled)
[].slice.call(testArray, -2, -1) x 34,639,853 ops/sec ±1.51% (91 runs sampled)
cached slice.call(testArray, -2, -1) x 35,771,881 ops/sec ±0.63% (98 runs sampled)
sliced(testArray, -2, -1) x 38,169,503 ops/sec ±0.23% (101 runs sampled)
fastest is [ 'sliced(testArray, -2, -1)' ]

after

Array.prototype.slice.call x 5,797,350 ops/sec ±1.31% (87 runs sampled)
[].slice.call x 6,411,526 ops/sec ±0.67% (95 runs sampled)
cached slice.call x 6,454,722 ops/sec ±0.58% (92 runs sampled)
sliced x 4,909,607 ops/sec ±1.05% (92 runs sampled)
fastest is [ 'cached slice.call', '[].slice.call' ]
Array.prototype.slice.call(testArray, 1) x 6,951,746 ops/sec ±0.52% (97 runs sampled)
[].slice.call(testArray, 1) x 6,899,510 ops/sec ±0.48% (95 runs sampled)
cached slice.call(testArray, 1) x 6,845,316 ops/sec ±0.41% (91 runs sampled)
sliced(testArray, 1) x 4,265,869 ops/sec ±1.04% (91 runs sampled)
fastest is [
  'Array.prototype.slice.call(testArray, 1)',
  '[].slice.call(testArray, 1)'
]
Array.prototype.slice.call(testArray, -1) x 29,336,222 ops/sec ±0.38% (97 runs sampled)
[].slice.call(testArray, -1) x 29,948,498 ops/sec ±0.26% (100 runs sampled)
cached slice.call(testArray, -1) x 29,938,947 ops/sec ±0.29% (100 runs sampled)
sliced(testArray, -1) x 107,192,408 ops/sec ±0.35% (95 runs sampled)
fastest is [ 'sliced(testArray, -1)' ]
Array.prototype.slice.call(testArray, -10, -1) x 27,545,494 ops/sec ±0.31% (96 runs sampled)
[].slice.call(testArray, -10, -1) x 27,949,700 ops/sec ±0.33% (98 runs sampled)
cached slice.call(testArray, -10, -1) x 27,479,736 ops/sec ±0.27% (95 runs sampled)
sliced(testArray, -10, -1) x 45,405,119 ops/sec ±0.34% (94 runs sampled)
fastest is [ 'sliced(testArray, -10, -1)' ]
Array.prototype.slice.call(testArray, -40, -1) x 17,661,963 ops/sec ±0.49% (99 runs sampled)
[].slice.call(testArray, -40, -1) x 17,706,806 ops/sec ±0.29% (100 runs sampled)
cached slice.call(testArray, -40, -1) x 17,612,999 ops/sec ±0.55% (95 runs sampled)
sliced(testArray, -40, -1) x 13,616,217 ops/sec ±0.37% (95 runs sampled)
fastest is [
  '[].slice.call(testArray, -40, -1)',
  'Array.prototype.slice.call(testArray, -40, -1)',
  'cached slice.call(testArray, -40, -1)'
]
Array.prototype.slice.call(testArray, -2, -1) x 30,651,106 ops/sec ±0.30% (99 runs sampled)
[].slice.call(testArray, -2, -1) x 31,136,356 ops/sec ±0.41% (96 runs sampled)
cached slice.call(testArray, -2, -1) x 40,148,022 ops/sec ±5.56% (99 runs sampled)
sliced(testArray, -2, -1) x 109,177,175 ops/sec ±0.28% (100 runs sampled)
fastest is [ 'sliced(testArray, -2, -1)' ]

And if you think that sliced for -2 to -1 is faster than the original slice, keep in mind that that slice contains only 1 element. That's why I added -40 to -1, which contains 39 entries; it is slower. So sliced is only faster for small slices, like 5 elements. If the slice is bigger, it slows down very quickly.

Uzlopak avatar Jan 17 '22 05:01 Uzlopak