JavaScript TypedArray performance

隐瞒了意图╮ 2020-12-08 02:44

Why are TypedArrays not faster than usual arrays? I want to use precalculated values for CLZ (count leading zeros), and I don't want them to be treated as ordinary objects.
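
For context, here is a minimal sketch of the kind of precalculated CLZ lookup the question seems to have in mind (the table size, names, and the 16-bit split are illustrative assumptions, not from the original post):

    // Hypothetical lookup table: CLZ for every 16-bit value.
    var CLZ16 = new Uint8Array(0x10000);
    CLZ16[0] = 16;
    for (var i = 1; i < 0x10000; ++i) {
      var n = 0;
      while ((i & (0x8000 >> n)) === 0) ++n;  // count leading zero bits of a 16-bit value
      CLZ16[i] = n;
    }

    // CLZ of a 32-bit value via two 16-bit lookups.
    function clz32(x) {
      x = x >>> 0;                            // force an unsigned 32-bit view
      var hi = x >>> 16;
      return hi ? CLZ16[hi] : 16 + CLZ16[x & 0xFFFF];
    }

(Modern engines also ship a built-in Math.clz32 that does this without a table.)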

3 Answers
  • 2020-12-08 03:28

    In your case the reason for the bad performance is that you read outside the array when using Uint32Array, because of the bug with the array length.

    But even if that weren't the real reason:

    Try using Int32Array instead of Uint32Array. I think that in V8 a variable cannot hold a uint32 value, but it can hold int32 / double / pointer. So when you assign a uint32 value to a variable it gets converted to the slower double representation.

    If you use the 32-bit version of V8, then variables can hold int31 / double / pointer values, so int32 gets converted into double too. But if you use a usual array and all values fit in int31, no conversion is needed, so the usual array can be faster.

    Also, using int16 can require a conversion to get an int32 (because of sign extension). uint16 does not require a conversion, because V8 can just zero-fill the bits on the left.

    PS. You may be interested to know that pointer and int31 (int32 on x64) values share the same pointer-sized slot in V8. This also means that an int32 takes 8 bytes on x64. It is also the reason why there is no int32 small-integer type on x86: if all 32 bits were used to store the integer, there would be no bit left over to tell integers and pointers apart.
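
    A small illustration of the value-range distinction this answer is drawing (the SMI/double behaviour described above is the answer's claim about V8 internals; this sketch only shows the values involved):

    // The same four bytes, viewed as unsigned vs. signed 32-bit integers.
    var u32 = new Uint32Array([0xFFFFFFFF]);
    var i32 = new Int32Array(u32.buffer);
    console.log(u32[0]); // 4294967295 -- larger than 2^31 - 1, cannot stay in a signed 32-bit slot
    console.log(i32[0]); // -1         -- fits in a signed 32-bit (SMI-sized) slot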

  • 2020-12-08 03:36
    var buffer = new ArrayBuffer(0x10000);
    var Uint32 = new Uint32Array(buffer);
    

    is not the same thing as:

    var Uint32 = new Uint32Array(0x10000);
    

    not because of the new ArrayBuffer (you always get an array buffer: see Uint32.buffer in both cases) but because of the length parameter: the ArrayBuffer length is given in bytes, while the Uint32Array length is given in elements of 4 bytes each.

    So, in the first case (and in your code), Uint32.length = 0x10000 / 4 = 0x4000, and your loops go out of bounds 3 times out of 4. But sadly you will never get errors, only poor performance.
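
    A quick way to see both the length difference and the silent out-of-bounds behaviour (a small check, not part of the original benchmark):

    var a = new Uint32Array(new ArrayBuffer(0x10000)); // ArrayBuffer length is in bytes
    var b = new Uint32Array(0x10000);                  // Uint32Array length is in elements
    console.log(a.length);   // 0x4000 (0x10000 bytes / 4 bytes per element)
    console.log(b.length);   // 0x10000
    console.log(a[0x8000]);  // undefined: out-of-bounds reads never throw
    a[0x8000] = 123;         // out-of-bounds writes are silently ignored
    console.log(a[0x8000]);  // still undefined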

    Using 'new ArrayBuffer', you have to declare Uint32 like this:

    var buffer = new ArrayBuffer(0x10000 * 4);
    var Uint32 = new Uint32Array(buffer);
    

    See jsperf with (0x10000) and jsperf with (0x10000 * 4).

  • 2020-12-08 03:49

    Modern engines will use true arrays behind the scenes even when you use Array, provided they think they can, falling back on property-map "arrays" only if you do something that makes them think a true array won't work.
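
    For example, something like the following typically pushes an engine off the fast, dense-array path (the exact heuristics are engine-specific; this is just an illustrative case):

    var arr = [];
    arr[0] = 1;
    arr[1000000] = 2;  // a huge hole: most engines switch this array to a
                       // sparse, dictionary-like representation internally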

    Also note that, as radsoc points out, var buffer = new ArrayBuffer(0x10000) followed by var Uint32 = new Uint32Array(buffer) produces a Uint32Array whose length is 0x4000 (0x10000 / 4), not 0x10000, because the value you give ArrayBuffer is in bytes and there are four bytes per Uint32Array entry. All of the code below uses new Uint32Array(0x10000) instead (and always did, even prior to this edit) to compare apples with apples.

    So let's start there, with new Uint32Array(0x10000): http://jsperf.com/array-access-speed-2/11 (sadly, JSPerf has lost this test and its results, and is now offline entirely)
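
    Since the jsperf page is gone, here is roughly what that first test looked like: the setup filled both arrays in order, and the timed cases summed each one. This is a reconstruction, not the exact original code:

    // Setup: simple, predictable, in-order fill of both arrays.
    var Uint32 = new Uint32Array(0x10000);
    var arr = [];
    for (var i = 0; i < 0x10000; ++i) {
      Uint32[i] = (Math.random() * 0x100000000) | 0;
      arr[i] = (Math.random() * 0x100000000) | 0;
    }
    var sum = 0;

    // Timed case 1: typed array.
    for (var j = 0; j < 0x10000; ++j) sum += Uint32[j];
    // Timed case 2: standard array.
    for (var j = 0; j < 0x10000; ++j) sum += arr[j];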

    [Graph: roughly equivalent performance for both array kinds]

    That suggests that because you're filling the array in a simple, predictable way, a modern engine continues to use a true array (with the performance benefits thereof) under the covers rather than shifting over to the property-map form. We see basically the same performance for both. The difference in speed could relate to the type conversion involved in taking the Uint32 value and assigning it to sum as a number (though I'd be surprised if that conversion isn't deferred...).

    Add some chaos, though:

    var Uint32 = new Uint32Array(0x10000);
    var arr = [];
    for (var i = 0x10000 - 1; i >= 0; --i) {
      Uint32[Math.random() * 0x10000 | 0] = (Math.random() * 0x100000000) | 0;
      arr[Math.random() * 0x10000 | 0] = (Math.random() * 0x100000000) | 0;
    }
    var sum = 0;
    

    ...so that the engine has to fall back on old-fashioned property map "arrays," and you see that typed arrays markedly outperform the old-fashioned kind: http://jsperf.com/array-access-speed-2/3 (sadly, JSPerf has lost this test and its results)

    [Bar graph: marked performance improvement for typed arrays]

    Clever, these JavaScript engine engineers...

    The specific thing you do with the non-array nature of the Array array matters, though; consider:

    var Uint32 = new Uint32Array(0x10000);
    var arr = [];
    arr.foo = "bar";                            // <== Non-element property
    for (var i = 0; i < 0x10000; ++i) {
      Uint32[i] = (Math.random() * 0x100000000) | 0;
      arr[i] = (Math.random() * 0x100000000) | 0;
    }
    var sum = 0;
    

    That's still filling the array predictably, but we add a non-element property (foo) to it. http://jsperf.com/array-access-speed-2/4 (sadly, JSPerf has lost this test and its results) Apparently, engines are quite clever, and keep that non-element property off to the side while continuing to use a true array for the element properties:

    [Bar graph: performance improvement for standard arrays when the Array array gets a non-element property]

    I'm at a bit of a loss to explain why standard arrays should get faster there compared to our first test above. Measurement error? Vagaries of Math.random? But we can still be pretty sure the element data in the Array is held as a true array.

    Whereas if we do the same thing but fill in reverse order:

    var Uint32 = new Uint32Array(0x10000);
    var arr = [];
    arr.foo = "bar";                            // <== Non-element property
    for (var i = 0x10000 - 1; i >= 0; --i) {    // <== Reverse order
      Uint32[i] = (Math.random() * 0x100000000) | 0;
      arr[i] = (Math.random() * 0x100000000) | 0;
    }
    var sum = 0;
    

    ...we get back to typed arrays winning out — except on IE11: http://jsperf.com/array-access-speed-2/9 (sadly, JSPerf has lost this test and its results)

    [Graph: typed arrays winning, except on IE11]
