JavaScript Performance: Long-Running Tasks

Asked by 庸人自扰 on 2020-12-01 16:28

I noticed a question on here the other day (Reducing Javascript CPU Usage) and I was intrigued.

Essentially the guy wanted to encrypt some files character by character.
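
His original code isn't quoted here, but from the answers below the pattern being benchmarked is one unit of work per setTimeout tick, via a test(i, ar, callback, start) function along these lines (the signature matches the snippets that follow; the body is my reconstruction):

    function test(i, ar, callback, start) {
        if (ar === undefined) {       // first call: create the work array
            ar = [];
            start = new Date();       // and note the start time
        }
        if (ar.length < i) {
            ar.push(ar.length);       // one unit of work per tick
            setTimeout(function() {   // yield to the browser, then continue
                test(i, ar, callback, start);
            }, 0);
        } else {
            callback(ar, start);      // all i units done
        }
    }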

4 Answers
  • 2020-12-01 16:32

    setTimeout does not have a minimum delay of 0ms. The minimum delay is anywhere in the range of 5ms-20ms, depending on the browser.

    My own personal testing shows that setTimeout doesn't place your callback back on the event queue immediately.

    Live Example

    There is an arbitrary minimum delay before the callback gets called again:

    var s = new Date(),
        count = 10000,
        cb = after(count, function() {
            console.log(new Date() - s);    // total elapsed ms once all jobs finish
        });

    doo(count, function() {
        test(10, undefined, cb);            // each job counts to 10 on its own array
    });
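
    (after and doo are not built-ins; minimal versions consistent with how they are called here — my guess at the helpers, not the originals — would be:)

    // after(n, fn): returns a function that invokes fn once it has
    // itself been called n times — a simple countdown barrier.
    function after(n, fn) {
        return function() {
            if (--n === 0) fn();
        };
    }

    // doo(n, fn): invokes fn n times, i.e. starts n parallel jobs.
    function doo(n, fn) {
        for (var i = 0; i < n; i++) fn();
    }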
    
    • Running 10000 of these in parallel counting to 10 takes 500ms.
    • Running 100 counting to 10 takes 60ms.
    • Running 1 counting to 10 takes 40ms.
    • Running 1 counting to 100 takes 400ms.

    Clearly each individual setTimeout has to wait at least ~4ms before it fires again, and that per-call delay is the bottleneck.

    If you schedule 100 or more of these in parallel, then it will just work.

    How do we optimise this?

    var s = new Date(),
        count = 100,
        cb = after(count, function() {
            console.log(new Date() - s);    
        }),
        array = [];
    
    doo(count, function() {
        test(10, array, cb);
    });
    

    Set up 100 jobs running in parallel on the same array. This avoids the main bottleneck, which is the setTimeout delay.

    The above completes in 2ms.

    var s = new Date(),
        count = 1000,
        cb = after(count, function() {
            console.log(new Date() - s);    
        }),
        array = [];
    
    doo(count, function() {
        test(1000, array, cb);
    });
    

    The above completes in 7ms.

    var s = new Date(),
        count = 1000,
        cb = after(1, function() {
            console.log(new Date() - s);    
        }),
        array = [];
    
    doo(count, function() {
        test(1000000, array, cb);
    });
    

    Running 1000 jobs in parallel is roughly optimal, but you will start hitting other bottlenecks: counting to 1 million still takes 4500ms.

  • 2020-12-01 16:33

    Brendan Eich has actually talked about this: what you're using is a recursive function, and JavaScript right now doesn't have proper tail calls ("tail call optimization"), which means that the interpreter/engine has to keep a stack frame for EVERY call, which gets heavy.

    In order to optimize it, I would try turning it into an immediately-invoked function expression (IIFE) called from the global scope.
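
    (A rough sketch of that suggestion — the loop driven by a named function expression invoked immediately from the global scope; the names here are illustrative, not from the original post:)

    (function run(i, ar, callback, start) {
        if (ar.length < i) {
            ar.push(ar.length);
            setTimeout(function() {
                run(i, ar, callback, start);
            }, 0);
        } else {
            callback(ar, start);
        }
    }(10000, [], function(ar, start) {
        console.log(new Date() - start);
    }, new Date()));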

  • 2020-12-01 16:34

    Just a hypothesis... could it be that the code is so slow because you are building a recursion stack with 5000 recursion instances? Your call is not truly recursive, since it happens through setTimeout, but the function you pass to it is a closure, so the engine has to store all of the closure contexts...

    The performance problem could be related to the cost of managing that memory, which could also explain why your last test seems to make things worse...

    I have not tried anything out with the interpreter, but it would be interesting to see whether the computation time is linear in the number of recursions or not... say 100, 500, 1000, 5000 recursions...
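
    (A quick way to check, assuming the test function from the question — run it for increasing counts and log each elapsed time; note the runs overlap unless you start them one at a time:)

    [100, 500, 1000, 5000].forEach(function(n) {
        test(n, undefined, function(ar, start) {
            console.log(n + ' recursions: ' + (new Date() - start) + 'ms');
        });
    });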

    The first thing I would try as a workaround is not using a closure:

    setTimeout(test, 0, i, ar, callback, start);
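
    (setTimeout forwards any arguments after the delay to the callback, so this schedules test(i, ar, callback, start) on the next tick without allocating a new closure per iteration. Caveat: old IE versions don't support the extra-arguments form.)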
    
  • 2020-12-01 16:40

    Your issue is a matter of overhead vs. unit of work (UoW). Your setTimeout overhead is very high, while your unit of work (ar.push) is very low. The solution is an old optimization technique known as Block Processing: rather than processing one UoW per call, you process a block of UoWs. How large the "block" is depends on how much time each UoW takes and the maximum amount of time you can spend in each setTimeout call/iteration (before the UI becomes unresponsive).

    function test(i, ar, callback, start) {
        if (ar === undefined) {       // first call: set up the work array
            ar = [];
            start = new Date();
        }
        if (ar.length < i) {
            // **** process a block of up to 50 units per tick **** //
            for (var x = 0; x < 50 && ar.length < i; x++) {
                ar.push(i - (i - ar.length));   // equivalent to ar.push(ar.length)
            }
            setTimeout(function() {   // yield, then continue with the next block
                test(i, ar, callback, start);
            }, 0);
        } else {
            callback(ar, start);
        }
    }
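
    (Usage is unchanged; for example, counting to 100,000 and logging the elapsed time, with a callback of the same shape as in the other answers:)

    test(100000, undefined, function(ar, start) {
        console.log('done in ' + (new Date() - start) + 'ms');
    });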
    

    You have to process the largest block you can without causing UI/performance issues for the user. The preceding runs ~50x faster (the size of the block).

    It's the same reason we use a buffer for reading a file rather than reading it one byte at a time.
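
    (A common refinement — my sketch, not from the original answer — is to size the block by time rather than by a fixed count: keep working until a small time budget is spent, then yield:)

    function testTimed(i, ar, callback, start) {
        if (ar === undefined) {
            ar = [];
            start = new Date();
        }
        if (ar.length < i) {
            var tick = Date.now();
            // work until ~10ms have elapsed, then yield to the UI
            while (ar.length < i && Date.now() - tick < 10) {
                ar.push(ar.length);
            }
            setTimeout(function() {
                testTimed(i, ar, callback, start);
            }, 0);
        } else {
            callback(ar, start);
        }
    }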
