Why is lazy evaluation useful?

无人共我 2020-11-29 17:04

I have long been wondering why lazy evaluation is useful. I have yet to have anyone explain it to me in a way that makes sense; mostly it ends up boiling down to "trust me".

22 Answers
  • 2020-11-29 17:54

    Lazy evaluation is most useful with data structures. You can define an array or vector inductively, specifying only certain points in the structure and expressing all the other elements in terms of the whole array. This lets you generate data structures very concisely and with high run-time performance.

    To see this in action, you can have a look at my neural network library called instinct. It makes heavy use of lazy evaluation for elegance and high performance. For example, I get rid of the traditionally imperative activation calculation entirely; a simple lazy expression does everything for me.

    This is used, for example, in the activation function and also in the backpropagation learning algorithm (I can only post two links, so you'll need to look up the learnPat function in the AI.Instinct.Train.Delta module yourself). Traditionally both require much more complicated iterative algorithms.
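
    To give a flavour of the technique outside that library, here is a minimal Haskell sketch (the names fibs and fib are illustrative, not taken from instinct): the array is defined in terms of itself, and laziness fills in each cell at most once, on demand.

    import Data.Array

    -- A memoized Fibonacci table: each cell refers back to the array
    -- itself, and lazy evaluation computes cells only as they are demanded.
    fibs :: Int -> Array Int Integer
    fibs n = table
      where
        table = listArray (0, n) (map fib [0 .. n])
        fib 0 = 0
        fib 1 = 1
        fib i = table ! (i - 1) + table ! (i - 2)

    main :: IO ()
    main = print (fibs 90 ! 90)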

  • 2020-11-29 17:57

    This snippet shows the difference between lazy and non-lazy evaluation. Of course this fibonacci function could itself be optimized to use lazy evaluation instead of plain recursion, but that would spoil the example.

    Suppose we MAY need the first 20 numbers for something. Without lazy evaluation, all 20 numbers have to be generated up front; with lazy evaluation they are generated only as needed, so you pay the calculation price only when it is actually required.

    Sample output

    Not lazy generation: 0.023373
    Lazy generation: 0.000009
    Not lazy output: 0.000921
    Lazy output: 0.024205
    
    import time

    def now():
        return time.time()

    # Naive recursive Fibonacci (deliberately not lazy and not memoized).
    def fibonacci(n):
        if n < 2:
            return n
        return fibonacci(n - 1) + fibonacci(n - 2)

    before1 = now()
    notlazy = [fibonacci(x) for x in range(20)]  # list: all 20 values computed now
    after1 = now()

    before2 = now()
    lazy = (fibonacci(x) for x in range(20))     # generator: nothing computed yet
    after2 = now()

    before3 = now()
    for i in notlazy:
        print(i)
    after3 = now()

    before4 = now()
    for i in lazy:                               # values are computed here, on demand
        print(i)
    after4 = now()

    print("Not lazy generation: %f" % (after1 - before1))
    print("Lazy generation: %f" % (after2 - before2))
    print("Not lazy output: %f" % (after3 - before3))
    print("Lazy output: %f" % (after4 - before4))
    
  • 2020-11-29 17:57

    Lazy evaluation is a poor man's equational reasoning (which, ideally, would let us deduce properties of code from the properties of the types and operations involved).

    An example where it works quite well: sum . take 10 $ [1..10000000000]. We don't mind this being reduced to a sum of 10 numbers rather than to one direct and simple numeric calculation. Without lazy evaluation this would create a gigantic list in memory just to use its first 10 elements; it would certainly be very slow and might cause an out-of-memory error.

    An example where it's not as great as we'd like: sum . take 1000000 . drop 500 $ cycle [1..20]. This will actually sum the 1,000,000 numbers, even if in a loop rather than over a list (i.e. after the deforestation optimization); ideally it should be reduced to just one direct numeric calculation with a few conditionals and formulas, which would be a lot better than summing up 1,000,000 numbers.


    Another thing is that it makes it possible to write code in tail recursion modulo cons style, and it just works.

    cf. related answer.
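
    For instance, here is a minimal runnable sketch of both points (myMap is an illustrative name, not a library function):

    -- Runs in constant space: laziness ensures that only the first 10
    -- elements of the huge range are ever constructed.
    main :: IO ()
    main = print (sum . take 10 $ [1 .. 10000000000 :: Integer])

    -- "Tail recursion modulo cons" just works under laziness: each cons
    -- cell is produced before the recursive call is forced, so this map
    -- runs in constant stack space when its result is consumed lazily.
    myMap :: (a -> b) -> [a] -> [b]
    myMap _ []       = []
    myMap f (x : xs) = f x : myMap f xs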

  • 2020-11-29 17:58

    If by "lazy evaluation" you mean like in combound booleans, like in

       if (ConditionA && ConditionB) ... 
    

    then the answer is simply that the fewer CPU cycles the program consumes, the faster it will run... and if a chunk of processing instructions will have no impact on the outcome of the program, then it is unnecessary (and therefore a waste of time) to perform them anyway...

    If, on the other hand, you mean what I have known as "lazy initializers", as in:

    class Employee
    {
        private int supervisorId;
        private Employee supervisor;

        public Employee(int employeeId)
        {
            // code to call the database, fetch the employee record, and
            // populate all private data fields EXCEPT supervisor
        }

        public Employee Supervisor
        {
            get
            {
                // fetch and cache the supervisor only on first access
                return supervisor ?? (supervisor = new Employee(supervisorId));
            }
        }
    }
    

    Well, this technique lets client code avoid the database call for the supervisor record unless it actually needs the supervisor's data. That makes instantiating an Employee faster, and yet when you do need the supervisor, the first access to the Supervisor property triggers the database call and the data is fetched and made available...
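
    Incidentally, in a lazy language this memoize-on-first-use behaviour is what you get by default: every binding is a thunk evaluated at most once. A tiny Haskell sketch of the same idea (the trace call is only there to show when the "fetch" happens):

    import Debug.Trace (trace)

    main :: IO ()
    main = do
      -- 'supervisor' is a thunk; the trace message appears only when it
      -- is first forced, and the result is cached for later uses.
      let supervisor = trace "fetching supervisor..." (42 :: Int)
      putStrLn "employee constructed"
      print supervisor  -- forces the thunk: the "fetch" happens here
      print supervisor  -- already evaluated: no second fetch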
