Why is lazy evaluation useful?

无人共我 2020-11-29 17:04

I have long been wondering why lazy evaluation is useful. I have yet to have anyone explain it to me in a way that makes sense; mostly it ends up boiling down to "trust me".

22 answers
  • 2020-11-29 17:42

    I find lazy evaluation useful for a number of things.

    First, all existing lazy languages are pure, because it is very hard to reason about side effects in a lazy language.

    Pure languages let you reason about function definitions using equational reasoning.

    foo x = x + 3
    

    Unfortunately, in a non-lazy setting more expressions fail to terminate than in a lazy one, so this kind of reasoning is less useful in languages like ML. But in a lazy language you can safely reason about equality.
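
    For example, foo above can be unfolded by pure substitution (a small worked reduction, just for illustration):

    foo (foo 0)
        = foo 0 + 3       -- unfold the outer foo
        = (0 + 3) + 3     -- unfold the inner foo
        = 6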

    Secondly, a lot of the machinery like ML's 'value restriction' isn't needed in lazy languages like Haskell. This leads to a great decluttering of syntax. ML-like languages need to use keywords like val or fun. In Haskell these things collapse down to one notion.
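
    In Haskell, for instance, a value binding and a function definition share the same equation syntax (a trivial illustration):

    three :: Int
    three = 3

    addThree :: Int -> Int
    addThree x = x + 3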

    Third, laziness lets you write very functional code that can be understood in pieces. In Haskell it is common to write a function body like:

    foo x y = if condition1
              then some (complicated set of combinators) (involving bigscaryexpression)
              else if condition2
              then bigscaryexpression
              else Nothing
      where some x y = ...
            bigscaryexpression = ...
            condition1 = ...
            condition2 = ...
    

    This lets you work 'top down' through the understanding of the body of a function. ML-like languages force you to use a let that is evaluated strictly. Consequently, you don't dare 'lift' a let clause out to the main body of the function, because if it is expensive (or has side effects) you don't want it to always be evaluated. Haskell can 'push off' the details to the where clause explicitly because it knows that the contents of that clause will only be evaluated as needed.

    In practice, we tend to use guards and collapse that further to:

    foo x y 
      | condition1 = some (complicated set of combinators) (involving bigscaryexpression)
      | condition2 = bigscaryexpression
      | otherwise  = Nothing
      where some x y = ...
            bigscaryexpression = ...
            condition1 = ...
            condition2 = ...
    

    Fourth, laziness sometimes offers much more elegant expression of certain algorithms. A lazy 'quick sort' in Haskell is a one-liner and has the benefit that if you only look at the first few items, you only pay costs proportional to the cost of selecting just those items. Nothing prevents you from doing this strictly, but you'd likely have to recode the algorithm each time to achieve the same asymptotic performance.
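
    For example, the classic list-based lazy 'quicksort' (the well-known teaching version, not a true in-place quicksort):

    qsort :: Ord a => [a] -> [a]
    qsort []     = []
    qsort (p:xs) = qsort [x | x <- xs, x < p] ++ [p] ++ qsort [x | x <- xs, x >= p]

    Because the result is produced lazily, take k (qsort xs) does only the work needed to deliver the first k elements.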

    Fifth, laziness allows you to define new control structures in the language. You can't write a new 'if .. then .. else ..'-style construct in a strict language. If you try to define a function like:

    if' True x y = x
    if' False x y = y
    

    in a strict language then both branches would be evaluated regardless of the condition value. It gets worse when you consider loops. All strict solutions require the language to provide you with some sort of quotation or explicit lambda construction.
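
    In Haskell, by contrast, the untaken branch is simply never forced, so even this works (a quick check you can run yourself):

    if' :: Bool -> a -> a -> a
    if' True  x _ = x
    if' False _ y = y

    main :: IO ()
    main = print (if' True 1 (error "never evaluated"))  -- prints 1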

    Finally, in that same vein, some of the best mechanisms for dealing with side-effects in the type system, such as monads, really can only be expressed effectively in a lazy setting. This can be witnessed by comparing the complexity of F#'s Workflows to Haskell Monads. (You can define a monad in a strict language, but unfortunately you'll often fail a monad law or two due to lack of laziness and Workflows by comparison pick up a ton of strict baggage.)

  • 2020-11-29 17:43

    There's a difference between normal-order evaluation and lazy evaluation (as in Haskell).

    square x = x * x
    

    Evaluating the following expression...

    square (square (square 2))
    

    ... with eager evaluation:

    > square (square (2 * 2))
    > square (square 4)
    > square (4 * 4)
    > square 16
    > 16 * 16
    > 256
    

    ... with normal order evaluation:

    > (square (square 2)) * (square (square 2))
    > ((square 2) * (square 2)) * (square (square 2))
    > ((2 * 2) * (square 2)) * (square (square 2))
    > (4 * (square 2)) * (square (square 2))
    > (4 * (2 * 2)) * (square (square 2))
    > (4 * 4) * (square (square 2))
    > 16 * (square (square 2))
    > ...
    > 256
    

    ... with lazy evaluation:

    > (square (square 2)) * (square (square 2))
    > ((square 2) * (square 2)) * ((square 2) * (square 2))
    > ((2 * 2) * (2 * 2)) * ((2 * 2) * (2 * 2))
    > (4 * 4) * (4 * 4)
    > 16 * 16
    > 256
    

    That's because lazy evaluation looks at the syntax tree and does tree-transformations...

    square (square (square 2))
    
               ||
               \/
    
               *
              / \
              \ /
        square (square 2)
    
               ||
               \/
    
               *
              / \
              \ /
               *
              / \
              \ /
            square 2
    
               ||
               \/
    
               *
              / \
              \ /
               *
              / \
              \ /
               *
              / \
              \ /
               2
    

    ... whereas normal order evaluation only does textual expansions.

    That's why lazy evaluation is more powerful (evaluation terminates more often than under other strategies) while its cost stays comparable to eager evaluation (at least asymptotically).
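
    You can observe the sharing directly with trace from Debug.Trace (a small experiment you can run with GHC):

    import Debug.Trace (trace)

    square :: Int -> Int
    square x = x * x

    -- x is used twice inside square, but the shared thunk is forced
    -- only once, so the message prints a single time. Under pure
    -- normal-order (textual) expansion it would print twice.
    main :: IO ()
    main = print (square (trace "forcing 2 * 2" (2 * 2)))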

  • 2020-11-29 17:43

    Consider this:

    if (conditionOne && conditionTwo) {
      doSomething();
    }
    

    The method doSomething() will be executed only if conditionOne and conditionTwo are both true. If conditionOne is false, why compute conditionTwo at all? Evaluating conditionTwo would be a waste of time, especially if it is the result of some expensive method call.

    That's one example of what lazy evaluation buys you.
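
    In Haskell this short-circuiting is not a special feature of &&: it is an ordinary function, and laziness in its second argument does the work. This is essentially the Prelude definition:

    (&&) :: Bool -> Bool -> Bool
    True  && x = x
    False && _ = False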

  • 2020-11-29 17:46

    If you believe Simon Peyton Jones, lazy evaluation is not important per se but only as a 'hair shirt' that forced the designers to keep the language pure. I find myself sympathetic to this point of view.

    Richard Bird, John Hughes, and to a lesser extent Ralf Hinze are able to do amazing things with lazy evaluation. Reading their work will help you appreciate it. Two good starting points are Bird's magnificent Sudoku solver and Hughes's paper Why Functional Programming Matters.

  • 2020-11-29 17:47

    Consider a tic-tac-toe program. This has four functions:

    • A move-generation function that takes a current board and generates a list of new boards, each with one move applied.
    • A move-tree function that applies the move-generation function repeatedly to derive all the possible board positions that could follow from the current one.
    • A minimax function that walks the tree (or possibly only part of it) to find the best next move.
    • A board-evaluation function that determines whether one of the players has won.

    This creates a nice clear separation of concerns. In particular, the move-generation and board-evaluation functions are the only ones that need to understand the rules of the game; the move-tree and minimax functions are completely reusable.

    Now let's try implementing chess instead of tic-tac-toe. In an "eager" (i.e. conventional) language this won't work, because the move tree won't fit in memory. So now the board-evaluation and move-generation functions need to be mixed in with the move-tree and minimax logic, because the minimax logic has to be used to decide which moves to generate. Our nice clean modular structure disappears.

    However in a lazy language the elements of the move tree are only generated in response to demands from the minimax function: the entire move tree does not need to be generated before we let minimax loose on the top element. So our clean modular structure still works in a real game.
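
    A minimal sketch of that structure in Haskell (the classic shape from Hughes's Why Functional Programming Matters; the moves generator and static evaluator are assumed to be supplied by the particular game):

    data Tree a = Node a [Tree a]

    instance Functor Tree where
      fmap f (Node x ts) = Node (f x) (map (fmap f) ts)

    -- Unfold the (conceptually enormous) game tree from a position.
    -- Laziness means it is only built as far as it is demanded.
    gameTree :: (pos -> [pos]) -> pos -> Tree pos
    gameTree moves p = Node p (map (gameTree moves) (moves p))

    -- Cut the tree off at depth n; the discarded part is never built.
    prune :: Int -> Tree a -> Tree a
    prune 0 (Node x _)  = Node x []
    prune n (Node x ts) = Node x (map (prune (n - 1)) ts)

    -- Minimax over a tree of static scores, with players alternating.
    maximize, minimize :: Ord a => Tree a -> a
    maximize (Node x []) = x
    maximize (Node _ ts) = maximum (map minimize ts)
    minimize (Node x []) = x
    minimize (Node _ ts) = minimum (map maximize ts)

    -- Glue: score a position by searching to depth d.
    evaluate :: Ord score => (pos -> [pos]) -> (pos -> score) -> Int -> pos -> score
    evaluate moves static d = maximize . fmap static . prune d . gameTree moves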

  • 2020-11-29 17:49

    Other people have already given all the big reasons, but I think a useful exercise to help understand why laziness matters is to try to write a fixed-point function in a strict language.

    In Haskell, a fixed-point function is super easy:

    fix f = f (fix f)
    

    this expands to

    f (f (f ....
    

    but because Haskell is lazy, that infinite chain of computation is no problem; the evaluation is done "outside-to-inside", and everything works wonderfully:

    fact = fix $ \f n -> if n == 0 then 1 else n * f (n-1)
    

    Importantly, what matters is not that fix be lazy, but that f be lazy. Once you've been given a strict f, you can either throw your hands in the air and give up, or eta-expand it and clutter things up. (This is a lot like what Noah was saying about it being the library that's strict/lazy, not the language.)

    Now imagine writing the same function in strict Scala:

    def fix[A](f: A => A): A = f(fix(f))
    
    val fact = fix[Int=>Int] { f => n =>
        if (n == 0) 1
        else n*f(n-1)
    }
    

    You of course get a stack overflow. If you want it to work, you need to make the f argument call-by-name:

    def fix[A](f: (=>A) => A): A = f(fix(f))
    
    def fact1(f: =>Int=>Int) = (n: Int) =>
        if (n == 0) 1
        else n*f(n-1)
    
    val fact = fix(fact1)
    