I have long wondered why lazy evaluation is useful. I have yet to have anyone explain it to me in a way that makes sense; mostly it ends up boiling down to "trust me".
Among other things, lazy languages allow multidimensional infinite data structures.
While Scheme, Python, etc. allow single-dimensional infinite data structures with streams, you can only traverse along one dimension.
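As a small sketch of the idea in Haskell (my own illustration, not from the original answer): an infinite two-dimensional multiplication table, where both dimensions can be traversed on demand:

import Prelude

-- An infinite two-dimensional table: row x, column y holds x * y.
-- Both dimensions are unbounded; only the cells we inspect are computed.
table :: [[Integer]]
table = [[x * y | y <- [1..]] | x <- [1..]]

main :: IO ()
main = do
    print (table !! 2 !! 4)          -- cell at row 3, column 5: 15
    print (take 5 (map head table))  -- first column: [1,2,3,4,5]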
Laziness is useful for the samefringe problem as well, but it's worth noting the coroutines connection mentioned in that link.
Without lazy evaluation you wouldn't be able to write something like this:
if( obj != null && obj.Value == correctValue )
{
    // do something
}
If obj is null, short-circuiting means obj.Value is never evaluated, so the test is safe.
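In Haskell, (&&) needs no special-case support in the compiler: it is an ordinary Prelude function, and laziness alone gives it this short-circuit behaviour. A minimal sketch:

-- The Prelude defines (&&) as a plain function:
--   True  && x = x
--   False && _ = False
-- Because the second argument is never forced when the first is False,
-- this prints False instead of crashing on undefined.
main :: IO ()
main = print (False && undefined)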
One huge benefit of laziness is the ability to write immutable data structures with reasonable amortized bounds. A simple example is an immutable stack (using F#):
type 'a stack =
    | EmptyStack
    | StackNode of 'a * 'a stack

let rec append x y =
    match x with
    | EmptyStack -> y
    | StackNode(hd, tl) -> StackNode(hd, append tl y)
The code is reasonable, but appending two stacks x and y takes O(length of x) time in the best, worst, and average cases. Appending two stacks is a monolithic operation: it touches every node in stack x.
We can rewrite the data structure as a lazy stack:
type 'a lazyStack =
    | StackNode of Lazy<'a * 'a lazyStack>
    | EmptyStack

let rec append x y =
    match x with
    | StackNode(item) -> StackNode(lazy(let hd, tl = item.Force() in hd, append tl y))
    | EmptyStack -> y
lazy works by suspending the evaluation of the code in its constructor. Once evaluated using .Force(), the return value is cached and reused on every subsequent .Force().
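Haskell's run-time gives every thunk the same once-only semantics automatically. A minimal sketch (my own illustration, using Debug.Trace to make the evaluation visible):

import Debug.Trace (trace)

-- The trace message is printed only when the thunk is actually forced.
expensive :: Int
expensive = trace "evaluating..." (sum [1 .. 1000000])

main :: IO ()
main = do
    print expensive  -- forces the thunk: "evaluating..." is printed once
    print expensive  -- the cached result is reused: no second message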
With the lazy version, append is an O(1) operation: it returns one node and suspends the actual rebuilding of the list. When you take the head of this list, it evaluates the contents of that node, forcing it to return the head and create one suspension holding the remaining elements, so taking the head of the list is also an O(1) operation.
So our lazy list is in a constant state of rebuilding: you don't pay the cost of rebuilding this list until you traverse through all of its elements. Using laziness, this list supports O(1) consing and appending. Interestingly, since we don't evaluate nodes until they're accessed, it's wholly possible to construct a list with potentially infinite elements.
The data structure above does not require nodes to be recomputed on each traversal, so it is distinctly different from a vanilla IEnumerable in .NET.
Excerpt from Higher order functions:
Let's find the largest number under 100,000 that's divisible by 3829. To do that, we'll just filter a set of possibilities in which we know the solution lies.
largestDivisible :: (Integral a) => a
largestDivisible = head (filter p [100000,99999..])
where p x = x `mod` 3829 == 0
We first make a list of all numbers lower than 100,000, descending. Then we filter it by our predicate and because the numbers are sorted in a descending manner, the largest number that satisfies our predicate is the first element of the filtered list. We didn't even need to use a finite list for our starting set. That's laziness in action again. Because we only end up using the head of the filtered list, it doesn't matter if the filtered list is finite or infinite. The evaluation stops when the first adequate solution is found.
Mostly because it can be more efficient -- values don't need to be computed if they're not going to be used. For example, I may pass three values into a function, but depending on the sequence of conditional expressions, only a subset may actually be used. In a language like C, all three values would be computed anyway; but in Haskell, only the necessary values are computed.
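A hypothetical sketch along those lines (pick and its arguments are my own invention): only the branch that is actually returned ever gets evaluated.

-- Returns one of its two branches; the other is never evaluated.
pick :: Bool -> a -> a -> a
pick True  x _ = x
pick False _ y = y

main :: IO ()
main =
    -- The third argument would loop forever if evaluated, but it never is.
    print (pick True 42 (length [1 ..]))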
It also allows for cool stuff like infinite lists. I can't have an infinite list in a language like C, but in Haskell, that's no problem. Infinite lists are used fairly often in certain areas of mathematics, so it can be useful to have the ability to manipulate them.
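For example, a standard illustration: the Fibonacci numbers as an infinite, self-referential list, of which only the demanded prefix is ever computed.

-- Each element is defined in terms of earlier elements of the same list.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)  -- [0,1,1,2,3,5,8,13,21,34]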
A useful example of lazy evaluation is the use of quickSort:
quickSort :: (Ord a) => [a] -> [a]
quickSort [] = []
quickSort (x:xs) = quickSort (filter (< x) xs) ++ [x] ++ quickSort (filter (>= x) xs)
If we now want to find the minimum of the list, we can define (using a primed name to avoid clashing with the Prelude's minimum)
minimum' ls = head (quickSort ls)
which first sorts the list and then takes its first element. However, because of lazy evaluation, only the head gets computed. For example, if we take the minimum of the list [2, 1, 3],
quickSort picks 2 as the pivot and first keeps only the elements smaller than it. Then it runs quickSort on that (returning the singleton list [1]), which is already enough. Because of lazy evaluation, the rest is never sorted, saving a lot of computation time.
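To make this visible, one can (as a hypothetical experiment, not part of the original example) tag the right-hand recursive call with Debug.Trace. Taking the head prints no trace message, confirming the larger half is never sorted:

import Debug.Trace (trace)

quickSortT :: (Ord a, Show a) => [a] -> [a]
quickSortT [] = []
quickSortT (x:xs) =
    quickSortT (filter (< x) xs)
        ++ [x]
        ++ trace ("sorting right of " ++ show x) (quickSortT (filter (>= x) xs))

main :: IO ()
main = print (head (quickSortT [2, 1, 3]))  -- prints 1, with no trace output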
This is of course a very simple example, but laziness works in the same way for programs that are very large.
There is, however, a downside to all this: it becomes harder to predict the runtime speed and memory usage of your program. This doesn't mean that lazy programs are necessarily slower or more memory-hungry, but it's worth knowing about.
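A classic illustration of that unpredictability (my own addition, not part of the answer above) is the difference between foldl and foldl' from Data.List:

import Data.List (foldl')

-- foldl delays every (+) until the very end, building a chain of ten
-- million unevaluated thunks before any addition happens; in an
-- unoptimized setting (e.g. GHCi) this can exhaust memory.
lazySum :: Integer
lazySum = foldl (+) 0 [1 .. 10000000]

-- foldl' forces the accumulator at each step and runs in constant space.
strictSum :: Integer
strictSum = foldl' (+) 0 [1 .. 10000000]

main :: IO ()
main = print strictSum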