Full laziness has been repeatedly demonstrated to cause space leaks.
Why is full laziness enabled from -O
onwards? I find myself unconvinced by the reasoning in
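For concreteness, the kind of leak usually cited looks something like this (a minimal sketch of my own, not from any particular report):

```haskell
-- Without full laziness, each call to f builds [1 .. 1000000]
-- lazily and consumes it in constant space. Full laziness floats
-- the list (which does not depend on x) to the top level; the
-- first call then materializes it, and the top-level binding
-- retains all million cells for the rest of the run: a space leak.
f :: Int -> Int
f x = foldl (+) x [1 .. 1000000]
```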
There's at least one common case where full laziness is "safe" and a genuine optimization.
g :: Int -> Int
g z = f (z+1)
  where f 0 = 0
        f y = 1 + f (y-1)
This really means g = \z -> let { f = ... } in f (z+1)
and, compiled that way, g will allocate a closure for f
on every call before using it. Obviously that's silly; the compiler should transform the program into
g_f 0 = 0
g_f y = 1 + g_f (y-1)
g z = g_f (z+1)
where no allocation is needed to call g_f. Happily, the full laziness transformation does exactly that.
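Putting the two versions side by side as a runnable sketch (g2 is the hand-lifted variant; the names are mine):

```haskell
-- Original style: local helper that ignores the outer argument.
g :: Int -> Int
g z = f (z + 1)
  where
    f 0 = 0
    f y = 1 + f (y - 1)

-- Hand-lifted version: the helper is a top-level function, so no
-- closure need be allocated when g2 calls it. Full laziness turns
-- the definition above into essentially this.
g_f :: Int -> Int
g_f 0 = 0
g_f y = 1 + g_f (y - 1)

g2 :: Int -> Int
g2 z = g_f (z + 1)
```

Both compute the same results; the only difference is whether a closure for the helper is built on each call.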
Of course, programmers could refrain from writing local definitions that do not depend on the arguments of the enclosing function, but such definitions are generally considered good style...
Another example:
h :: [Int] -> [Int]
h xs = map (+1) xs
In this case you can simply eta-reduce, but in general you cannot. And naming the function (+1)
at the top level is quite ugly.
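To spell out the alternatives (a sketch; h2 and plusOne are hypothetical names of mine):

```haskell
-- Eta-reduced form: fine here, because xs is simply the last argument.
h1 :: [Int] -> [Int]
h1 = map (+1)

-- A case where plain eta reduction does not apply, because xs is
-- not passed directly as the last argument. Full laziness can still
-- float the constant subexpression map (+1) out, with no renaming.
h2 :: [Int] -> [Int]
h2 xs = map (+1) (filter even xs)

-- The "ugly" alternative: naming (+1) at the top level by hand.
plusOne :: Int -> Int
plusOne = (+ 1)
```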