> We all know (or should know) that Haskell is lazy by default. Nothing is evaluated until it must be evaluated. So when must something be evaluated? [...]
No.
Haskell is a language in which evaluation order doesn't matter because there are no side effects.
It's not quite true that evaluation order doesn't matter, because the language allows for infinite loops. If you aren't careful, it's possible to get stuck in a cul-de-sac where you evaluate a subexpression forever when a different evaluation order would have led to termination in finite time. So it's more accurate to say:
This still leaves implementations with a huge freedom in how they evaluate the program.
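Here's a tiny sketch of that cul-de-sac risk (the names `loop` and `untouched` are mine, purely for illustration): lazy evaluation returns the first pair component without ever touching the divergent one, while a strategy that insisted on fully evaluating both components of the pair first would spin forever.

```haskell
loop :: Int
loop = loop  -- diverges if ever forced

untouched :: Int
untouched = fst (42, loop)  -- lazy evaluation never demands the second component

main :: IO ()
main = print untouched  -- prints 42; loop is never evaluated
```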
A Haskell program is a single expression, namely `let {all top-level bindings} in Main.main`. Evaluation can be understood as a sequence of small-step reductions, each of which changes the expression (which represents the current state of the executing program).
You can divide reduction steps into two categories: those that are provably necessary (provably will be part of any terminating sequence of reductions), and those that aren't. You can divide the provably necessary reductions somewhat vaguely into two subcategories: those that are "obviously" necessary, and those that require some nontrivial analysis to prove them necessary.
Performing only obviously necessary reductions is what's called "lazy evaluation". I don't know whether a purely lazy evaluating implementation of Haskell has ever existed. Hugs may have been one. GHC definitely isn't.
GHC performs reduction steps at compile time that aren't provably necessary; for example, it will replace `1+2 :: Int` with `3 :: Int` even if it can't prove that the result will be used.
GHC may also perform not-provably-necessary reductions at run time in some circumstances. For example, when generating code to evaluate `f (x+y)`, if `x` and `y` are of type `Int` and their values will be known at run time, but `f` can't be proven to use its argument, there is no reason not to compute `x+y` before calling `f`. It uses less heap space and less code space, and is probably faster even if the argument isn't used. However, I don't know whether GHC actually takes these sorts of optimization opportunities.
GHC definitely performs evaluation steps at run time that are proven necessary only by fairly complex cross-module analyses. This is extremely common and may represent the bulk of the evaluation of realistic programs. Lazy evaluation is a last-resort fallback evaluation strategy; it isn't what happens as a rule.
There was an "optimistic evaluation" branch of GHC that did much more speculative evaluation at run time. It was abandoned because of its complexity and the ongoing maintenance burden, not because it didn't perform well. If Haskell were as popular as Python or C++, I'm sure there would be implementations with much more sophisticated runtime evaluation strategies, maintained by deep-pocketed corporations. Non-lazy evaluation isn't a change to the language; it's just an engineering challenge.
You can model interaction with the outside world by means of special side-effectful reduction rules like: "If the current program is of the form `getChar >>= f`, then get a character from standard input and reduce the program to `f` applied to the character you got." The entire goal of the run time system is to evaluate the program until it has one of these side-effecting forms, then do the side effect, then repeat until the program has some form that implies termination, such as `return ()`.
There are no other rules about what is reduced when. There are only rules about what can reduce to what.
For example, the only rules for `if` expressions are that `if True then a else b` can be reduced to `a`, `if False then a else b` can be reduced to `b`, and `if c then a else b`, where `c` is an exceptional value, can be reduced to an exceptional value.
If the expression representing the current state of your program is an `if` expression, you have no choice but to perform reductions on the condition until it's `True` or `False` or an exceptional value, because that's the only way you'll ever get rid of the `if` expression and have any hope of reaching a state that matches one of the I/O rules. But the language specification doesn't tell you to do that in so many words.
These sorts of implicit ordering constraints are the only way that evaluation can be "forced" to happen. This is a frequent source of confusion for beginners. For example, people sometimes try to make `foldl` more strict by writing ``foldl (\x y -> x `seq` x+y)`` instead of `foldl (+)`. This doesn't work, and nothing like it can ever work, because no expression can make itself evaluate. The evaluation can only "come from above". `seq` is not special in any way in this regard.
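You can observe "evaluation comes from above" directly (the function name here is mine, for illustration): building the fold's result does not run the embedded `seq`; it fires only when something outside finally demands the result.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- The lambda contains a seq, but the fold still just builds a thunk;
-- the seq runs only when the result is demanded from outside.
pseudoStrictSum :: Int -> [Int] -> Int
pseudoStrictSum = foldl (\x y -> x `seq` x + y)

main :: IO ()
main = do
  let t = pseudoStrictSum undefined [1, 2, 3]
  putStrLn "thunk built; the seq inside has not fired yet"
  r <- try (evaluate t) :: IO (Either SomeException Int)
  putStrLn (either (const "demanding the result finally ran the seq") show r)
```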
> Reduction (or evaluation) in Haskell only occurs at strictness points. [...] My intuition says that `main`, `seq` / bang patterns, pattern matching, and any IO action performed via `main` are the primary strictness points [...].
I don't see how to make sense of that statement. Every part of the program has some meaning, and that meaning is defined by reduction rules, so reduction happens everywhere.
To reduce a function application `f x`, you have to evaluate `f` until it has a form like `(\x -> ...)` or `(getChar >>=)` or something else that matches a rule. But for some reason function application doesn't tend to show up on lists of expressions that allegedly "force evaluation", while `case` always does.
You can see this misconception in a quote from the Haskell wiki, found in another answer:
> In practice Haskell is not a purely lazy language: for instance pattern matching is usually strict
I don't understand what could qualify as a "purely lazy language" to whoever wrote that, except, perhaps, a language in which every program hangs because the runtime never does anything. If pattern matching is a feature of your language then you've got to actually do it at some point. To do it, you have to evaluate the scrutinee enough to determine whether it matches the pattern. That's the laziest way to match a pattern that is possible in principle.
`~`-prefixed patterns are often called "lazy" by programmers, but the language spec calls them "irrefutable". Their defining property is that they always match. Because they always match, you don't have to evaluate the scrutinee to determine whether they match, so a lazy implementation won't. The difference between regular and irrefutable patterns is in what expressions they match, not in what evaluation strategy you're supposed to use. The spec says nothing about evaluation strategies.
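The two kinds of pattern behave observably differently on a divergent scrutinee (function names are mine, for illustration):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Regular pattern: the scrutinee must be evaluated far enough
-- to expose the pair constructor before the match succeeds.
strictMatch :: (Int, Int) -> Int
strictMatch (_, _) = 1

-- Irrefutable pattern: always matches, so the scrutinee is never touched.
lazyMatch :: (Int, Int) -> Int
lazyMatch ~(_, _) = 1

main :: IO ()
main = do
  print (lazyMatch undefined)  -- fine: prints 1
  r <- try (evaluate (strictMatch undefined)) :: IO (Either SomeException Int)
  putStrLn (either (const "regular match forced the undefined scrutinee") show r)
```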
> `main` is a strictness point. It is specially designated as the primary strictness point of its context: the program. When the program (`main`'s context) is evaluated, the strictness point of main is activated. [...] Main is usually composed of IO actions, which are also strictness points, whose context is `main`.
I'm not convinced that any of that has any meaning.
> Main's depth is maximal: it must be fully evaluated.
No, `main` only has to be evaluated "shallowly", to make I/O actions appear at the top level. `main` is the entire program, and the program isn't completely evaluated on every run because not all code is relevant to every run (in general).
> [...] discuss `seq` and pattern matching in these terms.
I already talked about pattern matching. `seq` can be defined by rules that are similar to `case` and application: for example, ``(\x -> e) `seq` b`` reduces to `b`. This "forces evaluation" in the same way that `case` and application do. WHNF is just a name for what these expressions "force evaluation" to.
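A quick demonstration of forcing only to WHNF: `seq` stops as soon as it sees an outermost constructor, so an `undefined` buried inside one is never touched.

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  -- seq forces only to WHNF: the Just constructor is enough,
  -- so the undefined inside is never evaluated.
  putStrLn (Just (undefined :: Int) `seq` "WHNF reached at the constructor")
  -- With no constructor to stop at, forcing to WHNF hits the undefined.
  r <- try (evaluate ((undefined :: Int) `seq` "unreachable"))
         :: IO (Either SomeException String)
  putStrLn (either (const "seq diverged on undefined itself") id r)
```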
> Explain the nuances of function application: how is it strict? How is it not?
It's strict in its left expression, just like `case` is strict in its scrutinee. It's also strict in the function body after substitution, just like `case` is strict in the RHS of the selected alternative after substitution.
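Both halves of that claim can be seen concretely: an undefined function position diverges even with a perfectly fine argument, while an unused argument is never forced.

```haskell
import Control.Exception (SomeException, evaluate, try)

main :: IO ()
main = do
  -- Strict in the function position: it must reduce to a lambda
  -- (or similar) before anything else can happen.
  r <- try (evaluate ((undefined :: Int -> Int) 5))
         :: IO (Either SomeException Int)
  putStrLn (either (const "undefined in the function position diverged") show r)
  -- Not strict in the argument: const never looks at it.
  print (const (1 :: Int) (undefined :: Int))  -- prints 1
```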
> What about `deepseq`?
It's just a library function, not a builtin.
Incidentally, `deepseq` is semantically weird. It should take only one argument. I think that whoever invented it just blindly copied `seq` with no understanding of why `seq` needs two arguments. I count `deepseq`'s name and specification as evidence that a poor understanding of Haskell evaluation is common even among experienced Haskell programmers.
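To make the WHNF-versus-deep distinction concrete without the library, here is a hand-rolled, monomorphic analogue (`deepSeqIntList` is my own hypothetical helper, not the real `deepseq`): it forces the whole spine and every element before returning its second argument.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- Hypothetical, monomorphic stand-in for deepseq: force the whole
-- list spine and every element, then return the second argument.
deepSeqIntList :: [Int] -> b -> b
deepSeqIntList []       b = b
deepSeqIntList (x : xs) b = x `seq` deepSeqIntList xs b

main :: IO ()
main = do
  let xs = [1, undefined, 3] :: [Int]
  -- seq stops at the outermost (:) constructor, so this succeeds:
  putStrLn (xs `seq` "seq: WHNF is just the head constructor")
  -- Forcing the full structure finds the hole:
  r <- try (evaluate (xs `deepSeqIntList` "unreachable"))
         :: IO (Either SomeException String)
  putStrLn (either (const "deep forcing found the undefined element") id r)
```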
> `let` and `case` statements?
I talked about `case`. `let`, after desugaring and type checking, is just a way of writing an arbitrary expression graph in tree form. Here's a paper about it.
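The "graph" reading has a practical face (the binding name is mine): a `let`-bound variable names one node, so multiple uses share one thunk and the work happens at most once.

```haskell
-- let names a node in the expression graph; both uses of x below
-- point at the same shared thunk, so the sum is computed only once.
shared :: Int
shared = let x = sum [1 .. 1000000 :: Int] in x + x

main :: IO ()
main = print shared  -- 2 * 500000500000 = 1000001000000
```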
> `unsafePerformIO`?
To an extent it can be defined by reduction rules. For example, `case unsafePerformIO m of alts` reduces to `unsafePerformIO (fmap (\x -> case x of alts) m)`, and at the top level only, `unsafePerformIO m` reduces to `m`.
This doesn't do any memoization. You could try to simulate memoization by rewriting every `unsafePerformIO` expression to explicitly memoize itself, and creating the associated `IORef`s... somewhere. But you could never reproduce GHC's memoization behavior, because it depends on unpredictable details of the optimization process, and because it isn't even type safe (as shown by the infamous polymorphic `IORef` example in the GHC documentation).
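A sketch of why the behavior is at the compiler's mercy (all names here are mine, and how often the effect runs is a sharing decision GHC makes, not something the language guarantees):

```haskell
import Data.IORef (IORef, newIORef, modifyIORef', readIORef)
import System.IO.Unsafe (unsafePerformIO)

-- A counter bumped as a hidden side effect; NOINLINE discourages GHC
-- from duplicating the thunks, but nothing here is guaranteed.
hits :: IORef Int
hits = unsafePerformIO (newIORef 0)
{-# NOINLINE hits #-}

tick :: Int -> Int
tick n = unsafePerformIO (modifyIORef' hits (+ 1) >> return n)
{-# NOINLINE tick #-}

main :: IO ()
main = do
  let x = tick 7   -- one let-bound thunk
  print (x + x)    -- with sharing, the effect runs once, not twice
  readIORef hits >>= print  -- how many times did it actually run?
```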
> `Debug.Trace`?
`Debug.Trace.trace` is just a simple wrapper around `unsafePerformIO`.
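Because of that, `trace` messages (printed to stderr) appear only when the surrounding value is forced, which makes it a handy probe for watching lazy evaluation happen (the helper name is mine):

```haskell
import Debug.Trace (trace)

-- trace smuggles a stderr print into pure code via unsafePerformIO;
-- the message appears only when the value is actually forced.
labelled :: Int -> Int
labelled n = trace ("forcing " ++ show n) n

main :: IO ()
main = do
  let xs = map labelled [1, 2, 3]  -- nothing traced yet: just thunks
  print (head xs)                  -- forces exactly one element
```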
> Top-level definitions?
Top-level variable bindings are the same as nested `let` bindings. `data`, `class`, `import`, and such are a whole different ball game.
> Strict data types? Bang patterns?
Just sugar for `seq`.
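For instance, a bang pattern on an argument behaves exactly like wrapping the body in a `seq` on that argument (function names are mine):

```haskell
{-# LANGUAGE BangPatterns #-}
import Control.Exception (SomeException, evaluate, try)

-- f !x = e desugars to (roughly) f x = seq x e.
withBang :: Int -> String
withBang !_ = "matched"

withSeq :: Int -> String
withSeq x = x `seq` "matched"

main :: IO ()
main = do
  a <- try (evaluate (withBang undefined)) :: IO (Either SomeException String)
  b <- try (evaluate (withSeq undefined)) :: IO (Either SomeException String)
  putStrLn (either (const "bang forced the argument") id a)
  putStrLn (either (const "seq forced the argument") id b)
```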