Question
In a functional language, functions are first-class citizens, and thus calling them is not the only thing I can do: I can also store them away.
Now if I have a language that is strict by default, I am still not forced to evaluate a function call. I have the option to store the function and its parameters, e.g. in a tuple, for later evaluation.
So instead of
x = f a b c
I do something like
x = (f,a,b,c)
And later, I can evaluate this thing with something like
eval (f,a,b,c) = f a b c
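For concreteness, here is a minimal runnable sketch of this idea in Standard ML, the language used in the first answer below (f and eval are just illustrative names):

fun f a b c = a + b + c;

(* Store the function together with its arguments; nothing is called yet. *)
val x = (f, 1, 2, 3);

(* Apply the stored function to its stored arguments on demand. *)
fun eval (g, a, b, c) = g a b c;

val result = eval x;   (* 6 *)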
Well, there is probably more to it, because I want to evaluate each unevaluated function call only once, but it seems to me that this can also be solved with a data structure that is a bit fancier than just a tuple.
The inverse also seems to be the case: in Haskell, which is lazy by default, I can enforce evaluation with seq or BangPatterns.
So is it correct to say that every functional language has the potential to be lazy, but most of them are just not lazy by default and thus require additional programming effort to call a function lazily, whereas Haskell is lazy by default and requires additional programming effort to call a function strictly?
Should that be the case, which is more difficult for the programmer: writing lazy function calls in a strict language, or writing strict function calls in a lazy language?
As a side note: was Simon Peyton Jones serious when he said "the next version of Haskell will be strict"? I first thought that this was a joke, but now I think strict by default isn't all that bad as long as you can be lazy if required.
Answer 1:
The answer is a qualified yes. Your intuition that laziness can be implemented in a strict language where functions are first-class objects is correct. But going into the details reveals a number of subtleties.
Let's take a functional language (by which I mean a language where functions can be constructed and manipulated as first-class objects, as in the lambda calculus) where function application is strict, i.e. the function¹ and its argument(s) are fully evaluated before the function is applied. I'll use the syntax of Standard ML, since it is a popular and historically important strict functional language. A strict application F A (where F and A are two expressions) can be delayed by encoding it as
Thunk (F, A)
This object, which packages up a function together with an argument, is called a thunk. We can define a type of thunks:
datatype ('a, 'b) thunk = Thunk of ('a -> 'b) * 'a;
and a function to evaluate a thunk:
fun evaluate (Thunk (f, x)) = f x;
Nice and easy so far. But we have not, in fact, implemented lazy evaluation! What we've implemented is normal-order evaluation, also known as call-by-name. The difference is that if the value of the thunk is used more than once, it is calculated every time. Lazy evaluation (also known as call-by-need) requires evaluating the expression at most once.
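To make the difference visible, here is a small sketch using the thunk type above (the function expensive and its print are invented for illustration):

fun expensive n = (print "computing...\n"; n * n);

val t = Thunk (expensive, 7);   (* nothing is computed yet *)
val a = evaluate t;             (* prints "computing..." *)
val b = evaluate t;             (* prints "computing..." again: call-by-name, not call-by-need *)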
In a pure, strict language, lazy evaluation is in fact impossible to implement. The reason is that evaluating a thunk modifies the state of the system: it goes from unevaluated to evaluated. Implementing lazy evaluation requires a way to change the state of the system.
There's a subtlety here: if the semantics of the language is defined purely in terms of the termination status of expressions and the value of terminating expressions, then, in a pure language, call-by-need and call-by-name are indistinguishable. Call-by-value (i.e. strict evaluation) is distinguishable because fewer expressions terminate — call-by-need and call-by-name hide any non-termination that happens in a thunk that is never evaluated. The equivalence of call-by-need and call-by-name allows lazy evaluation to be considered as an optimization of normal-order evaluation (which has nice theoretical properties). But in many programs, using call-by-name instead of call-by-value would blow up the running time by computing the value of the same expressions over and over again.
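As a sketch of that last point, a thunk can hide a computation that would diverge under call-by-value, as long as the thunk is never evaluated (again using the thunk type from above; loop and first are illustrative names):

fun loop (x : unit) : int = loop x;   (* never terminates *)
fun first (x, _) = x;

(* Under call-by-value, first (42, loop ()) would diverge; *)
(* delaying the call hides the non-termination:            *)
val ok = first (42, Thunk (loop, ()));   (* yields 42 *)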
In a language with mutable state, lazy evaluation can be expressed by storing the value into the thunk when it is calculated.
(* A cell is either a delayed call or its already-computed value. *)
datatype ('a, 'b) lazy_state = Lazy of ('a -> 'b) * 'a | Value of 'b;
type ('a, 'b) lazy = ('a, 'b) lazy_state ref;

(* Build a delayed computation without running it. *)
fun delay (f, x) = ref (Lazy (f, x));

(* Run the computation at most once; the result overwrites the thunk. *)
fun force r =
    case !r of
        Value y => y
      | Lazy (f, x) => let val y = f x in r := Value y; y end;
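Reusing the illustrative expensive function from the earlier sketch, the value is now computed at most once:

val r = delay (expensive, 7);
val a = force r;   (* prints "computing..." once and stores Value 49 *)
val b = force r;   (* returns 49 straight from the cell; no recomputation *)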
This code is not very complicated, so even in ML dialects that provide lazy evaluation as a library feature (possibly with syntactic sugar), it isn't used all that often in practice: often, the point at which the value will be needed is a known location in the program, and programmers just use a function and pass it its argument at that location.
While this is getting into subjective territory, I would say that it's much easier to implement lazy evaluation like this than to implement strict evaluation in a language like Haskell. Forcing strict evaluation in Haskell is basically impossible (except by wrapping everything into a state monad and essentially writing ML code in Haskell syntax). Of course, strict evaluation doesn't change the values calculated by the program, but it can have a significant impact on performance; in particular, it is sometimes much appreciated because it makes performance a lot more predictable, and predicting the performance of a Haskell program can be very hard.
This evaluate-and-store mechanism is effectively the core of what a Haskell compiler does under the hood. Haskell is pure², so you cannot implement this in the language itself! However, it is sound for the compiler to do it behind the scenes, because this particular use of side effects does not break the purity of the language, and therefore does not invalidate any program transformation. The reason storing the value of a thunk is sound is that it turns call-by-name evaluation into call-by-need, and as we saw above, this neither changes the values of terminating expressions nor changes which expressions terminate.
This approach can be somewhat problematic in a language that combines purely functional local evaluation with a multithreaded environment and message passing between threads. (This is notably the model of Erlang.) If one thread starts evaluating a thunk, and another thread needs its value just then, what is going to happen? If no precautions are taken, then both threads will calculate the value and store it. In a pure language, this is harmless in the sense that both threads will calculate the same value anyway³. However this can hurt performance. To ensure that a thunk is evaluated only once, the calculation of the value must be wrapped in a lock; this helps with long calculations that are performed many times but hurts short calculations that are performed only once, as taking and releasing a lock takes some time.
¹ The function itself, not the function body, of course.
² Or rather, the fragment of Haskell that doesn't use a side effect monad is pure.
³ It is necessary for the transition between a delayed thunk and a computed value to be atomic — concurrent threads must be able to read a lazy value and get either a valid delayed thunk or a valid computed value, not some mixture of the two that isn't a valid object. At the processor level, the transition from delayed thunk to computed value is usually a pointer assignment, which on most architectures is atomic, fortunately.
Answer 2:
Lazy evaluation, at the low level, is implemented by a concept called a thunk, which comprises two things:
- A closure that computes the value of the deferred computation
- A set-once mutable storage cell for memoizing the result.
The first part, the closure, can be modeled even more simply than your tuple of the function and its arguments: you can use a function that accepts unit or takes no arguments (depending on how your language works) and applies the function to the arguments in its body. To compute the result, you just invoke the closure.
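For concreteness, the same idea in Standard ML (a sketch; square and its print are invented to make the evaluation visible):

fun square n = (print "computing...\n"; n * n);

(* A closure over unit delays the call; no special data type is needed. *)
val thunk = fn () => square 7;

val v = thunk ();   (* prints "computing..." and yields 49 *)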
Paul Johnson mentions Scheme, which is a perfect language to demonstrate this. As a Scheme macro (pseudocode, untested):
(define-syntax delay
  (syntax-rules ()
    ((delay expr ...)
     ;; `(delay expr)` evaluates to a lambda that closes over
     ;; two variables: one to store the result, one to record
     ;; whether the thunk has been forced.
     (let ((value #f)
           (done? #f))
       (lambda ()
         (unless done?
           (set! value (begin expr ...))
           (set! done? #t))
         value)))))

(define (force thunk)
  ;; Thunks are procedures, so to force them you just invoke them.
  (thunk))
But to get this back to the title of the question: does this mean that every functional language can be lazy? The answer is no. Eager languages can implement thunking and use it to provide opt-in delayed evaluation at user-selected spots, but this isn't the same as having pervasive lazy evaluation like Haskell implementations provide.
Answer 3:
What you propose will work. The details of doing this in Scheme can be found in SICP. One could envisage a strict version of Haskell in which there is a "lazy" function which does the opposite of what "seq" does in Haskell. However, adding this to a strict Haskell-like language would require compiler magic, because otherwise the argument gets evaluated before it is ever passed to "lazy".
However, if your language has uncontrolled effects, then this can get hairy, because an effect happens whenever its enclosing function gets evaluated, and figuring out when that is going to happen in a lazy language is difficult. That's why Haskell has the IO monad.
Source: https://stackoverflow.com/questions/33766559/can-every-functional-language-be-lazy