Question
I've been poking around continuations recently, and I got confused about the correct terminology. Here Gabriel Gonzalez says:
A Haskell continuation has the following type:
newtype Cont r a = Cont { runCont :: (a -> r) -> r }
i.e. the whole (a -> r) -> r thing is the continuation (sans the wrapping)
The Wikipedia article seems to support this idea by saying
a continuation is an abstract representation of the control state of a computer program.
However, here the authors say that
Continuations are functions that represent "the remaining computation to do."
but that would only be the (a->r) part of the Cont type. And this is in line with what Eugene Ching says here:
a computation (a function) that requires a continuation function in order to fully evaluate.
We’re going to be seeing this kind of function a lot, hence, we’ll give it a more intuitive name. Let’s call them waiting functions.
I've seen another tutorial (Brian Beckman and Erik Meijer) where they call the whole thing (the waiting function) the observable, and the function which is required for it to complete, the observer.
- What is the continuation, the (a->r)->r thingy or just the (a->r) thing (sans the wrapping)?
- Is the wording observable/observer about correct?
- Are the citations above really contradictory, or is there a common truth?
Answer 1:
Fueled by reading about continuations via Andrzej Filinski's Declarative Continuations and Categorical Duality, I adopt the following terminology and understanding.
A continuation on values of a is a "black hole which accepts values of a". You can see it as a black box with one operation: you feed it a value of type a and then the world ends. Locally, at least.
Now let's assume we're in Haskell and I demand that you construct for me a function forall r . (a -> r) -> r. Let's say, for now, that a ~ Int, and it'll look like
f :: forall r . (Int -> r) -> r
f cont = _
where the type hole has a context like
r :: Type
cont :: Int -> r
-----------------
_ :: r
Clearly, the only way we can comply with these demands is to pass an Int into the cont function and return the result, after which no further computation can happen. This models the idea of "feed an Int to the continuation and then the world ends".
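To make that concrete, here is a minimal sketch of the only kind of definition that can fill that hole (the choice of 42 is arbitrary; the point is that calling cont is all we can do):

{-# LANGUAGE RankNTypes #-}

f :: forall r . (Int -> r) -> r
f cont = cont 42  -- feed an Int to the continuation; nothing else can produce an r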
So, I would call the function (a -> r) the continuation so long as it's in a context with a fixed-but-unknown r and a demand to return that r. For instance, the following is not so much of a continuation
forall r . (a -> r) -> (r, a)
as we're clearly allowed to pass back out more information from our failing universe than the continuation alone allows.
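As a rough illustration of that difference (my own sketch, not from the original answer): with the pair-returning type, the a can escape alongside the final r, which a genuine continuation would never allow:

{-# LANGUAGE RankNTypes #-}

leak :: a -> (forall r . (a -> r) -> (r, a))
leak x = \k -> (k x, x)  -- x escapes next to the "end of the world" result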
On "Observable"
I'm personally not a fan of the "observer"/"observable" terminology. In that terminology we might write
newtype Observable a = O { observe :: forall r . (a -> r) -> r }
so that we have observe :: Observable a -> (a -> r) -> r, which ensures that exactly one a will be passed to an "observer" a -> r "observing" it. This gives a very operational view of the type above, while Cont or even the scarily named Yoneda Identity explains much more declaratively what the type actually is.
I think the point is to somehow hide the complexity of Cont behind a metaphor to make it less scary for "the average programmer", but that just adds an extra layer of metaphor for behavior to leak out of. Cont and Yoneda Identity explain exactly what the type is without dressing it up.
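For reference, a minimal sketch of the identity that name alludes to (my own illustration): a value of type forall r . (a -> r) -> r carries exactly the same information as a plain a.

{-# LANGUAGE RankNTypes #-}

toCont :: a -> (forall r . (a -> r) -> r)
toCont x = \k -> k x

fromCont :: (forall r . (a -> r) -> r) -> a
fromCont f = f id  -- instantiate r to a and hand back the identity continuation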
Answer 2:
What is the continuation, the (a->r)->r thingy or just the (a->r) thing (sans the wrapping)?
I would say that the a -> r bit is the continuation, and the (a -> r) -> r is "in continuation passing style" or "the type of the continuation monad".
I am going to go off on a long digression on the history of continuations which is not really relevant to the question...so be warned.
It is my belief that the first published paper on continuations was "Continuations: A Mathematical Semantics for Handling Full Jumps" by Strachey and Wadsworth (although the concept was already folklore). The idea of that paper is, I think, a pretty important one. Early semantics for imperative programs attempted to model commands as state transformer functions. For example, consider the simple imperative language given by the following BNF:
Command := set <Expression> to <Expression>
| skip
| <Command> ; <Command>
Expression := !<Expression>
| <Number>
| <Expression> + <Expression>
Here we use expressions as pointers. The simplest denotational semantics interprets states as functions from natural numbers to natural numbers:
S = N -> N
We can interpret expressions as functions from state to the natural numbers
E[[e : Expression]] : S -> N
and commands as state transducers.
C[[c : Command]] : S -> S
This denotational semantics can be spelled out rather simply:
E[[n : Number]](s) = n
E[[a + b]](s) = E[[a]](s) + E[[b]](s)
E[[!e]](s) = s(E[[e]](s))
C[[skip]](s) = s
C[[set a to b]](s) = \n -> if n = E[[a]](s) then E[[b]](s) else s(n)
C[[c_1;c_2]](s) = (C[[c_2]] . C[[c_1]])(s)
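Here is a minimal Haskell sketch of this direct-style semantics (the data types and names are my own, chosen to mirror the equations above):

type S = Integer -> Integer            -- a state maps "addresses" to values

data Expr = Num Integer | Add Expr Expr | Deref Expr
data Cmd  = Skip | Set Expr Expr | Seq Cmd Cmd

evalE :: Expr -> S -> Integer          -- E[[e]] : S -> N
evalE (Num n)   _ = n
evalE (Add a b) s = evalE a s + evalE b s
evalE (Deref e) s = s (evalE e s)      -- !e reads the cell named by e

evalC :: Cmd -> S -> S                 -- C[[c]] : S -> S
evalC Skip        s = s
evalC (Set a b)   s = \n -> if n == evalE a s then evalE b s else s n
evalC (Seq c1 c2) s = (evalC c2 . evalC c1) s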
A simple program in this language might look like
set 0 to 1;
set 1 to (!0) + 1
which would be interpreted as a function that turns a state function s into a new function that is just like s except that it maps 0 to 1 and 1 to 2.
This was all well and good, but how do you handle branching? Well, if you think about it a lot you can probably come up with a way to handle if and loops that run an exact number of times...but what about general while loops?
Strachey and Wadsworth showed us how to do it. First of all, they pointed out that these "state transducer functions" were pretty important, and so decided to call them "command continuations" or just "continuations."
C = S -> S
From this they defined a new semantics, which we will provisionally define this way
C'[[c : Command]] : C -> C
C'[[c]](cont) = cont . C[[c]]
What is going on here? Well, observe that
C'[[c_1]](C[[c_2]]) = C[[c_1 ; c_2]]
and further
C'[[c_1]](C'[[c_2]](cont)) = C'[[c_1 ; c_2]](cont)
Instead of doing it this way, we can inline the definition
C'[[skip]](cont) = cont
C'[[set a to b]](cont) = cont . \s -> \n -> if n = E[[a]](s) then E[[b]](s) else s(n)
C'[[c_1 ; c_2]](cont) = C'[[c_1]](C'[[c_2]](cont))
What has this bought us? Well, a way to interpret while, that's what!
Command := ... | while <Expression> do <Command> end
C'[[while e do c end]](cont) =
let loop = \s -> if E[[e]](s) = 0 then C'[[c]](loop)(s) else cont(s)
in loop
or, using a fixpoint combinator
C'[[while e do c end]](cont)
= Y (\f -> \s -> if E[[e]](s) = 0 then C'[[c]](f)(s) else cont(s))
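For completeness, here is a self-contained Haskell sketch of the continuation semantics, including while (again the names are my own, and the expression layer is repeated so the block stands alone):

type S = Integer -> Integer
type C = S -> S                        -- "command continuations"

data Expr = Num Integer | Add Expr Expr | Deref Expr
data Cmd  = Skip | Set Expr Expr | Seq Cmd Cmd | While Expr Cmd

evalE :: Expr -> S -> Integer
evalE (Num n)   _ = n
evalE (Add a b) s = evalE a s + evalE b s
evalE (Deref e) s = s (evalE e s)

evalC' :: Cmd -> C -> C                -- C'[[c]] : C -> C
evalC' Skip        cont = cont
evalC' (Set a b)   cont = cont . (\s n -> if n == evalE a s then evalE b s else s n)
evalC' (Seq c1 c2) cont = evalC' c1 (evalC' c2 cont)
evalC' (While e c) cont = loop
  where loop s = if evalE e s == 0     -- as above, keep looping while e evaluates to 0
                 then evalC' c loop s
                 else cont s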
Anyway...that is history and not particularly important...except insofar as it showed how to interpret programs mathematically, and established the language of "continuations."
Also, the approach to denotational semantics of "1. define a new semantic function in terms of the old 2. inline 3. profit" works surprisingly often. For example, it is often useful to have your semantic domain form a lattice (think, abstract interpretation). How do you get that? Well, one option is to take the powerset of the domain, and inject into this by interpreting your functions as singletons. If you inline this powerset construction you get something that can either model non-determinism or, in the case of abstract interpretation, various amounts of information about a program other than exact certainty as to what it does.
Various other work followed. Here I skip over many greats such as the lambda papers... But perhaps the most notable was Griffin's landmark paper "A Formulae-as-Types Notion of Control", which showed a connection between continuation passing style and classical logic. Here the connection between "continuation" and "evaluation context" is emphasized:
That is, E represents the rest of the computation that remains to be done after N is evaluated. The context E is called the continuation (or control context) of N at this point in the evaluation sequence. The notation of evaluation contexts allows, as we shall see below, a concise specification of the operational semantics of operators that manipulate continuations (indeed, this was its intended use [3, 2, 4, 1]).
making clear that the "continuation" is "just the a -> r bit".
This all looks at things from the point of view of semantics and sees continuations as functions. The thing is, continuations as functions give you more power than you get with something like Scheme's callCC. So, another perspective on continuations is that they are variables in the program which internalize the call stack. Parigot had the idea to make continuation variables a separate syntactic category, leading to the elegant lambda-mu calculus in "λμ-Calculus: An algorithmic interpretation of classical natural deduction."
Is the wording observable/observer about correct?
I think it is, insofar as it is what Erik Meijer uses. It is non-standard terminology in academic PL circles.
Are the citations above really contradictory, or is there a common truth?
Let us look at the citations again
a continuation is an abstract representation of the control state of a computer program.
In my interpretation (which I think is pretty standard) a continuation models what a program should do next. I think Wikipedia is consistent with this.
A Haskell continuation has the following type:
This is a bit odd. But note that later in the post Gabriel uses language which is more standard and supports my use of the term.
That means that if we have a function with two continuations:
(a1 -> r) -> ((a2 -> r) -> r)
Answer 3:
I suggest recalling the calling convention for C on x86 platforms, because of the way it uses the stack and registers to pass arguments around. This will turn out to be very useful for understanding the abstraction.
Suppose function f calls function g and passes 0 to it. It will look like this:
mov eax, 0
call g        -- now eax is the first argument,
              -- and the stack has the address of return point, f'
g:            -- here goes g that uses eax to compute the return value
mov eax, 1    -- which by calling convention is placed in eax
ret           -- get the return point, f', off the stack, and jump there
f': ...
You see, placing the return point f' on the stack is the same as passing a function pointer as one of the arguments, and then returning is the same as calling the given function and passing it a value. So from g's point of view the return point to f looks like a function of one argument, f' :: a -> r. As you understand, the state of the stack completely captures the state of the computation f was performing, which needed an a from g in order to proceed.
At the same time, at the point where g is called it looks like a function that accepts a function of one argument (we place the pointer of that function on the stack), which will eventually compute the value of type r that the code from f': onwards was meant to compute, so the type becomes g :: (a -> r) -> r.
Since f' is given a value of type a from "somewhere", f' can be seen as the observer of g - which is, conversely, the observable.
This is only intended to give a basic idea and tie it somehow to the world you probably already know. The magic of continuations permits more tricks than just converting "plain" computation into computation with continuations.
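To connect the analogy back to Haskell, here is a small sketch of my own: a direct-style g that simply returns, and a CPS version that receives its return point f' explicitly, much like the address pushed onto the stack:

-- direct style: g computes a value and returns it to its caller
g :: Int -> Int
g x = x + 1

-- CPS: g receives its return point explicitly and "jumps" to it with the result
gCPS :: Int -> (Int -> r) -> r
gCPS x k = k (x + 1)

-- the caller supplies, as a function, whatever it would have done after the call
example :: Int
example = gCPS 0 (\result -> result * 10)  -- the lambda plays the role of f'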
Answer 4:
When we refer to a continuation, we mean the part that lets us continue calculating a result.
An operation in the continuation monad is analogous to a function that is incomplete and is waiting on another function to complete it. At the same time, the continuation monad is itself a valid construct that can be used to complete another continuation monad; that is what the binding operator (>>=) for the Cont monad does.
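A minimal sketch of that binding operator, assuming the standard Cont definition quoted in the question (written as a standalone function rather than a Monad instance):

newtype Cont r a = Cont { runCont :: (a -> r) -> r }

-- run the first computation, feed its result into the next one,
-- and thread the final continuation k through
bindCont :: Cont r a -> (a -> Cont r b) -> Cont r b
bindCont m f = Cont $ \k -> runCont m (\a -> runCont (f a) k)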
When writing code that involves callCC, or Call with Current Continuation, you are passing the current Cont monad into another Cont monad so that the second one can make use of it. For example, it might prematurely end execution by calling the first Cont monad, and from there the cycle can either repeat or diverge into a different continuation monad.
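As a usage sketch (the example is my own, using callCC and Cont from Control.Monad.Cont in the mtl package), an early exit looks like this:

import Control.Monad (when)
import Control.Monad.Cont (Cont, callCC, runCont)

safeDiv :: Int -> Int -> Cont r (Maybe Int)
safeDiv x y = callCC $ \exit -> do
  when (y == 0) (exit Nothing)       -- jump straight out with Nothing
  return (Just (x `div` y))

-- runCont (safeDiv 10 0) id  ==  Nothing
-- runCont (safeDiv 10 2) id  ==  Just 5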
Which part is the continuation differs depending on which perspective you use. In my personal opinion, the best way to describe a continuation is in relation to another construct.
So if we return to our example of two Cont monads interacting: from the perspective of the first monad the continuation is the (a -> r) -> r (because that is the unwrapped type of the first monad), and from the perspective of the second monad the continuation is the (a -> r) (because that is the unwrapped type of the first monad when a is substituted for (a -> r)).
Source: https://stackoverflow.com/questions/25648332/correct-terminology-for-continuations