Question
I am trying to understand functional programming from first principles, yet I am stuck on the interface between the pure functional world and the impure real world that has state and side effects. From a mathematical perspective,
- what is a function that returns a function?
- what is a function that returns an IO action (like Haskell's IO type)?
To elaborate: In my understanding, a pure function is a map from domain to co-domain. Ultimately, it is a map from some values in computer memory to some other values in memory. In a functional language, functions are defined declaratively; i.e., they describe the mapping but not the actual computation that needs to be performed on a specific input value; the latter is up to the compiler to derive.

In a simplified setting with memory to spare, there would be no computation at runtime; instead, the compiler could create a lookup table for each function already at compile time, and executing a pure program would amount to table lookup. Composing functions thus amounts to building higher-dimensional lookup tables. Of course, the entire point of having computers is to devise ways to specify functions without the need for point-wise table lookup - but I find the mental model helpful for distinguishing pure functions from effects.

However, I have difficulty adapting this mental model for higher-order functions:
- For a function that takes another function as an argument, what is the resulting first-order function that maps values to values? Is there a mathematical description for it? (I'm sure there is, but I am neither a mathematician nor a computer scientist.)
- How about a function that returns a function? How can I mentally "flatten" this construct to again get a first-order function that maps values to values?
Now to the nasty real world. Interaction with it is not pure, yet without it, there are no sensible programs. In my simplified mental model above, separating pure and impure parts of a program means that the basis of each functional program is a layer of imperative statements that get data from the real world, apply a pure function to it (do table lookup), and then write the result back to the real world (to disk, to the screen, the network, etc.).
In Haskell, this imperative interaction with the real world is abstracted as IO actions, which the compiler sequences according to their data dependency. However, we do not write a program directly as a sequence of imperative IO actions. Instead, there are functions that return IO actions (functions whose result type is `IO a`). But to my understanding, these cannot be real functions. What are they? How best to think about them in terms of the mental model outlined above?
Answer 1:
Mathematically, there's no problem at all with functions that take or return other functions. The standard set-theory definition of a function from set S to set T is just:
f ∈ S → T means that f ⊆ S × T and two conditions hold:
- If s ∈ S, then (s, t) ∈ f for some t, and
- if both (s, t) ∈ f and (s, t') ∈ f, then t = t'.
We write f(s) = t as a convenient notational shorthand for (s, t) ∈ f.
So writing S → T just denotes a specific set, and therefore (A → B) → C and A → (B → C) are again just specific sets.
Of course, for efficiency, we do not represent functions internally in memory as the set of input-output pairs like this, but this is a decent first approximation that you can use if you want a mathematical intuition. (The second approximation takes a lot more work to set up properly, because it uses structures you probably haven't already experienced very much to deal with laziness and recursion in a careful, principled way.)
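To make that first approximation concrete, here is a small sketch of my own (names hypothetical, not from the answer) that represents a function literally by its graph - a list of input-output pairs - so that application is table lookup, and a higher-order "table" simply has graphs sitting in its input column:

```haskell
import Data.Maybe (fromJust)

-- a function represented by its graph: the set of (input, output) pairs
type Graph a b = [(a, b)]

-- "applying" such a function is just table lookup
apply :: Eq a => Graph a b -> a -> b
apply g x = fromJust (lookup x g)

-- the graph of Boolean negation
notG :: Graph Bool Bool
notG = [(True, False), (False, True)]

-- a higher-order example: since (A -> B) -> C is again just a set,
-- a graph may have whole graphs as its keys
selfInverseG :: Graph (Graph Bool Bool) Bool
selfInverseG = [(notG, True)]
```

Here `apply selfInverseG notG` looks up the entire graph of `notG` as a key - the "higher-dimensional lookup table" from the question, made literal.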
IO actions are a bit trickier. How you want to think of them may depend a bit on your particular mathematical bent.
One persuasion of mathematician might like to define IO actions as an inductive set, something like:
- If `x :: a`, then `pure x :: IO a`.
- If `f :: a -> b`, then `fmap f :: IO a -> IO b`.
- If `x :: IO a` and `f :: a -> IO b`, then `x >>= f :: IO b`.
- `putStrLn :: String -> IO ()`
- `forkIO :: IO a -> IO ThreadId`
- ...and a thousand other base cases.
- We quotient over a few equalities:
  - `fmap id = id`
  - `fmap f . fmap g = fmap (f . g)`
  - `pure x >>= f` = `f x`
  - `x >>= pure . f` = `fmap f x`
  - (and a slightly complicated-to-read one that just says that `>>=` is associative)
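One way to transcribe that inductive presentation into Haskell is as a syntax tree with one constructor per rule. This is a sketch of my own - it is not how GHC defines `IO`, and the quotienting is left implicit - but it shows that the effect-free fragment already has a meaning on its own:

```haskell
{-# LANGUAGE GADTs #-}
import Control.Concurrent (ThreadId)

-- one constructor per clause of the inductive definition
data IOSyn a where
  Pure     :: a -> IOSyn a
  Fmap     :: (a -> b) -> IOSyn a -> IOSyn b
  Bind     :: IOSyn a -> (a -> IOSyn b) -> IOSyn b
  PutStrLn :: String -> IOSyn ()
  ForkIO   :: IOSyn a -> IOSyn ThreadId
  -- ...and a thousand other base cases

-- the effect-free fragment evaluates directly; the base cases have
-- no pure meaning, so they map to Nothing here
runPure :: IOSyn a -> Maybe a
runPure (Pure x)   = Just x
runPure (Fmap f m) = f <$> runPure m
runPure (Bind m k) = runPure m >>= runPure . k
runPure _          = Nothing
```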
In terms of defining the meaning of a program, that's enough to specify what "values" the IO family of types can hold. You might recognize this style of definition from the standard way of defining natural numbers:
- Zero is a natural number.
- If n is a natural number, then Succ(n) is a natural number.
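In Haskell, this definition of the naturals is literally a two-constructor data type, which makes the analogy with the inductive `IO` definition direct:

```haskell
-- Zero is a natural number; if n is one, so is Succ n
data Nat = Zero | Succ Nat

-- convert to Int for inspection
toInt :: Nat -> Int
toInt Zero     = 0
toInt (Succ n) = 1 + toInt n
```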
Of course, there are some things about this way of defining things that aren't super satisfying. Like: what does any particular IO action mean? Nothing in this definition says anything about that. (Though see "Tackling the Awkward Squad" for an elucidation of how you could say what an IO action means even if you take this kind of inductive definition of the type.)
Another kind of mathematician might like this kind of definition better:
An IO action is isomorphic to a stateful function on a phantom token representing the current state of the universe:
IO a ~= RealWorld -> (RealWorld, a)
There are attractions to this kind of definition, too; though, notably, it gets a lot harder to say what the heck `forkIO` does with that kind of definition.
...or you could take the GHC definition, in which case an `IO a` is secretly an `a` if you dig under the covers enough. But, shhh!!, don't tell the inexperienced programmers who just want to escape `IO` and write an `IO a -> a` function because they don't understand how to program using the `IO` interface yet!
Answer 2:
`IO` is a data structure. E.g. here's a very simple model of `IO`:
data IO a = Return a | GetLine (String -> IO a) | PutStr String (IO a)
Real `IO` can be seen as being this but with more constructors (I prefer to think of all the `IO` "primitives" in `base` as such constructors). The `main` value of a Haskell program is just a value of this data structure. The runtime (which is "external" to Haskell) evaluates `main` to the first `IO` constructor, then "executes" it somehow, passes any values returned back as arguments to the contained function, and then executes the resulting `IO` action recursively, stopping at the `Return ()`. That's it. `IO` doesn't have any strange interactions with functions, and it's not actually "impure", because nothing in Haskell is impure (unless it's unsafe). There is just an entity outside of your program that interprets it as something effectful.
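That runtime loop can be sketched directly against the model above (my own code; the type is renamed `Prog` here so it doesn't clash with the real `IO` needed to execute it):

```haskell
-- the toy model from this answer, renamed to avoid clashing with Prelude's IO
data Prog a = Return a
            | GetLine (String -> Prog a)
            | PutStr String (Prog a)

-- the "runtime": peel off one constructor, perform the real effect,
-- feed any result back in, and recurse until Return
run :: Prog a -> IO a
run (Return x)   = return x
run (GetLine k)  = getLine >>= run . k
run (PutStr s p) = putStr s >> run p

-- a pure variant for testing: canned input lines in, output text out
simulate :: [String] -> Prog a -> (String, a)
simulate _      (Return x)   = ("", x)
simulate (l:ls) (GetLine k)  = simulate ls (k l)
simulate []     (GetLine k)  = simulate [] (k "")
simulate ls     (PutStr s p) = let (out, x) = simulate ls p
                               in (s ++ out, x)
```

`simulate` shows the other payoff of the data-structure view: because a `Prog` is just a value, you can interpret it without touching the real world at all.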
Thinking of functions as tables of inputs and outputs is perfectly fine. In mathematics, this is called the graph of the function, and e.g. in set theory it's often taken as the definition of a function in the first place. Functions that return `IO` actions fit just fine into this model. They just return values of the data structure `IO`; nothing strange about it. E.g. `putStrLn` might be defined as so (I don't think it actually is, but...):
putStrLn s = PutStr (s ++ "\n") (Return ())
and `readLn` could be
-- this is actually read <$> getLine; real readLn throws exceptions instead of returning bottoms
readLn = GetLine (\s -> Return (read s))
both of which have perfectly sensible interpretations when thinking of functions as graphs.
Your other question, about how to interpret higher-order functions, isn't going to get you very far. Functions are values, period. Modeling them as graphs is a good way to think about them, and in that case higher order functions look like graphs which contain graphs in their input or output columns. There's no "simplifying view" that turns a function taking a function or returning a function into a function that takes just values and returns just values. Such a process is not well-defined and is unnecessary.
(Note: some people might try to tell you that `IO` can be seen as a function taking the "real world" as input and outputting a new version of the world. That's really not a good way to think about it, in part because it conflates evaluation and execution. It's a hack that makes implementing Haskell simpler, but it makes using and thinking about the language a bit of a mess. This data structure model is IMO easier to deal with.)
Answer 3:
What is a function that returns a function?
You were almost there:
Composing functions thus amounts to building higher-dimensional lookup tables.
Here's a small example, in Haskell:
infixr 2 ||
(||) :: Bool -> (Bool -> Bool)
True || True = True
True || False = True
False || True = True
False || False = False
Your lookup table would then take the form of a case-expression:
x || y = case (x, y) of (True, True) -> True
(True, False) -> True
(False, True) -> True
(False, False) -> False
Instead of using tuples:
x || y = case x of True -> (case y of True -> True
False -> True)
False -> (case y of True -> True
False -> False)
If we now move the parameter `y` into new local functions:
(||) x = case x of True -> let f y = case y of True -> True
False -> True
in f
False -> let g y = case y of True -> True
False -> False
in g
then the corresponding map-of-maps would be:
+-------+-----------------------+
| x | (||) x |
+-------+-----------------------+
| True | |
| | +-------+-------+ |
| | | y | f y | |
| | +-------+-------+ |
| | | True | True | |
| | +-------+-------+ |
| | | False | True | |
| | +-------+-------+ |
| | |
+-------+-----------------------+
| False | |
| | +-------+-------+ |
| | | y | g y | |
| | +-------+-------+ |
| | | True | True | |
| | +-------+-------+ |
| | | False | False | |
| | +-------+-------+ |
| | |
+-------+-----------------------+
So your abstract model can be extended to higher-order functions - they're just maps from some domain to a co-domain consisting of other maps.
What is a function that returns an I/O action (like Haskell's `IO` type)?
Here's an interesting fact: the partially-applied function type:
forall a . (->) a
is monadic:
unit :: a -> (d -> a)
unit x = \ u -> x
bind :: (d -> a) -> (a -> (d -> b)) -> (d -> b)
bind m k = \ u -> let x = m u in k x u
instance Monad ((->) a) where
return = unit
(>>=) = bind
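As a quick sanity check of this function monad (my own example, with the definitions repeated so it stands alone): under `bind`, every step reads the same shared environment `u`:

```haskell
-- the function monad's operations, repeated here so the example is standalone
unit :: a -> (d -> a)
unit x = \ _ -> x

bind :: (d -> a) -> (a -> (d -> b)) -> (d -> b)
bind m k = \ u -> let x = m u in k x u

-- both steps receive the same u: first add 1 to it,
-- then multiply the intermediate result by it
example :: Int -> Int
example = bind (+ 1) (\ x -> \ u -> x * u)
```

So `example 5` computes `(5 + 1) * 5 = 30` - the environment `u = 5` is silently passed to both stages, which is exactly the "reader" behaviour of `(->) a`.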
How simple is that! If only the `IO` type could be defined so easily...

Of course it can't be exactly the same - outside interactions are involved - but how close can we get?

Well, I/O usually needs to happen in some predefined order for it to be useful (e.g. grab house keys, then leave locked house), so a mechanism is needed to sequentially order the evaluation of `IO` expressions - how about bang patterns?
unit :: a -> (d -> a)
unit x = \ u -> x
bind :: (d -> a) -> (a -> (d -> b)) -> (d -> b)
bind m k = \ u -> let !x = m u in k x u
It's barely noticeable - nice! As a bonus, we can now also provide a useful definition for `(>>)`:
next :: (d -> a) -> (d -> b) -> (d -> b)
next m w = \ u -> let !_ = m u in w u
instance Monad ((->) a) where
    -- return and (>>=) as before
(>>) = next
Let's consider the following small Haskell 2010 program:
main :: IO ()
main = putStr "ha" >> putStr "ha" >> putStr "!\n"
This can be rewritten as:
main = let x = putStr "ha" in x >> x >> putStr "!\n"
Assuming the appropriate definitions for:
puts :: String -> (d -> ())
putc :: Char -> (d -> ())
can we also rewrite:
main' :: d -> ()
main' = puts "ha" >> puts "ha" >> puts "!\n"
as:
main' = let x = puts "ha" in x >> x >> puts "!\n"
No - quoting from Philip Wadler's How to Declare an Imperative:
[...] the laugh is on us: the program prints only a single "ha", at the time variable `x` is bound. In the presence of side effects, equational reasoning in its simplest form becomes invalid.
(section 2.2 on page 5.)
Why? Let's look at what changed:
let x = puts "ha" in x >> x
If `(>>)` is replaced with its definition:
let x = puts "ha" in \ u -> let !_ = x u in x u
the cause is revealed - while `x u` is used twice, it is only evaluated once, because Haskell is nonstrict - the second use of `x u` merely retrieves the result of the first.
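This sharing can be observed directly. The following sketch (mine, not from the answer; it abuses `unsafePerformIO` purely as instrumentation and assumes an unoptimised build) counts how many times a twice-used thunk is actually forced:

```haskell
import Data.IORef (IORef, newIORef, modifyIORef, readIORef)
import System.IO.Unsafe (unsafePerformIO)

-- instrumentation only: a counter bumped whenever `noisy` is forced
counter :: IORef Int
counter = unsafePerformIO (newIORef 0)
{-# NOINLINE counter #-}

noisy :: Int
noisy = unsafePerformIO (modifyIORef counter (+ 1) >> return 7)
{-# NOINLINE noisy #-}

-- uses the thunk twice, but call-by-need evaluates it only once
shared :: Int
shared = let x = noisy in x + x
{-# NOINLINE shared #-}
```

Forcing `shared` yields `14`, yet the counter only ever reaches `1`: the second use of `x` reuses the first result - exactly the behaviour that collapses `x >> x` above into a single "ha".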
This is a legitimate transformation e.g:
testme n = n^2 + n^2 + n
and:
testme n = let x = n^2 in x + x + n
and optimising Haskell implementations like GHC rely on that and many other transformations to complete their objective - treating I/O as some special case is most likely to be an exercise in utter futility...let's just modify the code so it won't end up being rewritten.
One simple way to do that would be to make all calls to `puts` or `putc` unique:
let x = puts "ha" in \ u -> let !u1:u2:_ = ... in
let !_ = x u1 in x u2
Therefore:
bind :: (d -> a) -> (a -> (d -> b)) -> (d -> b)
bind m k = \ u -> let !u1:u2:_ = ... in
let !x = m u1 in
k x u2
next :: (d -> a) -> (d -> b) -> (d -> b)
next m w = \ u -> let !u1:u2:_ = ... in
let !_ = m u1 in
w u2
However, that isn't enough:
let x = puts "ha" in \ u -> let !u1:u2:_ = ... in
let !_ = x u1 in x u
We could take a hint from Clean and add on uniqueness types, but one substantial change has already been made (the bang-pattern extension) - are we really going to add another extension each time we encounter a new problem?
We might as well make a completely-new programming language...
Moving on, let's rename all those annoying `d` type variables, along with `puts` and `putc`:
data OI
putstr :: String -> OI -> ()
putchar :: Char -> OI -> ()
Hmm...all output, no input:
getchar :: OI -> Char
What about the other definitions? Let's try:
next :: (OI -> a) -> (OI -> b) -> OI -> b
next m w = \ u -> let !u1:u2:_ = ... in
let !_ = m u1 in
w u2
So `u`, `u1` and `u2` have the same type; they're related:
next :: (OI -> a) -> (OI -> b) -> OI -> b
next m w = \ u -> let !u1:u2:_ = parts u in
let !_ = m u1 in
w u2
A name like `parts` is rather generic:
class Partible a where
parts :: a -> [a]
partsOI :: OI -> [OI]
instance Partible OI where
parts = partsOI
We can now provide a definition for `putstr`:
putstr s = \ u -> foldr (\ !_ -> id) () $ zipWith putchar s $ parts u
and complete `bind`'s definition:
bind :: (OI -> a) -> (a -> OI -> b) -> OI -> b
bind m k = \ u -> let !u1:u2:_ = parts u in
let !x = m u1 in
k x u2
That definition of `unit`:
unit :: a -> OI -> a
unit x = \ u -> x
doesn't use its parameter `u`, so:
let x = puts "ha" in \ u -> let !u1:u2:_ = ... in
let !_ = x u1 in unit () u
is possible - how is that more acceptable than:
let x = puts "ha" in \ u -> let !u1:u2:_ = ... in
let !_ = x u1 in x u
Should `unit` also call `parts`?
unit x = \ u -> let !_:_ = parts u in x
Now the first task carried out by `unit`, `bind` and `next` involves the (indirect) application of `partsOI`... what if an `OI` value was spoiled upon its first use by `partsOI`, so it couldn't be reused?
No: not just `partsOI`, but `putchar` and `getchar` too - then all three could make use of a common check-and-spoil mechanism; the reuse of an `OI` argument could then be treated as being invalid, e.g. by throwing an exception or raising an error (just as division-by-zero is treated now in Haskell).
Right now, it's either that or uniqueness types...
Spoiling `OI` values during evaluation rules out an idiomatic Haskell type declaration. Just like `Int` or `Char`, `OI` will need to be predefined; together with `partsOI`, `putchar` and `getchar`, it forms an abstract data type.
Some observations:
- `partsOI` returns a list of indefinite length; an easier option would be to define such a list in Haskell (the syntax is much better :-)
- In `bind` and `next`, only the first two members of the list returned by `parts` are used - a pair of `OI` values would be sufficient.
Returning pairs of `OI` values is simple enough:
part :: Partible a => a -> (a, a)
part u = let !u1:u2:_ = parts u in (u1, u2)
This is interesting:
parts u = let !(u1, u2) = part u in u1 : parts u2
which suggests:
class Partible a where
part :: a -> (a, a)
parts :: a -> [a]
-- Minimal complete definition: part or parts
part u = let !u1:u2:_ = parts u in (u1, u2)
parts u = let !(u1, u2) = part u in u1 : parts u2
partOI :: OI -> (OI, OI)
instance Partible OI where
part = partOI
along with:
unit :: a -> OI -> a
unit x = \ u -> let !(_, _) = part u in x
bind :: (OI -> a) -> (a -> OI -> b) -> OI -> b
bind m k = \ u -> let !(u1, u2) = part u in
let !x = m u1 in
k x u2
next :: (OI -> a) -> (OI -> b) -> OI -> b
next m w = \ u -> let !(u1, u2) = part u in
let !_ = m u1 in
w u2
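As a sanity check of the `Partible` machinery, here is a toy instance of my own (the real `OI` is abstract and spoils its values, so it can't be run directly): an infinite supply of distinct integer labels, split by interleaving:

```haskell
{-# LANGUAGE BangPatterns #-}

-- the class from this answer, repeated so the sketch is self-contained
class Partible a where
    part  :: a -> (a, a)
    parts :: a -> [a]
    -- Minimal complete definition: part or parts
    part u  = let !(u1:u2:_) = parts u in (u1, u2)
    parts u = let !(u1, u2) = part u in u1 : parts u2

-- a toy partible type: an infinite supply of distinct labels
newtype Supply = Supply [Integer]

instance Partible Supply where
    part (Supply xs) = (Supply (evens xs), Supply (odds xs))
      where evens (y:ys) = y : odds ys
            evens []     = []
            odds  (_:ys) = evens ys
            odds  []     = []

label :: Supply -> Integer
label (Supply (x:_)) = x
```

`map label (take 4 (parts (Supply [0..])))` yields four distinct labels, each sub-supply disjoint from its siblings - the property the `OI` splitting relies on.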
That worked well! Just one other detail: `main'` - what happens when it's called?
It's all there in the type signature:
main' :: OI -> ()
An implementation would evaluate the application of `main'` to a new `OI` value, then discard the result; the `OI` value being obtained via a mechanism similar to that used by `partOI` to generate the `OI` values it returns.
Time to bring everything together:
-- the OI ADT:
data OI
putchar :: Char -> OI -> ()
getchar :: OI -> Char
partOI :: OI -> (OI, OI)
class Partible a where
part :: a -> (a, a)
parts :: a -> [a]
-- Minimal complete definition: part or parts
part u = let !u1:u2:_ = parts u in (u1, u2)
parts u = let !(u1, u2) = part u in u1 : parts u2
instance Partible OI where
part = partOI
putstr :: String -> OI -> ()
putstr s = \ u -> foldr (\ !_ -> id) () $ zipWith putchar s $ parts u
unit :: a -> OI -> a
unit x = \ u -> let !(_, _) = part u in x
bind :: (OI -> a) -> (a -> OI -> b) -> OI -> b
bind m k = \ u -> let !(u1, u2) = part u in
let !x = m u1 in
k x u2
next :: (OI -> a) -> (OI -> b) -> OI -> b
next m w = \ u -> let !(u1, u2) = part u in
let !_ = m u1 in
w u2
instance Monad ((->) OI) where
return = unit
(>>=) = bind
(>>) = next
{- main' :: OI -> () -}
So...what was the question?
What is a function that returns an I/O action (like Haskell's `IO` type)?
I'll just answer the easier question:
What is an I/O action (like Haskell's `IO` type)?
As I see it, an I/O action (an `IO` value in Haskell) is an abstract entity bearing the type of a function whose domain is a partible type specific to the purpose of outside interactions.
P.S: if you're wondering why I didn't use the pass-the-planet model of I/O:
newtype IO' a = IO' (FauxWorld -> (FauxWorld, a))
data FauxWorld = FW OI
instance Monad IO' where
return x = IO' $ \ s@(FW _) -> (s, x)
IO' m >>= k = IO' $ \ s@(FW _) -> let !(s', x) = m s in
let !(IO' w) = k x in
w s'
putChar' :: Char -> IO' ()
putChar' c = IO' $ \ (FW u) -> let !(u1, u2) = part u in
let !_ = putchar c u1 in
(FW u2, ())
putStr' :: String -> IO' ()
putStr' s = IO' $ \ (FW u) -> let !(u1, u2) = part u in
let !_ = putstr s u1 in
(FW u2, ())
getChar' :: IO' Char
getChar' = IO' $ \ (FW u) -> let !(u1, u2) = part u in
let !c = getchar u1 in
(FW u2, c)
Source: https://stackoverflow.com/questions/61798648/how-to-view-higher-order-functions-and-io-actions-from-a-mathematical-perspectiv