A monad is just a monoid in the category of endofunctors, what's the problem?

Tom Crockett

That particular phrasing is by James Iry, from his highly entertaining Brief, Incomplete and Mostly Wrong History of Programming Languages, in which he fictionally attributes it to Philip Wadler.

The original quote is from Saunders Mac Lane in Categories for the Working Mathematician, one of the foundational texts of Category Theory. Here it is in context, which is probably the best place to learn exactly what it means.

But, I'll take a stab. The original sentence is this:

All told, a monad in X is just a monoid in the category of endofunctors of X, with product × replaced by composition of endofunctors and unit set by the identity endofunctor.

X here is a category. Endofunctors are functors from a category to itself (which is usually all Functors as far as functional programmers are concerned, since they're mostly dealing with just one category: the category of types - but I digress). But you could imagine another category, the category of "endofunctors on X". This is a category in which the objects are endofunctors and the morphisms are natural transformations.

And of those endofunctors, some of them might be monads. Which ones are monads? Exactly the ones which are monoidal in a particular sense. Instead of spelling out the exact mapping from monads to monoids (since Mac Lane does that far better than I could hope to), I'll just put their respective definitions side by side and let you compare:

A monoid is...

  • A set, S
  • An operation, • : S × S → S
  • An element of S, e : 1 → S

...satisfying these laws:

  • (a • b) • c = a • (b • c), for all a, b and c in S
  • e • a = a • e = a, for all a in S

A monad is...

  • An endofunctor, T : X → X (in Haskell, a type constructor of kind * -> * with a Functor instance)
  • A natural transformation, μ : T × T → T, where × means functor composition (μ is known as join in Haskell)
  • A natural transformation, η : I → T, where I is the identity endofunctor on X (η is known as return in Haskell)

...satisfying these laws:

  • μ ∘ Tμ = μ ∘ μT
  • μ ∘ Tη = μ ∘ ηT = 1 (the identity natural transformation)

With a bit of squinting you might be able to see that both of these definitions are instances of the same abstract concept.
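To make the correspondence concrete, here is a minimal Haskell sketch (Monad' and the primed names are hypothetical, for illustration only) that presents a monad by return (η) and join (μ) rather than (>>=):

class Functor t => Monad' t where
    return' :: a -> t a        -- η : I -> T
    join'   :: t (t a) -> t a  -- μ : T × T -> T

-- The two laws above, restated with join':
--   join' . fmap join'   == join' . join'          (μ ∘ Tμ = μ ∘ μT)
--   join' . fmap return' == join' . return' == id  (μ ∘ Tη = μ ∘ ηT = 1)

instance Monad' [] where
    return' x = [x]
    join'     = concat

The standard Monad class recovers (>>=) from these as m >>= f = join (fmap f m).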

misterbee

Intuitively, I think what the fancy math vocabulary is saying is this:

Monoid

A monoid is a set of objects, and a method of combining them. Well known monoids are:

  • numbers you can add
  • lists you can concatenate
  • sets you can union

There are more complex examples also.

Further, every monoid has an identity, which is that "no-op" element that has no effect when you combine it with something else:

  • 0 + 7 == 7 + 0 == 7
  • [] ++ [1,2,3] == [1,2,3] ++ [] == [1,2,3]
  • {} union {apple} == {apple} union {} == {apple}

Finally, a monoid must be associative: you can reduce a long string of combinations any way you want, as long as you don't change the left-to-right order of objects. Addition is OK ((5+3)+1 == 5+(3+1)), but subtraction isn't ((5-3)-1 != 5-(3-1)).
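As a quick Haskell rendering of those three examples (a sketch; Sum is from Data.Monoid and Set from the containers package):

import Data.Monoid (Sum(..))
import qualified Data.Set as Set

addExample :: Int
addExample = getSum (Sum 0 <> Sum 7)                       -- 7

concatExample :: [Int]
concatExample = [] ++ [1,2,3]                              -- [1,2,3]

unionExample :: Set.Set String
unionExample = Set.union Set.empty (Set.singleton "apple") -- fromList ["apple"]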

Monad

Now, let's consider a special kind of set and a special way of combining objects.

Objects

Suppose your set contains objects of a special kind: functions. And these functions have an interesting signature: They don't carry numbers to numbers or strings to strings. Instead, each function carries a number to a list of numbers in a two-step process.

  1. Compute 0 or more results
  2. Combine those results into a single answer somehow.

Examples:

  • 1 -> [1] (just wrap the input)
  • 1 -> [] (discard the input, wrap the nothingness in a list)
  • 1 -> [2] (add 1 to the input, and wrap the result)
  • 3 -> [4, 6] (add 1 to input, and multiply input by 2, and wrap the multiple results)
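In Haskell those examples might read as follows (the names are mine, for illustration):

wrap :: Int -> [Int]
wrap x = [x]                        -- 1 -> [1]

discard :: Int -> [Int]
discard _ = []                      -- 1 -> []

addOne :: Int -> [Int]
addOne x = [x + 1]                  -- 1 -> [2]

addOneAndDouble :: Int -> [Int]
addOneAndDouble x = [x + 1, x * 2]  -- 3 -> [4,6]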

Combining Objects

Also, our way of combining functions is special. A simple way to combine functions is composition: let's take our examples above, and compose each function with itself:

  • 1 -> [1] -> [[1]] (wrap the input, twice)
  • 1 -> [] -> [] (discard the input, wrap the nothingness in a list, twice)
  • 1 -> [2] -> [ UH-OH! ] (we can't "add 1" to a list!)
  • 3 -> [4, 6] -> [ UH-OH! ] (we can't add 1 to a list!)

Without getting too much into type theory, the point is that you can combine two integers to get an integer, but you can't always compose two functions and get a function of the same type. (Functions with type a -> a will compose, but a -> [a] won't.)

So, let's define a different way of combining functions. When we combine two of these functions, we don't want to "double-wrap" the results.

Here is what we do. When we want to combine two functions F and G, we follow this process (called binding):

  1. Compute the "results" from F but don't combine them.
  2. Compute the results from applying G to each of F's results separately, yielding a collection of collections of results.
  3. Flatten the 2-level collection and combine all the results.

Back to our examples, let's combine (bind) a function with itself using this new way of "binding" functions:

  • 1 -> [1] -> [1] (wrap the input, twice)
  • 1 -> [] -> [] (discard the input, wrap the nothingness in a list, twice)
  • 1 -> [2] -> [3] (add 1, then add 1 again, and wrap the result.)
  • 3 -> [4,6] -> [5,8,7,12] (add 1 to input, and also multiply input by 2, keeping both results, then do it all again to both results, and then wrap the final results in a list.)
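As a sketch, the three-step process above is Kleisli composition, which Control.Monad exports as (>=>); specialized to lists it can be written directly:

bindCompose :: (a -> [b]) -> (b -> [c]) -> a -> [c]
bindCompose f g x = concat (map g (f x))
  -- 1. f x         :: [b]    compute F's results
  -- 2. map g (f x) :: [[c]]  apply G to each result separately
  -- 3. concat ...  :: [c]    flatten the 2-level collection

-- e.g., bindCompose addOneAndDouble addOneAndDouble 3 == [5,8,7,12]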

This more sophisticated way of combining functions is associative (following from how function composition is associative when you aren't doing the fancy wrapping stuff).

Tying it all together,

  • a monad is a structure that defines a way to combine (the results of) functions,
  • analogously to how a monoid is a structure that defines a way to combine objects,
  • where the method of combination is associative,
  • and where there is a special "no-op" that can be combined with anything and leave it unchanged.

Notes

There are lots of ways to "wrap" results. You can make a list, or a set, or discard all but the first result while noting if there are no results, attach a sidecar of state, print a log message, etc, etc.

I've played a bit loose with the definitions in hopes of getting the essential idea across intuitively.

I've simplified things a bit by insisting that our monad operates on functions of type a -> [a]. In fact, monads work on functions of type a -> m b, but the generalization is kind of a technical detail that isn't the main insight.
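For the curious, the general form is a one-liner (this is (>=>) from Control.Monad; the a -> [a] story above is the special case m ~ []):

composeK :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
composeK f g = \x -> f x >>= g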

Luis Casillas

First, the extensions and libraries that we're going to use:

{-# LANGUAGE RankNTypes, TypeOperators #-}

import Control.Monad (join)

Of these, RankNTypes is the only one that's absolutely essential to the below. I once wrote an explanation of RankNTypes that some people seem to have found useful, so I'll refer to that.

Quoting Tom Crockett's excellent answer, we have:

A monad is...

  • An endofunctor, T : X -> X
  • A natural transformation, μ : T × T -> T, where × means functor composition
  • A natural transformation, η : I -> T, where I is the identity endofunctor on X

...satisfying these laws:

  • μ(μ(T × T) × T) = μ(T × μ(T × T))
  • μ(η(T)) = T = μ(T(η))

How do we translate this to Haskell code? Well, let's start with the notion of a natural transformation:

-- | A natural transformation between two 'Functor' instances.  Law:
--
-- > fmap f . eta g == eta g . fmap f
--
-- Neat fact: the type system actually guarantees this law.
--
newtype f :-> g =
    Natural { eta :: forall x. f x -> g x }

A type of the form f :-> g is analogous to a function type, but instead of thinking of it as a function between two types (of kind *), think of it as a morphism between two functors (each of kind * -> *). Examples:

listToMaybe :: [] :-> Maybe
listToMaybe = Natural go
    where go [] = Nothing
          go (x:_) = Just x

maybeToList :: Maybe :-> []
maybeToList = Natural go
    where go Nothing = []
          go (Just x) = [x]

reverse' :: [] :-> []
reverse' = Natural reverse

Basically, in Haskell, natural transformations are functions from some type f x to another type g x such that the x type variable is "inaccessible" to the caller. So for example, sort :: Ord a => [a] -> [a] cannot be made into a natural transformation, because it's "picky" about which types we may instantiate for a. One intuitive way I often use to think of this is the following:

  • A functor is a way of operating on the content of something without touching the structure.
  • A natural transformation is a way of operating on the structure of something without touching or looking at the content.

Now, with that out of the way, let's tackle the clauses of the definition.

The first clause is "an endofunctor, T : X -> X." Well, every Functor in Haskell is an endofunctor in what people call "the Hask category," whose objects are Haskell types (of kind *) and whose morphisms are Haskell functions. This sounds like a complicated statement, but it's actually a very trivial one. All it means is that a Functor f :: * -> * gives you the means of constructing a type f a :: * for any a :: * and a function fmap g :: f a -> f b out of any g :: a -> b, and that these obey the functor laws.
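For reference, those functor laws are:

-- fmap id      == id
-- fmap (g . h) == fmap g . fmap h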

Second clause: the Identity functor in Haskell (which ships with base, in Data.Functor.Identity) is defined this way:

newtype Identity a = Identity { runIdentity :: a }

instance Functor Identity where
    fmap f (Identity a) = Identity (f a)

So the natural transformation η : I -> T from Tom Crockett's definition can be written this way for any Monad instance t:

return' :: Monad t => Identity :-> t
return' = Natural (return . runIdentity)

Third clause: the composition of two functors in Haskell can be defined this way (it also ships with base, in Data.Functor.Compose):

newtype Compose f g a = Compose { getCompose :: f (g a) }

-- | The composition of two 'Functor's is also a 'Functor'.
instance (Functor f, Functor g) => Functor (Compose f g) where
    fmap f (Compose fga) = Compose (fmap (fmap f) fga)

So the natural transformation μ : T × T -> T from Tom Crockett's definition can be written like this:

join' :: Monad t => Compose t t :-> t
join' = Natural (join . getCompose)
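A couple of quick checks at t ~ [], using the types defined above:

-- eta join'   (Compose [[1,2],[3]]) == [1,2,3]
-- eta return' (Identity 'x')        == "x"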

The statement that this is a monoid in the category of endofunctors then means that Compose (partially applied to just its first two parameters) is associative, and that Identity is its identity element. I.e., that the following isomorphisms hold:

  • Compose f (Compose g h) ~= Compose (Compose f g) h
  • Compose f Identity ~= f
  • Compose Identity g ~= g

These are very easy to prove because Compose and Identity are both defined as newtype, and the Haskell Report defines the semantics of newtype as an isomorphism between the type being defined and the type of the argument to the newtype's data constructor. So for example, let's prove Compose f Identity ~= f:

Compose f Identity a
    ~= f (Identity a)                 -- newtype Compose f g a = Compose (f (g a))
    ~= f a                            -- newtype Identity a = Identity a
Q.E.D.
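That isomorphism can even be witnessed in code (toF and fromF are illustrative names, not library functions):

toF :: Functor f => Compose f Identity a -> f a
toF = fmap runIdentity . getCompose

fromF :: Functor f => f a -> Compose f Identity a
fromF = Compose . fmap Identity

-- toF . fromF == id and fromF . toF == id, by the newtype semantics above.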

Note: No, the claim below isn't true. At some point there was a comment on this answer from Dan Piponi himself saying that the cause and effect here were exactly the opposite: he wrote his article in response to James Iry's quip. But it seems to have been removed, perhaps by some compulsive tidier.

Below is my original answer.


It's quite possible that Iry had read From Monoids to Monads, a post in which Dan Piponi (sigfpe) derives monads from monoids in Haskell, with much discussion of category theory and explicit mention of "the category of endofunctors on Hask". In any case, anyone who wonders what it means for a monad to be a monoid in the category of endofunctors might benefit from reading this derivation.

I came to this post while trying to better understand the implications of the infamous quote from Mac Lane's Categories for the Working Mathematician.

In describing what something is, it's often equally useful to describe what it's not.

Because Mac Lane uses the description to describe a monad, one might infer that it describes something unique to monads. Bear with me. To develop a broader understanding of the statement, I believe it needs to be made clear that he is not describing something unique to monads; the statement equally describes Applicative and Arrow, among others. For the same reason we can have two monoids on Int (Sum and Product), we can have several monoids on X in the category of endofunctors. But there is even more to the similarities.

Both Monad and Applicative meet the criteria:

  • endo => any arrow, or morphism, that starts and ends in the same place
  • functor => any arrow, or morphism, between two categories

    (e.g., in day-to-day terms Tree a -> List b, but in Category terms Tree -> List)

  • monoid => a single object; i.e., a single type, but in this context, only with regard to the external layer; so, we can't have Tree -> List, only List -> List.

The statement uses "Category of..." This defines the scope of the statement. As an example, the Functor Category describes the scope of f * -> g *, i.e., Any functor -> Any functor, e.g., Tree * -> List * or Tree * -> Tree *.

What a categorical statement does not specify tells you where anything and everything is permitted.

In this case, inside the functors, * -> * (i.e., a -> b) is not specified, which means Anything -> Anything, including anything else. As my imagination jumps to Int -> String, it also includes Integer -> Maybe Int, or even Maybe Double -> Either String Int, where a :: Maybe Double; b :: Either String Int.

So the statement comes together as follows:

  • functor scope :: f a -> g b (i.e., any parameterized type to any parameterized type)
  • endo + functor :: f a -> f b (i.e., any one parameterized type to the same parameterized type) ... said differently,
  • a monoid in the category of endofunctors

So, where is the power of this construct? To appreciate the full dynamics, I needed to see that the typical drawings of a monoid (a single object with what looks like an identity arrow, :: single object -> single object) fail to illustrate that I'm permitted to use an arrow parameterized with any number of monoid values, from the one type object permitted in Monoid. The endo, ~ identity-arrow definition of equivalence ignores the functor's type value, and both the type and value of the innermost "payload" layer. Thus, equivalence returns true in any situation where the functorial types match (e.g., Nothing -> Just * -> Nothing is equivalent to Just * -> Just * -> Just * because they are both Maybe -> Maybe -> Maybe).

Sidebar: ~ outside is conceptual, but is the leftmost symbol in f a. It also describes what Haskell reads first (the big picture); so a Type is "outside" in relation to a Type's values. The relationship between layers (a chain of references) in programming is not easy to relate in category theory. The category Set is used to describe types (Int, String, Maybe Int, etc.), which includes the category of functors (parameterized types). The reference chain: a functor type, that functor's values (elements of that functor's set, e.g., Nothing, Just), and in turn, everything else each functor value points to. In category theory the relationship is described differently, e.g., return :: a -> m a is considered a natural transformation from one functor to another, different from anything mentioned thus far.

Back to the main thread, all in all, for any defined tensor product and a neutral value, the statement ends up describing an amazingly powerful computational construct born from its paradoxical structure:

  • on the outside it appears as a single object (e.g., :: List); static
  • but inside, permits a lot of dynamics
    • any number of values of the same type (e.g., Empty | ~NonEmpty) as fodder to functions of any arity. The tensor product will reduce any number of inputs to a single value... for the external layer (~fold that says nothing about the payload)
    • infinite range of both the type and values for the inner most layer

In Haskell, clarifying the applicability of the statement is important. The power and versatility of this construct has absolutely nothing to do with a monad per se. In other words, the construct does not rely on what makes a monad unique.

When trying to figure out whether to build code with a shared context to support computations that depend on each other, versus computations that can be run in parallel, this infamous statement, with as much as it describes, is not a contrast between the choice of Applicative, Arrows and Monads, but rather is a description of how much they are the same. For the decision at hand, the statement is moot.

This is often misunderstood. The statement goes on to describe join :: m (m a) -> m a as the tensor product for the monoidal endofunctor. However, it does not articulate how, in the context of this statement, (<*>) could also have been chosen. It truly is a case of six of one, half a dozen of the other. The logic for combining values is exactly alike; the same input generates the same output from each (unlike the Sum and Product monoids for Int, which generate different results when combining Ints).

So, to recap: A monoid in the category of endofunctors describes:

   ~t :: m * -> m * -> m *
   and a neutral value for m *

(<*>) and (>>=) both provide simultaneous access to the two m values in order to compute the single return value. The logic used to compute the return value is exactly the same. If it were not for the different shapes of the functions they parameterize (f :: a -> b versus k :: a -> m b) and the position of the parameter with the same return type of the computation (i.e., a -> b -> b versus b -> a -> b, respectively), I suspect we could have parameterized the monoidal logic, the tensor product, for reuse in both definitions. As an exercise to make the point, try to implement ~t, and you end up with (<*>) or (>>=) depending on how you decide to define it forall a b.
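As a sketch of that exercise: both combinators factor through join, the tensor product the statement singles out, and the flattening step is the shared monoidal logic (the names below are mine):

import Control.Monad (join)

apViaJoin :: Monad m => m (a -> b) -> m a -> m b
apViaJoin mf ma = join (fmap (\f -> fmap f ma) mf)  -- behaves like (<*>)

bindViaJoin :: Monad m => m a -> (a -> m b) -> m b
bindViaJoin ma k = join (fmap k ma)                 -- behaves like (>>=)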

If my last point is at minimum conceptually true, it then explains the precise, and only, computational difference between Applicative and Monad: the functions they parameterize. In other words, the difference is external to the implementation of these type classes.

In conclusion, in my own experience, Mac Lane's infamous quote provided a great "go-to" meme, a guidepost for me to reference while navigating my way through category theory to better understand the idioms used in Haskell. It succeeds at capturing the scope of a powerful computing capacity made wonderfully accessible in Haskell.

However, there is irony in how I first misunderstood the statement's applicability outside of the monad, and in what I hope I have conveyed here. Everything it describes turns out to be what is similar between Applicative and Monad (and Arrow, among others). What it doesn't say is precisely the small but useful distinction between them.

- E
