join is defined along with bind to flatten a nested data structure into a single structure. From the type system's view, (+) 7 :: Num a => a -> a.
An intuition about join is that it squashes two containers into one, e.g.:
join [[1]] => [1]
join (Just (Just 1)) => Just 1
join (a Christmas tree decorated with small Christmas trees) => a Christmas tree
etc ...
Now, how can you join functions? In fact, functions can be seen as containers. Look at a hash table, for example: you give it a key and you get a value (or not). It's a function key -> value (or, if you prefer, key -> Maybe value). So how would you join a HashMap of HashMaps?
Let's say I have (in Python style) h = {"a": {"a": 1, "b": 2}, "b": {"a": 10, "b": 20}}. How can I join it, or if you prefer, flatten it? Given "a", which value should I get? h["a"] gives me {"a": 1, "b": 2}. The only thing I can do with it is to look up "a" again in this new value, which gives me 1. Therefore join h equals {"a": 1, "b": 20}.
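The same intuition can be sketched in Haskell with Data.Map; joinMap is a name made up here, and the "look up the same key twice, drop keys that are missing from the inner map" behaviour is one reasonable reading of the example above:

```haskell
import qualified Data.Map as Map
import Data.Map (Map)

-- Flatten a nested map by looking up each outer key in its own inner map,
-- mirroring h["a"]["a"] above. Keys absent from the inner map are dropped.
joinMap :: Ord k => Map k (Map k v) -> Map k v
joinMap = Map.mapMaybeWithKey Map.lookup

h :: Map String (Map String Int)
h = Map.fromList
      [ ("a", Map.fromList [("a", 1),  ("b", 2)])
      , ("b", Map.fromList [("a", 10), ("b", 20)])
      ]

-- joinMap h == Map.fromList [("a", 1), ("b", 20)]
```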
It's the same for a function.
how to get some intuition about it instead of just relying on the type system?
I'd rather say that relying on the type system is a great way to build a specific sort of intuition. The type of join is:
join :: Monad m => m (m a) -> m a
Specialised to (->) r, it becomes:
(r -> (r -> a)) -> (r -> a)
Now let's try to define join for functions:
-- join :: (r -> (r -> a)) -> (r -> a)
join f = -- etc.
We know the result must be an r -> a function:
join f = \x -> -- etc.
However, we do not know anything at all about what the r and a types are, and therefore we know nothing in particular about f :: r -> (r -> a) and x :: r. Our ignorance means there is literally just one thing we can do with them: passing x as an argument, both to f and to f x:
join f = \x -> f x x
Therefore, join for functions passes the same argument twice because that is the only possible implementation. Of course, that implementation is only a proper monadic join because it follows the monad laws:
join . fmap join = join . join
join . fmap return = id
join . return = id
Verifying that might be another nice exercise.
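A proper verification is equational reasoning, but as a quick spot check the first law can be evaluated at a concrete function; joinF, g, and the sample argument below are all made up for illustration:

```haskell
-- join for functions, as derived above.
joinF :: (r -> r -> a) -> r -> a
joinF f = \x -> f x x

-- An arbitrary doubly-nested "reader": Int -> (Int -> (Int -> Int)).
g :: Int -> Int -> Int -> Int
g a b c = a * 100 + b * 10 + c

-- For functions, fmap is (.), so fmap joinF g = joinF . g.
-- Both sides of join . fmap join = join . join reduce to \x -> g x x x:
lhs, rhs :: Int -> Int
lhs = (joinF . fmap joinF) g
rhs = (joinF . joinF) g

-- lhs 3 == 333 and rhs 3 == 333
```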
Going along with the traditional analogy of a monad as a context for computation, join is a method of combining contexts. Let's start with your example, join (+) 7. Using a function as a monad implies the reader monad. (+ 1) is a reader monad which takes the environment and adds one to it. Thus, (+) would be a reader monad within a reader monad. The outer reader monad takes the environment n and returns a reader of the form (n +), which will take a new environment. join simply combines the two environments so that you provide it once and the given parameter is applied twice: join (+) === \x -> (+) x x.
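This can be checked directly with the join from Control.Monad (doubled is just an illustrative name):

```haskell
import Control.Monad (join)

-- For the function monad, join duplicates the argument:
-- join (+) 7 == (+) 7 7 == 14
doubled :: Int -> Int
doubled = join (+)

-- doubled 7 == 14
```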
Now, more in general, let's look at some other examples. The Maybe monad represents potential failure. A value of Nothing is a failed computation, whereas a Just x is a success. A Maybe within a Maybe is a computation that could fail twice. A value of Just (Just x) is obviously a success, so joining that produces Just x. A Nothing or a Just Nothing indicates failure at some point, so joining the possible failure should indicate that the computation failed, i.e. Nothing.
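A quick sketch of those three cases, again using join from Control.Monad (the binding names are mine):

```haskell
import Control.Monad (join)

successTwice, innerFailure, outerFailure :: Maybe Int
successTwice = join (Just (Just 1))  -- both layers succeed: Just 1
innerFailure = join (Just Nothing)   -- inner layer failed: Nothing
outerFailure = join Nothing          -- outer layer failed: Nothing
```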
A similar analogy can be made for the list monad, for which join is merely concat; for the writer monad, which uses the monoidal operator <> to combine the output values in question; or for any other monad.
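Both cases in miniature, taking the Monoid w => Monad ((,) w) instance as the writer (the names and sample values are made up for illustration):

```haskell
import Control.Monad (join)

-- List: join is concat.
flatList :: [Int]
flatList = join [[1, 2], [3]]             -- [1,2,3]

-- Writer-style (,) String: join combines the two logs with (<>).
flatLog :: (String, Int)
flatLog = join ("outer ", ("inner", 42))  -- ("outer inner", 42)
```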
join is fundamental to monads and is the operation that makes them significantly stronger than functors or applicative functors. Functors can be mapped over, applicatives can be sequenced, monads can be combined. Categorically, a monad is often defined in terms of join and return. It just so happens that in Haskell we find it more convenient to define it in terms of return, (>>=), and fmap, but the two definitions have been proven equivalent.