In many articles I have read that the monad `>>=` operator is a way to represent function composition. But to me it seems closer to some kind of advanced function application.
Just as an illustration, consider this:
    ($)   ::                    (a ->   b) ->   a ->   b
    let g=g in (g $)   ::   a ->   b            -- g ::   (a ->   b)

    (<$>) :: Functor f     =>   (a ->   b) -> f a -> f b
    let g=g in (g <$>) :: f a -> f b            -- g ::   (a ->   b)

    (<*>) :: Applicative f => f (a ->   b) -> f a -> f b
    let h=h in (h <*>) :: f a -> f b            -- h :: f (a ->   b)

    (=<<) :: Monad m       =>   (a -> m b) -> m a -> m b
    let k=k in (k =<<) :: m a -> m b            -- k ::   (a -> m b)
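As a quick concrete check of the above, here is how it plays out with `Maybe` standing in for the `f`/`m` (the functions and numbers are just illustrative):

    (+ 1) <$> Just 3                    -- Just 4
    Just (+ 1) <*> Just 3               -- Just 4
    (\x -> Just (x + 1)) =<< Just 3     -- Just 4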
So yes, each one of those, `(g <$>)`, `(h <*>)`, or `(k =<<)`, is some kind of a function application, promoted into either a Functor, an Applicative Functor, or a Monad "context". And `(g $)` is just a regular kind of application of a regular kind of function.
With Functors, functions have no influence on the `f` component of the overall thing. They work strictly on the inside and can't influence the "wrapping".
With Applicatives, the functions themselves come wrapped in an `f`, and that wrapping combines with the wrapping of the argument (as part of the application) to produce the wrapping of the result.
With Monads, functions themselves now produce the wrapped results, pulling their arguments somehow from the wrapped argument (as part of the application).
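A minimal sketch of that difference, again in `Maybe` (with a made-up `halve`): `<$>` can never change the wrapping, `<*>` combines the wrappings of function and argument, and with `=<<` the function itself produces the wrapping:

    halve :: Int -> Maybe Int
    halve x = if even x then Just (x `div` 2) else Nothing

    (+ 1) <$> Just 3          -- Just 4  : the wrapping is untouched
    (+ 1) <$> Nothing         -- Nothing : ...in either direction
    Just (+ 1) <*> Nothing    -- Nothing : the two wrappings combine
    halve =<< Just 4          -- Just 2  : the function built the wrapping
    halve =<< Just 3          -- Nothing : ...and here it chose differently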
We can see the three operators as some kind of marking on a function, much as mathematicians like to write, say, `f'` or `f^` or `f*` (and in the original work by Eugenio Moggi(1), `f*` is exactly what was used, denoting the promoted function `(f =<<)`).
And of course, with the promoted functions `:: f a -> f b`, we get to chain them, because now the types line up. The promotion is what allows the composition.
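For instance (a sketch with a made-up monadic function `halve`), two promoted functions chain with plain `(.)`:

    halve :: Int -> Maybe Int
    halve x = if even x then Just (x `div` 2) else Nothing

    quarter :: Maybe Int -> Maybe Int
    quarter = (halve =<<) . (halve =<<)   -- promoted, hence composable

    -- quarter (Just 12) == Just 3
    -- quarter (Just 6)  == Nothing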
(1) "Notions of computation and monads", Eugenio Moggi, July 1991.
So the functor is "magically working inside" "the pipes"; applicative is "prefabricated pipes built from components in advance"; and monads are "building pipe networks as we go".
Clearly, `>>=` is not a way to represent function composition. Function composition is simply done with `.`. However, I don't think any of the articles you've read meant this, either.
What they meant was "upgrading" function composition to work directly with "monadic functions", i.e. functions of the form `a -> m b`. The technical term for such functions is Kleisli arrows, and indeed they can be composed with `<=<` or `>=>`. (Alternatively, you can wrap them in the `Kleisli` newtype and use its `Category` instance; then you can also compose them with `.` or `>>>`.)
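For instance (a sketch with made-up Kleisli arrows; note how the type of `<=<` mirrors that of `.`):

    import Control.Monad ((<=<), (>=>))

    -- (.)   ::            (b ->   c) -> (a ->   b) -> (a ->   c)
    -- (<=<) :: Monad m => (b -> m c) -> (a -> m b) -> (a -> m c)

    halve, decr :: Int -> Maybe Int
    halve x = if even x then Just (x `div` 2) else Nothing
    decr  x = if x > 0  then Just (x - 1)     else Nothing

    both :: Int -> Maybe Int
    both = decr <=< halve     -- right-to-left, like (.)

    both' :: Int -> Maybe Int
    both' = halve >=> decr    -- the same, left-to-right

    -- both 6 == Just 2;  both 3 == Nothing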
However, talking about arrows / categories tends to be confusing, especially to beginners, just like point-free definitions of ordinary functions are often confusing. Luckily, Haskell also allows us to express functions in a more familiar style that focuses on the results of functions, rather than on the functions themselves as abstract morphisms†. It's done with lambda abstraction: instead of
q = h . g . f
you may write
q = (\x -> (\y -> (\z -> h z) (g y)) (f x))
...of course the preferred style would be (this being only syntactic sugar for lambda abstraction!)‡
    q x = let y = f x
              z = g y
          in  h z
Note how, in the lambda expression, composition was basically replaced by application:
q = \x -> (\y -> (\z -> h z) $ g y) $ f x
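With some throwaway definitions (purely illustrative), all three spellings of `q` above compute the same thing:

    f, g :: Int -> Int
    f = (+ 1)
    g = (* 2)

    h :: Int -> String
    h = show

    -- q 3 == "8" in each version: h (g (f 3)) = show (2 * (3 + 1))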
Adapted to Kleisli arrows, this means instead of
q = h <=< g <=< f
you write
q = \x -> (\y -> (\z -> h z) =<< g y) =<< f x
which, of course, again looks much nicer with flipped operators or syntactic sugar:
    q x = do y <- f x
             z <- g y
             h z
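Again purely as a sketch (with made-up `f`, `g`, `h` in the `Maybe` monad), all three spellings agree:

    import Control.Monad ((<=<))

    f, g, h :: Int -> Maybe Int
    f x = Just (x + 1)
    g x = if even x then Just (x * 2) else Nothing
    h x = Just (x - 1)

    q1, q2, q3 :: Int -> Maybe Int
    q1 = h <=< g <=< f
    q2 x = (\y -> (\z -> h z) =<< g y) =<< f x
    q3 x = do y <- f x
              z <- g y
              h z

    -- q1 3 == q2 3 == q3 3 == Just 7
    -- q1 2 == Nothing  (g fails on the odd 3)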
So, indeed, `=<<` is to `<=<` like `$` is to `.`. The reason it still makes sense to call it a composition operator is that, apart from "applying to values", the `>>=` operator also does the nontrivial bit of Kleisli arrow composition which function composition doesn't need: joining the monadic layers.
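That joining is the standard identity relating `>>=` to `fmap` and `join` (a well-known law, not specific to any one article):

    import Control.Monad (join)

    -- mapping k inside first creates a second monadic layer: m (m b);
    -- join then collapses the two layers back into one.
    bindViaJoin :: Monad m => (a -> m b) -> m a -> m b
    bindViaJoin k m = join (fmap k m)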
†The reason this works is that Hask is a cartesian closed category, in particular a well-pointed category. In such a category, arrows can, broadly speaking, be defined by the collection of all their results when applied to simple argument values.
‡ @adamse remarks that `let` is not really syntactic sugar for lambda abstraction. This is particularly relevant in the case of recursive definitions, which you can't directly write with a lambda. But in simple cases like this one, `let` does behave like syntactic sugar for lambdas, just like `do` notation is syntactic sugar for lambdas and `>>=`. (BTW, there's an extension, RecursiveDo, which allows recursion even in do notation... it circumvents the lambda restriction by using fixed-point combinators.)
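A tiny sketch of how `RecursiveDo`'s `mdo` blocks lean on `mfix`, the monadic fixed-point combinator (the example itself is made up):

    {-# LANGUAGE RecursiveDo #-}

    import Control.Monad.Fix (mfix)

    -- xs is defined in terms of itself inside the do block:
    firstThree :: Maybe [Int]
    firstThree = mdo xs <- Just (1 : xs)
                     return (take 3 xs)     -- Just [1,1,1]

    -- roughly what the mdo block desugars to:
    firstThree' :: Maybe [Int]
    firstThree' = mfix (\xs -> Just (1 : xs)) >>= \xs -> return (take 3 xs)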