In my area of business - back office IT for a financial institution - it is very common for a software component to carry a global configuration around, to log its progress, and to maintain some state along the way.
The Haskell community is split on this issue.
John Hughes reports that he finds it easier to teach monad transformers than to teach monads, and that his students do better with a "transformers first" approach.
The GHC developers generally avoid monad transformers, preferring to roll their own monads which integrate all the features they need. (I was told just today in no uncertain terms that GHC will not use a monad transformer I defined three days ago.)
To me, monad transformers are a lot like point-free programming (i.e., programming without named variables), which makes sense; after all, they are exactly point-free programming at the type level. I've never liked point-free programming because it's useful to be able to introduce the occasional name.
What I observe in practice is this:
There is a great number of monad transformers available on Hackage, and most of them are pretty simple. This is a classic instance of the problem where learning a large library can be harder than rolling your own instances.
Monads like Writer, State, and Environment (i.e., Reader) are so simple that I don't see much benefit to monad transformers; a hand-rolled State monad is sketched below to illustrate.
Where monad transformers shine is in modularity and reuse. This property is beautifully demonstrated by Liang, Hudak, and Jones in their landmark paper "Monad Transformers and Modular Interpreters".
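To back up the claim about rolling your own: a complete hand-rolled State monad fits in a couple of dozen lines. This is only a sketch; the names mirror Control.Monad.State, but nothing here comes from a library.

newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in  (h a, s'')

instance Monad (State s) where
  State g >>= k = State $ \s -> let (a, s') = g s in runState (k a) s'

-- The two primitive operations: read the state and replace it.
get :: State s s
get = State $ \s -> (s, s)

put :: s -> State s ()
put s = State $ \_ -> ((), s)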
Are monad transformers best practice when dealing with those common tasks mentioned above?
I would say not. Where monad transformers are best practice is where you have a product line of related abstractions which you can create by composing and reusing monad transformers in different ways. In a case like this you probably develop a number of monad transformers that are important for your problem domain (like the one that was rejected for GHC), and you (a) compose them in multiple ways; (b) achieve a significant amount of reuse for most transformers; and (c) encapsulate something nontrivial in each monad transformer.
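As a sketch of what such a product line can look like, here are the same standard transformers reused in two different stacks for two related tools; the types and stack layouts are invented for illustration, not taken from any real codebase.

import Control.Monad.Reader (ReaderT)
import Control.Monad.State  (StateT)
import Control.Monad.Writer (WriterT)

data Config = Config   -- hypothetical read-only configuration
data SymTab = SymTab   -- hypothetical symbol table
type Log    = [String]

-- A checker that only reads configuration and logs diagnostics ...
type Checker a = ReaderT Config (WriterT Log IO) a

-- ... and a pass that additionally threads a symbol table: the same
-- transformers, composed in a different way.
type Pass a = ReaderT Config (StateT SymTab (WriterT Log IO)) a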
My monad transformer which was rejected for GHC did not meet any of the criteria (a)/(b)/(c) above.
"The concept behind monad transformers is quite tricky and hard to understand, and monad transformers lead to very complex type signatures."
I think this is a bit of an exaggeration:
Monad transformers are not your only option: you could write a custom monad, or use a continuation monad. You also have mutable references/arrays in IO (global), ST (local and controlled, no IO actions), MVar (synchronizing), and TVar (transactional).
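To make that list concrete, here is a hedged sketch of each option using only standard functions from base and the stm package; the example function names are mine.

import Data.IORef              (newIORef, modifyIORef', readIORef)
import Control.Monad.ST        (runST)
import Data.STRef              (newSTRef, modifySTRef', readSTRef)
import Control.Concurrent.MVar (newMVar, modifyMVar_)
import Control.Concurrent.STM  (atomically, newTVarIO, modifyTVar')

-- IORef: a plain mutable cell, usable anywhere in IO.
ioCounter :: IO Int
ioCounter = do
  r <- newIORef (0 :: Int)
  modifyIORef' r (+ 1)
  readIORef r

-- STRef: mutation confined to a local scope, pure from the outside.
stCounter :: Int
stCounter = runST $ do
  r <- newSTRef 0
  modifySTRef' r (+ 1)
  readSTRef r

-- MVar: a mutable cell that also acts as a lock for synchronization.
mvarCounter :: IO ()
mvarCounter = do
  v <- newMVar (0 :: Int)
  modifyMVar_ v (pure . (+ 1))

-- TVar: a transactional variable, composable via STM.
tvarCounter :: IO ()
tvarCounter = do
  t <- newTVarIO (0 :: Int)
  atomically (modifyTVar' t (+ 1))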
I've heard that the potential efficiency issues with monad transformers can be mitigated simply by adding INLINE pragmas to bind/return in the source of the mtl/transformers libraries.
I recently "fell" on monad composition in the context of F#. I wrote a DSL with a strong reliance on the state monad: All components rely on the state monad: the parser (parser monad based on state monad), variable matching tables (more than one for internal types), identifier look up tables. And as these components all work together, they rely on the same state monad. Therefore there is a notion of state composition that brings together the different local states, and the notion of state accessors that give each algo their own state visibility.
Initially, the design was really "just one big state monad". But then I started needing states with only local lifetimes, yet still in the context of the "persistent" state (and again, all these states are managed by state monads). For that I did need to introduce state monad transformers that augment the state and adapt the state monads to each other. I also added a transformer to move freely between a state monad and a continuation state monad, but I have not bothered to use it.
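In Haskell terms (my original code is F#), the pattern of a short-lived local state layered over the persistent one looks roughly like this; the type names are placeholders.

import Control.Monad.State       (State, StateT, evalStateT, modify)
import Control.Monad.Trans.Class (lift)

data GlobalState = GlobalState   -- the persistent state (placeholder)
data ScratchPad  = ScratchPad    -- a state with only a local lifetime (placeholder)

-- StateT layers the local state over the persistent State monad and
-- throws it away when the local computation is done.
withScratch :: StateT ScratchPad (State GlobalState) a -> State GlobalState a
withScratch local = evalStateT local ScratchPad

example :: State GlobalState ()
example = withScratch $ do
  modify (\scratch -> scratch)        -- operate on the local state
  lift (modify (\global -> global))   -- reach down to the persistent state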
Therefore, to answer the question: yes, monad transformers exist in the "wild". Yet I would argue strongly against using them "out of the box". Write your application with simple building blocks, using small hand-crafted bridges between your modules; if you do end up using something like a monad transformer, that's great, but do not start from there.
And about the type signatures: I have come to think of this type of programming as something very similar to playing blindfold chess (and I am not a chess player): your skill level needs to be at the point where you "see" your functions and types fitting together. The type signatures mostly end up being a distraction, unless you explicitly want to add type constraints for safety reasons (or because the compiler forces you to provide them, as with F# records).
So you would suggest putting something that tends to be rather global, like a log or a configuration, in the IO monad? From looking at (an admittedly very limited set of) examples, I have come to think that Haskell code tends to be either pure (i.e., not monadic at all) or in the IO monad. Or is this a misconception?
I think this is a misconception: only the IO monad is not pure. Monads like Writer/WriterT, Reader/ReaderT, State/StateT, and ST are still purely functional. You can write a pure function which uses any of these monads internally, like this completely useless example:
import Control.Monad.State (execState, modify)

-- A pure function that uses the State monad internally.
foo :: Int -> Int
foo seed = flip execState seed $ do
  modify (+ 3)          -- add 3 to the state
  modify (+ 4)          -- add 4 to the state
  modify (subtract 2)   -- subtract 2 from the state
All this is doing is threading/plumbing the state implicitly, which is what you would otherwise do by hand explicitly; the do-notation here just gives you some nice syntactic sugar to make it look imperative. You can't do any IO actions here, and you can't call any foreign functions. The ST monad lets you have real mutable references in a local scope while presenting a pure function interface, and you can't do any IO actions in there either; it's still purely functional.
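To make the threading point concrete, here is roughly what foo looks like with the state passed by hand; this is a sketch of the idea rather than the exact desugaring, which passes the state through each bind.

-- The same computation as foo, with each intermediate state named explicitly.
fooByHand :: Int -> Int
fooByHand seed =
  let s1 = seed + 3
      s2 = s1 + 4
      s3 = s2 - 2
  in  s3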
You can't avoid some IO actions, but you don't want to fall back to IO for everything, because that is where anything can happen: missiles can be launched, and you've got no control. Haskell has abstractions to control effectful computations at varying degrees of safety/purity; the IO monad should be the last resort (though you can't avoid it completely).
In your example I think you should stick to using monad transformers, or a custom-made monad that does the same as composing them with transformers. I've never written a custom monad (yet), but I've used monad transformers quite a bit (in my own code, not at work). Don't worry about them so much; use them, and it's not as bad as you think.
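For the configuration/logging/state combination from your question, a minimal sketch of such a stack could look like this; all the names are invented for illustration.

import Control.Monad.Reader (ReaderT, runReaderT, asks)
import Control.Monad.State  (StateT, evalStateT, modify)
import Control.Monad.Writer (WriterT, runWriterT, tell)

data Config   = Config   { batchSize :: Int }   -- hypothetical global configuration
data JobState = JobState { processed :: Int }   -- hypothetical mutable job state

-- One stack covering the three concerns: read-only configuration,
-- a progress log, and job state, on top of IO.
type App a = ReaderT Config (WriterT [String] (StateT JobState IO)) a

step :: App ()
step = do
  n <- asks batchSize
  tell ["processing a batch of " ++ show n]
  modify (\s -> s { processed = processed s + n })

runApp :: Config -> App a -> IO (a, [String])
runApp cfg app = evalStateT (runWriterT (runReaderT app cfg)) (JobState 0)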
Have you seen the chapter from Real World Haskell that uses monad transformers?
Back when I was learning monads I built an application using a stack of StateT over ContT over IO to create a discrete event simulation library; the continuations were used to store monadic threads, with the StateT holding the runnable thread queue and other queues for suspended threads waiting on various events. It worked quite well. I couldn't figure out how to write the Monad instance for a newtype wrapper, so I just made it a type synonym, and that worked well enough.
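For what it's worth, one way to get the Monad instance for such a newtype wrapper is GeneralizedNewtypeDeriving; here is a sketch assuming a StateT-over-ContT-over-IO stack like the one described, with a placeholder state type.

{-# LANGUAGE GeneralizedNewtypeDeriving #-}
import Control.Monad.Cont     (ContT)
import Control.Monad.IO.Class (MonadIO)
import Control.Monad.State    (StateT)

data SimState = SimState   -- placeholder for the thread queues

-- The deriving clause reuses the instances of the underlying stack,
-- so no Monad instance has to be written by hand.
newtype Sim r a = Sim (StateT SimState (ContT r IO) a)
  deriving (Functor, Applicative, Monad, MonadIO)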
These days I would probably have rolled my own monad from scratch. However, whenever I do this I find myself looking at "All About Monads" and the source of the MTL to remind me what the bind operations look like, so in a sense I'm still thinking in terms of an MTL stack even though the result is a custom monad.
"I think this is a misconception: only the IO monad is not pure. Monads like Writer/WriterT, Reader/ReaderT, State/StateT, and ST are still purely functional."
There seems to me to be more than one notion attached to the term pure/non-pure. Your definition "IO = impure, everything else = pure" sounds similar to what Peyton Jones talks about in "Taming effects" (http://ulf.wiger.net/weblog/2008/02/29/peyton-jones-taming-effects-the-next-big-challenge/). On the other hand, Real World Haskell (in the final pages of the monad transformer chapter) contrasts pure functions with monadic functions in general, arguing that you need different libraries for both worlds. BTW, one could argue that IO is pure as well, its side effects being encapsulated in a state-passing function of type RealWorld -> (a, RealWorld). After all, Haskell calls itself a purely functional language (IO included, I presume :-).)
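As a sketch of that conceptual model (not GHC's actual definition, which uses unboxed state tokens; the names here are made up):

data RealWorld' = RealWorld'   -- stand-in token for "the world"

-- IO viewed as a pure state-passing function over the world token.
newtype PureIO a = PureIO (RealWorld' -> (a, RealWorld'))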
My question is not so much about what can be done theoretically, but more about what has proven useful from a software engineering point of view. Monad transformers allow for modularity of effects (and of abstractions in general), but is that the direction programming should be heading in?