The type blows my mind:
class Contravariant (f :: * -> *) where
contramap :: (a -> b) -> f b -> f a
Then I read this, but contrary to the title, I wasn't any more enlightened.
I know this answer won't be as deeply academic as the other ones, but it's simply based on the common implementations of Contravariant you'll come across.
First, a tip: don't read the contramap function type using the same mental metaphor for f as you do when reading good ol' Functor's fmap.
You know how you think "a thing that contains (or produces) a t" when you read a type like f t? Well, you need to stop doing that, in this case.
The Contravariant functor is "the dual" of the classic functor, so when you see f a in contramap, you should think the "dual" metaphor: f t is a thing that CONSUMES a t.
Now contramap's type should start to make sense:

contramap :: (a -> b) -> f b ...

...pause right there, and the type is perfectly sensible:
- The first argument produces a b.
- The second argument consumes a b.

First argument cooks the b. Second argument eats the b.

Makes sense, right?
Now finish writing the type:

contramap :: (a -> b) -> f b -> f a

So in the end this thing must yield a "consumer of a". Well, surely we can build that, given that our first argument is a function that takes an a as input. A function (a -> b) should be a good building block for building a "consumer of a". So contramap basically lets you create a new "consumer", like this (warning: made-up symbols incoming):

(takes a as input / produces b as output) ~~> (consumer of b)
Read the above like this: the left-hand side is the first argument of contramap (i.e. the function (a -> b)); the right-hand side is its second argument (the consumer of b, i.e. f b); and plugging them together is what contramap does, yielding the result (a thing that knows how to consume an a, i.e. f a).
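To make this concrete, here is a minimal sketch of a hand-rolled "consumer" (the Printer type and its names are invented for illustration; the Contravariant class itself lives in Data.Functor.Contravariant):

import Data.Functor.Contravariant

-- A "consumer of a": something that knows how to render an a.
newtype Printer a = Printer { runPrinter :: a -> String }

instance Contravariant Printer where
  -- Cook a b out of the incoming a, then feed it to the existing consumer of b.
  contramap f (Printer p) = Printer (p . f)

intPrinter :: Printer Int
intPrinter = Printer show

-- length :: String -> Int adapts a consumer of Int into a consumer of String.
stringPrinter :: Printer String
stringPrinter = contramap length intPrinter
-- runPrinter stringPrinter "hello" == "5"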
From a programmer's point of view the essence of functor-ness is being able to easily adapt things. What I mean by "adapt" here is that if I have an f a and I need an f b, I'd like an adaptor that will fit my f a into my f b-shaped hole.
It seems intuitive that if I can turn an a into a b, then I might be able to turn an f a into an f b. And indeed that's the pattern Haskell's Functor class embodies: if I supply an a -> b function then fmap lets me adapt f a things into f b things, without worrying about whatever f involves.1
Of course we're talking about parameterised types like list-of-x [x], Maybe y, or IO z here, and the thing we get to change with our adaptors is the x, y, or z in those. If we want the flexibility to get an adaptor from any possible function a -> b, then of course the thing we're adapting has to be equally applicable to any possible type.
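For instance, using the standard Functor instances for Maybe and lists (the adaptMaybe and adaptList names are just for illustration):

adaptMaybe :: Maybe Int -> Maybe String
adaptMaybe = fmap show   -- adaptMaybe (Just 3) == Just "3"

adaptList :: [Int] -> [String]
adaptList = fmap show    -- adaptList [1,2,3] == ["1","2","3"]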
What is less intuitive (at first) is that there are some types which can be adapted almost exactly the same way as functory ones, only they're "backwards"; for these, if we want to adapt an f a to fill a need for an f b, we actually need to supply a b -> a function, not an a -> b one!
My favourite concrete example is actually the function type a -> r (a for argument, r for result); all of this abstract nonsense makes perfect sense when applied to functions (and if you've done any substantial programming you've almost certainly used these concepts without knowing the terminology or how widely applicable they are), and the two notions are so obviously dual to each other in this context.
It's fairly well known that a -> r is a functor in r. This makes sense; if I've got an a -> r and I need an a -> s, then I could use an r -> s function to adapt my original function simply by post-processing the result.2
If, on the other hand, I have an a -> r function and what I need is a b -> r, then again it's clear that I can address my need by pre-processing arguments before passing them to the original function. But what do I pre-process them with? The original function is a black box; no matter what I do, it's always expecting inputs of type a. So I need to turn my b values into the a values it expects: my pre-processing adaptor needs to be a b -> a function.
What we've just seen is that the function type a -> r is a covariant functor in r, and a contravariant functor in a. I think of this as saying we can adapt a function's result, and the result type "changes with" the adaptor r -> s, while when we adapt a function's argument, the argument type changes "in the opposite direction" to the adaptor.
Interestingly, the implementations of the function-result fmap and the function-argument contramap are almost exactly the same thing: just function composition (the . operator)! The only difference is which side you compose the adaptor function on:3
-- Three equivalent ways to write each definition:
fmap :: (r -> s) -> (a -> r) -> (a -> s)
fmap adaptor f = adaptor . f   -- post-compose: adapt the result
fmap adaptor = (adaptor .)     -- the same thing, as a section
fmap = (.)                     -- fully point-free

contramap' :: (b -> a) -> (a -> r) -> (b -> r)
contramap' adaptor f = f . adaptor   -- pre-compose: adapt the argument
contramap' adaptor = (. adaptor)     -- the same thing, as a section
contramap' = flip (.)                -- fully point-free
I consider the second definition from each block the most insightful; (covariantly) mapping over a function's result is composition on the left (post-composition if we want to take a "this-happens-after-that" view), while contravariantly mapping over a function's argument is composition on the right (pre-composition).
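As a quick usage sketch (the names exclaim and showBool are made up; fmap here is the Prelude's, at the function Functor instance, and contramap' is as defined above):

-- Post-process the result with an r -> s:
exclaim :: Int -> String
exclaim = fmap (++ "!") show   -- exclaim 3 == "3!"

-- Pre-process the argument with a b -> a:
showBool :: Bool -> String
showBool = contramap' fromEnum (show :: Int -> String)   -- showBool True == "1"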
This intuition generalises pretty well; if an f x structure can give us values of type x (just like an a -> r function gives us r values, at least potentially), it might be a covariant Functor in x, and we could use an x -> y function to adapt it into being an f y. But if an f x structure receives values of type x from us (again, like an a -> r function's argument of type a), then it might be a Contravariant functor and we'd need to use a y -> x function to adapt it to being an f y.
I find it interesting to reflect that this "sources are covariant, destinations are contravariant" intuition reverses when you're thinking from the perspective of an implementer of the source/destination rather than a caller. If I'm trying to implement an f x that receives x values, I can "adapt my own interface" so I get to work with y values instead (while still presenting the "receives x values" interface to my callers) by using an x -> y function. Usually we don't think this way around; even as the implementer of the f x, I think about adapting the things I'm calling rather than "adapting my caller's interface to me". But it's another perspective you can take.
The only semi-real-world use I've made of Contravariant (as opposed to implicitly using the contravariance of functions in their arguments by using composition-on-the-right, which is very common) was for a type Serialiser a that could serialise a values. Serialiser had to be a Contravariant rather than a Functor; given I can serialise Foos, I can also serialise Bars if I have a Bar -> Foo.4 But when you realise that Serialiser a is basically a -> ByteString, it becomes obvious; I'm just repeating a special case of the a -> r example.
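A minimal sketch of that idea (the types and names here are invented, and String stands in for ByteString to keep it self-contained):

import Data.Functor.Contravariant

newtype Serialiser a = Serialiser { serialise :: a -> String }

instance Contravariant Serialiser where
  contramap f (Serialiser s) = Serialiser (s . f)

newtype Foo = Foo Int
data Bar = Bar Foo Bool   -- every Bar contains a Foo

fooSerialiser :: Serialiser Foo
fooSerialiser = Serialiser (\(Foo n) -> show n)

-- Given a Bar -> Foo, adapt the Foo serialiser into a Bar serialiser
-- (the Bool is simply dropped, as footnote 4 warns).
barSerialiser :: Serialiser Bar
barSerialiser = contramap (\(Bar foo _) -> foo) fooSerialiser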
In pure functional programming, there's not very much use in having something that "receives values" without it also giving something back, so all the contravariant functors tend to look like functions, while nearly any straightforward data structure that can contain values of an arbitrary type will be a covariant functor in that type parameter. This is why Functor stole the good name early and is used all over the place (well, that and the fact that Functor was recognised as a fundamental part of Monad, which was already in wide use before Functor was defined as a class in Haskell).
In imperative OO I believe contravariant functors may be significantly more common (but not abstracted over with a unified framework like Contravariant), although it's also very easy for mutability and side effects to mean that a parameterised type just couldn't be a functor at all. Commonly: your standard container of a that is both readable and writable is both an emitter and a sink of a, and rather than meaning it's both covariant and contravariant, it turns out that means it's neither.
1 The Functor instance of each individual f says how to apply arbitrary functions to the particular form of that f, without worrying about the particular types f is being applied to; a nice separation of concerns.
2 This functor is also a monad, equivalent to the Reader monad. I'm not going to go beyond functors in detail here, but given the rest of my post an obvious question would be "is the a -> r type also some sort of contravariant monad in a, then?". Contravariance doesn't apply to monads unfortunately (see Are there contravariant monads?), but there is a contravariant analogue of Applicative: https://hackage.haskell.org/package/contravariant-1.4/docs/Data-Functor-Contravariant-Divisible.html
3 Note that my contramap' here doesn't match the actual contramap from Contravariant as implemented in Haskell; you can't make a -> r an actual instance of Contravariant in Haskell code, simply because the a is not the last type parameter of (->). Conceptually it works perfectly well, and you can always use a newtype wrapper to swap the type parameters and make that an instance (the contravariant package defines the Op type for exactly this purpose).
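For reference, a sketch of how that looks with Op, which Data.Functor.Contravariant defines essentially as newtype Op r a = Op { getOp :: a -> r }:

import Data.Functor.Contravariant (Op (..), contramap)

opExample :: Op String Int
opExample = contramap (+ 1) (Op show)
-- getOp opExample 41 == "42"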
4 At least for a definition of "serialise" that doesn't necessarily include being able to reconstruct the Bar later, since it would serialise a Bar identically to the Foo it mapped to, with no way to include any information about what the mapping was.
First of all, @haoformayor's answer is excellent, so consider this more an addendum than a full answer.
One way I like to think about functors (covariant and contravariant) is in terms of diagrams. The definitions are reflected in the following ones (I am abbreviating contramap with cmap):
covariant contravariant
f a ─── fmap φ ───▶ f b g a ◀─── cmap φ ─── g b
▲ ▲ ▲ ▲
│ │ │ │
│ │ │ │
a ────── φ ───────▶ b a ─────── φ ──────▶ b
Note that the only change between those two definitions is the direction of the arrow on top (well, and the names, so I can refer to them as different things).
The example I always have in my head when speaking about these is functions. An example of f would be type F a = forall r. r -> a (which means the first argument is an arbitrary but fixed r), or in other words all functions with a common input.
As always, the instance for the (covariant) Functor is just fmap ψ φ = ψ . φ.
Whereas the (contravariant) functor is all functions with a common result: type G a = forall r. a -> r. Here the Contravariant instance would be cmap ψ φ = φ . ψ.
But what the hell does this mean?

Usually φ :: a -> b and ψ :: b -> c, therefore (ψ . φ) x = ψ (φ x), or x ↦ y = φ x and y ↦ ψ y, makes sense. What is omitted in the statement for cmap is that here φ :: a -> b but ψ :: c -> a, so ψ cannot take the result of φ; it can, however, transform φ's arguments into something φ can use. Therefore x ↦ y = ψ x and y ↦ φ y is the only correct choice.
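To make these two examples real Haskell instances (the foralls above can't be made instances directly), we can wrap them in newtypes. From and To are names I made up here; To r is exactly the Op r type from Data.Functor.Contravariant:

import Data.Functor.Contravariant

newtype From r a = From (r -> a)   -- all functions out of a common r
newtype To   r a = To   (a -> r)   -- all functions into a common r

instance Functor (From r) where
  fmap psi (From phi) = From (psi . phi)

instance Contravariant (To r) where
  contramap psi (To phi) = To (phi . psi)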
This is reflected in the following diagrams, but here we have abstracted over the example of functions with a common source/target, to anything that has the property of being covariant/contravariant, which is a thing you often see in mathematics and/or Haskell.
covariant
f a ─── fmap φ ───▶ f b ─── fmap ψ ───▶ f c
▲ ▲ ▲
│ │ │
│ │ │
a ─────── φ ──────▶ b ─────── ψ ──────▶ c
contravariant
g a ◀─── cmap φ ─── g b ◀─── cmap ψ ─── g c
▲ ▲ ▲
│ │ │
│ │ │
a ─────── φ ──────▶ b ─────── ψ ──────▶ c
In mathematics you usually require a law to hold in order to call something a functor:
covariant
a f a
│ ╲ │ ╲
φ │ ╲ ψ.φ ══▷ fmap φ │ ╲ fmap (ψ.φ)
▼ ◀ ▼ ◀
b ──▶ c f b ────▶ f c
ψ fmap ψ
contravariant
a f a
│ ╲ ▲ ▶
φ │ ╲ ψ.φ ══▷ cmap φ │ ╲ cmap (ψ.φ)
▼ ◀ │ ╲
b ──▶ c f b ◀─── f c
ψ cmap ψ
which is equivalent to saying
fmap ψ . fmap φ = fmap (ψ.φ)
whereas
cmap φ . cmap ψ = cmap (ψ.φ)
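A quick concrete check of the contravariant law, using Predicate from Data.Functor.Contravariant with φ = show :: Int -> String and ψ = length:

import Data.Functor.Contravariant (Predicate (..), contramap)

lhs, rhs :: Predicate Int
lhs = contramap show (contramap length (Predicate even))   -- cmap φ . cmap ψ
rhs = contramap (length . show) (Predicate even)           -- cmap (ψ . φ)
-- getPredicate lhs 123 == getPredicate rhs 123   -- both False: length "123" == 3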
You can think of Functor f as an assertion that a never appears in the "negative position". This is an esoteric term for the following idea: notice that in the datatypes below, the a appears to act as a "result" variable.
newtype IO a = IO (World -> (World, a))
newtype Identity a = Identity a
newtype List a = List (forall r. r -> (a -> List a -> r) -> r)
In each of these examples a appears in a positive position. In some sense the a for each type represents the "result" of a function. It might help to think of the a in the second example as () -> a. And it might help to remember that the third example is equivalent to data List a = Nil | Cons a (List a). In callbacks like a -> List a -> r the a appears in the negative position, but the callback itself is in the negative position, so negative and negative multiply to be positive.
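Indeed, despite the a to the left of an arrow inside the callback, the Church-encoded List above is still a covariant Functor. A sketch of the instance (it needs RankNTypes for the forall):

{-# LANGUAGE RankNTypes #-}

newtype List a = List (forall r. r -> (a -> List a -> r) -> r)

instance Functor List where
  fmap f (List l) = List (\nil cons -> l nil (\x xs -> cons (f x) (fmap f xs)))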
This scheme for signing the parameters of a function is elaborated in this wonderful blog post.
Now note that each of these types admits a Functor instance. That is no mistake! Functors are meant to model the idea of categorical covariant functors, which "preserve the order of the arrows", i.e. f a -> f b as opposed to f b -> f a. In Haskell, types where a never appears in a negative position always admit Functor. We say these types are covariant on a.
To put it another way, one could validly rename the Functor class to be Covariant. They are one and the same idea.
The reason this idea is worded so strangely with the word "never" is that a can appear in both a positive and a negative location, in which case we say the type is invariant on a. a can also never appear at all (as with a phantom type), in which case we say the type is both covariant and contravariant on a – bivariant.
So for types where a never appears in the positive position, we say the type is contravariant in a. Every such type Foo a will admit an instance Contravariant Foo. Here are some examples, taken from the contravariant package:
- data Void a (a is phantom)
- data Unit a = Unit (a is phantom again)
- newtype Const constant a = Const constant
- newtype WriteOnlyStateVariable a = WriteOnlyStateVariable (a -> IO ())
- newtype Predicate a = Predicate (a -> Bool)
- newtype Equivalence a = Equivalence (a -> a -> Bool)
In these examples a is either bivariant or merely contravariant: a either never appears or is negative (in these contrived examples a always appears before an arrow, so determining this is extra-simple). As a result, each of these types admits an instance of Contravariant.
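And the instances write themselves by composing in front of the arrow; for example, for the Predicate newtype above (this is essentially the instance the contravariant package ships):

instance Contravariant Predicate where
  -- A test of b becomes a test of a by pre-processing with f :: a -> b.
  contramap f (Predicate g) = Predicate (g . f)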
A more intuitive exercise would be to squint at these types (which exhibit contravariance) and then squint at the types above (which exhibit covariance) and see if you can intuit a difference in the semantic meaning of a. Maybe that is helpful, or maybe it is still just abstruse sleight of hand.
When might these be practically useful? Say, for example, we want to partition a list of cookies by what kind of chips they have. We have a chipEquality :: Chip -> Chip -> Bool. To obtain a Cookie -> Cookie -> Bool, we simply evaluate getEquivalence . contramap cookie2chip . Equivalence $ chipEquality.
Pretty verbose! But solving the problem of newtype-induced verbosity will have to be another question...
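For completeness, a runnable sketch of that example (Chip, Cookie, and cookie2chip are invented here; Equivalence and getEquivalence come from Data.Functor.Contravariant):

import Data.Functor.Contravariant (Equivalence (..), contramap)

data Chip = Chocolate | Butterscotch deriving (Eq, Show)

newtype Cookie = Cookie { cookie2chip :: Chip } deriving Show

chipEquality :: Chip -> Chip -> Bool
chipEquality = (==)

cookieEquality :: Cookie -> Cookie -> Bool
cookieEquality = getEquivalence . contramap cookie2chip . Equivalence $ chipEquality
-- cookieEquality (Cookie Chocolate) (Cookie Chocolate) == True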
24 Days of Hackage: contravariant
Covariance, contravariance, and positive and negative positions
I love profunctors
Talk: Fun with Profunctors: I cannot overstate how great this talk is