Correct me if I'm wrong, but it seems like algebraic data types in Haskell are useful in many of the cases where you would use classes and inheritance in OO languages. But
Check "Open data types and open functions" http://lambda-the-ultimate.org/node/1453
In object-oriented languages, it is easy to extend data by defining new classes, but it is difficult to add new functions. In functional languages, the situation is reversed: adding new functions poses no problems, but extending data (adding new data constructors) requires modifying existing code. The problem of supporting both directions of extensibility is known as the expression problem. We present open data types and open functions as a lightweight solution to the expression problem in the Haskell language. The idea is that constructors of open data types, and equations of open functions can appear scattered throughout a program. In particular, they may reside in different modules. The intended semantics is as follows: the program should behave as if the data types and functions were closed, defined in one place. The order of function equations is determined by best-fit pattern matching, where a specific pattern takes precedence over an unspecific one. We show that our solution is applicable to the expression problem, generic programming, and exceptions. We sketch two implementations. A simple one, derived from the semantics, and one based on mutually recursive modules that permits separate compilation.
The answer has to do with the ways in which code is easy to extend, a tension between classes and algebraic data types that Phil Wadler dubbed "the expression problem":
With algebraic data types,
It is very cheap to add a new operation on things: you just define a new function. All the old functions on those things continue to work unchanged.
It is very expensive to add a new kind of thing: you have to add a new constructor to an existing data type, and you have to edit and recompile every function which uses that type.
With classes,
It is very cheap to add a new kind of thing: just add a new subclass, and as needed you define specialized methods, in that class, for all the existing operations. The superclass and all the other subclasses continue to work unchanged.
It is very expensive to add a new operation on things: you have to add a new method declaration to the superclass and potentially add a method definition to every existing subclass. In practice, the burden varies depending on the method.
So, algebraic data types are closed because a closed type supports certain kinds of program evolution well. For example, if your data types define a language, it is easy to add new compiler passes without invalidating old ones or changing the data.
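To make that concrete, here is a minimal sketch (the Expr type and both functions are invented for illustration) of the closed-ADT side of the trade-off: adding a new pass over the language is just adding a new function, and nothing that already exists has to change.

data Expr = Lit Int | Add Expr Expr | Neg Expr

-- an existing pass over the closed type
eval :: Expr -> Int
eval (Lit n)   = n
eval (Add a b) = eval a + eval b
eval (Neg a)   = negate (eval a)

-- a new pass, added later (possibly in another module), without touching
-- Expr, eval, or any other existing code
pretty :: Expr -> String
pretty (Lit n)   = show n
pretty (Add a b) = "(" ++ pretty a ++ " + " ++ pretty b ++ ")"
pretty (Neg a)   = "-" ++ pretty a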
It is possible to have "open" data types, but except under carefully controlled circumstances, the type checking becomes difficult. Todd Millstein did some very beautiful work on a language design that supports open algebraic types and extensible functions, all with a modular type checker. I found his paper a great pleasure to read.
If you write a function like
maybeToList Nothing = []
maybeToList (Just x) = [x]
then you know that it's never going to produce a runtime error because you've covered all of the cases. This ceases to be true as soon as the Maybe type is extensible. In those cases where you need an extensible type (and they are rarer than you think), the canonical Haskell solution is to use a type class.
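As a hedged sketch of that canonical solution (the class, types, and instances below are invented for illustration, not taken from any library): instead of an open sum, you define a type class, and each new case is a new type with its own instance.

class Shape a where
  area :: a -> Double

data Circle = Circle Double
data Square = Square Double

instance Shape Circle where
  area (Circle r) = pi * r * r

instance Shape Square where
  area (Square side) = side * side

-- anyone can add a new "case" later by defining a new type and a new
-- instance, without modifying the class or the existing instances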
First, as counterpoint to Charlie's answer, this isn't intrinsic to functional programming. OCaml has the concept of open unions or polymorphic variants, which essentially do what you want.
As for why, I believe this choice was made for Haskell because defining your own types is so cheap and easy. So if you'd rather have a data Color r b g = Red r | Blue b | Green g type, it's easy to make, and you can easily make it act like a monad or a functor or however other functions need.
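For instance, here is a minimal sketch (the instance is illustrative, not from the answer) of making that Color type a Functor in its last type parameter:

data Color r b g = Red r | Blue b | Green g

instance Functor (Color r b) where
  fmap _ (Red r)   = Red r
  fmap _ (Blue b)  = Blue b
  fmap f (Green g) = Green (f g)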
Some excellent answers on this (admittedly old) question, but I feel I have to throw my few cents in.
There is no way that I can somehow add another option to this type later on without modifying this declaration. So what are the benefits of this system? It seems like the OO way would be much more extensible.
The answer to this, I believe, is that the sort of extensibility that open sums give you is not always a plus, and that, correspondingly, the fact that OO forces this on you is a weakness.
The advantage of closed unions is their exhaustiveness: if you have fixed all the alternatives at compilation time, then you can be certain that there will be no unforeseen cases that your code cannot handle. This is a valuable property in many problem domains, for example, in abstract syntax trees for languages. If you're writing a compiler, the expressions of the language fall into a predefined, closed set of subcases—you do not want people to be able to add new subcases at runtime that your compiler doesn't understand!
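A small, invented illustration of that exhaustiveness guarantee: because the type below is closed, the compiler can check (for example via GHC's -Wincomplete-patterns warning) that every constructor is handled, and no one can add a case that the function does not know about.

data Stmt = Assign String Int | Print String | Skip

describe :: Stmt -> String
describe (Assign x n) = x ++ " := " ++ show n
describe (Print x)    = "print " ++ x
describe Skip         = "skip"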
In fact, compiler ASTs are one of the classic Gang of Four motivating examples for the Visitor Pattern, which is the OOP counterpart to closed sums and exhaustive pattern matching. It is instructive to reflect on the fact that OO programmers ended up inventing a pattern to recover closed sums.
Likewise, procedural and functional programmers have invented patterns to obtain the effect of sums. The simplest one is the "record of functions" encoding, which corresponds to OO interfaces. A record of functions is, effectively, a dispatch table. (Note that C programmers have been using this technique for ages!) The trick is that there is very often a large number of possible functions of a given type—often infinitely many. So if you have a record type whose fields are functions, then that can easily support an astronomically large or infinite set of alternatives. And what's more, since records are created at runtime and can be done flexibly based on runtime conditions, the alternatives are late bound.
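Here is a minimal sketch of that "record of functions" encoding (the Logger type and its fields are invented for illustration): the record plays the role of an OO interface, each field is a function, and new implementations can be built at runtime.

data Logger = Logger
  { logInfo  :: String -> IO ()
  , logError :: String -> IO ()
  }

-- one "implementation" of the interface
consoleLogger :: Logger
consoleLogger = Logger
  { logInfo  = \msg -> putStrLn ("[info] " ++ msg)
  , logError = \msg -> putStrLn ("[error] " ++ msg)
  }

-- because records are ordinary values, the alternatives are late bound:
-- new ones can be built at runtime from existing ones
prefixed :: String -> Logger -> Logger
prefixed p base = Logger
  { logInfo  = logInfo base . (p ++)
  , logError = logError base . (p ++)
  }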
The final comment I'd make is that, in my mind, OO has made too many people believe that extensibility is synonymous with late binding (e.g., the ability to add new subcases to a type at runtime), when this just isn't generally true. Late binding is one technique for extensibility. Another technique is composition—building complex objects out of a fixed vocabulary of building blocks and rules for assembling them together. The vocabulary and rules are ideally small, but designed so that they have rich interactions that allow you to build very complex things.
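To illustrate the composition style (all the names below are invented): a small, fixed vocabulary of validators and combinators can be assembled into arbitrarily complex checks without ever adding a new subcase to any type.

type Validator a = a -> Bool

-- a fixed vocabulary of building blocks
nonEmpty :: Validator String
nonEmpty = not . null

shorterThan :: Int -> Validator String
shorterThan n s = length s < n

-- a rule for assembling them
both :: Validator a -> Validator a -> Validator a
both p q x = p x && q x

-- complex behaviour built purely by composition
validName :: Validator String
validName = nonEmpty `both` shorterThan 50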
Functional programming—and the ML/Haskell statically typed flavors in particular—have long emphasized composition over late binding. But in reality, both kinds of techniques exist in both paradigms, and should be in the toolkit of a good programmer.
It's also worth noting that programming languages themselves are fundamentally examples of composition. A programming language has a finite, hopefully simple syntax that allows you to combine its elements to write any possible program. (This in fact goes back to the compilers/Visitor Pattern example above and motivates it.)
The fact that ADTs are closed makes it a lot easier to write total functions, that is, functions that always produce a result for all possible values of their type, e.g.:
maybeToList :: Maybe a -> [a]
maybeToList Nothing = []
maybeToList (Just x) = [x]
If Maybe were open, someone could add an extra constructor and the maybeToList function would suddenly break.
In OO this isn't an issue when you're using inheritance to extend a type, because when you call a function for which there is no specific overload, it can just use the implementation for a superclass. I.e., you can call printPerson(Person p) just fine with a Student object if Student is a subclass of Person.
In Haskell, you would usually use encapsulation and type classes when you need to extend your types. For example:
class Eq a where
  (==) :: a -> a -> Bool

instance Eq Bool where
  False == False = True
  False == True  = False
  True  == False = False
  True  == True  = True

instance Eq a => Eq [a] where
  []     == []     = True
  (x:xs) == (y:ys) = x == y && xs == ys
  _      == _      = False
Now, the == function is completely open: you can add your own types by making them instances of the Eq class.
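For example (using a hypothetical type invented here to show the extension point), a new type defined in user code can be plugged into the Eq class above without touching the class or the existing instances:

data Light = On | Off

instance Eq Light where
  On  == On  = True
  Off == Off = True
  _   == _   = False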
Note that there has been work on the idea of extensible datatypes, but that is definitely not part of Haskell yet.