I've been reading a lot of stuff about functional programming lately, and I can understand most of it, but the one thing I just can't wrap my head around is stateless coding.
I'm late to the discussion, but I wanted to add a few points for people who are struggling with functional programming.
First, the imperative way (in pseudocode):
moveTo(dest, cur):
    while (cur != dest):
        if (cur < dest):
            cur += 1
        else:
            cur -= 1
    return cur
Now the functional way (in pseudocode). I'm leaning heavily on the ternary operator because I want people from imperative backgrounds to actually be able to read this code. So if you don't use the ternary operator much (I always avoided it in my imperative days), here is how it works.
predicate ? if-true-expression : if-false-expression
You can chain the ternary expression by putting a new ternary expression in place of the false-expression
predicate1 ? if-true1-expression :
predicate2 ? if-true2-expression :
else-expression
So with that in mind, here's the functional version.
moveTo(dest, cur):
    return (
        cur == dest ? cur :
        cur < dest ? moveTo(dest, cur + 1) :
        moveTo(dest, cur - 1)
    )
This is a trivial example. If this were moving people around in a game world, you'd have to introduce side effects such as drawing the object's current position on the screen and adding a bit of delay to each call based on how fast the object moves. But you still wouldn't need mutable state.
The lesson is that functional languages "mutate" state by calling the function with different parameters. Obviously this doesn't really mutate any variables, but that's how you get a similar effect. This means you'll have to get used to thinking recursively if you want to do functional programming.
Learning to think recursively is not hard, but it does take both practice and a toolkit. That small section in that "Learn Java" book where they used recursion to calculate factorial does not cut it. You need a toolkit of skills like making iterative processes out of recursion (this is why tail recursion is essential for functional languages), continuations, invariants, etc. You wouldn't do OO programming without learning about access modifiers, interfaces, etc. Same thing for functional programming.
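To make the tail-recursion point concrete, here is a minimal sketch of the moveTo pseudocode above in F# (the language used further down this thread). Both recursive calls are in tail position, so the compiler can turn them into the same loop the imperative version uses.

// moveTo rewritten in F#; each "step" is just a new call with new arguments,
// and because the recursive calls are in tail position this runs in constant
// stack space, exactly like the while loop.
let rec moveTo dest cur =
    if cur = dest then cur
    elif cur < dest then moveTo dest (cur + 1)
    else moveTo dest (cur - 1)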
My recommendation is to do the Little Schemer (note that I say "do" and not "read") and then do all the exercises in SICP. When you're done, you'll have a different brain than when you started.
Using some creativity and pattern matching, stateless games have been created, as well as rolling demos and visualizations.
Or if you play a video game, there are tons of state variables, beginning with the positions of all the characters, who tend to move around constantly. How can you possibly do anything useful without keeping track of changing values?
If you're interested, here's a series of articles which describe game programming with Erlang.
You probably won't like this answer, but you won't get functional programming until you use it. I can post code samples and say "Here, don't you see?" -- but if you don't understand the syntax and underlying principles, then your eyes just glaze over. From your point of view, it looks as if I'm doing the same thing as an imperative language, but just setting up all kinds of boundaries to purposefully make programming more difficult. From my point of view, you're just experiencing the Blub paradox.
I was skeptical at first, but I jumped on the functional programming train a few years ago and fell in love with it. The trick with functional programming is being able to recognize patterns, particularly variable assignments, and move the imperative state to the stack. A for-loop, for example, becomes recursion:
// Imperative
let printTo x =
    for a in 1 .. x do
        printfn "%i" a

// Recursive
let printTo x =
    let rec loop a =
        if a <= x then
            printfn "%i" a
            loop (a + 1)
    loop 1
It's not very pretty, but we get the same effect with no mutation. Of course, wherever possible, we like to avoid looping altogether and just abstract it away:
// Preferred
let printTo x = seq { 1 .. x } |> Seq.iter (fun a -> printfn "%i" a)
The Seq.iter function enumerates the collection and invokes the anonymous function for each item. Very handy :)
I know, printing numbers isn't exactly impressive. However, we can use the same approach with games: hold all state on the stack and create a new object with our changes in the recursive call. In this way, each frame is a stateless snapshot of the game: rather than updating objects in place, each iteration simply creates a brand new object with the desired changes for whatever stateful objects need updating. The pseudocode for this might be:
// imperative version
pacman = new pacman(0, 0)
while true
    if key = UP then pacman.y--
    elif key = DOWN then pacman.y++
    elif key = LEFT then pacman.x--
    elif key = RIGHT then pacman.x++
    render(pacman)
// functional version
let rec loop pacman =
    render(pacman)
    let x, y = switch(key)
        case LEFT:  pacman.x - 1, pacman.y
        case RIGHT: pacman.x + 1, pacman.y
        case UP:    pacman.x, pacman.y - 1
        case DOWN:  pacman.x, pacman.y + 1
    loop(new pacman(x, y))
The imperative and functional versions behave identically, but the functional version clearly uses no mutable state: all state is held on the stack. The nice thing about this approach is that, if something goes wrong, debugging is easy; all you need is a stack trace.
This scales up to any number of objects in the game, because all objects (or collections of related objects) can be rendered in their own thread.
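For readers who want something runnable rather than pseudocode, here is a minimal, self-contained F# sketch of the functional loop above. The Key type, render, and readKey are stand-ins I made up for real input and drawing code, not part of any actual game library.

// Hypothetical input and rendering stand-ins; a real game would draw to a
// window and read events from its framework instead.
type Key = Left | Right | Up | Down | Quit
type Pacman = { X : int; Y : int }

let render p = printfn "pacman at (%d, %d)" p.X p.Y

let readKey () =
    match System.Console.ReadKey(true).Key with
    | System.ConsoleKey.LeftArrow  -> Left
    | System.ConsoleKey.RightArrow -> Right
    | System.ConsoleKey.UpArrow    -> Up
    | System.ConsoleKey.DownArrow  -> Down
    | _                            -> Quit

// The game loop: no field of Pacman is ever mutated; each frame passes a
// brand new record to the next recursive (tail) call.
let rec loop pacman =
    render pacman
    match readKey () with
    | Left  -> loop { pacman with X = pacman.X - 1 }
    | Right -> loop { pacman with X = pacman.X + 1 }
    | Up    -> loop { pacman with Y = pacman.Y - 1 }
    | Down  -> loop { pacman with Y = pacman.Y + 1 }
    | Quit  -> ()

loop { X = 0; Y = 0 }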
Just about every user application I can think of involves state as a core concept.
In functional languages, rather than mutating the state of objects, we simply return a new object with the changes we want. It's more efficient than it sounds. Data structures, for example, are very easy to represent immutably. Stacks, in particular, are notoriously easy to implement:
using System;

namespace ConsoleApplication1
{
    // Operations on immutable stacks: every function returns a new stack
    // instead of modifying an existing one.
    static class Stack
    {
        public static Stack<T> Cons<T>(T hd, Stack<T> tl) { return new Stack<T>(hd, tl); }

        public static Stack<T> Append<T>(Stack<T> x, Stack<T> y)
        {
            return x == null ? y : Cons(x.Head, Append(x.Tail, y));
        }

        public static void Iter<T>(Stack<T> x, Action<T> f) { if (x != null) { f(x.Head); Iter(x.Tail, f); } }
    }

    // An immutable "cons cell": Head and Tail are readonly, set once in the constructor.
    class Stack<T>
    {
        public readonly T Head;
        public readonly Stack<T> Tail;

        public Stack(T hd, Stack<T> tl)
        {
            this.Head = hd;
            this.Tail = tl;
        }
    }

    class Program
    {
        static void Main(string[] args)
        {
            Stack<int> x = Stack.Cons(1, Stack.Cons(2, Stack.Cons(3, Stack.Cons(4, null))));
            Stack<int> y = Stack.Cons(5, Stack.Cons(6, Stack.Cons(7, Stack.Cons(8, null))));
            Stack<int> z = Stack.Append(x, y);
            Stack.Iter(z, a => Console.WriteLine(a));
            Console.ReadKey(true);
        }
    }
}
The code above constructs two immutable lists, appends them together to make a new list, and prints the result. No mutable state is used anywhere in the application. It looks a little bulky, but that's only because C# is a verbose language. Here's the equivalent program in F#:
type 'a stack =
    | Cons of 'a * 'a stack
    | Nil

let rec append x y =
    match x with
    | Cons(hd, tl) -> Cons(hd, append tl y)
    | Nil -> y

let rec iter f = function
    | Cons(hd, tl) -> f(hd); iter f tl
    | Nil -> ()

let x = Cons(1, Cons(2, Cons(3, Cons(4, Nil))))
let y = Cons(5, Cons(6, Cons(7, Cons(8, Nil))))
let z = append x y
iter (fun a -> printfn "%i" a) z
No mutable state is necessary to create and manipulate lists. Nearly all data structures can be easily converted into their functional equivalents. I wrote a page here which provides immutable implementations of stacks, queues, leftist heaps, red-black trees, and lazy lists. Not a single snippet of code contains any mutable state. To "mutate" a tree, I create a brand new one with the new node I want -- this is very efficient because I don't need to make a copy of every node in the tree; I can reuse the old ones in the new tree.
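To make the sharing concrete, here is a small sketch (mine, not code from the linked page) of inserting into an immutable binary search tree in F#: only the nodes on the path to the new value get rebuilt, and every untouched subtree of the old tree is reused by the new one.

// Immutable binary search tree; insert returns a new tree without touching
// the old one.
type 'a tree =
    | Node of 'a tree * 'a * 'a tree
    | Leaf

let rec insert v = function
    | Leaf -> Node(Leaf, v, Leaf)
    | Node(l, x, r) when v < x -> Node(insert v l, x, r)   // r is reused as-is
    | Node(l, x, r) when v > x -> Node(l, x, insert v r)   // l is reused as-is
    | node -> node                                         // value already present

let t  = List.fold (fun acc v -> insert v acc) Leaf [5; 2; 8; 1; 9]
let t' = insert 7 t   // t is unchanged; t' shares most of t's nodes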
As a more significant example, I also wrote this SQL parser which is totally stateless (or at least my code is stateless; I don't know whether the underlying lexing library is stateless).
Stateless programming is just as expressive and powerful as stateful programming, it just requires a little practice to train yourself to start thinking statelessly. Of course, "stateless programming when possible, stateful programming where necessary" seems to be the motto of most impure functional languages. There's no harm in falling back on mutables when the functional approach just isn't as clean or efficient.
Short answer: you can't.
So what's the fuss about immutability then?
If you're well-versed in imperative languages, then you know that "globals are bad". Why? Because they introduce (or have the potential to introduce) some very hard-to-untangle dependencies in your code. And dependencies are not good; you want your code to be modular: parts of the program should influence other parts as little as possible. And FP brings you to the holy grail of modularity: no side effects at all. You just have your f(x) = y. Put x in, get y out. No changes to x or anything else. FP makes you stop thinking about state and start thinking in terms of values. All of your functions simply receive values and produce new values.
This has several advantages.
First off, no side effects means simpler programs that are easier to reason about. No worrying that introducing a new part of the program is going to interfere with and crash an existing, working part.
Second, this makes programs trivially parallelizable (efficient parallelization is another matter).
Third, there are some possible performance advantages. Say you have a function:
double x = 2 * x
Now you put a value of 3 in, and you get a value of 6 out. Every time. But you can do that in an imperative language as well, right? Yep. But the problem is that in an imperative language, you can do even more. I can do:
int y = 2;
int double(int x){ return x * y; }
but I could also do
int y = 2;
int double(int x){ return x * (y++); }
The imperative compiler doesn't know whether I'm going to have side effects or not, which makes it more difficult to optimize (i.e. double 2 needn't be 4 every time). The functional compiler knows I won't have side effects - hence, it can optimize every time it sees "double 2".
Now, even though creating new values every time seems incredibly wasteful for complex types of values in terms of computer memory, it doesn't have to be so. Because, if you have f(x) = y, and values x and y are "mostly the same" (e.g. trees which differ only in a few leaves), then x and y can share parts of memory - because neither of them will mutate.
So if this immutable thing is so great, why did I answer that you can't do anything useful without mutable state? Well, without mutability, your entire program would be a giant f(x) = y function. And the same would go for all parts of your program: just functions, and functions in the "pure" sense at that. As I said, this means f(x) = y every time. So e.g. readFile("myFile.txt") would need to return the same string value every time. Not too useful.
Therefore, every FP language provides some means of mutating state. "Pure" functional languages (e.g. Haskell) do this using somewhat scary concepts such as monads, while "impure" ones (e.g. ML) allow it directly.
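Since F# (used elsewhere in this thread) sits in the ML family, here is a tiny illustration of that direct, "impure" escape hatch; the point is just that the mutation is explicit and opt-in.

// A ref cell is explicit, opt-in mutable state in an otherwise functional language.
let counter = ref 0
counter.Value <- counter.Value + 1
printfn "%d" counter.Value   // prints 1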
And of course, functional languages come with a host of other goodies which make programming more efficient, such as first-class functions etc.
Note that saying functional programming does not have 'state' is a little misleading and might be the cause of the confusion. It definitely has no 'mutable state', but it can still have values that are manipulated; they just cannot be changed in-place (e.g. you have to create new values from the old values).
This is a gross over-simplification, but imagine you had an OO language where all the properties on classes are set only once, in the constructor, and all methods are static functions. You could still perform pretty much any calculation by having methods take objects containing all the values they need for their calculations and then return new objects with the result (maybe even a new instance of the same class).
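As a sketch of that idea in F# (which supports classes directly), imagine a hypothetical Account type whose fields are set once in the constructor, with a static method that returns a new instance instead of modifying the old one:

// Hypothetical Account type for illustration: every field is fixed at
// construction time, and "changes" come back as new instances.
type Account(owner : string, balance : decimal) =
    member this.Owner = owner
    member this.Balance = balance
    static member Deposit(account : Account, amount : decimal) =
        Account(account.Owner, account.Balance + amount)

let a = Account("alice", 100m)
let b = Account.Deposit(a, 25m)
printfn "%s: %M then %M" a.Owner a.Balance b.Balance   // a itself is unchanged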
It may be 'hard' to translate existing code into this paradigm, but that is because it really requires a completely different way of thinking about code. As a side effect, though, in most cases you get a lot of opportunity for parallelism for free.
Addendum: (Regarding your edit of how to keep track of values that need to change)
They would be stored in an immutable data structure of course...
This is not a suggested 'solution', but the easiest way to see that this will always work is that you could store these immutable values in a map-like structure (a dictionary / hashtable), keyed by a 'variable name'.
Obviously in practical solutions you'd use a more sane approach, but this does show that, worst case, if nothing else would work, you could 'simulate' mutable state with such a map that you carry around through your invocation tree.
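As a worst-case sketch of that map idea in F#: the names here (setVar, step, "counter") are made up purely for illustration.

// "Mutable variables" simulated by threading an immutable Map of
// name -> value through each call; nothing is ever updated in place.
let setVar name value (env : Map<string, int>) = Map.add name value env

let step env =
    let counter = Map.find "counter" env
    setVar "counter" (counter + 1) env   // returns a new map; env is unchanged

let env0 = Map.ofList [ "counter", 0 ]
let env3 = env0 |> step |> step |> step
printfn "%d" (Map.find "counter" env3)   // prints 3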
That's the way FORTRAN would work without COMMON blocks: You'd write methods that had the values you passed in and local variables. That's it.
Object-oriented programming brought state and behavior together, but it was a new idea when I first encountered it in C++ back in 1994.
Geez, I was a functional programmer when I was a mechanical engineer and I didn't know it!