When would you NOT want to use functional programming? What is it not so good at?
I am looking more for disadvantages of the paradigm as a whole, not for things like "not widely used" or "no good debugger available". Those answers may be correct as of now, but they deal with FP being a new concept (an unavoidable issue) and not with any inherent qualities.
It's hard for me to think of many downsides to functional programming. Then again, I am a former chair of the International Conference on Functional Programming, so you may safely assume I am biased.
I think the main downsides have to do with isolation and with barriers to entry. Learning to write good functional programs means learning to think differently, and to do it well requires a substantial investment of time and effort. It is difficult to learn without a teacher. These properties lead to some downsides:
It is likely that a functional program written by a newcomer will be unnecessarily slow—more likely than, say, a C program written by a newcomer to C. On the other hand, it is about equally likely that a C++ program written by a newcomer will be unnecessarily slow. (All those shiny features...)
Generally experts have no difficulty writing fast functional programs; and in fact some of the best-performing parallel programs on 8- and 16-core processors are now written in Haskell.
It's more likely that someone starting functional programming will give up before realizing the promised productivity gains than will someone starting, say, Python or Visual Basic. There just isn't as much support in the form of books and development tools.
There are fewer people to talk to. Stackoverflow is a good example; relatively few Haskell programmers visit the site regularly (although part of this is that Haskell programmers have their own lively forums which are much older and better established than Stackoverflow).
It's also true that you can't talk to your neighbor very easily, because functional-programming concepts are harder to teach and harder to learn than the object-oriented concepts behind languages like Smalltalk, Ruby, and C++. Moreover, the object-oriented community has spent years developing good explanations for what they do, whereas the functional-programming community seems to think that their stuff is obviously great and doesn't require any special metaphors or vocabulary for explanation. (They are wrong. I am still waiting for the first great book Functional Design Patterns.)
A well-known downside of lazy functional programming (applies to Haskell or Clean but not to ML or Scheme or Clojure) is that it is very difficult to predict the time and space costs of evaluating a lazy functional program—even experts can't do it. This problem is fundamental to the paradigm and is not going away. There are excellent tools for discovering time and space behavior post facto, but to use them effectively you have to be expert already.
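A minimal sketch of this in GHC Haskell (the function names are mine, for illustration): two definitions that differ by one apostrophe can have radically different space behavior, and nothing in the source code advertises it. Whether the lazy version actually leaks can even depend on the compiler's optimization level, which rather proves the point.

import Data.List (foldl')

-- Lazy foldl builds the unevaluated thunk (((0 + 1) + 2) + ...)
-- across the whole list before forcing anything, so on a large
-- list it can exhaust memory:
lazySum :: [Integer] -> Integer
lazySum = foldl (+) 0

-- The strict variant forces the accumulator at each step and
-- runs in constant space:
strictSum :: [Integer] -> Integer
strictSum = foldl' (+) 0

main :: IO ()
main = print (strictSum [1 .. 10000000])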
I think the bullshit surrounding functional languages is the biggest problem with functional programming. When I started using functional programming in anger, a big hurdle for me was understanding why many of the highly-evolved arguments put forward by the Lisp community (e.g. about macros and homoiconic syntax) were wrong. Today, I see many people being deceived by the Haskell community with regard to parallel programming.
In fact, you don't have to look any further than this very thread to see some of it:
"Generally experts have no difficulty writing fast functional programs; and in fact some of the best-performing parallel programs on 8- and 16-core processors are now written in Haskell."
Statements like this might give you the impression that experts choose Haskell because it can be so good for parallelism, but the truth is that Haskell's performance sucks, and the myth that Haskell is good for multicore parallelism is perpetuated by Haskell researchers with little to no knowledge about parallelism, who avoid real peer review by publishing only inside the comfort zone of journals and conferences under the control of their own clique. Haskell is invisible in real-world parallel/multicore/HPC work precisely because it sucks at parallel programming.
Specifically, the real challenge in multicore programming is taking advantage of CPU caches to make sure cores aren't starved of data, a problem that has never been addressed in the context of Haskell. Charles Leiserson's group at MIT did an excellent job of explaining and solving this problem using their own Cilk language that went on to become the backbone of real-world parallel programming for multicores in both Intel TBB and Microsoft's TPL in .NET 4. There is a superb description of how this technique can be used to write elegant high-level imperative code that compiles to scalable high-performance code in the 2008 paper The cache complexity of multithreaded cache oblivious algorithms. I explained this in my review of some of the state-of-the-art Parallel Haskell research.
This leaves a big question mark over the purely functional programming paradigm. This is the price you pay for abstracting away time and space, which was always the major motivation behind this declarative paradigm.
One big disadvantage to functional programming is that on a theoretical level, it doesn't match the hardware as well as most imperative languages. (This is the flip side of one of its obvious strengths, being able to express what you want done rather than how you want the computer to do it.)
For example, functional programming makes heavy use of recursion. This is fine in pure lambda calculus, because mathematics' "stack" is unlimited. On real hardware, of course, the stack is very much finite, and naively recursing over a large dataset can make your program go boom. Most functional languages perform tail-call optimization so that this doesn't happen, but making an algorithm tail recursive can force you into some rather unbeautiful code gymnastics: a tail-recursive map, for example, either produces a backwards list or has to build up a difference list, so it must do extra work to get back to a normally ordered list compared to the non-tail-recursive version (a sketch follows below).
(Thanks to Jared Updike for the difference list suggestion.)
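Here is a sketch of those gymnastics in Haskell (the function names are mine; note that in a lazy language the naive version is actually safe, because the recursion is guarded by the (:) constructor, so the stack concern bites hardest in strict languages):

-- Naive map: elegant, but the recursive call sits under (:),
-- so it is not tail recursive.
mapNaive :: (a -> b) -> [a] -> [b]
mapNaive _ []     = []
mapNaive f (x:xs) = f x : mapNaive f xs

-- Tail-recursive map: the recursive call is in tail position,
-- but the results accumulate in reverse order, so a final
-- reverse is needed to repair the output.
mapTail :: (a -> b) -> [a] -> [b]
mapTail f = go []
  where
    go acc []     = reverse acc
    go acc (x:xs) = go (f x : acc) xs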
If your language does not provide good mechanisms to plumb state/exception behavior through your program (e.g. syntactic sugar for monadic binds), then any task involving state or exceptions becomes a chore. (Even with this sugar, some people find it harder to deal with state and exceptions in FP.)
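As a sketch of what that sugar buys you, here is a hypothetical counter written in Haskell with do-notation over the State monad (from the mtl package), next to a hand-plumbed version; the example and names are mine:

import Control.Monad.State

-- With do-notation, the state threading is invisible:
fresh :: State Int Int
fresh = do
  n <- get
  put (n + 1)
  return n

labelTwo :: State Int (Int, Int)
labelTwo = do
  a <- fresh
  b <- fresh
  return (a, b)
-- runState labelTwo 0 == ((0, 1), 2)

-- Without the sugar, every step must pass the state by hand:
freshByHand :: Int -> (Int, Int)
freshByHand s = (s, s + 1)

labelTwoByHand :: Int -> ((Int, Int), Int)
labelTwoByHand s0 =
  let (a, s1) = freshByHand s0
      (b, s2) = freshByHand s1
  in  ((a, b), s2)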
Functional idioms often do lots of inversion-of-control or laziness, which often has a negative impact on debugging (using a debugger). (This is somewhat offset by FP being much less error-prone due to immutability/referential transparency, which means you'll need to debug less often.)
Philip Wadler wrote a paper about this, "Why No One Uses Functional Languages", which addresses the practical pitfalls that stop people from adopting FP languages:
- http://www.cse.iitb.ac.in/~as/fpcourse/sigplan-why.ps.gz
- http://carpanta.dc.fi.udc.es/pf/papers/sigplan-angry.ps.gz
Setting aside speed and adoption, and addressing a more basic issue: I've heard it put that with functional programming, it's very easy to add new functions for existing datatypes, but it's "hard" to add new datatypes. Consider:
(Written in SML/NJ. Also, please excuse the somewhat contrived example.)
datatype Animal = Dog | Cat;
fun happyNoise(Dog) = "pant pant"
| happyNoise(Cat) = "purrrr";
fun excitedNoise(Dog) = "bark!"
| excitedNoise(Cat) = "meow!";
I can very quickly add the following:
fun angryNoise(Dog) = "grrrrrr"
| angryNoise(Cat) = "hisssss";
However, if I add a new type to Animal, I have to go through each function to add support for it:
datatype Animal = Dog | Cat | Chicken;
fun happyNoise(Dog) = "pant pant"
| happyNoise(Cat) = "purrrr"
| happyNoise(Chicken) = "cluck cluck";
fun excitedNoise(Dog) = "bark!"
| excitedNoise(Cat) = "meow!"
| excitedNoise(Chicken) = "cock-a-doodle-doo!";
fun angryNoise(Dog) = "grrrrrr"
| angryNoise(Cat) = "hisssss"
| angryNoise(Chicken) = "squaaaawk!";
Notice, though, that the exact opposite is true for object-oriented languages: it's very easy to add a new subclass to an abstract class, but it can be tedious to add a new abstract method to the abstract class/interface for all subclasses to implement. This tension, where each style makes one axis of extension easy and the other painful, is known as the expression problem.
I just wanted to buzz in with an anecdote, because I'm learning Haskell right now as we speak. I'm learning Haskell because the idea of separating functions from actions appeals to me, and there are some really sexy theories behind implicit parallelization thanks to the isolation of pure functions from impure ones.
I've been learning the fold class of functions for three days now. Fold seems to have a very simple application: taking a list and reducing it to a single value. Haskell implements foldl and foldr for this. The two functions have massively different implementations. There is an alternate implementation of foldl, called foldl'. On top of this, there are versions with a slightly different signature, foldr1 and foldl1, which take no explicit initial value and instead seed the fold from the list itself, and foldl1 in turn has a corresponding foldl1'. As if all of this wasn't mind-blowing enough, the functions that the folds take as arguments and use internally in the reduction have two separate signatures, only one variant (foldr) works on infinite lists, and only one (foldl', as I understand it) runs in constant memory, because it forces each reduction step as it goes. Understanding why foldr can work on infinite lists requires at least a decent understanding of the language's lazy evaluation, plus the minor detail that not all functions force the evaluation of their second argument. The diagrams online for these functions are confusing as hell for someone who never saw them in college. There is no perldoc equivalent; I can't find a single plain description of what the functions in the Haskell Prelude do. (The Prelude is a kind of standard library that comes preloaded with the core distribution.) My best resource is really a guy I've never met (Cale) who is helping me at a huge expense to his own time.
Oh, and fold doesn't have to reduce the list to a non-list scalar: the identity function on lists can be written foldr (:) [] [1,2,3,4], which highlights that you can accumulate to a list.
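To make the two behaviors mentioned above concrete, here is a small runnable sketch (example mine):

-- foldr can consume an infinite list because (||) returns as soon
-- as its first argument is True, never demanding the rest of the fold:
containsTrue :: [Bool] -> Bool
containsTrue = foldr (||) False

-- The identity fold: rebuilding a list by folding (:) over it
-- with [] as the initial accumulator.
listIdentity :: [Int] -> [Int]
listIdentity = foldr (:) []

main :: IO ()
main = do
  print (containsTrue (cycle [False, True]))  -- True, despite infinite input
  print (listIdentity [1, 2, 3, 4])           -- [1,2,3,4]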
/me goes back to reading.
Here are some problems I've run into:
- Most people find functional programming to be difficult to understand. This means it will probably be harder for you to write functional code, and it will almost certainly be harder for someone else to pick it up.
- Functional programming languages are usually slower than a language like C. This is becoming less of an issue over time (because computers are getting faster and compilers are getting smarter).
- Not being as widespread as their imperative counterparts, functional languages can make it difficult to find libraries and examples for common programming problems. (For example, it's almost always easier to find something for Python than it is for Haskell.)
- There's a lack of tools, particularly for debugging. It's definitely not as easy as opening up Visual Studio for C# or Eclipse for Java.
Looking away from the details of specific implementations of functional programming, I see two key issues:
It seems comparatively rare that it is practical to choose a functional model of some real-world problem over an imperative one. When the problem domain is imperative, using a language with that characteristic is a natural and reasonable choice (since it is in general advisable to minimize the distance between the specification and implementation as part of reducing the number of subtle bugs). Yes, this can be overcome by a smart-enough coder, but if you need Rock Star Coders for the task, it's because it's too bloody hard.
For some reason that I never really understood, functional programming languages (or perhaps their implementations or communities?) are much more likely to want to have everything in their language. There's much less use of libraries written in other languages. If someone else has a particularly good implementation of some complex operation, it makes much more sense to use that instead of making your own. I suspect that this is in part a consequence of the use of complex runtimes which make handling foreign code (and especially doing it efficiently) rather difficult. I'd love to be proved wrong on this point.
I suppose these both come back to a general lack of pragmatism caused by functional programming being used much more heavily by programming-language researchers than by common coders. A good tool can enable an expert to do great things, but a great tool is one that enables the common man to approach what an expert can do normally, because that's by far the more difficult task.
Source: https://stackoverflow.com/questions/1786969/pitfalls-disadvantages-of-functional-programming