A few days ago, there were a couple of questions on buffer overflow vulnerabilities (such as Does Java have buffer overflows? and Secure C and the universities - trained for buffer overflow?).
If the programmer doesn't anticipate that [some input] could cause [program] to consume more-than-available resources, that's a vulnerability in the form of a possible DoS. This is a weakness of all Turing-complete languages I've seen, but Haskell's laziness makes it harder to reason about what a computation involves.
As a (rather contrived) example,
import Control.Monad (when)
import System.Environment (getArgs)

main :: IO ()
main = do
    files <- getArgs
    contents <- mapM readFile files
    flip mapM_ (zip files contents) $ \(file, content) ->
        when (null content) $ putStrLn $ file ++ " is empty"
The naïve programmer may think, "Haskell is lazy, so it won't open and read the files until it needs to," and "Haskell is garbage collected, so once it's done with a file, it can close the file handle." Unfortunately, this program will in fact open all of the files at once (implementation-specific), and only the empty files get their handles closed promptly (a side effect of the implementation's liveness rules):
$ ghc --make -O2 Test
[1 of 1] Compiling Main             ( Test.hs, Test.o )
Linking Test ...
$ strace -etrace=open,close ./Test dir/* /dev/null
...
open("dir/1", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_LARGEFILE) = 3
open("dir/2", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_LARGEFILE) = 4
open("dir/3", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_LARGEFILE) = 5
open("dir/4", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_LARGEFILE) = 6
open("dir/5", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_LARGEFILE) = 7
...
open("/dev/null", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_LARGEFILE) = 255
close(255)
/dev/null is empty
$
You might not expect an -EMFILE "Too many open files" error to ever occur here, but pass enough files on the command line and it will.
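One way to keep the handle count bounded (a sketch of my own, not the only possible fix) is to process the files one at a time with withFile, demanding whatever you need while the handle is still open:

import Control.Monad (forM_, when)
import System.Environment (getArgs)
import System.IO (IOMode (ReadMode), hGetContents, withFile)

main :: IO ()
main = do
    files <- getArgs
    forM_ files $ \file ->
        withFile file ReadMode $ \h -> do
            content <- hGetContents h
            -- null is forced here, while the handle is still open;
            -- withFile then closes it before the next file is touched.
            when (null content) $ putStrLn $ file ++ " is empty"

At most one handle is open at a time, so the -EMFILE failure mode disappears.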
Like I said, this is a contrived example, and the same mistake can be made in other languages too, but it's particularly easy to overlook this kind of resource usage in Haskell.
It may be easier to cause unbounded recursion (and the resulting memory use) in a functional implementation than in the corresponding imperative implementation. While an imperative program might just go into an infinite loop when processing malformed input, a functional program may instead gobble up all the memory it can while doing the same processing. Generally, it is the responsibility of the VM in which the program runs to limit the amount of memory a particular process can consume. (Even for native Unix processes, ulimit can provide this protection.)
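As a minimal Haskell sketch of that failure mode (my own illustration, using the classic lazy-left-fold pitfall): a non-strict accumulator builds one thunk per input element, so instead of looping forever in constant space the program steadily eats the heap unless something like ulimit -v or GHC's +RTS -M heap cap stops it.

-- foldl is lazy in its accumulator, so this builds an ever-growing
-- chain of (+) thunks rather than looping in constant space.
main :: IO ()
main = print (foldl (+) (0 :: Integer) [1 ..])

(Compiled without optimization this grows until something kills it; with -O, GHC's strictness analysis may happen to rescue this particular example, which is itself a reminder of how hard laziness is to reason about. Data.List.foldl' avoids the problem by forcing the accumulator at each step.)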
I don't think so.
As I see it, vulnerabilities like buffer overflows have less to do with the programming paradigm and more to do with the compiler/interpreter architecture or virtual machine.
For example, if you are using functional programming in the .NET environment (C#, VB, etc.), then as long as you are dealing with managed objects, buffer overflows are taken care of for you.
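The same kind of guarantee holds in Haskell's own runtime, for what it's worth: out-of-range accesses are checked and raise an exception rather than reading or writing adjacent memory. A tiny sketch:

import qualified Data.Array as A

main :: IO ()
main = do
    let arr = A.listArray (0, 2) "abc" :: A.Array Int Char
    -- An out-of-bounds index throws a runtime error; it cannot
    -- corrupt memory the way an unchecked C array access can.
    print (arr A.! 5)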
Functional languages have an under-appreciated "security through obscurity" advantage due to their execution models. If you look at security exploits in C programs, they take advantage of the weak type system, pointer manipulation, and the lack of bounds checking, but more importantly they take advantage of a well-understood, straightforward execution model. For example, you can reliably smash the stack in C, because it's relatively easy to know where the stack is, just by taking the address of local variables. Many other exploits rely on a similar low-level understanding of the execution model.
In contrast, it's not nearly so obvious how functional code will be compiled down to a binary, so it's not nearly so easy to devise a recipe for executing injected code or accessing privileged data. Ironically, the obscurity of execution models is usually considered a weakness of functional languages, since programmers don't always have a good intuition of how their code will perform.