A few days ago, there were a couple of questions on buffer overflow vulnerabilities (such as Does Java have buffer overflows? and Secure C and the universities - trained for buffer overflow?).
If the programmer doesn't anticipate that [some input] could cause [program] to consume more-than-available resources, that's a vulnerability in the form of a possible DoS. This is a weakness of all Turing-complete languages I've seen, but Haskell's laziness makes it harder to reason about what a computation involves.
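For instance, even a one-line fold over attacker-supplied input can exhaust memory: with a lazy accumulator, every addition is deferred as a thunk, so peak memory grows with the size of the input. Here is a minimal sketch (GHC's strictness analyser may happen to rescue this exact case at -O2, which only underlines how hard the behaviour is to predict):

main :: IO ()
main = do
    n <- readLn :: IO Integer
    -- the lazy Prelude foldl defers every (+) as a thunk, building an
    -- O(n) chain in memory before evaluating anything;
    -- Data.List.foldl' would run the same fold in constant space
    print (foldl (+) 0 [1 .. n])

A subtler version of the same problem shows up with file handles.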
As a (rather contrived) example,
import Control.Monad (forM_, when)
import System.Environment (getArgs)

-- report every file named on the command line that is empty
main :: IO ()
main = do
    files <- getArgs
    contents <- mapM readFile files
    forM_ (zip files contents) $ \(file, content) ->
        when (null content) $ putStrLn $ file ++ " is empty"
The naïve programmer may think, "Haskell is lazy, so it won't open and read the files until it needs to", and "Haskell is garbage collected, so once it's done with a file, it can close the file handle". Unfortunately, this program actually opens all of the files right away (readFile opens each handle eagerly even though it reads lazily, an implementation-specific behavior), and only the empty files get their handles closed: null forces just enough input to see whether a first character exists, and readFile only closes its handle once end-of-file has been reached, so every non-empty file's handle stays open until the garbage collector happens to finalize it (a side effect of the implementation's liveness rules):
$ ghc --make -O2 Test
[1 of 1] Compiling Main             ( Test.hs, Test.o )
Linking Test ...
$ strace -etrace=open,close ./Test dir/* /dev/null
...
open("dir/1", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_LARGEFILE) = 3
open("dir/2", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_LARGEFILE) = 4
open("dir/3", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_LARGEFILE) = 5
open("dir/4", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_LARGEFILE) = 6
open("dir/5", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_LARGEFILE) = 7
...
open("/dev/null", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_LARGEFILE) = 255
close(255)
/dev/null is empty
$
You might never expect an EMFILE ("Too many open files") error to occur, yet with enough arguments this program runs straight into one.
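The fix is to make the lifetime of each handle explicit instead of leaving it to the garbage collector. A minimal sketch of one approach, using only standard System.IO functions: withFile closes the handle as soon as its action returns (even on an exception), so at most one file is open at a time:

import Control.Monad (forM_, when)
import System.Environment (getArgs)
import System.IO (IOMode (ReadMode), hIsEOF, withFile)

main :: IO ()
main = do
    files <- getArgs
    forM_ files $ \file ->
        -- open, test, and close each file before touching the next one
        withFile file ReadMode $ \h -> do
            empty <- hIsEOF h
            when empty $ putStrLn $ file ++ " is empty"

This version also checks for emptiness without reading any contents at all, so there is no lazy I/O left to leak.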
Like I said, this is a contrived example, and it can happen in other languages too, but in Haskell it is just easier to miss certain kinds of resource usage.