One of the programming problems I have come across involves calculating factorials of large numbers (numbers up to 10^5). I have seen a simple Haskell solution for it.
I first want to point out two factors that are clearly not the reason for the speed difference, even though they have been mentioned in the question and in some answers.
The question mentions caching, and some of the answers mention memoization. But the factorial function does not benefit from memoization, because each recursive call uses a different argument. We would never hit a cache entry that is already filled, so any caching is unnecessary. Maybe people were thinking of the Fibonacci function here, where the same subproblems really do recur?
For the record, Haskell would not provide automatic memoization anyway.
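To make that concrete, here is a small sketch in Java with a hypothetical hit counter (both the counter and the class name are my own illustration, not from the question): memoized Fibonacci reuses cached results constantly, because the same arguments recur over and over. A factorial computed as n * factorial(n - 1) never repeats an argument, so a cache would only ever be written, never read.

```java
import java.math.BigInteger;
import java.util.HashMap;
import java.util.Map;

public class MemoDemo {
    static final Map<Integer, BigInteger> cache = new HashMap<>();
    static int cacheHits = 0; // counts how often a memoized result is reused

    // Memoized Fibonacci: fib(n) and fib(n + 1) both need fib(n - 1),
    // so repeated arguments are the norm and the cache is hit constantly.
    static BigInteger fib(int n) {
        if (n < 2) return BigInteger.valueOf(n);
        BigInteger cached = cache.get(n);
        if (cached != null) {
            cacheHits++;
            return cached;
        }
        BigInteger result = fib(n - 1).add(fib(n - 2));
        cache.put(n, result);
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(40));     // prints 102334155
        System.out.println(cacheHits);   // dozens of hits, not zero
    }
}
```

A factorial cache, by contrast, would record factorial(n), factorial(n - 1), and so on exactly once each, and no later call within the same computation would ever look any of them up.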
Both the Java and the Haskell program already look close to optimal to me. Both use the idiomatic iteration mechanism of their respective language: Java uses a loop, Haskell uses recursion. And both use a standard type for big-integer arithmetic.
If anything, the Haskell version should be slower because it is not tail recursive, whereas the Java version uses a loop which is the fastest looping construct available in Java.
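The programs from the question are not reproduced here, but the two shapes under discussion are presumably roughly the following (both sketched in Java for side-by-side comparison; the Haskell original corresponds to the recursive one):

```java
import java.math.BigInteger;

public class FactorialShapes {
    // The loop shape, as in the Java program: one accumulator,
    // constant stack usage.
    static BigInteger factorialLoop(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }

    // The non-tail-recursive shape, mirroring the usual Haskell definition
    // factorial n = n * factorial (n - 1). The multiplication happens after
    // the recursive call returns, so the call cannot be replaced by a jump,
    // and each level keeps a pending multiplication on the stack.
    static BigInteger factorialRec(int n) {
        if (n == 0) return BigInteger.ONE;
        return BigInteger.valueOf(n).multiply(factorialRec(n - 1));
    }

    public static void main(String[] args) {
        System.out.println(factorialLoop(20)); // prints 2432902008176640000
        System.out.println(factorialRec(20));  // same result
    }
}
```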
I don't see much scope for clever high-level optimizations a compiler could make to these programs. I suspect that the observed speed difference is due to low-level details about how big integers are implemented.
The Haskell compiler has built-in, well-tuned support for Integer (GHC, for instance, backs it with the highly optimized GMP library). That seems to be less true of Java implementations and the BigInteger class. I googled for "BigInteger slow" and the results suggest that the question really should be: why is Java's BigInteger so slow? There seem to be other big-integer classes that are faster. I'm not a Java expert, so I cannot answer this variant of the question in any detail.
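One quick way to see where the time goes is to time the BigInteger loop directly at the problem size from the question. This is only a rough sketch, not a proper benchmark (for serious measurements, use a harness such as JMH):

```java
import java.math.BigInteger;

public class BigIntegerTiming {
    static BigInteger factorial(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }

    public static void main(String[] args) {
        int n = 100_000; // the problem size mentioned in the question
        long start = System.nanoTime();
        BigInteger f = factorial(n);
        long elapsed = System.nanoTime() - start;
        // The result is over a million bits wide; essentially all of the
        // time is spent inside BigInteger.multiply.
        System.out.println("bit length: " + f.bitLength());
        System.out.printf("time: %.3f s%n", elapsed / 1e9);
    }
}
```

Running the same loop against an alternative big-integer library and comparing the timings would make the "BigInteger is the bottleneck" hypothesis directly testable.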