Automatic memoizing in functional programming languages

故里飘歌 2020-12-08 14:59

I always thought that Haskell would do some sort of automatic intelligent memoizing. E.g., the naive Fibonacci implementation

fib 0 = 0
fib 1 = 1
fib n = fib (n-1) + fib (n-2)


        
6 Answers
  •  醉梦人生
    2020-12-08 15:40

    Haskell doesn't seem to do automatic memoization. Or do I understand something wrong?

    No, Haskell doesn't. However, shared expressions are calculated only once. In the example given by Paul Johnson, x is stored on the heap as a thunk. Both y and z can refer to x, since x is in scope, and they refer to the same location. Once x has to be evaluated, it is evaluated only once and only the result of the evaluation is kept. So this is not really memoisation, but it is a consequence of the implementation.
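
    For concreteness, here is a minimal sketch of that kind of sharing; the names x, y and z just mirror the example mentioned above, and the expensive expression is an arbitrary stand-in:

        -- x is bound once, so the thunk behind it is forced at most once,
        -- even though both y and z use it
        main :: IO ()
        main =
          let x = sum [1 .. 10000000 :: Int]  -- stand-in for an expensive expression
              y = x + 1                       -- y and z both point at the same thunk
              z = x * 2
          in  print (y, z)                    -- x is evaluated once, here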

    Are there other languages which do automatic (i.e. implicit, not explicit) memoization?

    I've seen a @memoized decorator in some Python source code. You could of course write your own decorator or implementation for it, complete with LRU and whatever other policies you want to use.

    What are common ways to implement memoization?

    There is no single common way to implement memoisation. For fib-like patterns (only one argument, which is a number) a table indexed by that argument, as in the fib example, works well. One could devise a general solution (the hash-map approach) and it will work, but it might also be suboptimal for your specific problem.
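
    As a sketch of the single-numeric-argument case, the usual Haskell trick is to back the function with a lazily built list (indexing is linear, so this is not optimal, but it turns the exponential naive fib into something tractable; memoFib is just an illustrative name):

        memoFib :: Int -> Integer
        memoFib = (table !!)                  -- answers are looked up in a shared list
          where
            table = map fib [0 ..]            -- built lazily, each entry computed once
            fib 0 = 0
            fib 1 = 1
            fib n = memoFib (n - 1) + memoFib (n - 2)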

    With memoisation you have side effects, so you probably want the cache to live in the State monad. However, in general you want to keep your algorithms as pure as possible, so if your algorithm is recursive you are already getting into a bit of a mess. This is because you will call the memoised version of the function recursively, but that needs to run in the State monad, so now your whole function has to run in the State monad. This also affects laziness, but you could try the lazy State monad.
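
    A hedged sketch of what that looks like in practice, using Data.Map as the cache inside the State monad (the names Cache, fibMemo and runFib are just illustrative):

        import qualified Data.Map as Map
        import Control.Monad.State

        type Cache = Map.Map Int Integer

        fibMemo :: Int -> State Cache Integer
        fibMemo n
          | n < 2     = return (fromIntegral n)
          | otherwise = do
              cache <- get
              case Map.lookup n cache of
                Just v  -> return v                 -- cache hit: no recomputation
                Nothing -> do
                  a <- fibMemo (n - 1)              -- recursive calls also run in State
                  b <- fibMemo (n - 2)
                  let v = a + b
                  modify (Map.insert n v)           -- store the new result
                  return v

        runFib :: Int -> Integer
        runFib n = evalState (fibMemo n) Map.empty  -- start from an empty cache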

    Keeping this in mind: good automatic memoisation is very difficult to achieve, but you can come a long way easily. Automatically memoising functions probably involves program transformation, where writing things in fixed-point form could go a long way.
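
    A sketch of what writing things in fixed-point form can buy you: abstract the recursion out of the function, then supply either a plain fixed point or a memoising one without touching the definition itself (fibOpen, memoFix and the other names here are illustrative helpers, not library functions):

        import Data.Function (fix)

        -- the recursion is abstracted into the first argument ("open recursion")
        fibOpen :: (Int -> Integer) -> Int -> Integer
        fibOpen _    0 = 0
        fibOpen _    1 = 1
        fibOpen self n = self (n - 1) + self (n - 2)

        slowFib :: Int -> Integer
        slowFib = fix fibOpen                 -- ordinary fixed point: no sharing

        -- a memoising "fixed point": the knot is tied through a lazy list,
        -- so every result is computed at most once
        memoFix :: ((Int -> Integer) -> Int -> Integer) -> Int -> Integer
        memoFix f = memo
          where
            memo  = (table !!)
            table = map (f memo) [0 ..]

        fastFib :: Int -> Integer
        fastFib = memoFix fibOpen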

    Edit: I am mostly interested in that problem I described, i.e. how to implement such a limit. Any references to any papers which address this would be very nice.

    Once you have the basic mechanism of memoisation you could tweak the lookup and store functions of your memoising table to implement LRU or some other mechanism for keeping memory consumption small. Maybe you can get the idea for LRU from this C++ example.
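
    For example, here is a rough sketch of an LRU-bounded table in Haskell, with timestamps kept by a simple counter; the type and function names are made up for illustration, and the capacity is assumed to be positive:

        import qualified Data.Map.Strict as Map
        import Data.List (minimumBy)
        import Data.Ord (comparing)

        -- each entry carries the "time" of its last use
        data LRU k v = LRU
          { capacity :: Int
          , clock    :: Int
          , table    :: Map.Map k (Int, v)
          }

        emptyLRU :: Int -> LRU k v
        emptyLRU cap = LRU cap 0 Map.empty

        -- a lookup refreshes the entry's timestamp
        lookupLRU :: Ord k => k -> LRU k v -> (Maybe v, LRU k v)
        lookupLRU k c@(LRU cap t m) =
          case Map.lookup k m of
            Nothing     -> (Nothing, c)
            Just (_, v) -> (Just v, LRU cap (t + 1) (Map.insert k (t, v) m))

        -- a store evicts the least recently used entry once the table is full
        insertLRU :: Ord k => k -> v -> LRU k v -> LRU k v
        insertLRU k v (LRU cap t m) = LRU cap (t + 1) (Map.insert k (t, v) m')
          where
            m' | Map.size m >= cap =
                   let victim = fst (minimumBy (comparing (fst . snd)) (Map.toList m))
                   in  Map.delete victim m
               | otherwise = m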
