Efficient memoization in Python


The different styles of variable access have already been timed and compared at http://code.activestate.com/recipes/577834-compare-speeds-of-different-kinds-of-access-to-var . Here's a quick summary: local access beats nonlocal access (nested scopes), which beats global access (module scope), which beats access to builtins.

Your solution #2 (with local access) should win. Solution #3 has a slow dotted lookup (which requires a dictionary lookup). Solution #1 uses nonlocal (nested-scope) access, which goes through cell variables (faster than a dict lookup but slower than locals). The three access styles are sketched below.
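
To make that concrete, here is a minimal sketch of a memoized Fibonacci written in each of the three access styles. The original poster's exact solutions aren't reproduced here, so these functions are illustrative stand-ins, not their code:

# Solution-#1 style: the memo dict is a nonlocal (cell) variable of a closure.
def make_fib_nonlocal():
    memo = {0: 0, 1: 1}
    def fib(n):
        if n not in memo:                 # 'memo' is accessed via a cell variable
            memo[n] = fib(n - 1) + fib(n - 2)
        return memo[n]
    return fib

# Solution-#2 style: bind the memo to a local name via a default argument,
# so every lookup uses the fast local-variable path.
def fib_local(n, memo={0: 0, 1: 1}):
    if n not in memo:                     # 'memo' is a plain local here
        memo[n] = fib_local(n - 1) + fib_local(n - 2)
    return memo[n]

# Solution-#3 style: the memo hangs off a function attribute, so each call
# pays for a dotted (dictionary) lookup before doing any real work.
def fib_attr(n):
    memo = fib_attr.memo                  # attribute lookup on every call
    if n not in memo:
        memo[n] = fib_attr(n - 1) + fib_attr(n - 2)
    return memo[n]
fib_attr.memo = {0: 0, 1: 1}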

Also note that looking up the KeyError exception class requires a global (and then builtin) lookup, which can be sped up by localizing it. You could also replace the try/except entirely and use memo.get(n, sentinel) instead, and even that can be sped up by using a bound method; both tricks are sketched below. Of course, your easiest speed boost may just come from trying out PyPy :-)
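
For example, a hedged sketch of those two tricks, again using a hypothetical memoized fib rather than the original poster's function:

def make_fib_fast():
    memo = {0: 0, 1: 1}
    sentinel = object()                   # unique "missing" marker
    memo_get = memo.get                   # bound method: skips the dotted lookup on each call
    def fib(n):
        result = memo_get(n, sentinel)    # no try/except, no KeyError lookup
        if result is sentinel:
            result = memo[n] = fib(n - 1) + fib(n - 2)
        return result
    return fib

fib = make_fib_fast()
print(fib(50))                            # 12586269025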

In short, there are many ways to tweak this code. Just make sure it's worth it.

For the benefit of people who stumble on this question while looking for a way to do memoization in Python, I recommend fastcache.

It works on Python 2 and 3, is faster than any of the methods described above, and gives the option to limit the cache size so that it does not inadvertently grow too large:

from fastcache import clru_cache

@clru_cache(maxsize=128, typed=False)
def foo(cat_1, cat_2, cat_3):
    return cat_1 + cat_2 + cat_3
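
As a quick usage check (assuming clru_cache mirrors the functools.lru_cache interface, including cache_info(), which is how fastcache documents itself):

print(foo(1, 2, 3))      # computed: 6
print(foo(1, 2, 3))      # served from the cache: 6
print(foo.cache_info())  # e.g. CacheInfo(hits=1, misses=1, maxsize=128, currsize=1)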

Installing fastcache is simple using pip:

pip install fastcache

or conda:

conda install fastcache