I have a function foo(i) that takes an integer and takes a significant amount of time to execute. Will there be a significant performance difference between any of the following ways of collecting its results: a list comprehension, map(), or numpy.vectorize?
Why are you optimizing this? Have you written working, tested code, then examined your algorithm, profiled your code, and found that optimizing this will have an effect? Are you doing this in a deep inner loop where you found you are spending your time? If not, don't bother.
You'll only know which works fastest for you by timing it. To time it in a useful way, you'll have to specialize it to your actual use case. For example, you can get noticeable performance differences between a function call in a list comprehension versus an inline expression; it isn't clear whether you really wanted the former or if you reduced it to that to make your cases similar.
You say that it doesn't matter whether you end up with a numpy array or a list, but if you're doing this kind of micro-optimization it does matter, since those will perform differently when you use them afterward. Pinning that down could be tricky, so hopefully the whole problem will turn out to be moot as premature.
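As a concrete illustration of why the container type matters afterward (a sketch assuming numpy is available):

```python
import numpy as np

data = [1, 2, 3]
arr = np.array(data)

# The "same" operation means different things for the two containers:
print(data * 2)  # list repetition   -> [1, 2, 3, 1, 2, 3]
print(arr * 2)   # elementwise math  -> [2 4 6]
```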
It is typically better to simply use the right tool for the job, for clarity, readability, and so forth. It is rare that I would have a hard time deciding between these things.

- If you need a numpy array, use numpy.vectorize. For example, times_five below can be used on a numpy array with no decoration.
- If you are applying an existing function, use map. That's what it's for.
- Often you want map and list comprehensions' lazy equivalents: itertools.imap and generator expressions. These can reduce memory usage by a factor of n in some cases and can avoid performing unnecessary operations sometimes.

If it does turn out this is where performance problems lie, getting this sort of thing right is tricky. It is very common that people time the wrong toy case for their actual problems. Worse, it is extremely common that people make dumb general rules based on it.
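In Python 3 terms (where itertools.imap is gone because map itself is lazy), the options look like this. This is a sketch assuming numpy is available; times_five matches the timeme.py definition below:

```python
import numpy as np

def times_five(a):
    return a + a + a + a + a

x = range(10)

as_array = np.vectorize(times_five)(np.arange(10))  # numpy array out
as_map = list(map(times_five, x))                   # map (lazy in Python 3)
as_comp = [times_five(i) for i in x]                # list comprehension
as_gen = (times_five(i) for i in x)                 # generator expression

# times_five is plain arithmetic, so it also works on an array directly,
# with no vectorize decoration at all:
direct = times_five(np.arange(10))

assert as_map == as_comp == list(as_gen) == as_array.tolist() == direct.tolist()
```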
Consider the following cases (timeme.py is posted below):
python -m timeit "from timeme import x, times_five; from numpy import vectorize" "vectorize(times_five)(x)"
1000 loops, best of 3: 924 usec per loop
python -m timeit "from timeme import x, times_five" "[times_five(item) for item in x]"
1000 loops, best of 3: 510 usec per loop
python -m timeit "from timeme import x, times_five" "map(times_five, x)"
1000 loops, best of 3: 484 usec per loop
A naïve observer would conclude that map is the best-performing of these options, but the answer is still "it depends". Consider the power of using the benefits of the tools you are using: list comprehensions let you avoid defining simple functions; numpy lets you vectorize things in C if you're doing the right things.
python -m timeit "from timeme import x, times_five" "[item + item + item + item + item for item in x]"
1000 loops, best of 3: 285 usec per loop
python -m timeit "import numpy; x = numpy.arange(1000)" "x + x + x + x + x"
10000 loops, best of 3: 39.5 usec per loop
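The two timed expressions above compute the same values; the difference is where the loop runs, in the Python interpreter or in numpy's C code. A quick equivalence check, assuming numpy:

```python
import numpy as np

x = np.arange(1000)
vectorized = x + x + x + x + x  # five whole-array additions in C

# Same arithmetic, but looping one element at a time in Python:
looped = [item + item + item + item + item for item in range(1000)]

assert vectorized.tolist() == looped
```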
But that's not all. Consider the power of an algorithm change; it can be even more dramatic.
python -m timeit "from timeme import x, times_five" "[5 * item for item in x]"
10000 loops, best of 3: 147 usec per loop
python -m timeit "import numpy; x = numpy.arange(1000)" "5 * x"
100000 loops, best of 3: 16.6 usec per loop
Sometimes an algorithm change can be more effective still, and increasingly so as the numbers get bigger.
python -m timeit "from timeme import square, x" "map(square, x)"
10 loops, best of 3: 41.8 msec per loop
python -m timeit "from timeme import good_square, x" "map(good_square, x)"
1000 loops, best of 3: 370 usec per loop
And even now, all this may have little bearing on your actual problem. It looks like numpy is great if you can use it right, but it has its limitations: none of these numpy examples used actual Python objects in the arrays. That complicates what must be done, quite a lot. And what if we do get to use C datatypes? These are less robust than Python objects. They aren't nullable. The integers overflow. You have to do some extra work to retrieve them. They're statically typed. Sometimes these things prove to be problems, even unexpected ones.
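One of those pitfalls, sketched with numpy's fixed-width integers (numpy assumed available):

```python
import numpy as np

big = 2 ** 63 - 1  # the largest value an int64 can hold

# Python ints are arbitrary-precision, so this just grows:
assert big + 1 == 2 ** 63

# numpy's int64 arithmetic wraps around silently instead:
arr = np.array([big], dtype=np.int64)
wrapped = arr + 1
assert wrapped[0] == -2 ** 63
```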
So there you go: a definitive answer. "It depends."
# timeme.py
x = xrange(1000)

def times_five(a):
    return a + a + a + a + a

def square(a):
    if a == 0:
        return 0
    value = a
    for i in xrange(a - 1):
        value += a
    return value

def good_square(a):
    return a ** 2
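A Python 3 port of timeme.py (xrange becomes range), with a quick sanity check that the slow and fast squares agree:

```python
def times_five(a):
    return a + a + a + a + a

def square(a):
    # Repeated addition: O(a) loop iterations.
    if a == 0:
        return 0
    value = a
    for _ in range(a - 1):
        value += a
    return value

def good_square(a):
    # A single constant-time operation.
    return a ** 2

for n in (0, 1, 7, 100):
    assert square(n) == good_square(n) == n * n
```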
If the function itself takes a significant amount of time to execute, it's irrelevant how you map its output to an array. Once you start getting into arrays of millions of numbers, though, numpy can save you a significant amount of memory.
The list comprehension is the fastest, then map, then numpy on my machine. The numpy code is actually quite a bit slower than the other two, but the difference is much smaller if you use numpy.arange instead of range (or xrange), as I did in the times listed below. Also, with psyco, the list comprehension sped up while the other two slowed down for me. I also used larger arrays of numbers than in your code, and my foo function just computed the square root. Here are some typical times.
Without psyco:
list comprehension: 47.5581952455 ms
map: 51.9082732582 ms
numpy.vectorize: 57.9601876775 ms
With psyco:
list comprehension: 30.4318844993 ms
map: 96.4504427239 ms
numpy.vectorize: 99.5858691538 ms
I used Python 2.6.4 and the timeit module.
Based on these results, I would say that it probably doesn't really make a difference which one you choose for the initialization. I would probably choose the numpy one or the list comprehension based on the speed, but ultimately you should let what you are doing with the array afterwards guide your choice.
First comment: don't mix usage of xrange() and range() in your samples... doing so invalidates your question, as you're comparing apples and oranges.
I second @Gabe's notion that if you have many large data structures, numpy should win overall... just keep in mind most of the time C is faster than Python, but then again, most of the time, PyPy is faster than CPython. :-)
As far as listcomps vs. map() calls go... one makes 101 function calls while the other makes 102, meaning you won't see a significant difference in timing, as shown below using the timeit module, as @Mike has suggested:
List Comprehension
$ python -m timeit "def foo(x):pass; [foo(i) for i in range(100)]"
1000000 loops, best of 3: 0.216 usec per loop
$ python -m timeit "def foo(x):pass; [foo(i) for i in range(100)]"
1000000 loops, best of 3: 0.21 usec per loop
$ python -m timeit "def foo(x):pass; [foo(i) for i in range(100)]"
1000000 loops, best of 3: 0.212 usec per loop
map() function call
$ python -m timeit "def foo(x):pass; map(foo, range(100))"
1000000 loops, best of 3: 0.216 usec per loop
$ python -m timeit "def foo(x):pass; map(foo, range(100))"
1000000 loops, best of 3: 0.214 usec per loop
$ python -m timeit "def foo(x):pass; map(foo, range(100))"
1000000 loops, best of 3: 0.215 usec per loop
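Note that putting the def and the expression into one quoted statement, as above, parses the trailing `; [foo(i) ...]` as part of foo's body, so foo is never actually called; keeping the definition in timeit's setup argument measures the calls themselves. A sketch using the timeit module directly (Python 3, where map is lazy and needs list() to force the calls):

```python
import timeit

# The function definition goes in setup, so the timer measures only
# the 100 calls per run, not a bare def statement.
setup = "def foo(x): pass"

t_listcomp = timeit.timeit("[foo(i) for i in range(100)]",
                           setup=setup, number=10000)
t_map = timeit.timeit("list(map(foo, range(100)))",
                      setup=setup, number=10000)

print(t_listcomp, t_map)
```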
With that said, however, unless you are planning on using the lists that you create with either of these techniques, try to avoid them (using lists) completely. IOW, if all you're doing is iterating over them, it's not worth the memory consumption (and potentially creating a massive list in memory) when you only care to look at each element one at a time; discard the list as soon as you're done.
In such cases, I highly recommend the use of generator expressions instead, as they don't create the entire list in memory... they are a more memory-friendly, lazy, iterative way of looping through elements to process, without creating a largish array in memory. The best part is that their syntax is nearly identical to that of listcomps:
a = (foo(i) for i in range(100))
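A sketch of the memory difference (exact sizes are CPython implementation details, but the ordering holds; foo here is a stand-in, since the original wasn't shown):

```python
import sys

def foo(i):
    return i * i

as_list = [foo(i) for i in range(100000)]  # all results held at once
as_gen = (foo(i) for i in range(100000))   # results produced on demand

# The generator object is a tiny fixed-size handle, no matter how many
# items it will eventually yield:
assert sys.getsizeof(as_gen) < sys.getsizeof(as_list)

# It still yields the same values:
assert sum(as_gen) == sum(as_list)
```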
2.x users only: along the lines of more iteration, change all the range() calls to xrange() for any older 2.x code, then switch back to range() when porting to Python 3, where xrange() is replaced by, and renamed to, range().
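In Python 3 that renaming comes with no memory trade-off, because range() there is lazy like the old xrange(). A small sketch:

```python
import sys

r = range(10 ** 6)

# A range object stores just start/stop/step, not a million integers:
assert sys.getsizeof(r) < 1000

# It still supports indexing and fast membership tests:
assert r[-1] == 999999
assert 123456 in r
```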