Here is some code that I wrote using Python:
from math import sqrt

abundant_list = []
for i in range(12, 28123 + 1):
    dividor_list = [1]
    for j in range(2, int(sqrt(i)) + 1):
        if i % j == 0:
            dividor_list.extend([i/j, j])

Which is faster for building dividor_list: calling dividor_list.extend([i/j, j]) once, or calling dividor_list.append twice?
It is also worth pointing out that the answer to this question hinges on the small size of the list/tuple that is added on each iteration. For larger lists, extend is clearly superior (and lists vs tuples does not make a difference). Starting with mgilson's answer, I checked behaviour for collections with 600 items, rather than 2:
Calling append 600 times takes 8 times as long as using extend() with a manually defined list/tuple (i.e. [v,v,v,v,v,v,v...]):
42.4969689846  # append, 600 calls
5.45146393776  # extend with a list
5.38034892082  # extend with a tuple
The bulk of these five seconds is actually the list/tuple creation. Preparing it before the timeit call brings the times for extend down to:
1.42491698265
0.657584905624
for list and tuple, respectively.
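For reference, a minimal sketch of that "prepare it in setup" measurement, assuming [1] * 600 stands in for the hand-written [v,v,v,...] literal (the names here are illustrative, not from the original run):

import timeit

# the 600-element collections are built once, in setup,
# so their creation is not part of the timed statement
setup = "data_list = [1] * 600; data_tuple = tuple(data_list)"

print(timeit.timeit('foo = []; foo.extend(data_list)', setup=setup))
print(timeit.timeit('foo = []; foo.extend(data_tuple)', setup=setup))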
For a more realistic (and fairer) case, one can dynamically generate the data within the function call:
import timeit

def append_loop(foo, reps):
    for i in range(reps):
        foo.append(i)

def append_comp(foo, reps):
    [foo.append(i) for i in range(reps)]

def extend_lst(foo, reps):
    foo.extend([i for i in range(reps)])

def extend_tup(foo, reps):
    # note: (i for i in range(reps)) is a generator expression, not a tuple
    foo.extend((i for i in range(reps)))

repetitions = 600

print(timeit.timeit('append_loop([], repetitions)', setup='from __main__ import append_loop, repetitions'))
print(timeit.timeit('append_comp([], repetitions)', setup='from __main__ import append_comp, repetitions'))
print(timeit.timeit('extend_lst([], repetitions)', setup='from __main__ import extend_lst, repetitions'))
print(timeit.timeit('extend_tup([], repetitions)', setup='from __main__ import extend_tup, repetitions'))
(Append is implemented both via for-loop and list comprehension to factor out efficiency differences between the two ways of looping.)
Timings are:
53.8211231232  # append_loop
57.1711571217  # append_comp
19.8829259872  # extend_lst
28.5986201763  # extend_tup
As we can see, extend with a list comprehension is still over two times faster than appending. Also, the extend_tup variant (despite its name, (i for i in range(reps)) is a generator expression; Python has no tuple comprehension) is noticeably slower than the list comprehension, and append_comp only introduces unnecessary list-creation overhead.
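As an aside not measured above, extend accepts any iterable directly, so the intermediate list or generator expression can be dropped altogether; a minimal sketch:

def extend_range(foo, reps):
    # extend consumes any iterable; on Python 3, range() is lazy,
    # so no intermediate list is materialized at all
    foo.extend(range(reps))

Whether this beats the list-comprehension variant would have to be timed the same way as the functions above.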
Here's a simple benchmark:

import timeit

def append2x(foo):
    foo.append(1)
    foo.append(1)

def extend_lst(foo):
    foo.extend([1, 1])

def extend_tup(foo):
    foo.extend((1, 1))

l1 = []
l2 = []
l3 = []

print(timeit.timeit('append2x(l1)', setup='from __main__ import append2x, l1'))
print(timeit.timeit('extend_lst(l2)', setup='from __main__ import extend_lst, l2'))
print(timeit.timeit('extend_tup(l3)', setup='from __main__ import extend_tup, l3'))
My results (OS X 10.5.8, Core 2 Duo, FWIW):
0.520906925201 #append
0.602569103241 #extend-list
0.357008934021 #extend-tuple
And the results have the same ordering on my Linux box (Ubuntu, x86-64, Core i7):
0.307395935059 #append
0.319436073303 #extend-list
0.238317012787 #extend-tuple
To me, this says that extend is quicker than append, but that creating a list is relatively expensive compared to creating a tuple.
As pointed out in the comments below, because of the immutability of tuples, the interpreter can optimize the creation of the tuple away (it creates the tuple once and re-uses it over and over). If we change the code to:
def extend_lst(foo):
    v = 1
    foo.extend([v, v])

def extend_tup(foo):
    v = 1
    foo.extend((v, v))
The timings are virtually identical:
0.297003984451 #append
0.344678163528 #extend-list
0.292304992676 #extend-tuple
That said, the tuple version still consistently beats the list version and barely edges out the append version in all of the trials I have done.
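The constant-tuple optimization is easy to see with the dis module; this is a quick sketch (the exact bytecode varies across CPython versions):

import dis

def extend_tup_literal(foo):
    foo.extend((1, 1))  # all-literal tuple: folded into a single constant

def extend_tup_built(foo):
    v = 1
    foo.extend((v, v))  # built fresh on every call

dis.dis(extend_tup_literal)  # shows LOAD_CONST with the ready-made (1, 1)
dis.dis(extend_tup_built)    # shows LOAD_FASTs followed by BUILD_TUPLE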
One thing that I'm taking away from this is that if you're iterating over an object that consists entirely of literals, choose a tuple over a list. If it doesn't consist entirely of literals, then it really doesn't matter whether you choose a list or a tuple.
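A minimal illustration of that rule of thumb (the re-use comes from CPython's constant folding, as discussed above):

acc = []
for _ in range(3):
    acc.extend((1, 2))  # all-literal tuple: created once, re-used each pass
    acc.extend([1, 2])  # list: rebuilt on every pass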
They take the exact same time. Here is the time taken for your code:
With dividor_list.extend([i/j, j]):
0:00:00.410000
0:00:00.383000
0:00:00.389000
With dividor_list.append(i/j); dividor_list.append(j):
0:00:00.400000
0:00:00.390000
0:00:00.381000
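The timedelta format above suggests these runs were clocked with datetime; a minimal sketch of such a measurement, with the loop bounds taken from the question (i // j is used here for Python 3 compatibility):

from datetime import datetime
from math import sqrt

start = datetime.now()
for i in range(12, 28123 + 1):
    dividor_list = [1]
    for j in range(2, int(sqrt(i)) + 1):
        if i % j == 0:
            dividor_list.extend([i // j, j])  # or: two append calls
print(datetime.now() - start)  # prints a timedelta like 0:00:00.410000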