@Steve already gave a good answer to your question:
verts = [None] * 1000
Warning: As @Joachim Wuttke pointed out, the list must be initialized with an immutable element. [[]] * 1000 does not work as expected: you get a list of 1000 references to the same inner list (similar to an array of 1000 pointers to the same list in C). Immutable objects like int, str or tuple will do fine.
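The sharing is easy to demonstrate, together with the usual fix (a list comprehension, which builds an independent list per element):
>>> a = [[]] * 3                   # three references to one and the same list
>>> a[0].append(1)
>>> a
[[1], [1], [1]]
>>> b = [[] for _ in xrange(3)]    # three independent lists
>>> b[0].append(1)
>>> b
[[1], [], []]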
Alternatives
Resizing lists is slow. The following results are not very surprising:
>>> N = 10**6
>>> %timeit a = [None] * N
100 loops, best of 3: 7.41 ms per loop
>>> %timeit a = [None for x in xrange(N)]
10 loops, best of 3: 30 ms per loop
>>> %timeit a = [None for x in range(N)]
10 loops, best of 3: 67.7 ms per loop
>>> a = []
>>> %timeit for x in xrange(N): a.append(None)
10 loops, best of 3: 85.6 ms per loop
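The %timeit magic above requires IPython. The same comparison can be reproduced in a plain interpreter with the stdlib timeit module; a minimal sketch (absolute numbers will of course vary by machine):
import timeit

setup = 'N = 10**6'
# timeit.timeit returns the TOTAL time for `number` runs;
# divide by `number` to get a per-loop figure.
print(timeit.timeit('a = [None] * N', setup=setup, number=100))
print(timeit.timeit('a = [None for x in xrange(N)]', setup=setup, number=10))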
But resizing is not very slow unless the lists are very large. Instead of initializing a list with a placeholder element (e.g. None) and a fixed length just to avoid resizing, consider using a list comprehension to fill the list with the correct values directly. For example:
>>> %timeit a = [x**2 for x in xrange(N)]
10 loops, best of 3: 109 ms per loop
>>> def fill_list1():
...     """Not too bad, but complicated code"""
...     a = [None] * N
...     for x in xrange(N):
...         a[x] = x**2
...
>>> %timeit fill_list1()
10 loops, best of 3: 126 ms per loop
>>> def fill_list2():
...     """This is slow, use only for small lists"""
...     a = []
...     for x in xrange(N):
...         a.append(x**2)
...
>>> %timeit fill_list2()
10 loops, best of 3: 177 ms per loop
Comparison to numpy
For huge data sets, numpy or other optimized libraries are much faster:
from numpy import empty, zeros
%timeit empty((N,))
1000000 loops, best of 3: 788 ns per loop
%timeit zeros((N,))
100 loops, best of 3: 3.56 ms per loop
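The large gap is expected: numpy.empty only allocates the memory block, while numpy.zeros additionally writes a zero into every element. A minimal illustration of the difference (not a benchmark):
from numpy import empty, zeros

N = 10**6
a = empty((N,))   # memory is allocated but NOT initialized: contents are arbitrary
b = zeros((N,))   # memory is allocated AND zero-filled: guaranteed to be all 0.0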