I have noticed very poor performance when using iterrows from pandas.
Is this something that is experienced by others? Is it specific to iterrows, and should this function be avoided?
Generally, iterrows should only be used in very, very specific cases. This is the general order of precedence for performance of various operations (a rough timing sketch follows the list):
1) vectorization
2) using a custom cython routine
3) apply
a) reductions that can be performed in cython
b) iteration in python space
4) itertuples
5) iterrows
6) updating an empty frame (e.g. using loc one-row-at-a-time)
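As a rough illustration of that ordering, here is a minimal timing sketch on a toy frame; the frame, the column names, and the operation being timed are all invented for this example, and the exact numbers will depend on your machine and pandas version.

```python
import timeit

import numpy as np
import pandas as pd

# Toy frame; the column names "a" and "b" are arbitrary choices for this sketch.
df = pd.DataFrame({"a": np.random.randn(10_000), "b": np.random.randn(10_000)})

def vectorized():
    # 1) vectorization: the whole operation runs in compiled code
    return df["a"] + df["b"]

def with_apply():
    # 3) apply, row-wise: each row is passed to a Python function
    return df.apply(lambda row: row["a"] + row["b"], axis=1)

def with_itertuples():
    # 4) itertuples: a plain Python loop over lightweight namedtuples
    return [row.a + row.b for row in df.itertuples(index=False)]

def with_iterrows():
    # 5) iterrows: each row is boxed into a Series before you see it
    return [row["a"] + row["b"] for _, row in df.iterrows()]

for fn in (vectorized, with_apply, with_itertuples, with_iterrows):
    print(fn.__name__, timeit.timeit(fn, number=5))
```

On a frame of this size the vectorized version typically finishes orders of magnitude faster than the iterrows version, with apply and itertuples somewhere in between.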
Using a custom Cython routine is usually too complicated, so let's skip that for now.
1) Vectorization is ALWAYS, ALWAYS the first and best choice. However, there is a small set of cases (usually involving a recurrence) which cannot be vectorized in obvious ways. Furthermore, on a smallish DataFrame, it may be faster to use other methods.
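For instance, a plain elementwise transformation should always be written as a single vectorized expression, while a recurrence, where each value depends on the previous result, genuinely needs a loop (or a purpose-built routine). A minimal sketch, with an invented column name:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": np.random.randn(1_000)})

# Vectorized: one expression over the whole column, no Python-level loop.
df["y"] = df["x"] * 2 + 1

# A recurrence: out[i] depends on out[i - 1], so it cannot be expressed as a
# single elementwise operation in any obvious way.
alpha = 0.5
vals = df["x"].to_numpy()
out = np.empty(len(vals))
out[0] = vals[0]
for i in range(1, len(vals)):
    out[i] = alpha * vals[i] + (1 - alpha) * out[i - 1]
df["smoothed"] = out

# (This particular recurrence happens to be covered by
# df["x"].ewm(alpha=alpha, adjust=False).mean(), but many recurrences
# have no built-in equivalent.)
```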
3) apply usually can be handled by an iterator in Cython space. This is handled internally by pandas, though it depends on what is going on inside the apply expression. For example, df.apply(lambda x: np.sum(x)) will be executed pretty swiftly, though of course, df.sum(1) is even better. However, something like df.apply(lambda x: x['b'] + 1) will be executed in Python space, and consequently is much slower.
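A short sketch of the two apply paths just described; the frame and columns are invented, and axis=1 is spelled out in the last call so the lambda actually receives rows:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": np.random.randn(1_000), "b": np.random.randn(1_000)})

# Column-wise reduction: pandas can push most of this work into fast internal code.
col_sums = df.apply(lambda x: np.sum(x))

# The dedicated method is better still.
row_sums = df.sum(axis=1)

# Accessing individual labels inside the function forces evaluation in Python
# space, one boxed row at a time, which is much slower.
shifted = df.apply(lambda x: x["b"] + 1, axis=1)
```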
4) itertuples does not box the data into a Series. It just returns the data in the form of tuples.
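A small sketch of itertuples iteration, with invented column names:

```python
import pandas as pd

df = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})

# Each row comes back as a lightweight namedtuple rather than a Series,
# so attribute access is cheap and no per-row Series is constructed.
totals = [row.a + row.b for row in df.itertuples(index=False)]
```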
5) iterrows DOES box the data into a Series. Unless you really need this, use another method.
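A sketch of what that boxing means in practice; the mixed dtypes here are invented just to show the side effect:

```python
import pandas as pd

df = pd.DataFrame({"i": [1, 2, 3], "f": [0.5, 1.5, 2.5]})

for idx, row in df.iterrows():
    # row is a brand-new Series built for every single row; because a Series
    # has a single dtype, the integer value is upcast to float inside it.
    print(type(row), row["i"], type(row["i"]))
    break
```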
6) Updating an empty frame a-single-row-at-a-time. I have seen this method used WAY too much. It is by far the slowest. It is probably commonplace (and reasonably fast for some Python structures), but a DataFrame does a fair number of checks on indexing, so this will always be very slow to update a row at a time. Much better to create new structures and concat.
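A sketch contrasting the two patterns; the row-building logic is a stand-in for whatever actually produces your rows:

```python
import pandas as pd

# Slow pattern: growing an empty frame one row at a time with loc.
slow = pd.DataFrame(columns=["a", "b"])
for i in range(1_000):
    slow.loc[i] = [i, i * 2]

# Better: collect rows in a plain Python structure, then build the frame once.
rows = [{"a": i, "b": i * 2} for i in range(1_000)]
fast = pd.DataFrame(rows)

# If the data already arrives as several DataFrames, build them separately
# and concatenate once at the end.
parts = [pd.DataFrame({"a": [i], "b": [i * 2]}) for i in range(3)]
combined = pd.concat(parts, ignore_index=True)
```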