There are many questions (1, 2, 3) dealing with counting values in a single series.
However, there are fewer questions looking at the best way to count combinations of multiple columns.
There's actually a bit of hidden overhead in zip(df.A.values, df.B.values). The key here comes down to numpy arrays being stored in memory in a fundamentally different way than Python objects.
A numpy array, such as np.arange(10), is essentially stored as a contiguous block of memory, and not as individual Python objects. Conversely, a Python list, such as list(range(10)), is stored in memory as pointers to individual Python objects (i.e. the integers 0-9). This difference is the basis for why numpy arrays are smaller in memory than the equivalent Python lists, and why you can perform faster computations on numpy arrays.
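As a rough illustration, you can compare the footprints directly (a minimal sketch; the exact byte counts are assumptions that depend on platform, dtype, and Python version):

import sys
import numpy as np

arr = np.arange(10**5)    # one contiguous buffer of machine integers
lst = list(range(10**5))  # pointers to 10**5 separate Python int objects

# The array is a single block: element count * itemsize (about 800 KB for int64).
print(arr.nbytes)

# The list needs the pointer array plus every boxed int object (several MB total).
print(sys.getsizeof(lst) + sum(sys.getsizeof(x) for x in lst))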
So, as Counter is consuming the zip, the associated tuples need to be created as Python objects. This means that Python needs to extract the tuple values from the numpy data and create the corresponding Python objects in memory. There is noticeable overhead to this, which is why you want to be very careful when combining pure Python functions with numpy data. A basic example of this pitfall that you might commonly see is using the built-in Python sum on a numpy array: sum(np.arange(10**5)) is actually a bit slower than the pure Python sum(range(10**5)), and both are of course significantly slower than np.sum(np.arange(10**5)).
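If you want to check that pitfall in isolation, here is a quick sketch using the timeit module (the absolute numbers will differ on your machine; the ordering is the point):

import timeit
import numpy as np

arr = np.arange(10**5)

# Built-in sum must box every numpy element as a Python int before adding it.
print(timeit.timeit(lambda: sum(arr), number=100))

# Pure Python sum over range never touches numpy, so there is nothing to unbox.
print(timeit.timeit(lambda: sum(range(10**5)), number=100))

# np.sum runs entirely in compiled numpy code.
print(timeit.timeit(lambda: np.sum(arr), number=100))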
See this video for a more in-depth discussion of this topic.
As an example specific to this question, observe the following timings comparing the performance of Counter on zipped numpy arrays vs. the corresponding zipped Python lists.
In [2]: a = np.random.randint(10**4, size=10**6)
...: b = np.random.randint(10**4, size=10**6)
...: a_list = a.tolist()
...: b_list = b.tolist()
In [3]: %timeit Counter(zip(a, b))
455 ms ± 4.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [4]: %timeit Counter(zip(a_list, b_list))
334 ms ± 4.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
The difference between these two timings gives you a reasonable estimate of the overhead discussed earlier.
This isn't quite the end of the story though. Constructing a groupby object in pandas involves some overhead too, at least as it relates to this problem, since there's some groupby metadata that isn't strictly necessary just to get size, whereas Counter does the one singular thing you care about. Usually this overhead is far less than the overhead associated with Counter, but from some quick experimentation I've found that you can actually get marginally better performance from Counter when the majority of your groups consist of single elements.
Consider the following timings (using @BallpointBen's sort=False suggestion), which move along the spectrum of few large groups <--> many small groups:
def grouper(df):
    return df.groupby(['A', 'B'], sort=False).size()

def count(df):
    return Counter(zip(df.A.values, df.B.values))

for m, n in [(10, 10**6), (10**3, 10**6), (10**7, 10**6)]:
    df = pd.DataFrame({'A': np.random.randint(0, m, n),
                       'B': np.random.randint(0, m, n)})
    print(m, n)
    %timeit grouper(df)
    %timeit count(df)
Which gives me the following table:
m        grouper    counter
10       62.9 ms    315 ms
10**3    191 ms     535 ms
10**7    514 ms     459 ms
Of course, any gains from Counter would be offset by converting back to a Series, if that's what you want as your final object.
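For reference, a minimal sketch of that conversion (the MultiIndex step is an assumption about the output shape you want; adjust the names to your own columns):

counts = pd.Series(Counter(zip(df.A.values, df.B.values)))

# The dict keys come through as an Index of tuples; converting to a MultiIndex
# lines the result up with what groupby(['A', 'B']).size() returns.
counts.index = pd.MultiIndex.from_tuples(counts.index, names=['A', 'B'])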