I have a problem and I cannot find any solution on the web or in the documentation, even though I think it is fairly trivial.
What do I want to do?
I have a dataframe, and I want to count the number of NaN values in each feature column, grouped by class.
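For reference, here is a minimal example frame (hypothetical values, chosen so that the counts in the answers below work out: class X has one NaN in FEATURE1, one in FEATURE2 and two in FEATURE3, while class B has none):

import pandas as pd
import numpy as np

df = pd.DataFrame({
    'CLASS': ['X', 'X', 'B', 'B'],
    'FEATURE1': [np.nan, 1.0, 2.0, 3.0],
    'FEATURE2': [1.0, np.nan, 2.0, 3.0],
    'FEATURE3': [np.nan, np.nan, 2.0, 3.0],
})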
You can use set_index and sum with level=0:

df.set_index('CLASS').isna().sum(level=0)
Output:
FEATURE1 FEATURE2 FEATURE3
CLASS
X 1.0 1.0 2.0
B 0.0 0.0 0.0
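Note that sum with the level keyword was deprecated in pandas 1.3 and removed in pandas 2.0; on recent versions the equivalent spelling is an explicit groupby on the index level (sort=False keeps the classes in their order of appearance):

df.set_index('CLASS').isna().groupby(level=0, sort=False).sum()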
Compute a mask with isna, then group by CLASS and sum (sort=False preserves the order in which the classes appear):

df.drop(columns='CLASS').isna().groupby(df.CLASS, sort=False).sum().reset_index()
CLASS FEATURE1 FEATURE2 FEATURE3
0 X 1.0 1.0 2.0
1 B 0.0 0.0 0.0
Another option is to subtract the size from the count using rsub along the 0th axis for index-aligned subtraction. Since count excludes NaNs while size counts every row, the difference is exactly the number of NaNs in each column:
df.groupby('CLASS').count().rsub(df.groupby('CLASS').size(), axis=0)
Or,
g = df.groupby('CLASS')
g.count().rsub(g.size(), axis=0)
FEATURE1 FEATURE2 FEATURE3
CLASS
B 0 0 0
X 1 1 2
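To see why this works (shown here with the hypothetical frame from above): count returns the non-NaN cells per column, size returns the total rows per group, and the index-aligned subtraction leaves exactly the NaN counts:

g.count()
#        FEATURE1  FEATURE2  FEATURE3
# CLASS
# B             2         2         2
# X             1         1         0
g.size()
# CLASS
# B    2
# X    2
# dtype: int64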
There are quite a few good answers, so here are some timeits for your perusal:
df_ = df
df = pd.concat([df_] * 10000)  # repeat the example frame 10000 times for timing
%timeit df.drop(columns='CLASS').isna().groupby(df.CLASS, sort=False).sum()
11.8 ms ± 108 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%timeit df.set_index('CLASS').isna().sum(level=0)
9.47 ms ± 379 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

%%timeit
g = df.groupby('CLASS')
g.count().rsub(g.size(), axis=0)
6.54 ms ± 81.6 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Actual performance depends on your data and setup, so your mileage may vary.
Using the difference between count and size:

g = df.groupby('CLASS')
-g.count().sub(g.size(), axis=0)
FEATURE1 FEATURE2 FEATURE3
CLASS
B 0 0 0
X 1 1 2
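The leading minus just turns count - size into size - count, so this gives the same result as the rsub version above:

(-g.count().sub(g.size(), axis=0)).equals(g.count().rsub(g.size(), axis=0))
# True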
And we can generalize this question to the broader one of how to count the NaNs in a dataframe with a for loop:

pd.DataFrame({x: y.isna().sum() for x, y in g}).T.drop(columns='CLASS')
Output:
FEATURE1 FEATURE2 FEATURE3
B 0 0 0
X 1 1 2
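The trailing drop can also be avoided by dropping the grouping column inside the comprehension; a minor variation of the same idea:

pd.DataFrame({name: grp.drop(columns='CLASS').isna().sum() for name, grp in g}).T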