Without using groupby, how would I filter the data down to rows without NaN?

Let's say I have a DataFrame where customers will fill in 'N/A' or 'n/a':
    import numpy as np
    import pandas as pd

    df = pd.DataFrame({'movie': ['thg', 'thg', 'mol', 'mol', 'lob', 'lob'],
                       'rating': [3., 4., 5., np.nan, np.nan, np.nan],
                       'name': ['John', 'James', np.nan, np.nan, np.nan, np.nan]})
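Note that pandas only treats real missing values (NaN/None) as null; literal strings like 'N/A' or 'n/a' are not detected by isnull or dropna. A minimal sketch of normalizing such strings first (the raw frame below is hypothetical, invented to show the idea):

    import numpy as np
    import pandas as pd

    # Hypothetical input where customers typed placeholder strings.
    raw = pd.DataFrame({'name': ['John', 'N/A', 'n/a'],
                        'rating': [3., 4., np.nan]})

    # Replace the placeholder strings with real NaN so the filters below see them.
    raw = raw.replace(['N/A', 'n/a'], np.nan)

When loading from a file, pd.read_csv already treats 'N/A' and 'n/a' as NaN by default, since both are in its default na_values list.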
One approach is to walk the columns and keep, at each step, only the rows that are non-null in that column:

    # Iteratively filter out rows with NaN, one column at a time.
    for col in df.columns:
        df = df[~pd.isnull(df[col])]
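The loop is equivalent to dropping every row that contains any NaN, which a single dropna call does directly:

    # One-call equivalent of the loop above: drop rows with any missing value.
    df.dropna()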
To restrict the check to specific columns, use dropna with subset (the column names here are placeholders):

    df.dropna(subset=['columnName1', 'columnName2'])
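Applied to the example frame, keeping only rows that have a name would be:

    # Drops the rows where 'name' is NaN (indices 2-5 in the frame above).
    df.dropna(subset=['name'])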
Just drop them:

    nms.dropna(thresh=2)

This will drop all rows where there are at least two non-NaN values. Then you could drop the rows where name is NaN:
    In [87]: nms
    Out[87]:
      movie    name  rating
    0   thg    John       3
    1   thg     NaN       4
    3   mol  Graham     NaN
    4   lob     NaN     NaN
    5   lob     NaN     NaN

    [5 rows x 3 columns]
    In [89]: nms = nms.dropna(thresh=2)

    In [90]: nms[nms.name.notnull()]
    Out[90]:
      movie    name  rating
    0   thg    John       3
    3   mol  Graham     NaN

    [2 rows x 3 columns]
EDIT

Actually, looking at what you originally want, you can do just this without the dropna call:

    nms[nms.name.notnull()]
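For reference, notnull() builds the boolean mask that does the selection; a minimal sketch:

    mask = nms['name'].notnull()  # True where 'name' has a value
    nms[mask]                     # keeps only those rows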
UPDATE

Looking at this question 3 years later, there is a mistake: the thresh arg looks for at least n non-NaN values, so in fact the output should be:

    In [4]: nms.dropna(thresh=2)
    Out[4]:
      movie    name  rating
    0   thg    John     3.0
    1   thg     NaN     4.0
    3   mol  Graham     NaN

It's possible that I was either mistaken 3 years ago or that the version of pandas I was running had a bug; both scenarios are entirely possible.
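A quick way to see which rows thresh=2 keeps is to count the non-NaN values per row of the original frame; thresh keeps rows whose count is at least the threshold:

    # Non-NaN values per row; thresh=2 keeps rows with a count >= 2.
    nms.count(axis=1)
    # 0 -> 3 (kept), 1 -> 2 (kept), 3 -> 2 (kept), 4 -> 1 (dropped), 5 -> 1 (dropped)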
Simplest of all solutions:

    filtered_df = df[df['name'].notnull()]

This keeps only the rows that do not have NaN in the 'name' column.

For multiple columns:

    filtered_df = df[df[['name', 'country', 'region']].notnull().all(1)]
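Here notnull() yields a boolean frame over the listed columns and .all(1) (axis=1) keeps a row only when every one of them is present; 'country' and 'region' are example column names from this answer, not columns of the frame above. The same filter can be written with dropna:

    # Equivalent: drop rows missing a value in any of the listed columns.
    filtered_df = df.dropna(subset=['name', 'country', 'region'])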