I am calling this line:
lang_modifiers = [keyw.strip() for keyw in row["language_modifiers"].split("|") if not isinstance(row["language_modifiers"], float)]
You might also use df = df.dropna(thresh=n), where n is the threshold: a row needs at least n non-NA values to not be dropped. Mind you, this approach removes the whole row.
For example: if you have a dataframe with 5 columns, df.dropna(thresh=5)
would drop any row that does not have 5 valid, or non-NA, values.
In your case you might only want to keep valid rows; if so, you can set the threshold to the number of columns you have.
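As a rough sketch of the thresh behaviour (using a small made-up DataFrame, not your actual data):
import pandas as pd
import numpy as np

df = pd.DataFrame({"a": [1, np.nan, 3], "b": [4, 5, np.nan], "c": [7, 8, 9]})
df.dropna(thresh=3)   # keeps only the first row, the only one with 3 non-NA values
df.dropna(thresh=2)   # keeps all rows, since each has at least 2 non-NA values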
pandas documentation on dropna
You are right, such errors are mostly caused by NaN representing empty cells. It is common to filter out such data before applying further operations, using this idiom on your dataframe df:
df_new = df[df['ColumnName'].notnull()]
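For instance, on a toy DataFrame (the column name and values below are made up for illustration), this keeps only the rows where the column actually holds a value:
import pandas as pd
import numpy as np

df = pd.DataFrame({"language_modifiers": ["Dutch C", np.nan, "Engl B"]})
df_new = df[df["language_modifiers"].notnull()]   # the NaN row is dropped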
Alternatively, it may be more handy to use the fillna() method to impute (replace) null values with some default.
E.g. all null or NaN values can be replaced with the average value of their column:
housing['LotArea'] = housing['LotArea'].fillna(housing['LotArea'].mean())
or they can be replaced with a value such as the empty string "" or another default:
housing['GarageCond'] = housing['GarageCond'].fillna("")
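Applied to your case, a minimal sketch (assuming your column is called language_modifiers as in your snippet, and using made-up sample values) could fill the empty cells first so split() no longer runs into a float:
import pandas as pd
import numpy as np

df = pd.DataFrame({"language_modifiers": ["Dutch C | Engl B", np.nan]})  # hypothetical data
df["language_modifiers"] = df["language_modifiers"].fillna("")           # NaN -> ""
for _, row in df.iterrows():
    lang_modifiers = [keyw.strip() for keyw in row["language_modifiers"].split("|") if keyw.strip()]
    print(lang_modifiers)   # ['Dutch C', 'Engl B'] for the first row, [] for the filled one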