replace column values in spark dataframe based on dictionary similar to np.where

Submitted by 一个人想着一个人 on 2019-12-20 04:48:00

Question


My data frame looks like -

no          city         amount   
1           Kenora        56%
2           Sudbury       23%
3           Kenora        71%
4           Sudbury       41%
5           Kenora        33%
6           Niagara       22%
7           Hamilton      88%

It consists of 92M records. I want my data frame to look like this -

no          city         amount      new_city
1           Kenora        56%           X
2           Sudbury       23%           Sudbury
3           Kenora        71%           X
4           Sudbury       41%           Sudbury
5           Kenora        33%           X
6           Niagara       22%           X
7           Hamilton      88%           Hamilton

Using Python I can manage this (with np.where), but I am not getting the expected results in PySpark. Any help?
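For reference, this is roughly the pandas/NumPy version I have in mind (a sketch, assuming df is a pandas DataFrame with the columns above):

import numpy as np

#mark Kenora and Niagara as 'X', keep every other city unchanged
df['new_city'] = np.where(df['city'].isin(['Kenora', 'Niagara']), 'X', df['city'])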

Here is what I have done so far -

from itertools import chain
from pyspark.sql.functions import create_map, lit

#create dictionary
city_dict = {'Kenora': 'X', 'Niagara': 'X'}

#build a map expression from the dictionary
mapping_expr = create_map([lit(x) for x in chain(*city_dict.items())])

#lookup and replace
df = df.withColumn('new_city', mapping_expr[df['city']])

But it gives me the wrong results:

df.groupBy('new_city').count().show()

new_city    count
   X          4
  null        3

Why does it give me null values?


Answer 1:


The problem is that mapping_expr will return null for any city that is not contained in city_dict. A quick fix is to use coalesce to return the city if the mapping_expr returns a null value:

from pyspark.sql.functions import coalesce

#lookup and replace 
df1 = df.withColumn('new_city', coalesce(mapping_expr[df['city']], df['city']))
df1.show()
#+---+--------+------+--------+
#| no|    city|amount|new_city|
#+---+--------+------+--------+
#|  1|  Kenora|   56%|       X|
#|  2| Sudbury|   23%| Sudbury|
#|  3|  Kenora|   71%|       X|
#|  4| Sudbury|   41%| Sudbury|
#|  5|  Kenora|   33%|       X|
#|  6| Niagara|   22%|       X|
#|  7|Hamilton|   88%|Hamilton|
#+---+--------+------+--------+

df1.groupBy('new_city').count().show()
#+--------+-----+
#|new_city|count|
#+--------+-----+
#|       X|    4|
#|Hamilton|    1|
#| Sudbury|    2|
#+--------+-----+

The above method will fail, however, if one of the replacement values is null.
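For instance (a hypothetical case, not taken from the question), suppose the dictionary deliberately maps a city to None. The lookup then yields null, and coalesce silently falls back to the original city instead of the intended null:

from itertools import chain
from pyspark.sql.functions import coalesce, create_map, lit

#hypothetical dictionary that maps Sudbury to null
null_dict = {'Kenora': 'X', 'Sudbury': None}
null_expr = create_map([lit(x) for x in chain(*null_dict.items())])

#coalesce cannot tell "mapped to null" apart from "not in the map",
#so Sudbury rows come back as 'Sudbury' rather than null
df.withColumn('new_city', coalesce(null_expr[df['city']], df['city']))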

In this case, an easier alternative may be to use pyspark.sql.DataFrame.replace():

First use withColumn to create new_city as a copy of the values from the city column, then call replace() on that column only:

df.withColumn("new_city", df["city"])\
    .replace(to_replace=city_dict.keys(), value=city_dict.values(), subset="new_city")\
    .groupBy('new_city').count().show()
#+--------+-----+
#|new_city|count|
#+--------+-----+
#|       X|    4|
#|Hamilton|    1|
#| Sudbury|    2|
#+--------+-----+
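As a side note, replace() also accepts a dictionary directly (with the value argument omitted), which should give the same result a bit more compactly:

#pass the mapping dict straight to replace()
df.withColumn("new_city", df["city"])\
    .replace(to_replace=city_dict, subset="new_city")\
    .groupBy('new_city').count().show()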


Source: https://stackoverflow.com/questions/56767536/replace-column-values-in-spark-dataframe-based-on-dictionary-similar-to-np-where
