pyspark replace multiple values with null in dataframe

Submitted by 隐身守侯 on 2020-01-15 06:43:09

Question


I have a dataframe (df) with a column user_id:

df = sc.parallelize([(1, "not_set"),
                     (2, "user_001"),
                     (3, "user_002"),
                     (4, "n/a"),
                     (5, "N/A"),
                     (6, "userid_not_set"),
                     (7, "user_003"),
                     (8, "user_004")]).toDF(["key", "user_id"])

df:

+---+--------------+
|key|       user_id|
+---+--------------+
|  1|       not_set|
|  2|      user_001|
|  3|      user_002|
|  4|           n/a|
|  5|           N/A|
|  6|userid_not_set|
|  7|      user_003|
|  8|      user_004|
+---+--------------+

I would like to replace the following values with null: not_set, n/a, N/A, and userid_not_set.

Ideally, I could add any new values to a list and have them replaced as well.

I am currently using a CASE statement within spark.sql to perform this and would like to convert it to PySpark.
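
For reference, a minimal sketch of what that spark.sql CASE approach might look like (the view name df_view is assumed for illustration; it is not taken from the original question):

# Hypothetical reconstruction of the CASE-based approach; "df_view" is an assumed name
df.createOrReplaceTempView("df_view")
spark.sql("""
    SELECT key,
           CASE WHEN user_id IN ('not_set', 'n/a', 'N/A', 'userid_not_set') THEN NULL
                ELSE user_id
           END AS user_id
    FROM df_view
""").show()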


Answer 1:


None inside the when() function corresponds to null. If you want to fill in something other than null, supply that value in its place.

from pyspark.sql.functions import col, when
df = df.withColumn(
    "user_id",
    when(
        col("user_id").isin('not_set', 'n/a', 'N/A', 'userid_not_set'),
        None
    ).otherwise(col("user_id"))
)
df.show()
+---+--------+
|key| user_id|
+---+--------+
|  1|    null|
|  2|user_001|
|  3|user_002|
|  4|    null|
|  5|    null|
|  6|    null|
|  7|user_003|
|  8|user_004|
+---+--------+
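
As a side note, Spark 2.3 and later also accept None as the replacement value in DataFrame.replace, which handles this without when/otherwise; a minimal sketch, assuming Spark 2.3+:

# Replace each listed value with null, restricted to the user_id column (Spark 2.3+)
df = df.replace(['not_set', 'n/a', 'N/A', 'userid_not_set'], None, subset='user_id')
df.show()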



Answer 2:


You can use the built-in when function, which is the equivalent of a CASE expression.

from pyspark.sql import functions as f
df.select(df.key, f.when(df.user_id.isin(['not_set', 'n/a', 'N/A', 'userid_not_set']), None).otherwise(df.user_id).alias('user_id')).show()

The values to replace can also be stored in a list and referenced:

val_list = ['not_set', 'n/a', 'N/A', 'userid_not_set']
df.select(df.key, f.when(df.user_id.isin(val_list), None).otherwise(df.user_id).alias('user_id')).show()
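
To keep that list easy to extend, as the question asks, you could wrap this in a small helper; the function name nullify_values is my own, not from the answer:

from pyspark.sql import functions as f

def nullify_values(df, column, values):
    # Return df with the given values in `column` replaced by null; other rows unchanged
    return df.withColumn(
        column,
        f.when(f.col(column).isin(values), None).otherwise(f.col(column))
    )

df = nullify_values(df, 'user_id', val_list)
df.show()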



Answer 3:


Please find below a few approaches. I am assuming that all legitimate user IDs start with "user_". Try the code below.

from pyspark.sql.functions import *
df.withColumn(
    "user_id",
    when(col("user_id").startswith("user_"), col("user_id")).otherwise(None)
).show()

Another one:

cond = """case when user_id in ('not_set', 'n/a', 'N/A', 'userid_not_set') then null
                else user_id
            end"""

df.withColumn("ID", expr(cond)).show()

Another one:

cond = """case when user_id like 'user_%' then user_id
                else null
            end"""

df.withColumn("ID", expr(cond)).show()

Another one:

df.withColumn(
    "user_id",
    # rlike takes a regex that matches anywhere in the string; anchor it with ^
    # so only IDs that actually start with "user_" are kept
    when(col("user_id").rlike("^user_"), col("user_id")).otherwise(None)
).show()


Source: https://stackoverflow.com/questions/53885091/pyspark-replace-multiple-values-with-null-in-dataframe
