Convert null values to empty array in Spark DataFrame

zero323

You can use a UDF:

import org.apache.spark.sql.functions.udf

val array_ = udf(() => Array.empty[Int])

combined with when or coalesce:

import org.apache.spark.sql.functions.{coalesce, when}

df.withColumn("myCol", when($"myCol".isNull, array_()).otherwise($"myCol"))
df.withColumn("myCol", coalesce($"myCol", array_())).show

In recent versions you can use the array function:

import org.apache.spark.sql.functions.array

df.withColumn("foo", array().cast("array<integer>"))

Please note that this works only when a cast from string to the desired element type is allowed, because the empty array() literal is typed as array<string>.
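
Combining the two, a minimal sketch (assuming a nullable array<integer> column named myCol) that fills nulls without a UDF:

import org.apache.spark.sql.functions.{array, coalesce}

// array() produces an empty array<string> literal; cast it to the column's
// element type so coalesce can substitute it for nulls.
df.withColumn("myCol", coalesce($"myCol", array().cast("array<integer>")))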

With a slight modification to zero323's approach, I was able to do this without using a UDF in Spark 2.3.1:

val df = Seq("a" -> Array(1,2,3), "b" -> null, "c" -> Array(7,8,9)).toDF("id","numbers")
df.show
+---+---------+
| id|  numbers|
+---+---------+
|  a|[1, 2, 3]|
|  b|     null|
|  c|[7, 8, 9]|
+---+---------+

val df2 = df.withColumn("numbers", coalesce($"numbers", array()))
df2.show
+---+---------+
| id|  numbers|
+---+---------+
|  a|[1, 2, 3]|
|  b|       []|
|  c|[7, 8, 9]|
+---+---------+

A UDF-free alternative that works even when the desired element type cannot be cast from StringType:

import pyspark.sql.types as T
import pyspark.sql.functions as F

df.withColumn(
    "myCol",
    F.coalesce(
        F.col("myCol"),
        # parse an empty JSON array straight into the target element type
        F.from_json(F.lit("[]"), T.ArrayType(T.IntegerType()))
    )
)

You can replace IntegerType() with any data type, including complex ones.
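
The same trick works in Scala; a minimal sketch, assuming the myCol column has type array<int>:

import org.apache.spark.sql.functions.{coalesce, from_json, lit}
import org.apache.spark.sql.types.{ArrayType, IntegerType}

// from_json parses the empty JSON array directly into the target type,
// so no cast from array<string> is required.
df.withColumn("myCol", coalesce($"myCol", from_json(lit("[]"), ArrayType(IntegerType))))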
