Question
I would like to replace all the n/a values in the dataframe below with unknown. The value can appear in either a scalar or a complex nested column. If it's a StructField column, I can loop through the columns and replace n/a using withColumn. But I would like this to be done generically, regardless of the column type, as I don't want to specify the column names explicitly since there are hundreds of them in my case.
case class Bar(x: Int, y: String, z: String)
case class Foo(id: Int, name: String, status: String, bar: Seq[Bar])
val df = spark.sparkContext.parallelize(
Seq(
Foo(123, "Amy", "Active", Seq(Bar(1, "first", "n/a"))),
Foo(234, "Rick", "n/a", Seq(Bar(2, "second", "fifth"),Bar(22, "second", "n/a"))),
Foo(567, "Tom", "null", Seq(Bar(3, "second", "sixth")))
)).toDF
df.printSchema
df.show(20, false)
Result:
+---+----+------+---------------------------------------+
|id |name|status|bar |
+---+----+------+---------------------------------------+
|123|Amy |Active|[[1, first, n/a]] |
|234|Rick|n/a |[[2, second, fifth], [22, second, n/a]]|
|567|Tom |null |[[3, second, sixth]] |
+---+----+------+---------------------------------------+
Expected Output:
+---+----+-------+-------------------------------------------+
|id |name|status |bar                                        |
+---+----+-------+-------------------------------------------+
|123|Amy |Active |[[1, first, unknown]]                      |
|234|Rick|unknown|[[2, second, fifth], [22, second, unknown]]|
|567|Tom |null   |[[3, second, sixth]]                       |
+---+----+-------+-------------------------------------------+
Any suggestions on this?
Answer 1:
If you like playing with RDDs, here's a simple, generic and extensible solution:
import org.apache.spark.sql.Row

val naToUnknown = { r: Row =>
  // Recursively walk the row: descend into nested Rows and Seqs,
  // and replace any "n/a" string with "unknown".
  def rec(r: Any): Any = {
    r match {
      case row: Row => Row.fromSeq(row.toSeq.map(rec))
      case seq: Seq[Any] => seq.map(rec)
      case s: String if s == "n/a" => "unknown"
      case _ => r
    }
  }
  Row.fromSeq(r.toSeq.map(rec))
}
val newDF = spark.createDataFrame(df.rdd.map{naToUnknown}, df.schema)
newDF.show(false)
Output:
+---+----+-------+-------------------------------------------+
|id |name|status |bar |
+---+----+-------+-------------------------------------------+
|123|Amy |Active |[[1, first, unknown]] |
|234|Rick|unknown|[[2, second, fifth], [22, second, unknown]]|
|567|Tom |null |[[3, second, sixth]] |
+---+----+-------+-------------------------------------------+
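Because rec pattern-matches on runtime values, the solution is easy to extend. A hedged sketch, assuming you also had MapType columns (not present in this schema, shown only as an illustration), would add one more case:
def rec(r: Any): Any = r match {
  case row: Row => Row.fromSeq(row.toSeq.map(rec))
  case seq: Seq[Any] => seq.map(rec)
  // hypothetical extra case: map columns arrive as scala.collection.Map
  case map: scala.collection.Map[_, _] => map.map { case (k, v) => rec(k) -> rec(v) }
  case s: String if s == "n/a" => "unknown"
  case _ => r
}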
Answer 2:
It is fairly easy to replace nested values when you have only simple columns and structs. For array fields, you'll have to explode the structure before replacing, or use a UDF / higher-order functions, see my other answer here.
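For reference, a minimal higher-order-function sketch for this exact schema could look like this (an assumption: Spark 2.4+ for the transform SQL function; the field names x, y and z come from the Bar case class in the question):
import org.apache.spark.sql.functions.expr

// Rewrite each struct element of the bar array, replacing z when it is "n/a"
val hofDF = df.withColumn("bar", expr(
  "transform(bar, b -> named_struct('x', b.x, 'y', b.y, 'z', if(b.z = 'n/a', 'unknown', b.z)))"))
hofDF.show(false)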
You can define a generic function that loops through the DataFrame schema and applies a lambda function func to replace what you want:
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.StructType

// Walk the schema, building a fully qualified path for each field;
// rebuild structs recursively and apply func to every leaf column.
def replaceNestedValues(schema: StructType, func: Column => Column, path: Option[String] = None): Seq[Column] = {
  schema.fields.map(f => {
    val p = path.fold(s"`${f.name}`")(c => s"$c.`${f.name}`")
    f.dataType match {
      case s: StructType => struct(replaceNestedValues(s, func, Some(p)): _*).alias(f.name)
      case _ => func(col(p)).alias(f.name)
    }
  })
}
Before using this function, explode the array structure bar like this:
val df2 = df.select($"id", $"name", $"status", explode($"bar").alias("bar"))
Then, define a lambda function that takes a column and replaces it with unknown when it is equal to n/a, using the when/otherwise functions, and apply the transformation to the columns using the function above:
val replaceNaFunc: Column => Column = c => when(c === lit("n/a"), lit("unknown")).otherwise(c)
val replacedCols = replaceNestedValues(df2.schema, replaceNaFunc)
Select the new columns and group by id, name and status to get the bar array back:
df2.select(replacedCols: _*)
  .groupBy($"id", $"name", $"status")
  .agg(collect_list($"bar").alias("bar"))
  .show(false)
Gives:
+---+----+-------+-------------------------------------------+
|id |name|status |bar |
+---+----+-------+-------------------------------------------+
|234|Rick|unknown|[[2, second, fifth], [22, second, unknown]]|
|123|Amy |Active |[[1, first, unknown]] |
|567|Tom |null |[[3, second, sixth]] |
+---+----+-------+-------------------------------------------+
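If you are on Spark 3.0+, a hedged variant of the same idea can handle arrays of structs directly with the Scala transform function, avoiding the explode/groupBy round trip entirely. replaceNestedValuesArr is a hypothetical name, and it only descends one struct level inside arrays, which is enough for this schema:
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions.{col, struct, transform}
import org.apache.spark.sql.types.{ArrayType, StructType}

def replaceNestedValuesArr(schema: StructType, func: Column => Column, path: Option[String] = None): Seq[Column] = {
  schema.fields.map { f =>
    val p = path.fold(s"`${f.name}`")(c => s"$c.`${f.name}`")
    f.dataType match {
      case s: StructType =>
        struct(replaceNestedValuesArr(s, func, Some(p)): _*).alias(f.name)
      // apply func to every field of each struct element in the array
      case ArrayType(s: StructType, _) =>
        transform(col(p), x =>
          struct(s.fields.map(sf => func(x.getField(sf.name)).alias(sf.name)): _*)
        ).alias(f.name)
      case _ => func(col(p)).alias(f.name)
    }
  }
}

df.select(replaceNestedValuesArr(df.schema, replaceNaFunc): _*).show(false)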
Answer 3:
You can define a UDF to deal with your array and replace the items you want:
UDF
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions.udf

val replaceNA = udf((x: Row) => {
  // Rebuild each Bar element, replacing z when it equals "n/a"
  val z = x.getString(2)
  if (z == "n/a")
    Bar(x.getInt(0), x.getString(1), "unknown")
  else
    Bar(x.getInt(0), x.getString(1), x.getString(2))
})
Once you have that UDF, you can explode your dataframe to have each item in bar as a single row:
val explodedDF = df.withColumn("exploded", explode($"bar"))
+---+----+------+--------------------+------------------+
| id|name|status| bar| exploded|
+---+----+------+--------------------+------------------+
|123| Amy|Active| [[1, first, n/a]]| [1, first, n/a]|
|234|Rick| n/a|[[2, second, fift...|[2, second, fifth]|
|234|Rick| n/a|[[2, second, fift...| [22, second, n/a]|
|567| Tom| null|[[3, second, sixth]]|[3, second, sixth]|
+---+----+------+--------------------+------------------+
Then apply the previously defined UDF to replace the items:
val replacedDF = explodedDF.withColumn("exploded", replaceNA($"exploded"))
+---+----+------+--------------------+---------------------+
| id|name|status|                 bar|             exploded|
+---+----+------+--------------------+---------------------+
|123| Amy|Active|   [[1, first, n/a]]|  [1, first, unknown]|
|234|Rick|   n/a|[[2, second, fift...|   [2, second, fifth]|
|234|Rick|   n/a|[[2, second, fift...|[22, second, unknown]|
|567| Tom|  null|[[3, second, sixth]]|   [3, second, sixth]|
+---+----+------+--------------------+---------------------+
And finally, group everything back together with collect_list to return it to its original shape:
val resultDF = replacedDF.groupBy("id", "name", "status")
  .agg(collect_list("exploded").as("bar"))
resultDF.show(false)
+---+----+------+-------------------------------------------+
|id |name|status|bar                                        |
+---+----+------+-------------------------------------------+
|234|Rick|n/a   |[[2, second, fifth], [22, second, unknown]]|
|567|Tom |null  |[[3, second, sixth]]                       |
|123|Amy |Active|[[1, first, unknown]]                      |
+---+----+------+-------------------------------------------+
Putting it all together in a single step:
import org.apache.spark.sql._

val replaceNA = udf((x: Row) => {
  val z = x.getString(2)
  if (z == "n/a")
    Bar(x.getInt(0), x.getString(1), "unknown")
  else
    Bar(x.getInt(0), x.getString(1), x.getString(2))
})
df.withColumn("exploded", explode($"bar"))
.withColumn("exploded", replaceNA($"exploded"))
.groupBy("id", "name", "status")
.agg(collect_list("exploded").as("bar"))
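As a side note, a hedged variant of the same UDF can look fields up by name instead of position, which is less brittle if the struct layout of Bar ever changes (replaceNAByName is a hypothetical name):
val replaceNAByName = udf((x: Row) => {
  // getAs resolves struct fields by name rather than ordinal
  val z = x.getAs[String]("z")
  Bar(x.getAs[Int]("x"), x.getAs[String]("y"), if (z == "n/a") "unknown" else z)
})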
Source: https://stackoverflow.com/questions/59536407/spark-replace-null-value-in-a-nested-column