Splitting a row into multiple rows in spark-shell

Asked by 一个人的身影, 2021-01-20 18:11

I have imported data into a Spark DataFrame in spark-shell. The data looks like this:

Col1 | Col2 | Col3 | Col4
A1   | 11   | B2   | a|b;1;0xFFFFFF
A1   | 12   | B1   | 2
A2   | 12   | B2   | 0xFFF45B

I want to split Col4 on ";" into three separate columns (alphabets, digits, hexadecimal) and turn each "|"-separated alphabet value into its own row.


        
2 Answers
  • 执笔经年 (2021-01-20 18:21)

    Not sure this is doable while staying 100% with DataFrames; here's a (somewhat messy?) solution using RDDs for the split itself:

    import org.apache.spark.sql.functions._
    import sqlContext.implicits._
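
    // assumed setup (not in the original answer): recreate the question's
    // sample data so the snippet runs end-to-end in spark-shell
    val input = Seq(
      ("A1", 11, "B2", "a|b;1;0xFFFFFF"),
      ("A1", 12, "B1", "2"),
      ("A2", 12, "B2", "0xFFF45B")
    ).toDF("Col1", "Col2", "Col3", "Col4")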
    
    // we switch to RDD to perform the split of Col4 into 3 columns
    val rddWithSplitCol4 = input.rdd.map { r =>
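      // note (added): getAs[String] returns null when Col4 is missing; guard with
      // Option(r.getAs[String]("Col4")).getOrElse("") if the column can be null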
      val indexToValue = r.getAs[String]("Col4").split(';').map {
        case s if s.startsWith("0x") => 2 -> s
        case s if s.matches("\\d+") => 1 -> s
        case s => 0 -> s
      }
      val newCols: Array[String] = indexToValue.foldLeft(Array.fill[String](3)("")) {
        case (arr, (index, value)) => arr.updated(index, value)
      }
      (r.getAs[String]("Col1"), r.getAs[Int]("Col2"), r.getAs[String]("Col3"), newCols(0), newCols(1), newCols(2))
    }
    
    // switch back to Dataframe and explode alphabets column
    val result = rddWithSplitCol4
      .toDF("Col1", "Col2", "Col3", "alphabets", "digits", "hexadecimal")
      .withColumn("alphabets", explode(split(col("alphabets"), "\\|")))
    
    result.show(truncate = false)
    // +----+----+----+---------+------+-----------+
    // |Col1|Col2|Col3|alphabets|digits|hexadecimal|
    // +----+----+----+---------+------+-----------+
    // |A1  |11  |B2  |a        |1     |0xFFFFFF   |
    // |A1  |11  |B2  |b        |1     |0xFFFFFF   |
    // |A1  |12  |B1  |         |2     |           |
    // |A2  |12  |B2  |         |      |0xFFF45B   |
    // +----+----+----+---------+------+-----------+
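
    For what it's worth, a pure-DataFrame variant seems possible by moving the token classification into a UDF instead of dropping to RDDs. A minimal sketch, not from the original answer (splitCol4 and result2 are illustrative names):

    import org.apache.spark.sql.functions.{col, explode, split, udf}

    // classify each ';'-separated token of Col4 into (alphabets, digits, hex)
    val splitCol4 = udf { (s: String) =>
      val parts  = Option(s).map(_.split(';').toSeq).getOrElse(Seq.empty)
      val hex    = parts.find(_.startsWith("0x")).getOrElse("")
      val digits = parts.find(_.matches("\\d+")).getOrElse("")
      val alpha  = parts.find(p => !p.startsWith("0x") && !p.matches("\\d+")).getOrElse("")
      (alpha, digits, hex)
    }

    val result2 = input
      .withColumn("parts", splitCol4(col("Col4")))
      .select(
        col("Col1"), col("Col2"), col("Col3"),
        explode(split(col("parts._1"), "\\|")).as("alphabets"),
        col("parts._2").as("digits"),
        col("parts._3").as("hexadecimal"))

    The UDF applies the same classification rules as the pattern match above, so result2.show() should produce the same rows as result.show().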
    
