Splitting a row into multiple rows in spark-shell

一个人的身影 2021-01-20 18:11

I have imported data into a Spark DataFrame in spark-shell. The data looks like this:

Col1 | Col2 | Col3 | Col4
A1   | 11   | B2   | a|b;1;0xFFFFFF
A1   | 12          
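
For anyone who wants to follow along, here is a minimal way to build a DataFrame of this shape in spark-shell. The column types and the values completing the truncated second row are assumptions, not part of the question: Col2 as Int matches how the RDD-based answer below reads it, and the second row's missing values are taken from the sample data used in the first answer.

    import spark.implicits._

    // assumed shape: Col2 as Int, everything else as strings
    val input = Seq(
      ("A1", 11, "B2", "a|b;1;0xFFFFFF"),
      ("A1", 12, "B1", "2")
    ).toDF("Col1", "Col2", "Col3", "Col4")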


        
2 Answers
  • 2021-01-20 18:17

    Edit, after getting this answer about how to use a backreference in regexp_replace:

    You can use regexp_replace with a backreference, then split twice and explode. It is, IMO, cleaner than my original solution:

    import org.apache.spark.sql.functions._
    import spark.implicits._

    val df = List(
        ("A1", "11", "B2", "a|b;1;0xFFFFFF"),
        ("A1", "12", "B1", "2"),
        ("A2", "12", "B2", "0xFFF45B")
      ).toDF("Col1", "Col2", "Col3", "Col4")

    // capture groups: 1 = pipe-separated letters, 2 = digits, 3 = hexadecimal value
    val regExStr = "^([A-z|]+)?;?(\\d+)?;?(0x.*)?$"

    val res = df
      .withColumn("backrefReplace",
           split(regexp_replace('Col4, regExStr, "$1;$2;$3"), ";"))
      .select('Col1, 'Col2, 'Col3,
           explode(split('backrefReplace(0), "\\|")).as("letter"),
           'backrefReplace(1).as("digits"),
           'backrefReplace(2).as("hexadecimal")
      )

    res.show
    
    +----+----+----+------+------+-----------+
    |Col1|Col2|Col3|letter|digits|hexadecimal|
    +----+----+----+------+------+-----------+
    |  A1|  11|  B2|     a|     1|   0xFFFFFF|
    |  A1|  11|  B2|     b|     1|   0xFFFFFF|
    |  A1|  12|  B1|      |     2|           |
    |  A2|  12|  B2|      |      |   0xFFF45B|
    +----+----+----+------+------+-----------+
    

    You still need to replace the empty strings with null, though...
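
    A minimal sketch of that cleanup, assuming the res DataFrame from the snippet above (this part is not from the original answer):

    import org.apache.spark.sql.functions._

    // turn empty strings into nulls in the three derived columns
    val cleaned = Seq("letter", "digits", "hexadecimal").foldLeft(res) { (acc, c) =>
      acc.withColumn(c, when(col(c) === "", lit(null).cast("string")).otherwise(col(c)))
    }
    cleaned.show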


    Previous Answer (somebody might still prefer it):

    Here is a solution that sticks to DataFrames but is also quite messy. You can first use regexp_extract three times (is it possible to do it with fewer calls using a backreference?), then split on "|" and explode. Note that you need the coalesce for explode to return every row (and you still might want to change the empty strings in letter to null in this solution as well).

    // extract each part with its own regexp_extract,
    // then split the letters on "|" and explode into one row per letter
    val res = df
      .withColumn("alphabets",  regexp_extract('Col4,"(^[A-z|]+)?",1))
      .withColumn("digits",     regexp_extract('Col4,"^([A-z|]+)?;?(\\d+)?;?(0x.*)?$",2))
      .withColumn("hexadecimal",regexp_extract('Col4,"^([A-z|]+)?;?(\\d+)?;?(0x.*)?$",3))
      .withColumn("letter",
         explode(
           split(
             coalesce('alphabets,lit("")),  // coalesce so explode also keeps rows with no letters
             "\\|"
           )
         )
       )
    
    
    res.show    
    
    +----+----+----+--------------+---------+------+-----------+------+
    |Col1|Col2|Col3|          Col4|alphabets|digits|hexadecimal|letter|
    +----+----+----+--------------+---------+------+-----------+------+
    |  A1|  11|  B2|a|b;1;0xFFFFFF|      a|b|     1|   0xFFFFFF|     a|
    |  A1|  11|  B2|a|b;1;0xFFFFFF|      a|b|     1|   0xFFFFFF|     b|
    |  A1|  12|  B1|             2|     null|     2|       null|      |
    |  A2|  12|  B2|      0xFFF45B|     null|  null|   0xFFF45B|      |
    +----+----+----+--------------+---------+------+-----------+------+
    

    Note: The regexp part could be so much better with a backreference, so if somebody knows how to do it, please comment!

  • 2021-01-20 18:21

    Not sure this is doable while staying 100% with DataFrames; here's a (somewhat messy?) solution using RDDs for the split itself (a DataFrame-only sketch using a UDF is shown after the output below):

    import org.apache.spark.sql.functions._
    import sqlContext.implicits._
    
    // "input" is the original DataFrame from the question;
    // we switch to RDD to perform the split of Col4 into 3 columns
    val rddWithSplitCol4 = input.rdd.map { r =>
      val indexToValue = r.getAs[String]("Col4").split(';').map {
        case s if s.startsWith("0x") => 2 -> s
        case s if s.matches("\\d+") => 1 -> s
        case s => 0 -> s
      }
      val newCols: Array[String] = indexToValue.foldLeft(Array.fill[String](3)("")) {
        case (arr, (index, value)) => arr.updated(index, value)
      }
      (r.getAs[String]("Col1"), r.getAs[Int]("Col2"), r.getAs[String]("Col3"), newCols(0), newCols(1), newCols(2))
    }
    
    // switch back to Dataframe and explode alphabets column
    val result = rddWithSplitCol4
      .toDF("Col1", "Col2", "Col3", "alphabets", "digits", "hexadecimal")
      .withColumn("alphabets", explode(split(col("alphabets"), "\\|")))
    
    result.show(truncate = false)
    // +----+----+----+---------+------+-----------+
    // |Col1|Col2|Col3|alphabets|digits|hexadecimal|
    // +----+----+----+---------+------+-----------+
    // |A1  |11  |B2  |a        |1     |0xFFFFFF   |
    // |A1  |11  |B2  |b        |1     |0xFFFFFF   |
    // |A1  |12  |B1  |         |2     |           |
    // |A2  |12  |B2  |         |      |0xFFF45B   |
    // +----+----+----+---------+------+-----------+
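
    For completeness, a DataFrame-only sketch of the same split using a UDF instead of the RDD round trip. This is not from either answer, just an illustration under assumptions: Col4 is a non-null string, and input is the DataFrame from the question (with Col2 as Int).

    import org.apache.spark.sql.functions._
    import spark.implicits._

    // classify each ";"-separated token of Col4 as letters, digits, or hex,
    // returning a (letters, digits, hex) tuple, i.e. a struct with fields _1, _2, _3
    val splitCol4 = udf { (s: String) =>
      val parts   = s.split(';')
      val letters = parts.find(p => !p.matches("\\d+") && !p.startsWith("0x")).getOrElse("")
      val digits  = parts.find(_.matches("\\d+")).getOrElse("")
      val hex     = parts.find(_.startsWith("0x")).getOrElse("")
      (letters, digits, hex)
    }

    val viaUdf = input
      .withColumn("parts", splitCol4(col("Col4")))
      .select(col("Col1"), col("Col2"), col("Col3"),
        explode(split(col("parts._1"), "\\|")).as("alphabets"),
        col("parts._2").as("digits"),
        col("parts._3").as("hexadecimal"))

    viaUdf.show(truncate = false)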
    