Creating a unique grouping key from column-wise runs in a Spark DataFrame


Question


I have something analogous to this, where spark is my SparkSession. I've imported spark.implicits._ so I can use the $ syntax:

import org.apache.spark.sql.functions._  // when, monotonically_increasing_id

val df = spark.createDataFrame(Seq(("a", 0L), ("b", 1L), ("c", 1L), ("d", 1L), ("e", 0L), ("f", 1L)))
              .toDF("id", "flag")
              .withColumn("index", monotonically_increasing_id)
              .withColumn("run_key", when($"flag" === 1, $"index").otherwise(0))

df.show

df: org.apache.spark.sql.DataFrame = [id: string, flag: bigint ... 2 more fields]
+---+----+-----+-------+
| id|flag|index|run_key|
+---+----+-----+-------+
|  a|   0|    0|      0|
|  b|   1|    1|      1|
|  c|   1|    2|      2|
|  d|   1|    3|      3|
|  e|   0|    4|      0|
|  f|   1|    5|      5|
+---+----+-----+-------+

I want to create another column with a unique grouping key for each nonzero chunk of run_key, something equivalent to this:

+---+----+-----+-------+---+
| id|flag|index|run_key|key|
+---+----+-----+-------+---+
|  a|   0|    0|      0|  0|
|  b|   1|    1|      1|  1|
|  c|   1|    2|      2|  1|
|  d|   1|    3|      3|  1|
|  e|   0|    4|      0|  0|
|  f|   1|    5|      5|  2|
+---+----+-----+-------+---+

It could be the first value in each run, the average of each run, or some other value -- it doesn't really matter as long as it's guaranteed to be unique, so that I can group on it afterward and compare other values between groups.

Edit: BTW, I don't need to retain the rows where flag is 0.
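For context, a minimal sketch of the intended downstream step, assuming a DataFrame dfWithKey that already carries the key column from the table above plus a hypothetical value column to compare (neither exists in the code so far):

// Illustrative only: group on the run key and compare the runs.
dfWithKey.filter($"flag" === 1)
         .groupBy("key")
         .agg(count("*").as("run_length"), avg("value").as("avg_value"))
         .show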


Answer 1:


One approach is to 1) create a column $"lag1" from $"flag" using the Window function lag(), 2) create another column $"switched" holding the $"index" value on the rows where $"flag" switches, and finally 3) create the key column by copying $"switched" forward from the last non-null row via last() over rowsBetween().

Note that this solution uses a Window function without partitioning, so it may not scale to large datasets.

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._

val df = Seq(
  ("a", 0L), ("b", 1L), ("c", 1L), ("d", 1L), ("e", 0L), ("f", 1L),
  ("g", 1L), ("h", 0L), ("i", 0L), ("j", 1L), ("k", 1L), ("l", 1L)
).toDF("id", "flag").
  withColumn("index", monotonically_increasing_id).
  withColumn("run_key", when($"flag" === 1, $"index").otherwise(0))

df.withColumn( "lag1", lag("flag", 1, -1).over(Window.orderBy("index")) ).   // previous flag; -1 for the first row
  withColumn( "switched", when($"flag" =!= $"lag1", $"index") ).             // index where the flag changes, else null
  withColumn( "key", last("switched", ignoreNulls = true).over(              // carry the last switch index forward
    Window.orderBy("index").rowsBetween(Window.unboundedPreceding, 0)
  ) ).
  show

// +---+----+-----+-------+----+--------+---+
// | id|flag|index|run_key|lag1|switched|key|
// +---+----+-----+-------+----+--------+---+
// |  a|   0|    0|      0|  -1|       0|  0|
// |  b|   1|    1|      1|   0|       1|  1|
// |  c|   1|    2|      2|   1|    null|  1|
// |  d|   1|    3|      3|   1|    null|  1|
// |  e|   0|    4|      0|   1|       4|  4|
// |  f|   1|    5|      5|   0|       5|  5|
// |  g|   1|    6|      6|   1|    null|  5|
// |  h|   0|    7|      0|   1|       7|  7|
// |  i|   0|    8|      0|   0|    null|  7|
// |  j|   1|    9|      9|   0|       9|  9|
// |  k|   1|   10|     10|   1|    null|  9|
// |  l|   1|   11|     11|   1|    null|  9|
// +---+----+-----+-------+----+--------+---+
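A possible follow-up sketch, assuming the keyed DataFrame above is bound to a val named keyed (an assumed name): drop the helper columns and discard the flag-0 rows, per the question's edit.

keyed.filter($"flag" === 1)        // the flag-0 rows are not needed
     .drop("lag1", "switched")
     .show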



Answer 2:


You can label each run with the largest index at which flag is 0 that is still smaller than the index of the row in question.

Something like:

import org.apache.spark.sql.functions.max

val flags = df.filter($"flag" === 0)               // all indices where flag is 0
  .select("index")
  .withColumnRenamed("index", "flagIndex")
val indices = df.select("index")                   // per row: the largest flag-0 index before it
  .join(flags, $"index" > $"flagIndex")
  .groupBy($"index")
  .agg(max($"flagIndex").as("groupKey"))
val dfWithGroups = df.join(indices, Seq("index"))
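Tracing this on the six-row DataFrame from the question: the flag-0 rows sit at indices 0 and 4, so rows b, c, d, and e end up with groupKey 0 and row f with groupKey 4, while row a is dropped because the inner join finds no flag-0 index before index 0. Among the flag-1 rows that gives one key per run, and the question's edit already allows discarding the flag-0 rows. Note that the inequality join compares every row against every flag-0 row, so it avoids an unpartitioned window but is not free on large data either.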


Source: https://stackoverflow.com/questions/48997461/creating-a-unique-grouping-key-from-column-wise-runs-in-a-spark-dataframe
