PySpark Numeric Window Group By

Question

I'd like to be able to have Spark group by a step size, as opposed to just single values. Is there anything in Spark similar to PySpark 2.x's window function, but for numeric (non-date) values?

Something along the lines of:

sqlContext = SQLContext(sc)
df = sqlContext.createDataFrame([10, 11, 12, 13], "integer").toDF("foo")
res = df.groupBy(window("foo", step=2, start=10)).count()

Answer 1:

You can reuse the timestamp window function and express the parameters in seconds. Tumbling window:

from pyspark.sql.functions import col, window

df.withColumn(
    "window",
    window(
        # cast the integer column to a timestamp so the time-based window()
        # can bucket it, then cast the struct back to plain integer bounds
        col("foo").cast("timestamp"),
        windowDuration="2 seconds"
    ).cast("struct<start:bigint,end:bigint>")
).show()

# +---+-------+              
# |foo| window|
# +---+-------+
# | 10|[10,12]|
# | 11|[10,12]|
# | 12|[12,14]|
# | 13|[12,14]|
# +---+-------+
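
For comparison, the same tumbling buckets can be computed with plain integer arithmetic. This floor-based expression is only an illustrative sketch of what the timestamp trick is doing, not part of the original answer:

from pyspark.sql.functions import col, floor

step, start = 2, 10
df.withColumn(
    "bucket_start",
    # floor((foo - start) / step) * step + start is the lower bound of the
    # bucket each value falls into: 10, 11 -> 10 and 12, 13 -> 12
    floor((col("foo") - start) / step) * step + start
).show()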

Sliding (rolling) window:

df.withColumn(
    "window",
    window(
        col("foo").cast("timestamp"),
        windowDuration="2 seconds", slideDuration="1 seconds"
    ).cast("struct<start:bigint,end:bigint>")
).show()

# +---+-------+
# |foo| window|
# +---+-------+
# | 10| [9,11]|
# | 10|[10,12]|
# | 11|[10,12]|
# | 11|[11,13]|
# | 12|[11,13]|
# | 12|[12,14]|
# | 13|[12,14]|
# | 13|[13,15]|
# +---+-------+
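
Note that the sliding window assigns each value to every overlapping bucket, so grouping on it counts a row once per bucket it falls into. A short sketch of that grouping, using the same cast trick:

df.groupBy(
    window(col("foo").cast("timestamp"), "2 seconds", "1 seconds")
        .cast("struct<start:bigint,end:bigint>")
        .alias("window")
).count().show()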

Finally, to get the grouped counts asked for in the question, group on the window start:

w = window(col("foo").cast("timestamp"), "2 seconds").cast("struct<start:bigint,end:bigint>")
start = w.start.alias("start")
df.groupBy(start).count().show()

# +-----+-----+
# |start|count|
# +-----+-----+
# |   10|    2|
# |   12|    2|
# +-----+-----+
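
If the buckets have to begin at an offset that is not a multiple of the step (the start=10 from the question happens to line up with the default boundaries), window() also accepts a startTime offset. A minimal sketch, assuming the standard PySpark 2.x signature:

# shift the bucket boundaries by one second, giving buckets 9-11, 11-13, ...
w_shifted = window(
    col("foo").cast("timestamp"), "2 seconds", startTime="1 second"
).cast("struct<start:bigint,end:bigint>")

df.groupBy(w_shifted.start.alias("start")).count().show()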


Source: https://stackoverflow.com/questions/48467215/pyspark-numeric-window-group-by
