How to use window functions in PySpark?

旧巷少年郎 2020-12-09 08:46

I'm trying to use some window functions (ntile and percentRank) on a data frame, but I don't know how to use them.

Can anyone help me with this?

1 Answer
  • 2020-12-09 09:33

    To be able to use a window function you have to create a window first. The definition is pretty much the same as for normal SQL: you can define the order, the partition, or both. First, let's create some dummy data:

    import numpy as np
    np.random.seed(1)

    # two groups of 10 values each: "foo" ~ N(0, 1), "bar" ~ N(10, 1)
    keys = ["foo"] * 10 + ["bar"] * 10
    values = np.hstack([np.random.normal(0, 1, 10), np.random.normal(10, 1, 10)])

    df = sqlContext.createDataFrame([
        {"k": k, "v": round(float(v), 3)} for k, v in zip(keys, values)])
    

    Make sure you're using HiveContext (Spark < 2.0 only):

    from pyspark.sql import HiveContext
    
    assert isinstance(sqlContext, HiveContext)
    

    Create a window:

    from pyspark.sql.window import Window
    
    w = Window.partitionBy(df.k).orderBy(df.v)
    

    which is equivalent to

    (PARTITION BY k ORDER BY v) 
    

    in SQL.

    As a rule of thumb, window definitions should always contain a PARTITION BY clause; otherwise Spark will move all the data to a single partition. ORDER BY is required for some functions, while for others (typically aggregates) it may be optional.
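
    To make that caveat concrete, here is a minimal sketch (reusing the df defined above) of a window defined with orderBy only. Spark will accept it, but logs a warning along the lines of "No Partition Defined for Window operation! Moving all data to a single partition":

    from pyspark.sql.functions import row_number

    # No partitionBy: every row ends up in one partition on a single executor
    w_global = Window.orderBy(df.v)

    # Global rank of v across the whole DataFrame
    df.select("k", "v", row_number().over(w_global).alias("global_rank"))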

    There are also two optional clauses which can be used to define the window span: ROWS BETWEEN and RANGE BETWEEN. These won't be useful for us in this particular scenario.
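
    For completeness, though, this is roughly how a frame would be added to the window. It is only a sketch: Window.unboundedPreceding and Window.currentRow exist from Spark 2.1 onwards (older versions use raw numbers such as -sys.maxsize and 0 instead):

    from pyspark.sql.functions import sum as sum_

    # Running total of v within each key, from the start of the partition
    # up to and including the current row
    w_cum = (Window.partitionBy(df.k)
                   .orderBy(df.v)
                   .rowsBetween(Window.unboundedPreceding, Window.currentRow))

    df.select("k", "v", sum_("v").over(w_cum).alias("running_sum"))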

    Finally we can use it for a query:

    from pyspark.sql.functions import percentRank, ntile
    # In Spark 2.0+ prefer the snake_case name percent_rank;
    # the camelCase percentRank alias was deprecated and later removed.

    df.select(
        "k", "v",
        percentRank().over(w).alias("percent_rank"),  # relative rank of v within its key
        ntile(3).over(w).alias("ntile3")              # bucket number (1-3) within its key
    )
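
    The same query can also be written directly in SQL. A sketch, assuming the Spark < 2.0 API used throughout this answer (on 2.0+ you would call df.createOrReplaceTempView and spark.sql instead):

    # Expose the DataFrame to the SQL engine under a temporary name
    df.registerTempTable("tmp")

    sqlContext.sql("""
        SELECT k, v,
               PERCENT_RANK() OVER (PARTITION BY k ORDER BY v) AS percent_rank,
               NTILE(3) OVER (PARTITION BY k ORDER BY v) AS ntile3
        FROM tmp
    """).show()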
    

    Note that ntile is not related in any way to quantiles; it simply splits the ordered rows of each partition into n buckets of roughly equal size.
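
    If what you actually want are quantiles of v, a more direct tool is DataFrame.approxQuantile (available since Spark 2.0). A sketch:

    # Approximate quartiles of v; the last argument is the relative error
    # (0.0 means exact, at a higher computation cost)
    df.approxQuantile("v", [0.25, 0.5, 0.75], 0.01)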
