Spark DataFrame groupBy and sort in the descending order (pyspark)

Backend · Unresolved · 5 answers · 1131 views
Asked by 我在风中等你 on 2021-01-30 07:46

I'm using pyspark (Python 2.7.9 / Spark 1.3.1) and have a DataFrame GroupObject which I need to filter and sort in descending order. I'm trying to achieve it via this piece of code.

5 Answers
  • 2021-01-30 08:36

    In pyspark 2.4.4

    1) group_by_dataframe.count().filter("`count` >= 10").orderBy('count', ascending=False)
    
    2) from pyspark.sql.functions import desc
       group_by_dataframe.count().filter("`count` >= 10").sort(desc('count'))
    

    Option 1) needs no import and is short and easy to read,
    so I prefer 1) over 2).

  • 2021-01-30 08:37

    By far the most convenient way is using this:

    df.orderBy(df.column_name.desc())
    

    Doesn't require special imports.

  • 2021-01-30 08:41

    In PySpark 1.3 the sort method doesn't take an ascending parameter. You can use the desc method instead:

    from pyspark.sql.functions import col
    
    (group_by_dataframe
        .count()
        .filter("`count` >= 10")
        .sort(col("count").desc()))
    

    or the desc function:

    from pyspark.sql.functions import desc
    
    (group_by_dataframe
        .count()
        .filter("`count` >= 10")
        .sort(desc("count")))
    

    Both methods can be used with Spark >= 1.3 (including Spark 2.x).

  • 2021-01-30 08:47

    You can also use groupBy and orderBy as follows (note that after the rename, the count column is called distinct_name, so that is the column to sort on):

    from pyspark.sql.functions import desc
    dataFrameWay = df.groupBy("firstName").count().withColumnRenamed("count", "distinct_name").sort(desc("distinct_name"))
    
  • 2021-01-30 08:49

    Use orderBy:

    df.orderBy('column_name', ascending=False)
    

    Complete answer:

    group_by_dataframe.count().filter("`count` >= 10").orderBy('count', ascending=False)
    

    http://spark.apache.org/docs/2.0.0/api/python/pyspark.sql.html
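    For a quick sanity check of the group-count-filter-sort pipeline without a running Spark cluster, the same logic can be sketched in plain Python. The sample names and the threshold of 2 are made up for illustration; `collections.Counter` stands in for `groupBy(...).count()`, a dict comprehension for `filter`, and `sorted(..., reverse=True)` for `orderBy(..., ascending=False)`:

    ```python
    from collections import Counter

    # Hypothetical sample data standing in for a DataFrame column.
    names = ["alice", "bob", "alice", "carol", "alice", "bob"]

    # groupBy("name").count() analogue: tally occurrences per name.
    counts = Counter(names)

    # filter("`count` >= 2") analogue: keep names seen at least twice.
    frequent = {name: n for name, n in counts.items() if n >= 2}

    # orderBy('count', ascending=False) analogue: sort by count, descending.
    result = sorted(frequent.items(), key=lambda kv: kv[1], reverse=True)

    print(result)  # [('alice', 3), ('bob', 2)]
    ```

    In Spark the same descending sort happens distributed across the cluster, but the shape of the computation is identical.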
