Pandas-style transform of grouped data on PySpark DataFrame

Asked by 悲&欢浪女 on 2021-02-07 10:24

If we have a pandas DataFrame consisting of a column of categories and a column of values, we can remove the mean within each category by doing the following:

    df["DemeanedValues"] = df.groupby("Category")["Values"].transform(lambda x: x - x.mean())

Is it possible to do the same thing on a PySpark DataFrame?
3 Answers
  •  爱一瞬间的悲伤, answered 2021-02-07 11:11

    Actually, there is an idiomatic way to do this in Spark, using the Hive OVER expression.

    For example:

    # On Spark 2+, use the SparkSession (here `spark`) and createOrReplaceTempView
    # in place of the deprecated sqlContext.registerTempTable.
    df.createOrReplaceTempView('df')
    with_category_means = spark.sql('select *, mean(Values) OVER (PARTITION BY Category) as category_mean from df')
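
    That query only attaches the per-category mean as a new column; to actually remove the mean, as in the pandas example above, the subtraction can be folded into the same query (a sketch, assuming the same Category/Values column names):

    df_demeaned = spark.sql('select *, Values - mean(Values) OVER (PARTITION BY Category) as DemeanedValues from df')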
    

    Under the hood, this uses a window function. I'm not sure whether it is faster than your solution, though.
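
    For reference, the same computation can be written directly with the DataFrame API instead of SQL, via pyspark.sql.Window (a minimal sketch, assuming a DataFrame df with Category and Values columns):

    from pyspark.sql import functions as F
    from pyspark.sql.window import Window

    # Partition the rows by category; avg(...).over(w) computes the windowed
    # mean, repeated on every row of its partition.
    w = Window.partitionBy("Category")
    demeaned = df.withColumn("DemeanedValues", F.col("Values") - F.avg("Values").over(w))

    Both the SQL and DataFrame forms should compile to essentially the same window-function plan, so performance ought to be equivalent.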
