Question
I am converting some code from pandas to PySpark. In pandas, let's imagine I have the following mock dataframe, df:
And in pandas, I define a certain variable the following way:
value = df.groupby(["Age", "Siblings"]).size()
And the output is a series as follows:
However, when trying to convert this to PySpark, an error comes up: AttributeError: 'GroupedData' object has no attribute 'size'. Can anyone help me solve this?
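For reference, here is a minimal pandas sketch of the pattern being described; the sample values are made up, since the original mock data is not reproduced in the post:

import pandas as pd

# Hypothetical data standing in for the original mock dataframe
df = pd.DataFrame({"Age": [20, 20, 25, 30], "Siblings": [1, 1, 0, 2]})

# size() returns a Series indexed by (Age, Siblings) holding each group's row count
value = df.groupby(["Age", "Siblings"]).size()
print(value)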
Answer 1:
The equivalent of size in PySpark is count:
df.groupby(["Age", "Siblings"]).count()
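As a runnable sketch (assuming a SparkSession and the same made-up sample data as above):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample rows; the original mock dataframe is not shown in the post
df = spark.createDataFrame([(20, 1), (20, 1), (25, 0), (30, 2)], ["Age", "Siblings"])

# count() adds a "count" column with the number of rows in each (Age, Siblings) group
df.groupby(["Age", "Siblings"]).count().show()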
Answer 2:
You can also use the agg method, which is more flexible, as it allows you to set a column alias or add other types of aggregations:
import pyspark.sql.functions as F
df.groupby('Age', 'Siblings').agg(F.count('*').alias('count'))
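If the goal is to end up with something shaped like the original pandas Series, one possible follow-up (assuming the aggregated result is small enough to collect to the driver) is:

import pyspark.sql.functions as F

result = (
    df.groupby("Age", "Siblings")
      .agg(F.count("*").alias("count"))
      .orderBy("Age", "Siblings")
)

# Collect locally and rebuild a Series indexed by (Age, Siblings)
series = result.toPandas().set_index(["Age", "Siblings"])["count"]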
Source: https://stackoverflow.com/questions/65707148/converting-series-from-pandas-to-pyspark-need-to-use-groupby-and-size-but