Converting a series from pandas to PySpark: need to use "groupby" and "size", but PySpark yields an error
Question: I am converting some code from Pandas to PySpark. In Pandas, let's imagine I have a mock dataframe df with "Age" and "Siblings" columns, and I define a certain variable the following way:

    value = df.groupby(["Age", "Siblings"]).size()

The output is a Series giving the number of rows in each (Age, Siblings) group. However, when I try to convert this to PySpark, an error comes up:

    AttributeError: 'GroupedData' object has no attribute 'size'

Can anyone help me solve this?

Answer 1: The equivalent of size in PySpark is count:

    df.groupby(["Age", "Siblings"]).count()
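For reference, here is a minimal, self-contained sketch of both versions. The toy Age/Siblings values are made up for illustration, since the asker's actual dataframe is not shown in the post:

    import pandas as pd
    from pyspark.sql import SparkSession

    # Hypothetical toy data; the asker's real dataframe is not shown.
    data = [(20, 0), (20, 1), (20, 1), (25, 2), (25, 2)]

    # Pandas: size() returns a Series indexed by the group keys.
    pdf = pd.DataFrame(data, columns=["Age", "Siblings"])
    pandas_counts = pdf.groupby(["Age", "Siblings"]).size()
    print(pandas_counts)

    # PySpark: GroupedData has no size(); count() returns a DataFrame
    # with the group columns plus a "count" column.
    spark = SparkSession.builder.appName("groupby-size-example").getOrCreate()
    sdf = spark.createDataFrame(data, ["Age", "Siblings"])
    spark_counts = sdf.groupby(["Age", "Siblings"]).count()
    spark_counts.show()

Note that the results differ in shape: pandas gives an indexed Series, while PySpark gives a DataFrame whose count is an ordinary column, so downstream code may need to be adjusted accordingly.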