Grouped output in PySpark


I'm running the following code:

output = (df
          .groupby('customer_id')
          .agg(
            f.countDistinct('customer_id').alias('identif


        