I have a data frame in pyspark. In this data frame I have a column called `id` that is unique. Now I want to find the maximum value of the `id` column in this data frame.
You can use the `max` aggregate, as also mentioned in the pyspark documentation linked below:
Link : https://spark.apache.org/docs/latest/api/python/pyspark.sql.html?highlight=agg
Code:
row1 = df1.agg({"id": "max"}).collect()[0]
print(row1["max(id)"])  # the result column is named "max(id)"