I'm wondering how I can achieve the following in Spark (PySpark).

Initial DataFrame:

+--+---+
|id|num|
+--+---+
|4 |9.0|
|3 |7.0|
|2 |3.0|
|1 |5.0|
+--+---+
You can use the lag window function as follows:
from pyspark.sql.functions import lag, col
from pyspark.sql.window import Window
df = sc.parallelize([(4, 9.0), (3, 7.0), (2, 3.0), (1, 5.0)]).toDF(["id", "num"])
w = Window().partitionBy().orderBy(col("id"))
df.select("*", lag("num").over(w).alias("new_col")).na.drop().show()
## +---+---+-------+
## | id|num|new_col|
## +---+---+-------+
## | 2|3.0| 5.0|
## | 3|7.0| 3.0|
## | 4|9.0| 7.0|
## +---+---+-------+
but there are some important issues:

1. without partitionBy the whole dataset is moved to a single partition, so the computation is not distributed;
2. the first row has no preceding value, so lag returns null there (dropped above with na.drop).

While the second issue is almost never a problem, the first one can be a deal-breaker. If this is the case you should simply convert your DataFrame to an RDD and compute lag manually.
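The RDD approach can be sketched in plain Python (no Spark required here), mirroring a typical zipWithIndex-plus-join implementation: index the sorted rows, re-key the values at index + 1, and join the two on the index. The data and column names are taken from the example above:

```python
# Plain-Python sketch of the RDD approach: sort, index, self-join on a shifted index.
rows = [(4, 9.0), (3, 7.0), (2, 3.0), (1, 5.0)]

sorted_rows = sorted(rows, key=lambda r: r[0])
indexed = list(enumerate(sorted_rows))             # (i, (id, num)), like zipWithIndex
shifted = {i + 1: num for i, (_, num) in indexed}  # previous row's num, keyed by i + 1

# Equivalent of joining the two RDDs on the index; the first row has no match
# and is dropped, just like na.drop() above.
lagged = [(id_, num, shifted[i]) for i, (id_, num) in indexed if i in shifted]
print(lagged)  # [(2, 3.0, 5.0), (3, 7.0, 3.0), (4, 9.0, 7.0)]
```

In actual Spark code the same shape works on df.rdd with sortBy, zipWithIndex, map, and join, which keeps the computation distributed.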
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.lag

val df = sc.parallelize(Seq((4, 9.0), (3, 7.0), (2, 3.0), (1, 5.0))).toDF("id", "num")
val w = Window.orderBy("id")
df.show
+---+---+
| id|num|
+---+---+
| 4|9.0|
| 3|7.0|
| 2|3.0|
| 1|5.0|
+---+---+
df.withColumn("new_column", lag("num", 1, 0).over(w)).show
+---+---+----------+
| id|num|new_column|
+---+---+----------+
| 1|5.0| 0.0|
| 2|3.0| 5.0|
| 3|7.0| 3.0|
| 4|9.0| 7.0|
+---+---+----------+
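The default-value behaviour of lag("num", 1, 0) above can be modelled in plain Python; lag_with_default is a hypothetical helper written for illustration, not part of the Spark API:

```python
# Plain-Python model of lag(col, offset, default): each row sees the value
# `offset` positions earlier, or `default` when there is no such row.
def lag_with_default(values, offset=1, default=0.0):
    return [values[i - offset] if i - offset >= 0 else default
            for i in range(len(values))]

nums = [5.0, 3.0, 7.0, 9.0]  # num ordered by id, as in the table above
print(lag_with_default(nums))  # [0.0, 5.0, 3.0, 7.0]
```

With a default supplied, no rows are dropped: the first row simply gets 0.0, matching the new_column output shown above.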