I have a dataframe df:
+------+----------+--------------------+
|SiteID| LastRecID|        Col_to_split|
+------+----------+--------------------+
| 2|105
As of Spark 2.1.0, you can use posexplode, which unnests an array column and also outputs the index of each element (using the data from @Herve):
import pyspark.sql.functions as F

# posexplode yields two columns per array element: its position and its value
df.select(
    F.col("LastRecID").alias("RecID"),
    F.posexplode(F.col("coltosplit")).alias("index", "value")
).show()
+-----+-----+-----+
|RecID|index|value|
+-----+-----+-----+
|10526| 0| 214|
|10526| 1| 207|
|10526| 2| 206|
|10526| 3| 205|
|10896| 0| 213|
|10896| 1| 208|
+-----+-----+-----+
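Conceptually, posexplode does per row what Python's enumerate does for a list: each array element becomes its own row, paired with its zero-based position. A minimal plain-Python sketch of the same transformation, using the two rows visible in the output above (the hard-coded `rows` list is just illustrative data, not part of the original dataframe):

```python
# Each input row is (RecID, array); posexplode emits one
# (RecID, index, value) row per array element.
rows = [
    (10526, [214, 207, 206, 205]),
    (10896, [213, 208]),
]

exploded = [
    (rec_id, idx, value)
    for rec_id, values in rows
    for idx, value in enumerate(values)
]

for row in exploded:
    print(row)
```

This mirrors the table above: 10526 expands to four rows (indices 0 through 3) and 10896 to two.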