```
>>> a
DataFrame[id: bigint, julian_date: string, user_id: bigint]
>>> b
DataFrame[id: bigint, quan_created_money: decimal(10,0), quan_created_cnt: bigint]
```
Reading the Spark documentation, I found an easier solution. Since Spark 1.4 there is a `drop(col)` function that can be called on a PySpark DataFrame.
You can use it in two ways:

```python
df.drop('age').collect()
df.drop(df.age).collect()
```
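For completeness, here is a minimal, self-contained sketch (the SparkSession setup and sample data are illustrative; the column names just mirror the schemas above):

```python
from pyspark.sql import SparkSession

# Illustrative setup; any existing SparkSession works.
spark = SparkSession.builder.master("local[1]").appName("drop-example").getOrCreate()

# Hypothetical sample data matching the schema of DataFrame `a` above.
df = spark.createDataFrame(
    [(1, "2455562", 100), (2, "2455563", 101)],
    ["id", "julian_date", "user_id"],
)

# Drop by column name...
df.drop("julian_date").show()

# ...or by Column reference; both return a new DataFrame without that column.
df.drop(df.julian_date).show()
```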
PySpark Documentation - drop