In pandas, this can be done with column.name.
But how can the same be done when it's a column of a Spark DataFrame?
E.g., the calling program has a Spark DataFrame: spark_df
The only way is to go down a level to the underlying JVM.
df.col._jc.toString().encode('utf8')
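For concreteness, here is a minimal sketch, assuming a local SparkSession and a made-up two-column DataFrame:

from pyspark.sql import SparkSession

# Hypothetical local session and toy DataFrame, just for illustration.
spark = SparkSession.builder.master("local[1]").appName("col-name-demo").getOrCreate()
spark_df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "letter"])

col = spark_df["id"]

# _jc is the underlying JVM Column object; toString() returns its name/expression.
print(col._jc.toString())  # id

# Note: .encode('utf8') is a Python 2 habit; under Python 3 it returns bytes
# (b'id'), so the bare toString() is usually what you want.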
This is also how it is converted to a str in the pyspark code itself.
From pyspark/sql/column.py:
def __repr__(self):
    return 'Column<%s>' % self._jc.toString().encode('utf8')
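As a consequence, repr() of a column embeds the same string, though the exact formatting varies between PySpark versions:

repr(spark_df["id"])  # e.g. 'Column<id>' or "Column<'id'>"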