If DataFrames in Spark are immutable, why are we able to modify them with operations such as withColumn()?

感动是毒 2020-12-20 23:15

This is probably a stupid question originating from my ignorance. I have been working on PySpark for a few weeks now and do not have much programming experience to start with.

2 Answers
  • 2020-12-20 23:44

    As per the Spark architecture, a DataFrame is built on top of RDDs, which are immutable in nature; hence, DataFrames are immutable as well.

    Regarding withColumn (or any other operation, for that matter): when you apply such an operation to a DataFrame, it generates a new DataFrame instead of updating the existing one.

    However, in Python you can rebind a variable name to a new object, overwriting the previous reference. Hence, when you execute the statement below

    df = df.withColumn("new_col", lit(1))  # "new_col" is illustrative; lit comes from pyspark.sql.functions
    

    it will generate another DataFrame and assign it to the reference "df".

    To verify this, you can use the id() method of the underlying RDD to get a unique identifier for your DataFrame.

    df.rdd.id()

    will give you the unique identifier of your DataFrame.
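    For example, here is a minimal sketch of that check (the sample data and column name are made up for illustration):

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import lit

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(1, "a"), (2, "b")], ["id", "letter"])

    old_id = df.rdd.id()                   # id of the RDD backing the original DataFrame
    df = df.withColumn("flag", lit(True))  # rebinds df to a brand-new DataFrame
    print(old_id == df.rdd.id())           # False: df now refers to a different object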

    I hope the above explanation helps.

    Regards,

    Neeraj

  • 2020-12-20 23:51

    You aren't; the documentation for withColumn explicitly says:

    Returns a new Dataset by adding a column or replacing the existing column that has the same name.

    If you keep a variable referring to the DataFrame you called withColumn on, that DataFrame won't have the new column.
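    A quick sketch to show this (the data and column names are illustrative):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()
    df = spark.createDataFrame([(1,), (2,)], ["id"])

    df2 = df.withColumn("doubled", df["id"] * 2)

    print(df.columns)   # ['id'] -- the original DataFrame is untouched
    print(df2.columns)  # ['id', 'doubled'] -- only the new DataFrame has the column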
