How to overwrite entire existing column in Spark dataframe with new column?

Unresolved · 3 answers · 1400 views
抹茶落季 · 2021-02-20 04:19

I want to overwrite a Spark column with a new column that is a binary flag.

I tried directly overwriting the column id2, but why does it not work like an in-place operation?

3 Answers
  • 2021-02-20 05:09

    You can use

    d1.withColumnRenamed("colName", "newColName")
    d1.withColumn("newColName", $"colName")
    

    withColumnRenamed renames the existing column to the new name.

    withColumn creates a new column with the given name; if a column with that name already exists, it replaces it and drops the old one.

    In your case the changes are not applied to the original dataframe df2: the transformation returns a new dataframe, which must be assigned to a variable for further use.

    d3 = df2.select((df2.id2 > 0).alias("id2"))
    

    Above should work fine in your case.

    Hope this helps!

  • 2021-02-20 05:15

    As stated above, it is not possible to overwrite a DataFrame object: a DataFrame is an immutable collection, so every transformation returns a new DataFrame.

    The fastest way to achieve your desired effect is to use withColumn:

    df = df.withColumn("col", some expression)
    

    where col is the name of the column you want to "replace". After running this, the df variable refers to a new DataFrame with the new value of the column col. You might instead want to assign the result to a new variable.

    In your case it could look like:

    df2 = df2.withColumn("id2", (df2.id2 > 0) & (df2.id2 != float('nan')))
    

    I've added the comparison to NaN because I'm assuming you don't want to treat NaN as greater than 0.

  • 2021-02-20 05:22

    If you're working with multiple columns of the same name from different joined tables, you can use the table alias in the column name passed to withColumn.

    Eg. df1.join(df2, df1.id == df2.other_id).withColumn('df1.my_col', F.greatest(df1.my_col, df2.my_col))

    And if you only want to keep the columns from df1, you can also call .select('df1.*')

    If you instead do df1.join(df2, df1.id == df2.other_id).withColumn('my_col', F.greatest(df1.my_col, df2.my_col))

    I think it overwrites the last column called my_col, so the output is: id, my_col (the original df1.my_col), id, other_id, my_col (the newly computed my_col)
