How to remove rows in DataFrame on column based on another DataFrame?

醉话见心 2021-01-19 08:26

I'm trying to use SQLContext.subtract() in Spark 1.6.1 to remove rows from a dataframe based on a column from another dataframe. Let's use an example:

1 Answer
  • 2021-01-19 08:34

    Well, there are some bugs here (the first issue looks related to the same problem as SPARK-6231), and filing a JIRA ticket looks like a good idea, but SUBTRACT / EXCEPT is not the right choice for partial matches.
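To see why: EXCEPT/subtract compares entire rows, so a row that differs in any other column survives. In plain Python, over made-up tuples standing in for DataFrame rows:

```python
# Made-up (name, value) rows standing in for DataFrame rows.
df1_rows = [("Alice", 1), ("Bob", 2)]
df2_rows = [("Bob", 99)]

# EXCEPT / subtract semantics: a row is dropped only if it matches
# in EVERY column, so ("Bob", 2) survives against ("Bob", 99).
except_result = [r for r in df1_rows if r not in df2_rows]

# A partial match on "name" alone drops Bob entirely.
names_to_drop = {name for name, _ in df2_rows}
anti_result = [r for r in df1_rows if r[0] not in names_to_drop]
```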

    Instead, as of Spark 2.0, you can use anti-join:

    df1.join(df1_with_df2, ["name"], "leftanti").show()
    

    In 1.6 you can do pretty much the same thing with standard outer join:

    import pyspark.sql.functions as F
    
    # Alias the lookup frame so its column can be referenced as "ref.name"
    ref = df1_with_df2.select("name").alias("ref")
    
    (df1
        .join(ref, ref.name == df1.name, "leftouter")  # keep every df1 row
        .filter(F.isnull("ref.name"))                  # keep rows with no match
        .drop(F.col("ref.name")))                      # discard the helper column
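Step by step, this workaround attaches the matching ref name (or null) to each df1 row, keeps only the rows where the ref side is null, and drops the helper column. The same logic can be sketched in plain Python over hypothetical rows:

```python
# Hypothetical rows standing in for df1 and the "ref" name column.
df1_rows = [("Alice", 1), ("Bob", 2), ("Carol", 3)]
ref_names = ["Bob"]

# Left outer join on name: every df1 row is kept, paired with the
# matching ref name, or None where there is no match.
joined = [(row, row[0] if row[0] in ref_names else None)
          for row in df1_rows]

# filter(isnull("ref.name")): keep rows with no match, and dropping
# the helper column leaves just the original df1 rows.
result = [row for row, ref_name in joined if ref_name is None]
```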
    