Spark SQL performing Cartesian join instead of inner join

Asked by 说谎 on 2020-12-31 10:58 · 1 answer · 1359 views

I am trying to join two DataFrames with each other after performing some earlier computation. The command is simple:

    employee.join(employer, employe         


        
1 Answer
  • 2020-12-31 11:23

    I think I fought with the same issue. Check if you have a warning:

    Constructing trivially true equals predicate [..]
    

    after creating the join operation. If so, alias one of the columns in either the employee or employer DataFrame, e.g. like this:

    employee.select(<columns you want>, employee("id").as("id_e"))
    

    Then perform join on employee("id_e") === employer("id").
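    Putting the workaround together, a minimal sketch (the SparkSession setup and the DataFrame contents here are illustrative, not from the question):

        import org.apache.spark.sql.SparkSession

        val spark = SparkSession.builder().appName("alias-join").getOrCreate()
        import spark.implicits._

        val employee = Seq((1, "Alice"), (2, "Bob")).toDF("id", "name")
        // employer is derived from employee, so its "id" shares the same lineage
        val employer = employee.groupBy($"id").count()

        // alias employee's "id" so Spark treats the two sides as distinct
        val employeeAliased = employee.select($"name", $"id".as("id_e"))
        val joined = employeeAliased.join(employer, $"id_e" === employer("id"))

    With distinct column expressions on each side, Spark builds a real equi-join instead of a trivially true predicate.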

    Explanation. Look at this operation flow:

    If you use your DataFrame A directly to compute DataFrame B and then join them on the column Id, which comes from DataFrame A, you will not be performing the join you want. The Id column in DataFrame B is in fact exactly the same column as in DataFrame A, so Spark will just assert that the column is equal to itself, hence the trivially true predicate. To avoid this, you have to alias one of the columns so that they appear as "different" columns to Spark. For now, only a warning message has been implemented for this case:

        def === (other: Any): Column = {
          val right = lit(other).expr
          if (this.expr == right) {
            logWarning(
              s"Constructing trivially true equals predicate, '${this.expr} = $right'. " +
                "Perhaps you need to use aliases.")
          }
          EqualTo(expr, right)
        }
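
    The flow described above can be reproduced with a minimal sketch (the DataFrame names are illustrative):

        // b is computed from employee, so b("id") resolves to the very same
        // column expression as employee("id")
        val b = employee.groupBy(employee("id")).count()

        // employee("id") === b("id") compares the column with itself, so Spark
        // logs "Constructing trivially true equals predicate" and the join
        // degenerates into a cross product filtered by a `true` condition
        val bad = employee.join(b, employee("id") === b("id"))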
    

    It is not a very good solution for me (it is really easy to miss the warning message); I hope this will be fixed somehow.

    You are lucky, though, to see the warning message at all; it was added not so long ago ;).
