How to join on multiple columns in Pyspark?


I am using Spark 1.3 and would like to join on multiple columns using the Python interface (SparkSQL).

The following works: I first register the two DataFrames as temp tables and join them with a SQL query.
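
Roughly like this (a minimal sketch, not the original snippet, assuming two DataFrames df1 and df2 that share the join columns x1 and x2; registerTempTable is the Spark 1.x API):

    # Hypothetical reconstruction of the temp-table approach
    df1.registerTempTable("a")
    df2.registerTempTable("b")

    # Each equality is spelled out in the SQL ON clause
    joined = sqlContext.sql(
        "SELECT * FROM a JOIN b ON a.x1 = b.x1 AND a.x2 = b.x2")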

2 Answers

    An alternative to the temp-table/SQL approach is to pass a list of column names directly to DataFrame.join:

    df1 = sqlContext.createDataFrame(
        [(1, "a", 2.0), (2, "b", 3.0), (3, "c", 3.0)],
        ("x1", "x2", "x3"))

    df2 = sqlContext.createDataFrame(
        [(1, "f", -1.0), (2, "b", 0.0)], ("x1", "x2", "x4"))

    # A list of column names joins on equality of each listed column
    # and keeps a single copy of x1 and x2 in the result
    df = df1.join(df2, ['x1', 'x2'])
    df.show()
    

    which outputs:

    +---+---+---+---+
    | x1| x2| x3| x4|
    +---+---+---+---+
    |  2|  b|3.0|0.0|
    +---+---+---+---+
    

    The main advantage is that the columns on which the tables are joined are not duplicated in the output, which reduces the risk of errors such as org.apache.spark.sql.AnalysisException: Reference 'x1' is ambiguous, could be: x1#50L, x1#57L.
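
    To see the failure mode, here is a short sketch (not from the original answer) using an expression-based join instead of a column list; both copies of x1 and x2 survive the join, so an unqualified reference to either is ambiguous:

    # Expression-based join keeps both copies of x1 and x2
    dup = df1.join(df2, (df1.x1 == df2.x1) & (df1.x2 == df2.x2))

    # This unqualified reference raises
    # AnalysisException: Reference 'x1' is ambiguous
    dup.select('x1')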


    Whenever the columns in the two tables have different names (say df2 in the example above instead has the columns y1, y2, and y4), you can rename them before joining:

    df = df1.join(
        df2.withColumnRenamed('y1', 'x1').withColumnRenamed('y2', 'x2'),
        ['x1', 'x2'])
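
    If you would rather not rename, another option (a sketch under the same hypothetical y1/y2/y4 schema) is to pass an explicit join condition; both sets of join columns then survive, so select the ones you want afterwards:

    # Hypothetical: df2 has columns y1, y2, y4
    df = df1.join(df2, (df1.x1 == df2.y1) & (df1.x2 == df2.y2))

    # Keep only one copy of each join column
    df = df.select(df1.x1, df1.x2, df1.x3, df2.y4)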
    
