Join two DataFrames where the join key is different and only select some columns

别跟我提以往 · 2021-01-19 04:12

What I would like to do is:

Join two DataFrames A and B using their respective id columns a_id and b_id
Keep all columns of A, plus only b1 and b2 from B, in the joined result
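
For concreteness, a minimal setup matching that description might look like this (the schemas and values below are made up purely for illustration):

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    # Hypothetical example data: A is keyed by a_id, B by b_id
    A = spark.createDataFrame([(1, "x"), (2, "y")], ["a_id", "a1"])
    B = spark.createDataFrame([(1, 1.0, True), (2, 2.0, False)], ["b_id", "b1", "b2"])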

3 Answers
  •  傲寒 (OP) · 2021-01-19 04:36

    Your pseudocode is basically correct. This slightly modified version would work if the id column existed in both DataFrames:

    # alias() is what lets select() resolve the "A.*" and "B.b1" references
    A_B = A.alias("A").join(B.alias("B"), on="id").select("A.*", "B.b1", "B.b2")
    

    From the docs for pyspark.sql.DataFrame.join():

    If on is a string or a list of strings indicating the name of the join column(s), the column(s) must exist on both sides, and this performs an equi-join.

    Since the keys are different, you can just use withColumn() (or withColumnRenamed(), sketched below) to create a column with the same name in both DataFrames:

    from pyspark.sql.functions import col

    A_B = A.alias("A").withColumn("id", col("a_id"))\
        .join(B.alias("B").withColumn("id", col("b_id")), on="id")\
        .select("A.*", "B.b1", "B.b2")
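
    The withColumnRenamed() variant is similar; a sketch, assuming you don't mind the key appearing as id rather than a_id in the result:

    # Rename the keys instead of adding a duplicate column, so the join
    # key appears once in the output as "id" instead of a_id/b_id
    A_B = A.withColumnRenamed("a_id", "id").alias("A")\
        .join(B.withColumnRenamed("b_id", "id").alias("B"), on="id")\
        .select("A.*", "B.b1", "B.b2")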
    

    If your DataFrames have long, complicated names, alias() also spares you from typing them out repeatedly:

    A_B = long_data_frame_name1.alias("A").withColumn("id", col("a_id"))\
        .join(long_data_frame_name2.alias("B").withColumn("id", col("b_id")), on="id")\
        .select("A.*", "B.b1", "B.b2")
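
    If you'd rather not touch the column names at all, on can also be a join expression (a Column) rather than a string, per the same docs. A sketch of that variant:

    # Join on an explicit condition instead of a shared column name;
    # neither DataFrame needs an extra "id" column this way
    A_B = A.alias("A").join(B.alias("B"), on=col("A.a_id") == col("B.b_id"))\
        .select("A.*", "B.b1", "B.b2")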
    
