how to merge two columns with a condition in pyspark?

Posted by 荒凉一梦 on 2019-12-10 17:55:37

Question


I was able to merge and sort the values, but I can't figure out how to add a condition that skips the merge when the two values are equal.

from pyspark.sql.functions import col, concat, lit, split, sort_array

df = sqlContext.createDataFrame(
    [("foo", "bar", "too", "aaa"), ("bar", "bar", "aaa", "foo")],
    ("k", "K", "v", "V"))
columns = df.columns

k = 0
for i in range(len(columns)):
    for j in range(i + 1, len(columns)):
        if columns[i].lower() == columns[j].lower():
            k = k + 1
            # concatenate the two case-matching columns into a new one, e.g. "k1"
            df = df.withColumn(columns[i] + str(k),
                               concat(col(columns[i]), lit(","), col(columns[j])))
            newdf = df.select(col("k"),
                              split(col(columns[i] + str(k)), r",\s*").alias("c1"))
            sortDf = newdf.select(newdf.k, sort_array(newdf.c1).alias("sorted_c1"))
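The nested loop above is really just pairing up columns whose names match case-insensitively; that part can be sketched and checked in plain Python, independent of Spark (the column names below mirror the question's DataFrame):

```python
# Pair up columns whose names are equal ignoring case,
# mirroring the nested loop over df.columns above.
columns = ["k", "K", "v", "V"]

pairs = [
    (columns[i], columns[j])
    for i in range(len(columns))
    for j in range(i + 1, len(columns))
    if columns[i].lower() == columns[j].lower()
]

print(pairs)  # [('k', 'K'), ('v', 'V')]
```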

In the table below, columns k and K should merge to [foo,bar] in the first row, but not to [bar,bar] in the second row, since those values are equal:

Input:

+---+---+---+---+
|  k|  K|  v|  V|
+---+---+---+---+
|foo|bar|too|aaa|
|bar|bar|aaa|foo|
+---+---+---+---+

Output:

+---+---+---------+---------+
|  k|  K| Merged K| Merged V|
+---+---+---------+---------+
|foo|bar|[foo,bar]|[too,aaa]|
|bar|bar|bar      |[aaa,foo]|
+---+---+---------+---------+

Answer 1:


Try:

from pyspark.sql.functions import udf

def merge(*c):
    # Deduplicate and sort the incoming values; if only one distinct
    # value remains, return it as-is, otherwise return a bracketed,
    # comma-joined string.
    merged = sorted(set(c))
    if len(merged) == 1:
        return merged[0]
    else:
        return "[{0}]".format(",".join(merged))

merge_udf = udf(merge)

df = sqlContext.createDataFrame(
    [("foo", "bar", "too", "aaa"), ("bar", "bar", "aaa", "foo")],
    ("k1", "k2", "v1", "v2"))

df.select(merge_udf("k1", "k2"), merge_udf("v1", "v2"))
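Since merge is ordinary Python, its conditional behavior can be verified outside Spark before wrapping it in a udf (same function body as in the answer):

```python
def merge(*c):
    # Same logic as the udf above: drop duplicates, sort,
    # and only bracket the result when the values actually differ.
    merged = sorted(set(c))
    if len(merged) == 1:
        return merged[0]
    return "[{0}]".format(",".join(merged))

print(merge("foo", "bar"))  # [bar,foo]
print(merge("bar", "bar"))  # bar
```

Note that sorting yields [bar,foo] rather than the [foo,bar] shown in the desired output; drop the sorted() call if the original column order matters.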


Source: https://stackoverflow.com/questions/40643550/how-to-merge-two-columns-with-a-condition-in-pyspark
