PySpark - Retain null values when using collect_list

Submitted by 此生再无相见时 on 2019-12-12 10:58:16

Question


According to the accepted answer to pyspark collect_set or collect_list with groupby, when you apply collect_list to a column, the null values in that column are removed. I have checked, and this is true.

But in my case, I need to keep the null values -- how can I achieve this?

I have not found any information about a variant of the collect_list function that preserves nulls.


Background context to explain why I want nulls:

I have a dataframe df as below:

cId   |  eId  |  amount  |  city
1     |  2    |   20.0   |  Paris
1     |  2    |   30.0   |  Seoul
1     |  3    |   10.0   |  Phoenix
1     |  3    |   5.0    |  null

I want to write this to an Elasticsearch index with the following mapping:

"mappings": {
    "doc": {
        "properties": {
            "eId": { "type": "keyword" },
            "cId": { "type": "keyword" },
            "transactions": {
                "type": "nested", 
                "properties": {
                    "amount": { "type": "keyword" },
                    "city": { "type": "keyword" }
                }
            }
        }
    }
 }      
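
For reference, an index with this mapping can be created through the Elasticsearch REST API, for example with the Python requests library. This is a minimal sketch: the host and the index name transactions_idx are assumptions, and the single doc type implies a pre-7.x cluster.

import requests

# Assumptions: cluster reachable on localhost:9200, made-up index name "transactions_idx"
mapping = {"mappings": {"doc": {"properties": {
    "eId": {"type": "keyword"},
    "cId": {"type": "keyword"},
    "transactions": {"type": "nested", "properties": {
        "amount": {"type": "keyword"},
        "city": {"type": "keyword"}}}}}}}

requests.put("http://localhost:9200/transactions_idx", json=mapping).raise_for_status()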

In order to conform to the nested mapping above, I transformed my df so that for each combination of eId and cId, I have an array of transactions like this:

from pyspark.sql.functions import collect_list, struct

df_nested = df.groupBy('eId', 'cId').agg(collect_list(struct('amount', 'city')).alias("transactions"))
df_nested.printSchema()
root
 |-- cId: integer (nullable = true)
 |-- eId: integer (nullable = true)
 |-- transactions: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- amount: float (nullable = true)
 |    |    |-- city: string (nullable = true)
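
I then save df_nested as a JSON file, along these lines (the output path is an assumption):

df_nested.write.mode("overwrite").json("/tmp/df_nested")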

These are the JSON records that I get:

{"cId":1,"eId":2,"transactions":[{"amount":20.0,"city":"Paris"},{"amount":30.0,"city":"Seoul"}]}
{"cId":1,"eId":3,"transactions":[{"amount":10.0,"city":"Phoenix"},{"amount":30.0}]}

As you can see, for cId=1 and eId=3 the array element with amount=5.0 has no city attribute, because city was null in my original data (df). The null is dropped when I use the collect_list function.

However, when I try to write df_nested to Elasticsearch with the above index, it fails with a schema mismatch error. This is essentially why I want to retain my nulls after applying the collect_list function.
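
For reference, the write goes through the elasticsearch-hadoop connector, roughly like this (the node address and the index/type name are placeholders):

df_nested.write \
    .format("org.elasticsearch.spark.sql") \
    .option("es.nodes", "localhost:9200") \
    .mode("append") \
    .save("transactions_idx/doc")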



Answer 1:


This should give you what you need:

from pyspark.sql.functions import create_map, collect_list, lit, col, to_json

df = spark.createDataFrame([[1, 2, 20.0, "Paris"], [1, 2, 30.0, "Seoul"], 
    [1, 3, 10.0, "Phoenix"], [1, 3, 5.0, None]], 
    ["cId", "eId", "amount", "city"])

df_nested = df.withColumn(
        "transactions", 
         create_map(lit("city"), col("city"), lit("amount"), col("amount")))\
    .groupBy("eId","cId")\
    .agg(collect_list("transactions").alias("transactions"))

That gives me

+---+---+------------------------------------------------------------------+
|eId|cId|transactions                                                      |
+---+---+------------------------------------------------------------------+
|2  |1  |[[city -> Paris, amount -> 20.0], [city -> Seoul, amount -> 30.0]]|
|3  |1  |[[city -> Phoenix, amount -> 10.0], [city ->, amount -> 5.0]]     |
+---+---+------------------------------------------------------------------+
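
The reason this works: to_json keeps a map entry whose value is null (it comes out as "city":null), while a null struct field is omitted from the JSON output entirely. A quick way to see the difference on the original df (note that struct is not among the imports above):

from pyspark.sql.functions import struct

df.select(
    to_json(struct("amount", "city")).alias("as_struct"),
    to_json(create_map(lit("city"), col("city"), lit("amount"), col("amount"))).alias("as_map")
).show(truncate=False)

For the row where city is null, as_struct prints {"amount":5.0} while as_map prints {"city":null,"amount":"5.0"}.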

Then the JSON for your column of interest is as you want it to be:

>>> for row in df_nested.select(to_json("transactions").alias("json")).collect():
print(row["json"])

[{"city":"Paris","amount":"20.0"},{"city":"Seoul","amount":"30.0"}]
[{"city":"Phoenix","amount":"10.0"},{"city":null,"amount":"5.0"}]


Source: https://stackoverflow.com/questions/49395458/pypsark-retain-null-values-when-using-collect-list
