Selecting empty array values from a Spark DataFrame

Submitted by 一世执手 on 2019-12-24 00:59:56

Question


Given a DataFrame with the following rows:

rows = [
    Row(col1='abc', col2=[8], col3=[18], col4=[16]),
    Row(col1='def', col2=[18], col3=[18], col4=[]),
    Row(col1='ghi', col2=[], col3=[], col4=[])]

I'd like to remove rows with an empty array for each of col2, col3 and col4 (i.e. the 3rd row).

For example I might expect this code to work:

df.where(~df.col2.isEmpty(), ~df.col3.isEmpty(), ~df.col4.isEmpty()).collect()

I have two problems:

  1. how to combine where clauses with and, but more importantly...
  2. how to determine if the array is empty.

So, is there a builtin function to query for empty arrays? Is there an elegant way to coerce an empty array to an na or null value?

I'm trying to avoid using Python to solve it, either with a UDF or .map().


Answer 1:


how to combine where clauses with and

To construct boolean expressions on columns you should use the &, | and ~ operators, so in your case it should be something like this:

~lit(True) & ~lit(False)

Since these operators have higher precedence than the comparison operators, for complex expressions you'll have to use parentheses:

(lit(1) > lit(2)) & (lit(3) > lit(4))
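The same precedence rule applies in plain Python, so it can be illustrated (and the reduce-based folding pattern the cast-to-string workaround below uses) without a running Spark session. The variable names here are purely illustrative:

```python
from functools import reduce

# & binds tighter than comparison operators, so without parentheses the
# expression is parsed as a > (2 & b) > 1 -- not what you want.
a, b = 3, 4
unparenthesized = a > 2 & b > 1      # parsed as a > (2 & b) > 1
correct = (a > 2) & (b > 1)

# Folding several conditions together with &; the same pattern works on
# a list of pyspark Column expressions.
conditions = [(a > 2), (b > 1), (a < b)]
combined = reduce(lambda x, y: x & y, conditions)
```

Here `unparenthesized` evaluates to False even though both comparisons are individually true, which is exactly the kind of bug the parentheses prevent.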

how to determine if the array is empty.

I am pretty sure there is no elegant way to handle this without a UDF. I guess you already know you can use a Python UDF like this:

from pyspark.sql.functions import udf
from pyspark.sql.types import BooleanType

isEmpty = udf(lambda x: len(x) == 0, BooleanType())
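The predicate the UDF wraps is plain Python, so it can be sanity-checked on ordinary lists before registering it; the Spark-side usage is sketched in the comments (assuming the `df` from the question):

```python
# The predicate the UDF wraps, checked on plain Python values first.
def is_empty(x):
    return len(x) == 0

# With a SparkSession available it would be wrapped and applied like this
# (sketch, not run here):
#   from pyspark.sql.functions import udf
#   from pyspark.sql.types import BooleanType
#   isEmpty = udf(is_empty, BooleanType())
#   df.where(~isEmpty(df.col2) & ~isEmpty(df.col3) & ~isEmpty(df.col4))
```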

It is also possible to use a Hive UDF:

df.registerTempTable("df")
query = "SELECT * FROM df WHERE {0}".format(
  " AND ".join("SIZE({0}) > 0".format(c) for c in ["col2", "col3", "col4"]))

sqlContext.sql(query)

The only feasible non-UDF solution that comes to mind is to cast to string:

from functools import reduce
from pyspark.sql.functions import col, lit
from pyspark.sql.types import StringType

cols = [
    col(c).cast(StringType()) != lit("ArrayBuffer()")
    for c in ["col2", "col3", "col4"]
]
cond = reduce(lambda x, y: x & y, cols)
df.where(cond)

but it smells from a mile away.

It is also possible to explode the arrays, groupBy, agg using count and join, but it is most likely far too expensive to be useful in any real-life scenario.

Probably the best approach to avoid UDFs and dirty hacks is to replace empty arrays with NULL.
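In Spark terms that coercion could be expressed with something like when(size(col(c)) > 0, col(c)) followed by df.na.drop(). The logic itself can be sketched on the question's rows in plain Python, following the answer's AND semantics (a row is kept only if every array column is non-empty):

```python
# Mirror the rows from the question as plain dicts.
rows = [
    {"col1": "abc", "col2": [8],  "col3": [18], "col4": [16]},
    {"col1": "def", "col2": [18], "col3": [18], "col4": []},
    {"col1": "ghi", "col2": [],   "col3": [],   "col4": []},
]
array_cols = ["col2", "col3", "col4"]

# Step 1: coerce empty arrays to None (Spark's NULL).
coerced = [
    {k: (v if k not in array_cols or v else None) for k, v in r.items()}
    for r in rows
]

# Step 2: drop rows with a None in any array column (what na.drop() does).
kept = [r for r in coerced if all(r[c] is not None for c in array_cols)]
```

Note that under the AND semantics this drops both the 'def' row (one empty column) and the 'ghi' row (all empty); if only fully-empty rows should go, combine the conditions with | instead.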



来源:https://stackoverflow.com/questions/32445351/selecting-empty-array-values-from-a-spark-dataframe
