How to extract floats from vector columns in PySpark?


Question


My Spark DataFrame stores each value inside a single-element vector, and printSchema() shows that every column is of type vector.
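
For reference, a DataFrame whose columns are ML vectors prints a schema along these lines (a sketch, assuming two columns col1 and col2):

>>> df.printSchema()
root
 |-- col1: vector (nullable = true)
 |-- col2: vector (nullable = true)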

I tried to extract the value from between [ and ] using the code below (for one column, col1):

from pyspark.sql.functions import udf
from pyspark.sql.types import FloatType

firstelement = udf(lambda v: float(v[0]), FloatType())
df.select(firstelement('col1')).show()

However, how can I apply it to all columns of df?


Answer 1:


1. Extract first element of a single vector column:

To get the first element of a vector column, you can use the approach from this SO discussion: Access element of a vector in a Spark DataFrame (Logistic Regression probability vector)

Here's a reproducible example:

>>> from pyspark.sql import functions as f
>>> from pyspark.sql.types import FloatType
>>> df = spark.createDataFrame([{"col1": [0.2], "col2": [0.25]},
                                {"col1": [0.45], "col2":[0.85]}])
>>> df.show()
+------+------+
|  col1|  col2|
+------+------+
| [0.2]|[0.25]|
|[0.45]|[0.85]|
+------+------+

>>> firstelement = f.udf(lambda v: float(v[0]), FloatType())
>>> df.withColumn("col1", firstelement("col1")).show()
+----+------+
|col1|  col2|
+----+------+
| 0.2|[0.25]|
|0.45|[0.85]|
+----+------+
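
If you are on Spark 3.0 or later, the same extraction works without a UDF via pyspark.ml.functions.vector_to_array, which converts an ML vector column into a plain array that supports ordinary indexing. A minimal sketch, assuming the same df as above:

>>> from pyspark.ml.functions import vector_to_array
>>> df.withColumn("col1", vector_to_array("col1")[0]).show()
+----+------+
|col1|  col2|
+----+------+
| 0.2|[0.25]|
|0.45|[0.85]|
+----+------+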

2. Extract first element of multiple vector columns:

To generalize the above solution to multiple columns, apply the UDF to every column with a list comprehension inside select. Here's an example:

>>> from pyspark.sql import functions as f
>>> from pyspark.sql.types import FloatType

>>> df = spark.createDataFrame([{"col1": [0.2], "col2": [0.25]},
                                {"col1": [0.45], "col2":[0.85]}])
>>> df.show()
+------+------+
|  col1|  col2|
+------+------+
| [0.2]|[0.25]|
|[0.45]|[0.85]|
+------+------+

>>> firstelement = f.udf(lambda v: float(v[0]), FloatType())
>>> df = df.select([firstelement(c).alias(c) for c in df.columns])
>>> df.show()
+----+----+
|col1|col2|
+----+----+
| 0.2|0.25|
|0.45|0.85|
+----+----+
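
On Spark 3.0+, the same result is possible without a UDF by combining vector_to_array with the list comprehension (a sketch, assuming df still holds the original vector columns):

>>> from pyspark.ml.functions import vector_to_array
>>> df.select([vector_to_array(c)[0].alias(c) for c in df.columns]).show()
+----+----+
|col1|col2|
+----+----+
| 0.2|0.25|
|0.45|0.85|
+----+----+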



Answer 2:


As I understand your problem, you do not need a UDF to unwrap the values; you can use PySpark's built-in concat_ws function instead. Note that this works on array columns (the printSchema() below shows array<string>, not an ML vector) and that concat_ws yields a string, not a float.

>>> from pyspark.sql.functions import *
>>> df.show()
+------+
|   num|
+------+
| [211]|
|[3412]|
| [121]|
| [121]|
|  [34]|
|[1441]|
+------+

>>> df.printSchema()
root
 |-- num: array (nullable = true)
 |    |-- element: string (containsNull = true)

>>> df.withColumn("num", concat_ws("", col("num"))).show()
+----+
| num|
+----+
| 211|
|3412|
| 121|
| 121|
|  34|
|1441|
+----+
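
Since the column here is an array rather than an ML vector, you can also take the first element directly and cast it, avoiding the string round-trip (a minimal sketch using the same num column):

>>> df.withColumn("num", col("num")[0].cast("float")).show()
+------+
|   num|
+------+
| 211.0|
|3412.0|
| 121.0|
| 121.0|
|  34.0|
|1441.0|
+------+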


Source: https://stackoverflow.com/questions/60287632/how-to-extract-floats-from-vector-columns-in-pyspark
