Dummy Encoding using Pyspark [duplicate]

Submitted by 拈花ヽ惹草 on 2019-12-10 13:57:04

Question


I am hoping to dummy encode my categorical variables into numerical variables, as shown in the image below, using Pyspark syntax.

I read in data like this

data = sqlContext.read.csv("data.txt", sep = ";", header = "true")

In Python I am able to encode my variables using the code below

data = pd.get_dummies(data, columns = ['Continent'])

However, I am not sure how to do it in Pyspark.

Any assistance would be greatly appreciated.


Answer 1:


Try this:

import pyspark.sql.functions as F

# Collect the distinct categories (here `df` is the dataframe read in above)
categ = df.select('Continent').distinct().rdd.flatMap(lambda x: x).collect()

# Build one 0/1 indicator column per category
exprs = [F.when(F.col('Continent') == cat, 1).otherwise(0).alias(str(cat))
         for cat in categ]

df = df.select(exprs + df.columns)

Drop `df.columns` from the select if you do not want the original columns in your transformed dataframe.
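To see what those `when`/`otherwise` expressions compute, here is a plain-Python sketch of the same logic on a small hypothetical sample (the country/continent rows are made up for illustration, not taken from the question's data):

```python
# Hypothetical sample rows standing in for the dataframe's contents
rows = [
    {"Country": "Japan",  "Continent": "Asia"},
    {"Country": "France", "Continent": "Europe"},
    {"Country": "Kenya",  "Continent": "Africa"},
]

# Distinct categories, like distinct().collect() in the answer
categories = sorted({r["Continent"] for r in rows})

# One 0/1 indicator per category, alongside the original columns
encoded = [
    {**{c: int(r["Continent"] == c) for c in categories}, **r}
    for r in rows
]

print(encoded[0])
```

Each row gains one indicator column per category, exactly one of which is 1, which is what the Spark `select(exprs + df.columns)` produces at scale.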



Source: https://stackoverflow.com/questions/46528207/dummy-encoding-using-pyspark
