How to use Pandas UDF Functionality in pyspark

Submitted by 别等时光非礼了梦想 on 2021-01-29 10:52:12

Question


I have a Spark DataFrame with two columns that looks like this:

+-------------------------------------------------------------+------------------------------------+
|docId                                                        |id                                  |
+-------------------------------------------------------------+------------------------------------+
|DYSDG6-RTB-91d663dd-949e-45da-94dd-e604b6050cb5-1537142434000|91d663dd-949e-45da-94dd-e604b6050cb5|
|VAVLS7-RTB-8e2c1917-0d6b-419b-a59e-cd4acc255bb7-1537142445000|8e2c1917-0d6b-419b-a59e-cd4acc255bb7|
|VAVLS7-RTB-c818dcde-7a68-4c1e-9cc4-c841660732d2-1537146854000|c818dcde-7a68-4c1e-9cc4-c841660732d2|
|IW2BYL-RTB-E9727F7D-D1BA-479C-9D3A-931F87E78B0A-1537146572000|E9727F7D-D1BA-479C-9D3A-931F87E78B0A|
|DYSDG6-RTB-f50f79e9-3ec3-4bd8-8e53-f62c3f80bcb0-1537146220000|f50f79e9-3ec3-4bd8-8e53-f62c3f80bcb0|
+-------------------------------------------------------------+------------------------------------+

I have a function that converts the id column into an Ascii85-encoded string:

def convert_id(id):
    import base64 as bs
    # Strip the dashes so the UUID becomes a plain 32-character hex string
    id_str = str(id).replace("-", "")
    # Ascii85-encode the 16 raw bytes, then strip the b'...' wrapper from the repr
    return str(bs.a85encode(bytearray.fromhex(id_str)))[2:-1]
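As a minimal standalone sanity check (plain Python, no Spark), here is the same function applied to the first id from the table above; 16 raw UUID bytes always encode to 20 Ascii85 characters, and the encoding is reversible with `base64.a85decode`:

```python
import base64 as bs

def convert_id(id):
    id_str = str(id).replace("-", "")
    return str(bs.a85encode(bytearray.fromhex(id_str)))[2:-1]

# One of the sample ids from the DataFrame above
encoded = convert_id("91d663dd-949e-45da-94dd-e604b6050cb5")
print(encoded, len(encoded))
```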

I want to do this transformation using a pandas UDF, which is reported to be faster than a normal UDF.

How can I achieve this? TIA.


Answer 1:


Done. A simple function can achieve this:

from pyspark.sql.functions import pandas_udf
from pyspark.sql.types import StringType
import base64 as bs

@pandas_udf(returnType=StringType())
def convert_id(id):
    # `id` arrives as a pandas Series, so map the conversion over every value
    return id.map(lambda x: str(bs.a85encode(bytearray.fromhex(str(x).replace("-", ""))))[2:-1])
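A pandas UDF receives each batch of the column as a pandas Series, so the core Series-to-Series logic can be sanity-checked with pandas alone. Below is a sketch using two sample ids from the question; the column name `encoded_id` in the usage note afterwards is hypothetical:

```python
import base64 as bs
import pandas as pd

# Sample ids from the question's DataFrame
ids = pd.Series([
    "91d663dd-949e-45da-94dd-e604b6050cb5",
    "8e2c1917-0d6b-419b-a59e-cd4acc255bb7",
])

# The same Series -> Series mapping the pandas UDF runs on each batch
converted = ids.map(
    lambda x: str(bs.a85encode(bytearray.fromhex(str(x).replace("-", ""))))[2:-1]
)
print(converted.tolist())
```

With a live SparkSession, applying the UDF is just `df.withColumn("encoded_id", convert_id(df["id"]))`; Spark ships the column to the UDF in Arrow-backed batches, which is where the speedup over row-at-a-time UDFs comes from.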


Source: https://stackoverflow.com/questions/52401542/how-to-use-pandas-udf-functionality-in-pyspark
