pythonic way to parse/split URLs in a pandas dataframe

You can use Series.map to accomplish the same in one line (urlsplit lives in urllib.parse in Python 3, or in the urlparse module in Python 2):

df['protocol'], df['domain'], df['path'], df['query'], df['fragment'] = zip(*df['url'].map(urlsplit))

Using timeit, this ran in 2.31 ms per loop instead of 179 ms per loop for the original method, when run on 186 URLs. (Note, however, that the code is not optimized for duplicates and will run the same URL through urlsplit multiple times.)
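
If your data does contain many repeated URLs, one way to avoid the re-parsing is to split only the unique values and map the results back. A minimal sketch of that idea (the drop_duplicates step and the lookup dict are my own addition, not part of the original answer):

from urllib.parse import urlsplit

unique = df['url'].drop_duplicates()
lookup = dict(zip(unique, unique.map(urlsplit)))  # parse each distinct URL only once

df['protocol'], df['domain'], df['path'], df['query'], df['fragment'] = zip(*df['url'].map(lookup))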

Full Code:

import pandas
from urllib.parse import urlsplit  # Python 2: from urlparse import urlsplit

urls = ['https://www.google.com/something','https://mail.google.com/anothersomething','https://www.amazon.com/yetanotherthing'] # tested with a list of 186 urls instead
df = pandas.DataFrame(urls, columns=['url'])
df['protocol'], df['domain'], df['path'], df['query'], df['fragment'] = zip(*df['url'].map(urlsplit))
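
As a quick sanity check, printing a few of the new columns for the three sample URLs gives output along these lines:

print(df[['protocol', 'domain', 'path']])
#   protocol           domain               path
# 0    https   www.google.com         /something
# 1    https  mail.google.com  /anothersomething
# 2    https   www.amazon.com   /yetanotherthing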

I think there are too many lookups happening when you write back to the df (roughly the pattern sketched below). Each df.loc[row_index, ...] has to check against as many rows as you've got URLs in total (the size of df.url). That means you first scan all the rows at least once to find the unique URLs, then for each URL you scan again to find the matching rows, then again for each write. So, assuming unique takes only one full scan, you're scanning the table on average 1 + N + (5N/2) times, when a single pass should really be enough.
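
For reference, the kind of per-URL write-back being described looks roughly like the following (a reconstruction of the pattern, not the asker's exact code):

from urllib.parse import urlsplit

for url in df['url'].unique():                           # one full scan to find unique URLs
    parts = urlsplit(url)
    df.loc[df['url'] == url, 'protocol'] = parts.scheme  # each write rescans all rows
    df.loc[df['url'] == url, 'domain'] = parts.netloc
    df.loc[df['url'] == url, 'path'] = parts.path
    df.loc[df['url'] == url, 'query'] = parts.query
    df.loc[df['url'] == url, 'fragment'] = parts.fragment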

Unless you've got a huge number of repetitions, you could just ignore the duplicates, traverse df row by row, and make sure you're using the integer index (.iloc) for each iteration. If you're not storing other data in the row, you can also assign all fields at once:

df.iloc[idx] = {'protocol': ..., 'domain': ..., ...}
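
A fleshed-out version of that row-by-row approach might look like this (a sketch under my own assumptions: the five columns are pre-created up front, and col_pos is a hypothetical helper mapping column names to integer positions for .iloc):

import pandas as pd
from urllib.parse import urlsplit

df = pd.DataFrame({'url': urls})
cols = ['protocol', 'domain', 'path', 'query', 'fragment']
for col in cols:
    df[col] = ''                                   # pre-create the target columns

col_pos = [df.columns.get_loc(c) for c in cols]    # integer positions for .iloc writes
for idx in range(len(df)):
    # urlsplit returns a 5-tuple: (scheme, netloc, path, query, fragment)
    df.iloc[idx, col_pos] = list(urlsplit(df['url'].iat[idx]))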