pandas: How do I split text in a column into multiple rows?

说谎 2020-11-22 09:47

I'm working with a large csv file and the next to last column has a string of text that I want to split by a specific delimiter. I was wondering if there is a simple way to do this using pandas or python?

7 Answers
  •  醉酒成梦
    2020-11-22 10:10

    Unlike Dan, I consider his answer quite elegant... but unfortunately it is also very inefficient. So, since the question mentions "a large csv file", let me suggest timing Dan's solution in a shell:

    time python -c "import pandas as pd;
    df = pd.DataFrame(['a b c']*100000, columns=['col']);
    print(df['col'].apply(lambda x: pd.Series(x.split(' '))).head())"
    

    ... compared to this alternative:

    time python -c "import pandas as pd;
    from numpy import concatenate;
    df = pd.DataFrame(['a b c']*100000, columns=['col']);
    print(pd.DataFrame(concatenate(df['col'].apply(lambda x: [x.split(' ')]))).head())"
    

    ... and this:

    time python -c "import pandas as pd;
    df = pd.DataFrame(['a b c']*100000, columns=['col']);
    print(pd.DataFrame(dict(zip(range(3), [df['col'].apply(lambda x: x.split(' ')[i]) for i in range(3)]))).head())"
    

    The second simply refrains from allocating 100,000 Series, and that alone makes it around 10 times faster. But the third solution, which somewhat ironically wastes a lot of calls to str.split() (it is called once per column per row, so three times more often than in the other two solutions), is around 40 times faster than the first, because it even avoids instantiating the 100,000 lists. And yes, it is certainly a little ugly...
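
    For reference, roughly the same comparison can be reproduced in-process with timeit instead of the shell's time. The sketch below is only an illustration (the function names are made up here, and the 10x/40x factors quoted above come from the shell timings):

    import timeit
    import numpy as np
    import pandas as pd
    df = pd.DataFrame(['a b c']*100000, columns=['col'])
    def one_series_per_row():
        # first variant: allocates one pd.Series per row
        return df['col'].apply(lambda x: pd.Series(x.split(' ')))
    def concatenate_lists():
        # second variant: one list per row, concatenated into a single 2-d array
        return pd.DataFrame(np.concatenate(df['col'].apply(lambda x: [x.split(' ')])))
    def one_split_per_column():
        # third variant: str.split() is called once per column per row
        return pd.DataFrame(dict(zip(range(3),
            [df['col'].apply(lambda x: x.split(' ')[i]) for i in range(3)])))
    for fn in (one_series_per_row, concatenate_lists, one_split_per_column):
        print(fn.__name__, round(timeit.timeit(fn, number=1), 2), 's')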

    EDIT: this answer suggests using tolist() to avoid the need for a lambda. The result is something like

    time python -c "import pandas as pd;
    df = pd.DataFrame(['a b c']*100000, columns=['col']);
    print(pd.DataFrame(df.col.str.split().tolist()).head())"
    

    which is even more efficient than the third solution, and certainly much more elegant.
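
    In practice one usually also wants to name the new columns and attach them back to the original frame; a minimal sketch of that (the column names 'first', 'second', 'third' are made up for illustration):

    import pandas as pd
    df = pd.DataFrame(['a b c']*5, columns=['col'])
    # split into a new frame, reusing the original index so join() lines up
    parts = pd.DataFrame(df['col'].str.split().tolist(),
                         columns=['first', 'second', 'third'], index=df.index)
    print(df.join(parts).head())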

    EDIT: the even simpler

    time python -c "import pandas as pd;
    df = pd.DataFrame(['a b c']*100000, columns=['col']);
    print(pd.DataFrame(list(df.col.str.split())).head())"
    

    works too, and is almost as efficient.

    EDIT: even simpler! And handles NaNs (but less efficient):

    time python -c "import pandas as pd;
    df = pd.DataFrame(['a b c']*100000, columns=['col']);
    print(df.col.str.split(expand=True).head())"
    
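    To illustrate the NaN point, here is a small example of mine (not part of the original answer): with str.split(expand=True) a missing value simply comes through as a row of NaN, whereas the tolist()-based constructions above do not handle missing values cleanly.

    import numpy as np
    import pandas as pd
    df = pd.DataFrame({'col': ['a b c', np.nan, 'd e f']})
    # the missing value propagates as NaN in every resulting column
    print(df['col'].str.split(expand=True))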
