Fast/efficient counting of a list of space-delimited strings in Python

Submitted by 痴心易碎 on 2020-01-06 03:06:10

Question


Given the input:

x = ['foo bar', 'bar blah', 'black sheep']

I could do this to get the count of each word in the list of space-delimited strings:

from itertools import chain
from collections import Counter
c = Counter(chain(*map(str.split, x)))

Or I could simply iterate through and get:

c = Counter()
for sent in x:
    for word in sent.split():
        c[word] += 1

[out]:

Counter({'bar': 2, 'sheep': 1, 'blah': 1, 'foo': 1, 'black': 1})

The question is: which is more efficient if the input list of strings is extremely large? And are there other ways to achieve the same Counter object?

Imagine it's a text file object that has billions of lines with 10-20 words each.


Answer 1:


Assuming you are on Python 3.x, both approaches create one intermediate list per line. The simple loop builds and discards a single split list at a time. chain(*map(str.split, x)) is slightly worse on memory: the * unpacking consumes the entire map up front, so every per-line list exists simultaneously before chaining even begins (chain.from_iterable avoids that and stays lazy). In practice neither will take up much memory, and performance should be very close and may be implementation-dependent.
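For reference, here is the lazy variant; chain.from_iterable is the standard itertools constructor for exactly this case:

from itertools import chain
from collections import Counter

# from_iterable pulls one split list at a time from the lazy map(),
# rather than unpacking every list into chain()'s argument tuple
c = Counter(chain.from_iterable(map(str.split, x)))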

However, the most memory-efficient option is to feed Counter() from a generator function. Any use of str.split() creates intermediate lists that are not strictly necessary; this could cause a slowdown on a particularly long line, though honestly that is unlikely.

Such a generator function is described below. Note that I am using optional typing for clarity.

from typing import Iterable, Iterator
from collections import Counter

def gen_words(strings: Iterable[str]) -> Iterator[str]:
    for string in strings:
        start = 0
        for i, char in enumerate(string):
            if char.isspace():  # mirror str.split(): any whitespace separates words
                if start != i:  # skip runs of consecutive separators
                    yield string[start:i]
                start = i + 1  # resume scanning just past the separator
        if start != len(string):  # emit the trailing word, if any
            yield string[start:]

c = Counter(gen_words(x))
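Since gen_words() never materializes per-line lists, it can also consume a file object directly, which matches the billions-of-lines scenario in the question. A minimal sketch, assuming a hypothetical file big.txt (iterating over a file object yields one line at a time):

from collections import Counter

# 'big.txt' is a hypothetical name; a file object is iterated lazily,
# so only the current line is ever held in memory
with open('big.txt') as f:
    c = Counter(gen_words(f))

Because the generator treats any whitespace as a separator, the trailing newline on each line is handled without an explicit strip.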



Answer 2:


The answer to your question is profiling.

Following are some profiling tools:

  • print time.time() at strategic places (or use the Unix time command); see the timing sketch after this list
  • cProfile
  • line_profiler
  • heapy tracks all objects inside Python’s memory (good for memory leaks)
  • For long-running systems, use dowser: it allows live object introspection via a web browser interface
  • memory_profiler for RAM usage
  • examine Python bytecode with dis
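To actually answer "which is faster" for the two snippets in the question, a timing harness is usually the first step. A minimal sketch using the standard timeit module (the input size and repetition count are arbitrary):

import timeit
from collections import Counter
from itertools import chain

x = ['foo bar', 'bar blah', 'black sheep'] * 100000

def chain_version():
    return Counter(chain.from_iterable(map(str.split, x)))

def loop_version():
    c = Counter()
    for sent in x:
        for word in sent.split():
            c[word] += 1
    return c

# timeit.timeit accepts a zero-argument callable and returns total seconds
print('chain:', timeit.timeit(chain_version, number=10))
print('loop: ', timeit.timeit(loop_version, number=10))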


Source: https://stackoverflow.com/questions/43623668/fast-efficient-counting-of-list-of-space-delimited-strings-in-python
