Python: Memory usage and optimization when modifying lists

Asked 2021-02-04 03:28

The problem

My concern is the following: I am storing a relatively large dataset in a plain Python list, and in order to process the data I must iterate over the list repeatedly, removing items once they have been processed and are no longer needed.

7 Answers
  • 2021-02-04 03:56

    You do not provide enough information for me to answer this question really well. I don't know your use case well enough to tell you which data structures will give you the time complexities you want if you have to optimize for time. The typical solution is to build a new list rather than doing repeated deletions, but obviously this doubles(ish) memory usage.

    If you have memory usage issues, you might want to abandon using in-memory Python constructs and go with an on-disk database. Many databases are available and sqlite ships with Python. Depending on your usage and how tight your memory requirements are, array.array or numpy might help you, but this is highly dependent on what you need to do. array.array will have all the same time complexities as list and numpy arrays sort of will but work in some different ways. Using lazy iterators (like generators and the stuff in the itertools module) can often reduce memory usage by a factor of n.
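
    If you go the on-disk route, a minimal sketch with the sqlite3 module that ships with Python might look like this (the table and column names and the process()/still_needed() helpers are made up for illustration):

    import sqlite3
    
    # Minimal sketch of keeping the working set on disk instead of in a
    # Python list; all names here are hypothetical.
    conn = sqlite3.connect("working_set.db")
    conn.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, value TEXT)")
    conn.executemany("INSERT INTO items (value) VALUES (?)",
                     [("a",), ("b",), ("c",)])
    conn.commit()
    
    def process(value):        # stand-in for the real per-item work
        print("processing", value)
    
    def still_needed(value):   # stand-in for the keep/discard decision
        return value != "b"
    
    # Stream rows from disk, remember which ones are finished, then delete
    # them in one pass afterwards (avoids modifying the table mid-iteration).
    finished = []
    for row_id, value in conn.execute("SELECT id, value FROM items"):
        process(value)
        if not still_needed(value):
            finished.append((row_id,))
    conn.executemany("DELETE FROM items WHERE id = ?", finished)
    conn.commit()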

    Using a database will improve time to delete items from arbitrary locations (though order will be lost if this is important). Using a dict will do the same, but potentially at high memory usage.

    You can also consider blist as a drop-in replacement for a list that might get some of the compromises you want. I don't believe it will drastically increase memory usage, but it will change item removal to O(log n). This comes at the cost of making other operations more expensive, of course.
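
    As a rough sketch of what using it might look like (blist is a third-party package; I'm assuming its documented drop-in list API here, so check its docs before relying on this):

    # pip install blist -- blist aims to be a drop-in replacement for list,
    # so the usual list operations apply.
    from blist import blist
    
    somelist = blist(range(1_000_000))
    del somelist[123_456]           # removal from the middle is O(log n)
    somelist.insert(0, "new item")  # insertion is also O(log n)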

    I would have to see testing to believe that the constant factor for memory use for your doubly linked list implementation would be less than the 2 that you get by simply creating a new list. I really doubt it.

    You will have to share more about your problem class for a more concrete answer, I think, but the general advice is

    • Iterate over the list, building a new list as you go along (or using a generator to yield the items when you need them). If you actually need a list, this will have a memory factor of 2, which scales fine but doesn't help if you are short on memory, period.
    • If you are running out of memory, rather than micro-optimizing you probably want an on-disk database, or to store your data in a file.
  • 2021-02-04 03:59

    A doubly linked list is worse than just reallocating the list. A Python list uses 5 words + one word per element. A doubly linked list will use 5 words per element. Even if you use a singly linked list, it's still going to be 4 words per element - a lot worse than the less than 2 words per element that rebuilding the list will take.
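
    A rough way to eyeball those per-element overheads on 64-bit CPython (exact byte counts vary by version and platform):

    import sys
    
    # Rough illustration of the word counts above; numbers are approximate.
    n = 100_000
    refs = list(range(n))
    print(sys.getsizeof(refs) / n)   # list of references: roughly one 8-byte
                                     # word per element plus a small header
    
    class Node:                      # minimal doubly linked list node
        __slots__ = ("value", "prev", "next")
    
    print(sys.getsizeof(Node()))     # each node alone costs several words
                                     # (object header plus three pointer
                                     # slots), before counting the value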

    From a memory usage perspective, moving items up the list and deleting the slack at the end is the best approach. Python will release the memory if the list drops to less than half full. The question to ask yourself is whether it really matters. The list entries probably point to some data; unless you have lots of duplicate objects in the list, the memory used for the list itself is insignificant compared to the data. Given that, you might as well just build a new list.

    For building a new list, the approach you suggested is not that good. There's no apparent reason why you couldn't just go over the list once. Also, calling gc.collect() is unnecessary and actually harmful - the CPython reference counting will release the old list immediately anyway, and even the other garbage collectors are better off collecting when they hit memory pressure. So something like this will work:

    while processingdata:
        retained = []
        for item in somelist:
            dosomething(item)
            if not somecondition(item):
                retained.append(item)  # keep only the items still needed
        somelist = retained  # the old list is released by reference counting
    

    If you don't mind using side effects in list comprehensions, then the following is also an option:

    def process_and_decide(item):
        dosomething(item)
        return not somecondition(item)
    
    while processingdata:
        somelist = [item for item in somelist if process_and_decide(item)]
    

    The inplace method can also be refactored so the mechanism and business logic are separated:

    def inplace_filter(func, list_):
        # Compact the list in place: copy each kept item down into the next
        # free slot, then delete the unused tail in one step.
        pos = 0
        for item in list_:
            if func(item):
                list_[pos] = item
                pos += 1
        del list_[pos:]
    
    while processingdata:
        inplace_filter(process_and_decide, somelist)
    
  • 2021-02-04 04:00

    A set (or even a dict) might be what you're looking for. It's the same underlying structure as a dictionary (without the associated values), but your objects do need to be hashable.

    If order is important in your list/set, you could use an ordered set. There is a good recipe on ActiveState for an OrderedSet, and there is another slick suggestion in this answer. Python 2.7 and 3.1 also have an OrderedDict. You would have to test the implementation yourself to see how the overhead impacts you, but the speed gains from the hash table may well be worth it.
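
    As a rough standard-library-only sketch of the ordered-set idea (on Python 3.7+ a plain dict preserves insertion order, so its keys can stand in for an ordered set; items must be hashable, and the helper functions are stand-ins):

    def dosomething(item):          # stand-in for the real processing
        print("processing", item)
    
    def somecondition(item):        # stand-in for "this item can be dropped"
        return item == "b"
    
    somelist = ["a", "b", "c", "d"]
    ordered = dict.fromkeys(somelist)
    
    for item in list(ordered):      # iterate over a snapshot of the keys
        dosomething(item)
        if somecondition(item):
            del ordered[item]       # O(1) removal; order of the rest preserved
    
    somelist = list(ordered)        # -> ["a", "c", "d"]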

    Depending on what sort of comparisons you're making on the objects in the list, a heap (heapq module) might also fit your problem. The heap will minimize the number of operations required for inserting and removing items in the underlying list.
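
    For example, a minimal heapq sketch (assuming the items, or a key derived from them, are comparable):

    import heapq
    
    # Pushing and popping the smallest item are both O(log n), so repeatedly
    # pulling "the next item to process" stays cheap.
    heap = [5, 1, 4, 2]
    heapq.heapify(heap)             # O(n) one-time setup
    
    heapq.heappush(heap, 3)         # O(log n) insert
    smallest = heapq.heappop(heap)  # O(log n) removal of the minimum -> 1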

  • 2021-02-04 04:07

    Without knowing the specifics of what you're doing with this list, it's hard to know exactly what would be best in this case. If your processing stage depends on the current index of the list element, this won't work, but if not, it appears you've left off the most Pythonic (and in many ways, easiest) approach: generators.

    If all you're doing is iterating over each element, processing it in some way, then either including that element in the list or not, use a generator. Then you never need to store the entire iterable in memory.

    def process_and_generate_data(source_iterable):
        for item in source_iterable:
            dosomestuff(item)
            if not somecondition(item):
                yield item
    

    You would need a processing loop that deals with persisting the processed iterable (writing it back to a file, or whatever). Alternatively, if you have multiple processing stages that you'd prefer to keep in separate generators, you can have your processing loop pass one generator to the next, as sketched below.
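
    A rough sketch of chaining stages that way (the stage functions and file names are made up for illustration):

    def stage_one(source):              # hypothetical first processing stage
        for item in source:
            yield item.strip()
    
    def stage_two(source):              # hypothetical second stage / filter
        for item in source:
            if item:                    # drop empty lines
                yield item.upper()
    
    # The driving loop persists the results; only one item is in flight at
    # a time, so the full dataset never needs to sit in memory.
    with open("input.txt") as src, open("output.txt", "w") as dst:
        for item in stage_two(stage_one(src)):
            dst.write(item + "\n")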

  • 2021-02-04 04:12

    From your description it sounds like a deque ("deck") would be exactly what you are looking for:

    http://docs.python.org/library/collections.html#deque-objects

    "Iterate" across it by repeatedly calling pop() and then, if you want to keep the popped item in the deque, returning that item to the front with appendleft(item). To keep up with when you're done iterating and have seen everything in the deque, either put in a marker object like None that you watch for, or just ask for the deque's len() when you start a particular loop and use range() to pop() exactly that many items.

    I believe you will find all of the operations you need are then O(1).

  • 2021-02-04 04:14

    Python stores only references to the objects in the list, not the elements themselves. If you grow a list item by item, the list (that is, the list of references) grows one reference at a time until it reaches the end of the excess memory that Python preallocated at the end of the list (of references!). Python then copies the list (of references!) to a new, larger location, while your list elements stay where they were.

    As your code visits all the elements in the old list anyway, copying the references into a new list via new_list[i] = old_list[i] will be nearly no burden at all. The only performance hint is to allocate all the new elements at once instead of appending them one by one (although the Python docs say that amortized append is still O(1), because the amount of over-allocated space grows with the list size).

    If you lack the space for the new list (of references), then I fear you are out of luck: any data structure that avoids the O(n) in-place insert/delete is likely to be bigger than a simple array of 4- or 8-byte entries.
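
    As a small illustration of that over-allocation (CPython-specific; the exact growth pattern varies by version):

    import sys
    
    # Watch the allocated size jump in steps while the length grows one at a
    # time: the slack between jumps is the preallocated excess described above.
    lst = []
    last_size = sys.getsizeof(lst)
    for i in range(64):
        lst.append(i)
        size = sys.getsizeof(lst)
        if size != last_size:
            print(f"len={len(lst):3d}  size={size} bytes")
            last_size = size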
