Iteration over list slices

名媛妹妹 2020-11-30 00:59

I want an algorithm to iterate over list slices. The slice size is set outside the function and can differ.

In my mind it is something like:

for list_of_x         


        
9 answers
  • 2020-11-30 01:38

    If you want to divide a list into slices you can use this trick:

    list_of_slices = zip(*(iter(the_list),) * slice_size)
    

    For example

    >>> zip(*(iter(range(10)),) * 3)
    [(0, 1, 2), (3, 4, 5), (6, 7, 8)]
    

    If the number of items is not divisible by the slice size and you want to pad the list with None, you can do this:

    >>> map(None, *(iter(range(10)),) * 3)
    [(0, 1, 2), (3, 4, 5), (6, 7, 8), (9, None, None)]
    

    It is a dirty little trick


    OK, I'll explain how it works. It'll be tricky to explain but I'll try my best.

    First a little background:

    In Python you can multiply a list (or a tuple) by a number like this:

    [1, 2, 3] * 3 -> [1, 2, 3, 1, 2, 3, 1, 2, 3]
    ([1, 2, 3],) * 3 -> ([1, 2, 3], [1, 2, 3], [1, 2, 3])
    

    And an iterator object can be consumed once like this (the .next() method is Python 2; in Python 3 use next(l)):

    >>> l=iter([1, 2, 3])
    >>> l.next()
    1
    >>> l.next()
    2
    >>> l.next()
    3
    

    The zip function returns a list of tuples, where the i-th tuple contains the i-th element from each of the argument sequences or iterables. For example:

    zip([1, 2, 3], [20, 30, 40]) -> [(1, 20), (2, 30), (3, 40)]
    zip(*[(1, 20), (2, 30), (3, 40)]) -> [(1, 2, 3), (20, 30, 40)]
    

    The * in front of zip is used to unpack arguments. You can find more details here. So

    zip(*[(1, 20), (2, 30), (3, 40)])
    

    is actually equivalent to

    zip((1, 20), (2, 30), (3, 40))
    

    but works with a variable number of arguments

    Now back to the trick:

    list_of_slices = zip(*(iter(the_list),) * slice_size)
    

    iter(the_list) -> convert the list into an iterator

    (iter(the_list),) * N -> creates a tuple holding N references to the same list iterator.

    zip(*(iter(the_list),) * N) -> feeds those N iterator references into zip, which groups them into N-sized tuples. But since all N arguments are in fact references to the same iterator, each tuple is built by repeated calls to next() on the original iterator.

    I hope that explains it. I advise you to go with an easier-to-understand solution. I was only tempted to mention this trick because I like it.
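
For readers on Python 3 — where zip returns an iterator and the map(None, ...) padding form no longer exists — a sketch of the same trick, using itertools.zip_longest for the padded case:

```python
from itertools import zip_longest

the_list = list(range(10))
slice_size = 3

# One iterator, referenced slice_size times: zip pulls each tuple slot
# from the same underlying iterator, producing consecutive chunks.
# list() is needed because zip is lazy in Python 3.
list_of_slices = list(zip(*(iter(the_list),) * slice_size))

# zip_longest pads the final tuple with None instead of dropping the tail
padded = list(zip_longest(*(iter(the_list),) * slice_size))
```

Here list_of_slices is [(0, 1, 2), (3, 4, 5), (6, 7, 8)] (the trailing 9 is dropped), and padded ends with (9, None, None).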

  • 2020-11-30 01:44

    Expanding on @Ants Aasma's answer: in Python 3.7 the handling of the StopIteration exception changed (per PEP 479). A compatible version would be:

    from itertools import chain, islice
    
    def ichunked(seq, chunksize):
        it = iter(seq)
        while True:
            try:
                yield chain([next(it)], islice(it, chunksize - 1))
            except StopIteration:
                return
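
A quick check of this generator on Python 3. Note that each chunk must be consumed before requesting the next one, since all chunks draw from the same underlying iterator:

```python
from itertools import chain, islice

def ichunked(seq, chunksize):
    # yield chunks lazily; each chunk is a chain over the shared iterator
    it = iter(seq)
    while True:
        try:
            yield chain([next(it)], islice(it, chunksize - 1))
        except StopIteration:
            return

# list(c) consumes each chunk fully before the next is requested
chunks = [list(c) for c in ichunked(range(7), 3)]
```

This yields [[0, 1, 2], [3, 4, 5], [6]]; the last chunk is shorter, with no padding.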
    
  • 2020-11-30 01:52

    Do you mean something like:

    def callonslices(size, fatherList, foo):
      for i in xrange(0, len(fatherList), size):
        foo(fatherList[i:i+size])
    

    If this is roughly the functionality you want you might, if you desire, dress it up a bit in a generator:

    def sliceup(size, fatherList):
      for i in xrange(0, len(fatherList), size):
        yield fatherList[i:i+size]
    

    and then:

    def callonslices(size, fatherList, foo):
      for sli in sliceup(size, fatherList):
        foo(sli)
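
On Python 3 (range instead of xrange), the generator version can be exercised like this; the collecting list stands in for foo:

```python
def sliceup(size, fatherList):
    # step through the list in strides of `size`, yielding each slice;
    # the final slice may be shorter than `size`
    for i in range(0, len(fatherList), size):
        yield fatherList[i:i + size]

chunks = list(sliceup(4, list(range(10))))
```

Here chunks is [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]].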
    
  • 2020-11-30 01:53

    Your question could use some more detail, but how about:

    def iterate_over_slices(the_list, slice_size):
        # overlapping windows; +1 so the final full window is included
        for start in range(len(the_list) - slice_size + 1):
            foo(the_list[start:start + slice_size])
    
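Note that as posted this produces overlapping windows (the start index advances by 1, not by slice_size) and the range bound needs a +1 to reach the last full window. A sketch exercising the corrected version, with a stand-in foo (hypothetical here) that just records each window:

```python
def iterate_over_slices(the_list, slice_size, foo):
    # overlapping windows of length slice_size;
    # +1 so the final window is included
    for start in range(len(the_list) - slice_size + 1):
        foo(the_list[start:start + slice_size])

seen = []
iterate_over_slices([1, 2, 3, 4], 2, seen.append)
```

seen ends up as [[1, 2], [2, 3], [3, 4]].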
  • 2020-11-30 01:54

    For a near-one liner (after itertools import) in the vein of Nadia's answer dealing with non-chunk divisible sizes without padding:

    >>> import itertools as itt
    >>> chunksize = 5
    >>> myseq = range(18)
    >>> cnt = itt.count()
    >>> print [ tuple(grp) for k,grp in itt.groupby(myseq, key=lambda x: cnt.next()//chunksize%2)]
    [(0, 1, 2, 3, 4), (5, 6, 7, 8, 9), (10, 11, 12, 13, 14), (15, 16, 17)]
    

    If you want, you can get rid of the itertools.count() requirement using enumerate(), with a rather uglier:

    [ [e[1] for e in grp] for k,grp in itt.groupby(enumerate(myseq), key=lambda x: x[0]//chunksize%2) ]
    

    (In this example the enumerate() would be superfluous, but not all sequences are neat ranges like this, obviously)

    Nowhere near as neat as some other answers, but useful in a pinch, especially if already importing itertools.
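
A Python 3 rendering of the same groupby trick (next(cnt) instead of cnt.next(), and a list comprehension instead of the print statement):

```python
import itertools as itt

chunksize = 5
myseq = range(18)
cnt = itt.count()

# the key alternates 0,1,0,1,... every `chunksize` items, so groupby
# starts a new group at each chunk boundary
chunks = [tuple(grp) for k, grp in
          itt.groupby(myseq, key=lambda x: next(cnt) // chunksize % 2)]
```

As in the Python 2 version, chunks is [(0, 1, 2, 3, 4), (5, 6, 7, 8, 9), (10, 11, 12, 13, 14), (15, 16, 17)].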

  • 2020-11-30 01:55

    Use a generator:

    big_list = [1,2,3,4,5,6,7,8,9]
    slice_length = 3
    def sliceIterator(lst, sliceLen):
        for i in range(len(lst) - sliceLen + 1):
            yield lst[i:i + sliceLen]
    
    for slice in sliceIterator(big_list, slice_length):
        foo(slice)
    

    sliceIterator implements a "sliding window" of width sliceLen over the sequence lst, i.e. it produces overlapping slices: [1,2,3], [2,3,4], [3,4,5], ... Not sure if that is the OP's intention, though.
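
A self-contained Python 3 run of the sliding-window generator, collecting the windows into a list instead of calling an external foo:

```python
def sliceIterator(lst, sliceLen):
    # slide a window of width sliceLen forward one element at a time
    for i in range(len(lst) - sliceLen + 1):
        yield lst[i:i + sliceLen]

windows = list(sliceIterator([1, 2, 3, 4, 5], 3))
```

windows is [[1, 2, 3], [2, 3, 4], [3, 4, 5]], the overlapping slices described above.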
