I've got a large one-dimensional array of integers I need to take slices of. That's trivial, I'd just do a[start:end]. The problem is that I need more of th…
There is no numpy method to do this. Note that since the slices are irregular, the result could only be a list of arrays/slices anyway. However, I would like to add that for all (binary) ufuncs, which cover almost all functions in numpy (or the functions are at least based on them), there is the reduceat method. It can help you avoid actually creating a list of slices at all and thus, if the slices are small, speed up the calculations too:
In [1]: a = np.arange(10)
In [2]: np.add.reduceat(a, [0,4,7]) # add up 0:4, 4:7 and 7:end
Out[2]: array([ 6, 15, 24])
In [3]: np.maximum.reduceat(a, [0,4,7]) # maximum of each of those slices
Out[3]: array([3, 6, 9])
In [4]: w = np.asarray([0,4,7,10]) # 10 for the total length
In [5]: np.add.reduceat(a, w[:-1]).astype(float)/np.diff(w) # equivalent to mean
Out[5]: array([ 1.5, 5. , 8. ])
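For comparison, here is a minimal sketch of the explicit list-of-slices approach that reduceat lets you skip (the starts/ends arrays are just made up for illustration):

import numpy as np

a = np.arange(10)
starts = np.array([0, 4, 7])   # illustrative slice starts
ends = np.array([4, 7, 10])    # illustrative slice ends; the last one runs to len(a)

# Explicit approach: build one small array per slice, then reduce each in a Python loop.
slices = [a[s:e] for s, e in zip(starts, ends)]
sums_from_slices = np.array([s.sum() for s in slices])

# reduceat does the same reductions in a single call, without the intermediate arrays;
# each slice runs from one start index up to the next one (or to the end of a).
sums_reduceat = np.add.reduceat(a, starts)

print(sums_from_slices)   # [ 6 15 24]
print(sums_reduceat)      # [ 6 15 24]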
EDIT: Since your slices overlap, I will add that this is OK too:
# I assume that start is sorted for performance reasons.
reductions = np.column_stack((start, end)).ravel()
sums = np.add.reduceat(a, reductions)[::2]
The [::2] discards the reductions that reduceat performs between each end index and the following start index. Normally this is no big deal: for overlapping slices (an end index not smaller than the next start index) reduceat just picks out a single element there, so no real extra work is done.
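To make the interleaving concrete, here is a small sketch with hypothetical overlapping start/end arrays (not your actual data):

import numpy as np

a = np.arange(10)
start = np.array([0, 3, 5])   # hypothetical starts (sorted)
end = np.array([6, 8, 9])     # hypothetical ends: slices a[0:6], a[3:8], a[5:9]

# Interleave starts and ends: [0, 6, 3, 8, 5, 9].
reductions = np.column_stack((start, end)).ravel()

# [::2] keeps the reductions over each start:end and drops the ones
# between an end index and the following start index.
sums = np.add.reduceat(a, reductions)[::2]

check = np.array([a[s:e].sum() for s, e in zip(start, end)])
print(sums)    # [15 25 26]
print(check)   # [15 25 26]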
Also, there is one problem here with slices for which stop == len(a): reduceat does not accept an index equal to the length of the array, so this must be avoided. If you have exactly one such slice and it is the last one, you could just do reductions = reductions[:-1] (reduceat then reduces to the end of the array anyway); otherwise you will simply need to append a value to a to trick reduceat:
a = np.concatenate((a, [0]))
Adding one value to the end does not matter, since you only work on the slices anyway; the dummy element never ends up in one of the kept reductions.
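A short sketch of that workaround, again with made-up boundaries where the last slice ends exactly at len(a):

import numpy as np

a = np.arange(10)
start = np.array([0, 4])
end = np.array([7, 10])   # the last slice ends at len(a), which reduceat would reject

a_padded = np.concatenate((a, [0]))                   # dummy element so index 10 is valid
reductions = np.column_stack((start, end)).ravel()    # [0, 7, 4, 10]
sums = np.add.reduceat(a_padded, reductions)[::2]

check = np.array([a[s:e].sum() for s, e in zip(start, end)])
print(sums)    # [21 39]
print(check)   # [21 39]

The dummy value never contributes to a result: every kept reduction stops before its end index, so only the discarded odd-position reductions can touch it.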