NumPy: function for simultaneous max() and min()

天涯浪人 2020-11-27 04:54

numpy.amax() will find the max value in an array, and numpy.amin() does the same for the min value. If I want to find both max and min, I have to call both functions, which requires passing over the (big) array twice, which seems slow. Is there a function in the numpy API that finds both max and min with only a single pass through the data?

12 answers
  • 2020-11-27 05:15

    Just to get some ideas on the numbers one could expect, given the following approaches:

    import numpy as np
    
    
    def extrema_np(arr):
        return np.max(arr), np.min(arr)
    
    import numba as nb
    
    
    @nb.jit(nopython=True)
    def extrema_loop_nb(arr):
        n = arr.size
        max_val = min_val = arr[0]
        for i in range(1, n):
            item = arr[i]
            if item > max_val:
                max_val = item
            elif item < min_val:
                min_val = item
        return max_val, min_val
    
    import numba as nb
    
    
    @nb.jit(nopython=True)
    def extrema_while_nb(arr):
        n = arr.size
        odd = n % 2
        if not odd:
            n -= 1
        max_val = min_val = arr[0]
        i = 1
        while i < n:
            x = arr[i]
            y = arr[i + 1]
            if x > y:
                x, y = y, x
            min_val = min(x, min_val)
            max_val = max(y, max_val)
            i += 2
        if not odd:
            x = arr[n]
            min_val = min(x, min_val)
            max_val = max(x, max_val)
        return max_val, min_val
    
    %%cython -c-O3 -c-march=native -a
    #cython: language_level=3, boundscheck=False, wraparound=False, initializedcheck=False, cdivision=True, infer_types=True
    
    
    import numpy as np
    
    
    cdef void _extrema_loop_cy(
            long[:] arr,
            size_t n,
            long[:] result):
        cdef size_t i
        cdef long item, max_val, min_val
        max_val = arr[0]
        min_val = arr[0]
        for i in range(1, n):
            item = arr[i]
            if item > max_val:
                max_val = item
            elif item < min_val:
                min_val = item
        result[0] = max_val
        result[1] = min_val
    
    
    def extrema_loop_cy(arr):
        result = np.zeros(2, dtype=arr.dtype)
        _extrema_loop_cy(arr, arr.size, result)
        return result[0], result[1]
    
    %%cython -c-O3 -c-march=native -a
    #cython: language_level=3, boundscheck=False, wraparound=False, initializedcheck=False, cdivision=True, infer_types=True
    
    
    import numpy as np
    
    
    cdef void _extrema_while_cy(
            long[:] arr,
            size_t n,
            long[:] result):
        cdef size_t i, odd
        cdef long x, y, max_val, min_val
        odd = n % 2
        if not odd:
            n -= 1
        max_val = min_val = arr[0]
        i = 1
        while i < n:
            x = arr[i]
            y = arr[i + 1]
            if x > y:
                x, y = y, x
            min_val = min(x, min_val)
            max_val = max(y, max_val)
            i += 2
        if not odd:
            x = arr[n]
            min_val = min(x, min_val)
            max_val = max(x, max_val)
        result[0] = max_val
        result[1] = min_val
    
    
    def extrema_while_cy(arr):
        result = np.zeros(2, dtype=arr.dtype)
        _extrema_while_cy(arr, arr.size, result)
        return result[0], result[1]
    

    (the extrema_loop_*() approaches are similar to what is proposed here, while the extrema_while_*() approaches are based on the code from here)

    The timings (benchmark plot omitted here) indicate that the extrema_while_*() approaches are the fastest, with extrema_while_nb() being the fastest overall. In any case, the extrema_loop_nb() and extrema_loop_cy() solutions also outperform the NumPy-only approach (using np.max() and np.min() separately).

    Finally, note that none of these is as flexible as np.min()/np.max() (in terms of n-dim support, axis parameter, etc.).

    (full code is available here)
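    For quick sanity-checking without compiling anything, the variants above can be validated against each other in pure Python plus NumPy (a minimal harness; the function names here are mine, mirroring the single-pass logic that the Numba/Cython versions compile):

    ```python
    import numpy as np

    def extrema_np(arr):
        # baseline: two separate reductions, two passes over the data
        return np.max(arr), np.min(arr)

    def extrema_loop(arr):
        # the single-pass logic that extrema_loop_nb()/extrema_loop_cy() compile
        max_val = min_val = arr[0]
        for item in arr[1:]:
            if item > max_val:
                max_val = item
            elif item < min_val:
                min_val = item
        return max_val, min_val

    a = np.random.randint(-1000, 1000, size=1000)
    assert extrema_np(a) == extrema_loop(a)
    ```

    This pure-Python loop is of course far slower than either reduction; its only job is to confirm that all variants agree before you time the compiled ones.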

  • 2020-11-27 05:16

    This is an old thread, but anyway, if anyone ever looks at this again...

    When looking for the min and max simultaneously, it is possible to reduce the number of comparisons. If you are comparing floats (which I guess you are), this might save you some time, although it won't change the asymptotic complexity.

    Instead of (Python code):

    _max = ar[0]
    _min = ar[0]
    for ii in range(len(ar)):
        if ar[ii] > _max: _max = ar[ii]
        if ar[ii] < _min: _min = ar[ii]
    

    you can first compare two adjacent values in the array, and then only compare the smaller one against current minimum, and the larger one against current maximum:

    ## for an even-sized array
    _max = ar[0]
    _min = ar[0]
    for ii in range(0, len(ar), 2):  ## iterate over pairs of values in the array
        f1 = ar[ii]
        f2 = ar[ii + 1]
        if f1 < f2:
            if f1 < _min: _min = f1
            if f2 > _max: _max = f2
        else:
            if f2 < _min: _min = f2
            if f1 > _max: _max = f1
    

    The code here is written in Python; for speed you would clearly use C, Fortran, or Cython, but this way you do 3 comparisons per iteration over len(ar)/2 iterations, giving 3/2 * len(ar) comparisons in total. Done "the obvious way", you do 2 comparisons per iteration, or 2 * len(ar) comparisons. That saves you 25% of the comparison cost.
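    A complete, runnable version of this pairwise scheme, also handling odd-length input (the function name is mine, not from the original post):

    ```python
    def pairwise_minmax(ar):
        # ~1.5 comparisons per element: 1 to order each pair, then 1 against
        # each running extreme
        n = len(ar)
        if n % 2:  # odd length: seed the extremes with the first element
            _min = _max = ar[0]
            i = 1
        else:      # even length: seed with the ordered first pair
            if ar[0] < ar[1]:
                _min, _max = ar[0], ar[1]
            else:
                _min, _max = ar[1], ar[0]
            i = 2
        while i < n:
            f1, f2 = ar[i], ar[i + 1]
            if f1 > f2:
                f1, f2 = f2, f1
            if f1 < _min: _min = f1
            if f2 > _max: _max = f2
            i += 2
        return _min, _max
    ```

    For example, `pairwise_minmax([3, 1, 4, 1, 5])` gives `(1, 5)`.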

    Maybe someone one day will find this useful.

  • 2020-11-27 05:17

    Nobody mentioned numpy.percentile, so I thought I would. If you ask for [0, 100] percentiles, it will give you an array of two elements, the min (0th percentile) and the max (100th percentile).

    However, it doesn't satisfy the OP's purpose: it's not faster than min and max separately. That's probably due to some machinery that would allow for non-extreme percentiles (a harder problem, which should take longer).

    In [1]: import numpy
    
    In [2]: a = numpy.random.normal(0, 1, 1000000)
    
    In [3]: %%timeit
       ...: lo, hi = numpy.amin(a), numpy.amax(a)
       ...: 
    100 loops, best of 3: 4.08 ms per loop
    
    In [4]: %%timeit
       ...: lo, hi = numpy.percentile(a, [0, 100])
       ...: 
    100 loops, best of 3: 17.2 ms per loop
    
    In [5]: numpy.__version__
    Out[5]: '1.14.4'
    

    A future version of NumPy could add a special case to skip the normal percentile calculation when only [0, 100] is requested. So, without adding anything to the interface, there is a way to ask NumPy for min and max in one call (contrary to what was said in the accepted answer), but the standard implementation of the library doesn't take advantage of this case to make it worthwhile.
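    Note that for the endpoint percentiles no interpolation is involved, so the returned values agree exactly with amin()/amax(), not just approximately (a quick check):

    ```python
    import numpy as np

    a = np.random.normal(0, 1, 10_000)
    lo, hi = np.percentile(a, [0, 100])
    # the 0th/100th percentiles land exactly on the first/last sorted
    # elements, so they equal the min/max bit-for-bit -- just slower to get
    assert lo == np.amin(a) and hi == np.amax(a)
    ```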

  • 2020-11-27 05:20

    The shortest way I've come up with is this:

    mn, mx = np.sort(ar)[[0, -1]]
    

    But since it sorts the array, it's not the most efficient.

    Another short way would be:

    mn, mx = np.percentile(ar, [0, 100])
    

    This should be more efficient than sorting, but the result is computed rather than read off directly, and it is returned as a float even for integer input.
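    The dtype caveat is worth illustrating: the sort-based one-liner preserves the input dtype, while np.percentile always comes back as float64 (a small demonstration):

    ```python
    import numpy as np

    ar = np.array([3, 1, 4, 1, 5], dtype=np.int64)

    mn, mx = np.sort(ar)[[0, -1]]         # integer results, dtype preserved
    lo, hi = np.percentile(ar, [0, 100])  # float64 results

    assert (mn, mx) == (1, 5) and mn.dtype == np.int64
    assert (lo, hi) == (1.0, 5.0) and isinstance(lo, np.float64)
    ```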

  • 2020-11-27 05:24

    I don't think that passing over the array twice is a problem. Consider the following pseudo-code:

    minval = array[0]
    maxval = array[0]
    for i in array:
        if i < minval:
           minval = i
        if i > maxval:
           maxval = i
    

    While there is only 1 loop here, there are still 2 checks. (Instead of having 2 loops with 1 check each). Really the only thing you save is the overhead of 1 loop. If the arrays really are big as you say, that overhead is small compared to the actual loop's work load. (Note that this is all implemented in C, so the loops are more or less free anyway).


    EDIT: Sorry to the 4 of you who upvoted and had faith in me. You definitely can optimize this.

    Here's some fortran code which can be compiled into a python module via f2py (maybe a Cython guru can come along and compare this with an optimized C version ...):

    subroutine minmax1(a,n,amin,amax)
      implicit none
      !f2py intent(hidden) :: n
      !f2py intent(out) :: amin,amax
      !f2py intent(in) :: a
      integer n
      real a(n),amin,amax
      integer i
    
      amin = a(1)
      amax = a(1)
      do i=2, n
         if(a(i) > amax)then
            amax = a(i)
         elseif(a(i) < amin) then
            amin = a(i)
         endif
      enddo
    end subroutine minmax1
    
    subroutine minmax2(a,n,amin,amax)
      implicit none
      !f2py intent(hidden) :: n
      !f2py intent(out) :: amin,amax
      !f2py intent(in) :: a
      integer n
      real a(n),amin,amax
      amin = minval(a)
      amax = maxval(a)
    end subroutine minmax2
    

    Compile it via:

    f2py -m untitled -c fortran_code.f90
    

    And now we're in a place where we can test it:

    import timeit
    
    size = 100000
    repeat = 10000
    
    print(timeit.timeit(
        'np.min(a); np.max(a)',
        setup='import numpy as np; a = np.arange(%d, dtype=np.float32)' % size,
        number=repeat), "# numpy min/max")
    
    print(timeit.timeit(
        'untitled.minmax1(a)',
        setup='import numpy as np; import untitled; a = np.arange(%d, dtype=np.float32)' % size,
        number=repeat), '# minmax1')
    
    print(timeit.timeit(
        'untitled.minmax2(a)',
        setup='import numpy as np; import untitled; a = np.arange(%d, dtype=np.float32)' % size,
        number=repeat), '# minmax2')
    

    The results are a bit staggering for me:

    8.61869883537 # numpy min/max
    1.60417699814 # minmax1
    2.30169081688 # minmax2
    

    I have to say, I don't completely understand it. Comparing just np.min versus minmax1 and minmax2 is still a losing battle, so it's not just a memory issue ...

    Notes: increasing size by a factor of 10**a while decreasing repeat by the same factor (keeping the total problem size constant) does change the performance, but not in a consistent way, which suggests some interplay between memory performance and function-call overhead in Python. Even a simple min implementation in Fortran beats NumPy's by a factor of approximately 2...

  • 2020-11-27 05:25

    Is there a function in the numpy API that finds both max and min with only a single pass through the data?

    No. At the time of this writing, there is no such function. (And yes, if there were such a function, its performance would be significantly better than calling numpy.amin() and numpy.amax() successively on a large array.)
