Recurrence with numpy

广开言路 2020-12-20 03:14

Is there any way to compute a recurrence without using `for` loops in numpy?

Using np.add with the out keyword does the trick with dtype=int.

4 Answers
  •  有刺的猬
    2020-12-20 03:50

    Your add works for this calculation, but it has to be applied repeatedly so that the nonzero values propagate. I don't see how your calculation generated [ 1. 1. 2. 1. 3. 1. 4. 1. 5. 1.].

    In [619]: fib=np.zeros(10,int);fib[:2]=1
    In [620]: for _ in range(10):
         ...:     np.add(fib[:-2], fib[1:-1], out=fib[2:])
         ...:     print(fib)
         ...: 
    [1 1 2 1 0 0 0 0 0 0]   # **
    [1 1 2 3 3 1 0 0 0 0]
    [1 1 2 3 5 6 4 1 0 0]
    [ 1  1  2  3  5  8 11 10  5  1]
    [ 1  1  2  3  5  8 13 19 21 15]
     ...
    

    (edit - Note that the first np.add acts as though it is fully buffered. Compare the result at ** with both of your object and float arrays. I'm using version 1.13.1 on Py3.)

    Or to highlight the good values at each stage

    In [623]: fib=np.zeros(20,int);fib[:2]=1
    In [624]: for i in range(10):
         ...:     np.add(fib[:-2], fib[1:-1], out=fib[2:])
         ...:     print(fib[:(i+2)])
         ...: 
    [1 1]
    [1 1 2]
    [1 1 2 3]
    [1 1 2 3 5]
    [1 1 2 3 5 8]
    [ 1  1  2  3  5  8 13]
    [ 1  1  2  3  5  8 13 21]
    [ 1  1  2  3  5  8 13 21 34]
    [ 1  1  2  3  5  8 13 21 34 55]
    [ 1  1  2  3  5  8 13 21 34 55 89]
    

    fib[2:] = fib[:-2]+fib[1:-1] does the same thing.
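    Pulled together, the slice update can be run to completion: since each buffered pass only extends the correct prefix by one value, n-2 passes fill an array of length n (a minimal sketch consolidating the trace above, not part of the original answer):

    ```python
    import numpy as np

    n = 12
    fib = np.zeros(n, int)
    fib[:2] = 1
    # Each buffered pass extends the correct prefix by one value,
    # so n-2 passes are needed to fill all n slots.
    for _ in range(n - 2):
        fib[2:] = fib[:-2] + fib[1:-1]
    print(fib.tolist())  # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144]
    ```

    Extra passes beyond n-2 are harmless; once the array holds the true sequence, the update is a fixed point.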

    As discussed in the documentation for ufunc.at, operations like np.add use buffering, even with the out parameter. So while they do iterate in C-level code, they don't accumulate; that is, results from the i-th step aren't used at the (i+1)-th step.

    add.at can be used to perform an unbuffered a[i] += b. That's handy when the indexes contain duplicates, but it doesn't allow feedback from the changed a values to b. So it isn't useful here.
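    To illustrate the duplicate-index case (an aside, not needed for the recurrence itself):

    ```python
    import numpy as np

    idx = np.array([0, 0, 1])

    a = np.zeros(3, int)
    a[idx] += 1           # buffered: the two writes to index 0 collide
    print(a.tolist())     # [1, 1, 0]

    b = np.zeros(3, int)
    np.add.at(b, idx, 1)  # unbuffered: both increments of index 0 land
    print(b.tolist())     # [2, 1, 0]
    ```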

    There is also add.accumulate (aka np.cumsum), which can be handy for certain iterative definitions. But it's hard to apply in general cases.
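    For instance, a recurrence of the form a[i] = a[i-1] + b[i] is exactly a running sum, which accumulate handles in one call (a small illustration, not from the original answer):

    ```python
    import numpy as np

    b = np.array([1, 2, 3, 4])
    # add.accumulate and cumsum both compute a[i] = a[i-1] + b[i]
    print(np.add.accumulate(b).tolist())  # [1, 3, 6, 10]
    print(np.cumsum(b).tolist())          # [1, 3, 6, 10]
    ```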

    cumprod (the multiply accumulate) can work with the qmatrix approach from:

    Numpy's matrix_power function giving wrong results for large exponents

    Use np.matrix to define the qmatrix, so that * multiply means matrix product (as opposed to elementwise):

    In [647]: qmatrix = numpy.matrix([[1, 1], [1, 0]])
    

    Populate an object array with references (pointers, actually) to this matrix.

    In [648]: M = np.empty(10, object)
    In [649]: M[:] = [qmatrix for _ in range(10)]
    

    Apply cumprod, and extract one element from each matrix.

    In [650]: m1=np.cumprod(M)
    In [651]: m1
    Out[651]: 
    array([matrix([[1, 1],
            [1, 0]]), matrix([[2, 1],
            [1, 1]]),
           matrix([[3, 2],
            [2, 1]]), matrix([[5, 3],
            [3, 2]]),
           matrix([[8, 5],
            [5, 3]]),
           matrix([[13,  8],
            [ 8,  5]]),
           matrix([[21, 13],
            [13,  8]]),
           matrix([[34, 21],
            [21, 13]]),
           matrix([[55, 34],
            [34, 21]]),
           matrix([[89, 55],
            [55, 34]])], dtype=object)
    In [652]: [x[0,1] for x in m1]
    Out[652]: [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
    

    The linked answer uses numpy.linalg.matrix_power which also does repeated matrix products. For a single power, matrix_power is faster, but for a whole range of powers, the cumprod is faster.
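    For reference, matrix_power gives a single Fibonacci number directly. Here is a sketch with a plain ndarray (np.matrix is deprecated in current NumPy), assuming the convention F(1) = F(2) = 1:

    ```python
    import numpy as np

    q = np.array([[1, 1], [1, 0]])
    # The n-th matrix power of q is [[F(n+1), F(n)], [F(n), F(n-1)]],
    # so the [0, 1] element of q**10 is F(10).
    print(np.linalg.matrix_power(q, 10)[0, 1])  # 55
    ```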
