Find a root of a function in a given range

感动是毒 2021-01-19 15:35

I have a set of functions f_t with several roots (two, actually). I want to find the "first" root, and doing this with fsolve works fine most of the time.

3 Answers
  • 2021-01-19 15:56

    Classically, you could use root:

    import numpy as np
    from scipy.optimize import root
    
    def func(x, t):
        return x ** 2 - 1. / t
    
    t = 5000
    
    res = root(func, 0.5, args=(t, )).x[0]
    print(res)
    

    That would print the positive one, in this case 0.0141421356237.
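
    Which root you end up with depends on the initial guess; a minimal sketch (the guess values are arbitrary, just one on each side of zero):

    from scipy.optimize import root

    def func(x, t):
        return x ** 2 - 1. / t

    t = 5000

    # here a positive guess converges to the positive root,
    # a negative guess to the negative one
    pos = root(func, 0.5, args=(t, )).x[0]
    neg = root(func, -0.5, args=(t, )).x[0]
    print(pos, neg)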

    If you want to specify the range and determine all roots within this interval, you can use chebpy:

    from chebpy import chebfun
    
    x = chebfun('x', [-100000, 100000])
    t = 5000
    f = x ** 2 - 1. / t
    
    rts = f.roots()
    print(rts)
    

    This would print both the positive and the negative root, in this case

    [-0.01413648  0.01413648]
    

    If you only want to look in the positive range, you can change

    x = chebfun('x', [-100000, 100000])
    

    to

    x = chebfun('x', [0, 100000])
    

    I am not sure how to use infinity as an endpoint, but for practical purposes you can just use a sufficiently large number.
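
    Since the question asks for the "first" root, you can sort the array that roots() returns and pick from it. A minimal sketch, assuming "first" means the smallest (or smallest positive) root:

    import numpy as np
    from chebpy import chebfun

    x = chebfun('x', [-100000, 100000])
    t = 5000
    rts = np.sort((x ** 2 - 1. / t).roots())

    print(rts[0])           # smallest root overall
    print(rts[rts > 0][0])  # smallest positive root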

  • 2021-01-19 15:58

    You can use scipy.optimize.bisect, which takes two parameters a and b that define the starting interval. There are a few limitations, though:

    • The interval needs to be finite. You cannot search in [0, inf].
    • The function must flip sign at the root (f(a) and f(b) must have opposite signs), so, for example, you cannot find the root of f(x) = abs(x) (if that even counts as a root in the mathematical sense). It also won't work for f(x) = x**2 - 1 on an interval [a, b] with a < -1 and b > 1, because the function has the same sign at both endpoints.
    • The method is not gradient based. This can be an advantage if the function is very jagged or expensive to evaluate but it may be slower on other functions.
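
    A minimal sketch of bisect for the function from the question (the bracket endpoints below are placeholders, chosen only so that the sign flips between them):

    from scipy.optimize import bisect

    def func(x, t):
        return x ** 2 - 1. / t

    t = 5000

    # func(0, t) < 0 and func(1.0, t) > 0, so [0, 1.0] is a valid bracket
    r = bisect(func, 0, 1.0, args=(t, ))
    print(r)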

    An alternative is to use scipy.optimize.minimize to minimize abs(f(x)). This function can take bounds that include infinity, but the minimization may end up in a local minimum that is not a root.
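
    A minimal sketch of the minimize approach (the starting point and bounds are placeholders; note that abs(f) is not smooth at the root, so minimizing f(x)**2 is a common smoother alternative):

    import numpy as np
    from scipy.optimize import minimize

    def func(x, t):
        return x ** 2 - 1. / t

    t = 5000

    # minimize |f(x)| over [0, inf); the result is only a root
    # if the achieved minimum is (numerically) zero
    res = minimize(lambda x: abs(func(x[0], t)), x0=[0.5],
                   bounds=[(0, np.inf)], method='L-BFGS-B')
    print(res.x[0], res.fun)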

  • 2021-01-19 16:01

    It is generally accepted that for smooth, well-behaved functions, Brent's method is the fastest root-finding method that is guaranteed to converge. As with bisection above, you must provide an interval [a, b] over which the function is continuous and changes sign.

    The SciPy implementation is scipy.optimize.brentq. An example use case for the function you mentioned could look like this:

    from scipy.optimize import brentq

    def func(x, t):
        return x ** 2 - 1 / t

    t0 = 1
    xmin = 0
    xmax = 100000  # set xmax to some sufficiently large value

    # args supplies any extra arguments for the function
    # besides the varied parameter x
    root = brentq(func, xmin, xmax, args=(t0, ))
    print(root)