I am trying to find the optimal solution to the following system of equations in Python:
(x-x1)^2 + (y-y1)^2 - r1^2 = 0
(x-x2)^2 + (y-y2)^2 - r2^2 = 0
(x-x3)^2 + (y-y3)^2 - r3^2 = 0
If I understand your question correctly, I think this is what you're after:
from scipy.optimize import minimize
import numpy as np

def f(coord, x, y, r):
    # sum of the signed residuals of the three circle equations
    return np.sum((coord[0] - x)**2 + (coord[1] - y)**2 - r**2)

# circle centers and radii
x = np.array([0, 2, 0])
y = np.array([0, 0, 2])
r = np.array([.88, 1, .75])

# initial (bad) guess at (x, y) values
initial_guess = np.array([100, 100])
res = minimize(f, initial_guess, args=(x, y, r))
Which yields:
>>> print(res.x)
[ 0.66666666  0.66666666]
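As a quick sanity check (not part of the original answer), you can plug the solution back into the three circle equations; the residuals stay nonzero because the circles have no common intersection point:

# residual of each circle equation at the returned optimum
residuals = (res.x[0] - x)**2 + (res.x[1] - y)**2 - r**2
print(residuals)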
Note that this first objective sums the raw (signed) residuals rather than their squares, so it is not a true least-squares criterion. You might also try the least squares method, which expects an objective function that returns a vector; it minimizes the sum of the squares of that vector. Using least squares, your objective function would look like this:
def f2(coord, x, y, r):
    # notice that we're returning a vector of length 3: one residual per circle
    return (coord[0] - x)**2 + (coord[1] - y)**2 - r**2
And you'd minimize it like so:
from scipy.optimize import leastsq
res = leastsq(f2, initial_guess, args=(x, y, r))
Which yields:
>>> print(res[0])
[ 0.77961518  0.85811473]
This is basically the same as using minimize and re-writing the original objective function as:
def f(coord, x, y, r):
    vec = (coord[0] - x)**2 + (coord[1] - y)**2 - r**2
    # return the sum of the squares of the residual vector
    return np.sum(vec**2)
This yields:
>>> print(res.x)
[ 0.77958326  0.8580965 ]
Note that args are handled a bit differently by leastsq, and that the data structures returned by the two functions are also different. See the documentation for scipy.optimize.minimize and scipy.optimize.leastsq for more details.
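For example, minimize returns an OptimizeResult object with named fields, while leastsq by default returns a (solution, flag) tuple. A minimal sketch, reusing the f, f2, x, y, r, and initial_guess defined above:

# minimize returns an OptimizeResult with named attributes
res_min = minimize(f, initial_guess, args=(x, y, r))
print(res_min.x, res_min.fun, res_min.success)

# leastsq returns a tuple: the solution and an integer status flag
sol, ier = leastsq(f2, initial_guess, args=(x, y, r))
print(sol, ier)  # ier of 1, 2, 3 or 4 indicates a solution was found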
See the scipy.optimize documentation for more optimization options.
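For instance, newer SciPy versions provide scipy.optimize.least_squares, which takes the same residual-vector function as leastsq but returns an OptimizeResult like minimize does. A minimal sketch, reusing f2 and initial_guess from above:

from scipy.optimize import least_squares

# least_squares accepts the residual function directly
res = least_squares(f2, initial_guess, args=(x, y, r))
print(res.x, res.cost)  # res.cost is half the sum of squared residuals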