scipy-optimize

Python - scipy ODR going crazy

Posted by 。_饼干妹妹 on 2021-02-10 06:09:37
Question: I would like to use scipy's ODR to fit a curve to a set of variables with variances. In this case, I am fitting a linear function with a fixed Y-axis crossing point (e.g. a*x+100 ). Due to my inability to find an estimator (I asked about that here), I am using scipy.optimize's curve_fit to estimate the initial a value. Now, the function works perfectly without standard deviations, but when I add them, the output makes no sense at all (the curve sits far above all of the points). What could be the
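A common cause of a wildly offset ODR fit is mixing up variances and standard deviations: scipy.odr.RealData takes sx/sy as standard deviations, while odr.Data takes we/wd as weights (1/sigma**2). A minimal runnable sketch with made-up data for the question's a*x + 100 model:

```python
import numpy as np
from scipy import odr

def linear(beta, x):
    # Intercept fixed at 100, as in the question's a*x + 100
    return beta[0] * x + 100.0

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 20)
y = 3.0 * x + 100.0 + rng.normal(0.0, 0.5, x.size)

# RealData wants *standard deviations* (sx, sy), not variances;
# passing variances (or raw weights) here is a classic way to get
# a fit that floats far away from the points.
data = odr.RealData(x, y, sx=np.full(x.size, 0.1), sy=np.full(x.size, 0.5))
result = odr.ODR(data, odr.Model(linear), beta0=[1.0]).run()
print(result.beta)  # fitted slope a
```

If only weights are available, odr.Data(x, y, wx=..., wy=...) with wy = 1/sy**2 is the equivalent form.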

Scipy minimisation optimisation row-wise on DataFrame

Posted by 不羁岁月 on 2021-01-29 18:41:12
Question: TYPO FIXED I need to perform a minimisation for each timestep in my time series. The optimisation sets the price based on values in different columns across the row and a series of inequality constraints. My DataFrame has the following columns across a 48-year time series: ['CAPEX_TOT', 'CAPEX_R', 'CAPEX_WS', 'debt_BP', 'principal','interest', 'debt_service', 'debt_EP', 'OPEX', 'OPEX_R', 'OPEX_WS', 'DELIVERY_BAWSCA', 'DELIVERY_OTHER_DEMAND', 'DELIVERY_SAN_FRANCISCO_CITY',
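One way to run an independent constrained optimisation per row is to wrap scipy.optimize.minimize in DataFrame.apply. A sketch with a toy two-column frame; the column names, objective, and constraint are illustrative stand-ins, not the ones from the question:

```python
import numpy as np
import pandas as pd
from scipy.optimize import minimize

# Toy frame standing in for the 48-year time series.
df = pd.DataFrame({"cost": [10.0, 12.0, 9.0], "demand": [5.0, 4.0, 6.0]})

def solve_row(row):
    # Pick a price near a revenue target of 60 (illustrative objective)...
    objective = lambda p: (p[0] * row["demand"] - 60.0) ** 2
    # ...subject to revenue covering that row's cost (inequality >= 0).
    cons = [{"type": "ineq", "fun": lambda p: p[0] * row["demand"] - row["cost"]}]
    res = minimize(objective, x0=[1.0], constraints=cons)
    return res.x[0]

# One independent minimisation per timestep (row).
df["price"] = df.apply(solve_row, axis=1)
print(df["price"])
```

With inequality constraints present, minimize defaults to SLSQP; each row's solve is completely independent, so this also parallelises trivially if 48 years of rows ever becomes slow.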

Python scipy.optimise.curve_fit gives linear fit

Posted by …衆ロ難τιáo~ on 2021-01-29 04:20:21
Question: I have come across a problem while playing with the parameters of curve_fit from scipy. I initially copied the code suggested by the docs. I then changed the equation slightly and it was fine, but after increasing the np.linspace range, the whole prediction ended up being a straight line. Any ideas? import numpy as np from scipy.optimize import curve_fit import matplotlib.pyplot as plt def f(x, a, b, c): # This works fine on smaller numbers return (a - c) * np.exp(-x / b) + c xdata = np
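A likely cause of the flat line: with curve_fit's default initial guess p0 = (1, 1, 1), the term exp(-x / b) underflows to zero almost everywhere once the x-range is large, so the model degenerates to the constant y = c and the optimizer sees no gradient. A sketch with synthetic data (the numbers are illustrative) showing an initial guess for b scaled to the data:

```python
import numpy as np
from scipy.optimize import curve_fit

def f(x, a, b, c):
    return (a - c) * np.exp(-x / b) + c

# Wide x-range: with b = 1 (the default guess), exp(-x/b) is 0 for
# essentially every point, and the fit collapses to a straight line.
xdata = np.linspace(0, 4000, 50)
ydata = f(xdata, 2.5, 1300.0, 0.5)

# Seeding b at roughly the scale of the x-axis restores the fit.
popt, _ = curve_fit(f, xdata, ydata, p0=[2.0, 1000.0, 1.0])
print(popt)
```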

`f0` passed has more than 1 dimension. - fmin_l_bfgs_b

Posted by 心已入冬 on 2021-01-28 11:48:17
Question: I want to find parameters which minimize a function, but I get an error. So far I have used scipy.optimize.fmin, but I want to add bounds for every argument. This is my code: def Kou_calibration_full(): i=0 global opt p0 = spo.brute(Kou_error_function, ((0.10,0.31, 0.1),(0.01,2.6, 0.5), (0.1,0.92,0.2), (1.1,20,7),(0.1,20,7)), finish=None) opt = spo.minimize(Kou_error_function, p0, bounds=((0.10,0.31),(0.01,2.6), (0.1,0.92), (1.1,20),(0.1,20))) return opt
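The "`f0` passed has more than 1 dimension" error from the L-BFGS-B machinery usually means the objective returns an array (for example shape (1,)) instead of a plain scalar. A sketch with a hypothetical stand-in for Kou_error_function (the real one is not shown in the excerpt), reusing the question's bounds:

```python
import numpy as np
from scipy.optimize import minimize

TARGET = np.array([0.2, 1.0, 0.5, 5.0, 2.0])

def error_function(p):
    # Stand-in objective. The float() cast is the point: L-BFGS-B's
    # finite-difference gradient requires a scalar return value, and a
    # shape-(1,) array triggers "`f0` passed has more than 1 dimension".
    residual = np.sum((np.asarray(p) - TARGET) ** 2)
    return float(residual)

bounds = [(0.10, 0.31), (0.01, 2.6), (0.1, 0.92), (1.1, 20), (0.1, 20)]
x0 = [0.15, 2.0, 0.3, 10.0, 10.0]
opt = minimize(error_function, x0, bounds=bounds, method="L-BFGS-B")
print(opt.x)
```

If the real error function ends in something like `return err` where `err` came from a vectorised computation, `return err.item()` (or `float(err)`) is the one-line fix.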

Trying to fit a trig function to data with scipy

Posted by 杀马特。学长 韩版系。学妹 on 2021-01-27 13:04:39
Question: I am trying to fit some data using scipy.optimize.curve_fit . I have read the documentation and also this StackOverflow post, but neither seems to answer my question. I have some simple 2D data which looks approximately like a trig function, and I want to fit it with a general trig function using scipy . My approach is as follows: from __future__ import division import numpy as np from scipy.optimize import curve_fit #Load the data data = np.loadtxt('example_data.txt') t = data[:,0]
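curve_fit is notoriously sensitive to the frequency guess for trig models: starting from the default frequency of 1, it almost always lands in a wrong local minimum. A sketch on synthetic data (standing in for example_data.txt, which is not in the excerpt) that seeds the frequency from the dominant FFT peak:

```python
import numpy as np
from scipy.optimize import curve_fit

def trig(t, A, w, phi, c):
    # General sinusoid: amplitude, angular frequency, phase, offset.
    return A * np.sin(w * t + phi) + c

# Synthetic stand-in for the loaded data.
t = np.linspace(0, 10, 200)
y = trig(t, 2.0, 3.0, 0.5, 1.0)

# Estimate the angular frequency from the dominant FFT peak instead of
# letting curve_fit start from w = 1.
freqs = np.fft.rfftfreq(t.size, t[1] - t[0])
w_guess = 2 * np.pi * freqs[np.argmax(np.abs(np.fft.rfft(y - y.mean())))]

p0 = [np.std(y) * 2**0.5, w_guess, 0.0, y.mean()]
popt, _ = curve_fit(trig, t, y, p0=p0)
print(popt)
```

The amplitude and offset guesses (std and mean of the data) are cheap and good enough; it is the frequency seed that decides whether the fit converges at all.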

Product feature optimization with constraints

Posted by 心已入冬 on 2021-01-07 04:12:56
Question: I have trained a LightGBM model on a learning-to-rank dataset. The model predicts the relevance score of a sample, so the higher the prediction, the better. Now that the model is trained, I would like to find the values of some features that give me the highest prediction score. So, let's say I have features u,v,w,x,y,z and the features I would like to optimize over are x,y,z . maximize f(u,v,w,x,y,z) w.r.t features x,y,z where f is a lightgbm model subject to constraints : y = Ax + b z = 4
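Since y and z are fully pinned down by the equality constraints (y = Ax + b, z = 4), the problem collapses to a search over x alone. A sketch with a smooth stand-in for the LightGBM predict function and illustrative values of A and b (a real gradient-boosted model is piecewise constant, so a derivative-free or grid search over x would be the practical choice there):

```python
from scipy.optimize import minimize_scalar

# Stand-in for model.predict(); the real f is a black box.
def f(u, v, w, x, y, z):
    return -(x - 3.0) ** 2 - (y - 5.0) ** 2 + z

A, b = 2.0, 1.0          # constraint y = A*x + b (illustrative values)
u, v, w = 1.0, 1.0, 1.0  # features held fixed

def neg_score(x):
    # Substitute the constraints, then negate for minimization.
    y = A * x + b
    z = 4.0
    return -f(u, v, w, x, y, z)

res = minimize_scalar(neg_score, bounds=(0.0, 10.0), method="bounded")
print(res.x)  # maximizing x
```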

Scipy Optimize Curve fit not properly fitting with real data

Posted by 为君一笑 on 2021-01-01 08:56:32
Question: I am trying to fit a decaying exponential function to real-world data, but I'm having a problem aligning the function with the actual data. Here's my code: def test_func(x, a, b, c): return a*np.exp(-b*x)*np.sin(c*x) my_time = np.linspace(0,2.5e-6,25000) p0 = [60000, 700000, 2841842] params, params_covariance = curve_fit(test_func, my_time, my_amp, p0) My signal and fitted function My question: why doesn't the fitted function start where my data starts increasing in amplitude? Answer 1: As I said in

passing the Jacobian to newton_krylov nonlinear solver in python

Posted by 十年热恋 on 2020-12-13 04:41:52
Question: I want to solve a nonlinear system of algebraic equations using the newton_krylov solver in python. The function defining the system of equations looks like: def sys(x, param1, param2, param3, param4): ... return f It takes a vector x as input, together with three vectors param1 , param2 , param3 and one number param4 that serve as parameters, and it returns another vector f as output. I also have a function that gives the Jacobian of the system; it looks like def jac(x, param1,
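newton_krylov has no jac argument at all: it approximates Jacobian-vector products by finite differences inside the Krylov solver, and an analytic Jacobian enters, if at all, only as the inner_M preconditioner. The extra parameters are simply bound before the call. A sketch with an illustrative two-parameter diagonal system (not the question's sys, which is not shown):

```python
import numpy as np
from functools import partial
from scipy.optimize import newton_krylov

def sys_func(x, param1, param2):
    # Illustrative stand-in: f_i(x) = x_i**2 + param1_i*x_i - param2_i
    return x**2 + param1 * x - param2

param1 = np.array([1.0, 2.0, 3.0])
param2 = np.array([2.0, 3.0, 4.0])

# newton_krylov expects F(x) only, so bind the parameters first.
F = partial(sys_func, param1=param1, param2=param2)

sol = newton_krylov(F, np.full(3, 0.5), f_tol=1e-10)
print(sol)
```

If the analytic Jacobian should still be exploited, scipy.optimize.root(F, x0, jac=jac_bound, method='hybr') accepts it directly; with method='krylov', the Jacobian can only help through preconditioning.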