least-squares

Least squares Levenberg-Marquardt with Apache Commons

核能气质少年 submitted on 2019-12-12 05:29:41
Question: I'm using the non-linear least-squares Levenberg-Marquardt algorithm in Java to fit a number of exponential curves (A + B*exp(C*x)). Although the data is quite clean and approximates the model well, the algorithm fails to fit the majority of the curves even with an excessive number of iterations (5000-6000). For the curves it can fit, it does so in about 150 iterations. LeastSquaresProblem problem = new LeastSquaresBuilder() .start(start).model(jac).target(dTarget) .lazyEvaluation
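As a rough illustration of the same fitting problem, here is a minimal SciPy sketch (not the Apache Commons Math API from the question) of a Levenberg-Marquardt fit of y = A + B*exp(C*x) with an analytic Jacobian; the synthetic data and starting values are assumptions. When LM stalls for thousands of iterations on clean data, a poor starting guess or badly scaled parameters is usually the cause rather than the data itself.

```python
# Minimal sketch: Levenberg-Marquardt fit of y = A + B*exp(C*x) with SciPy.
# Data, starting values and tolerances below are illustrative assumptions.
import numpy as np
from scipy.optimize import least_squares

def residuals(p, x, y):
    A, B, C = p
    return A + B * np.exp(C * x) - y

def jacobian(p, x, y):
    A, B, C = p
    e = np.exp(C * x)
    # Columns: d(residual)/dA, d(residual)/dB, d(residual)/dC
    return np.column_stack([np.ones_like(x), e, B * x * e])

# Synthetic "clean" data for illustration
x = np.linspace(0.0, 5.0, 50)
y = 2.0 + 3.0 * np.exp(-0.7 * x)

p0 = np.array([1.0, 2.0, -0.5])   # a reasonable starting guess; LM is sensitive to this
fit = least_squares(residuals, p0, jac=jacobian, args=(x, y), method="lm")
print(fit.x)                      # should recover roughly (2, 3, -0.7)
```

Rescaling x, or seeding C from log-differences of the data, often turns the 5000-iteration cases into the 150-iteration ones.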

How to estimate the actual Lorentzian function without relaxation behavior using least-squares curve fitting

守給你的承諾、 submitted on 2019-12-12 05:26:58
Question: I wanted to ask whether it would be possible to implement this idea. All in all, I measure a signal (blue curve; see the plot of the measured data and the initial guess for the Lorentzian function). This signal is a convolution of a Lorentzian function and a certain relaxation kernel. I have an initial guess for the Lorentzian function (see the green curve), but as you can see, the green curve is not really a perfect Lorentzian function, as it is still asymmetric at the bottom. I have never used
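The full problem above involves a relaxation kernel convolved with the line shape; setting that kernel aside, a minimal least-squares fit of a plain Lorentzian looks like the sketch below (SciPy; the peak parameters and synthetic data are placeholders, not the asker's measurement).

```python
# Minimal sketch: least-squares fit of a Lorentzian line shape.
# All numbers below are synthetic placeholders.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, x0, gamma, amp, offset):
    # amp * gamma^2 / ((x - x0)^2 + gamma^2) + offset
    return amp * gamma**2 / ((x - x0)**2 + gamma**2) + offset

x = np.linspace(-10, 10, 400)
y = lorentzian(x, 1.0, 2.0, 5.0, 0.3) + 0.05 * np.random.default_rng(0).normal(size=x.size)

p0 = [0.0, 1.0, 4.0, 0.0]                 # initial guess, playing the "green curve" role
popt, pcov = curve_fit(lorentzian, x, y, p0=p0)
print(popt)                               # fitted (x0, gamma, amp, offset)
```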

Optimize common parameters across several equations using least squares in Python

半腔热情 submitted on 2019-12-11 14:34:22
Question: I would like to optimize common parameters among several equations, but I don't know how to fit them simultaneously. The problem is essentially like this, with four equations to solve and three parameters to optimize: a + b + c + 1750 = T, 12 = a/T*100, 15 = b/T*100, 37 = c/T*100, where I would like to find the optimal values of a, b, and c. Does anybody have a suggestion, perhaps using a least-squares method? I am only familiar with the case where there is just one equation to solve. Answer 1: It seems your equations actually have
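The truncated answer hints the system can be solved directly; for the general case, a minimal sketch of fitting the shared parameters to all equations at once is to substitute the definition of T, stack one residual per remaining equation, and minimize them together (SciPy here; the starting values are arbitrary assumptions).

```python
# Sketch: fit a, b, c to all equations simultaneously by stacking residuals.
import numpy as np
from scipy.optimize import least_squares

def residuals(p):
    a, b, c = p
    T = a + b + c + 1750          # first equation substituted in
    return np.array([
        12 - a / T * 100,
        15 - b / T * 100,
        37 - c / T * 100,
    ])

sol = least_squares(residuals, x0=np.array([100.0, 100.0, 100.0]))
print(sol.x)                      # optimal a, b, c
```

The same pattern works unchanged when there are more equations than parameters.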

How to solve an overdetermined set of equations using non-linear least squares in MATLAB

坚强是说给别人听的谎言 submitted on 2019-12-11 12:58:58
Question:
A11 = cos(x)*cos(y) (1)
A12 = cos(x)*sin(y) (2)
A13 = -sin(y) (3)
A21 = sin(z)*sin(x)*cos(y) - cos(z)*sin(y) (4)
A22 = sin(z)*sin(y)*sin(x) + cos(z)*cos(y) (5)
A23 = cos(x)*sin(z) (6)
A31 = cos(z)*sin(x)*cos(z) + sin(z)*sin(x) (7)
A32 = cos(z)*sin(x)*sin(y) - sin(z)*cos(y) (8)
A33 = cos(x)*cos(z) (9)
I have a set of nine equations and only three unknowns. The unknowns are x, y and z. I know the values of A11, A12, ..., A33, but these values might have some noise and therefore I will
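A minimal sketch of the corresponding nonlinear least-squares setup (Python here rather than MATLAB's lsqnonlin) stacks all nine residuals and solves for x, y and z. The "measured" A values below are synthetic placeholders generated from the question's own expressions plus noise, and the expressions are copied exactly as written above.

```python
# Sketch: recover (x, y, z) from noisy A11..A33 by nonlinear least squares.
import numpy as np
from scipy.optimize import least_squares

def model(angles):
    x, y, z = angles
    return np.array([
        np.cos(x) * np.cos(y),                                        # A11
        np.cos(x) * np.sin(y),                                        # A12
        -np.sin(y),                                                   # A13
        np.sin(z) * np.sin(x) * np.cos(y) - np.cos(z) * np.sin(y),    # A21
        np.sin(z) * np.sin(y) * np.sin(x) + np.cos(z) * np.cos(y),    # A22
        np.cos(x) * np.sin(z),                                        # A23
        np.cos(z) * np.sin(x) * np.cos(z) + np.sin(z) * np.sin(x),    # A31 (as written in the question)
        np.cos(z) * np.sin(x) * np.sin(y) - np.sin(z) * np.cos(y),    # A32
        np.cos(x) * np.cos(z),                                        # A33
    ])

# Placeholder "measurements": evaluate the model at some true angles and add noise.
rng = np.random.default_rng(1)
A_measured = model([0.3, -0.2, 0.5]) + 0.01 * rng.normal(size=9)

sol = least_squares(lambda p: model(p) - A_measured, x0=[0.0, 0.0, 0.0])
print(sol.x)   # estimated (x, y, z)
```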

LMS batch gradient descent with NumPy

你。 submitted on 2019-12-11 09:45:33
Question: I'm trying to write some very simple LMS batch gradient descent, but I believe I'm doing something wrong with the gradient. The ratio between the gradient's order of magnitude and the initial values of theta differs greatly across the elements of theta, so either theta[2] doesn't move (e.g. if alpha = 1e-8) or theta[1] shoots off (e.g. if alpha = .01). import numpy as np y = np.array([[400], [330], [369], [232], [540]]) x = np.array([[2104,3], [1600,3], [2400,3], [1416,2], [3000,4]]) x = np.concatenate(
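A common fix for exactly this symptom is to rescale the features before running gradient descent, since the first column of x is roughly a thousand times larger than the second. A self-contained sketch follows; the scaling step is the suggested change, not part of the asker's original code, and y is kept one-dimensional for simplicity.

```python
# Sketch: batch gradient descent for LMS with feature scaling.
import numpy as np

y = np.array([400., 330., 369., 232., 540.])
x = np.array([[2104., 3.], [1600., 3.], [2400., 3.], [1416., 2.], [3000., 4.]])

# Scale each feature to zero mean / unit variance, then prepend a bias column.
x_scaled = (x - x.mean(axis=0)) / x.std(axis=0)
X = np.hstack([np.ones((x.shape[0], 1)), x_scaled])

theta = np.zeros(X.shape[1])
alpha = 0.1
for _ in range(1000):
    grad = X.T @ (X @ theta - y) / len(y)   # gradient of (1/2m) * sum of squared errors
    theta -= alpha * grad

print(theta)   # components have comparable magnitudes once the features are scaled
```

With the raw features, no single alpha works for all components of theta; after scaling, a single moderate alpha converges.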

Ordinary least squares with glmnet and lm

£可爱£侵袭症+ submitted on 2019-12-11 02:38:10
Question: This question was asked in stackoverflow.com/q/38378118 but there was no satisfactory answer. LASSO with λ = 0 is equivalent to ordinary least squares, but this does not seem to be the case for glmnet() and lm() in R. Why? library(glmnet) options(scipen = 999) X = model.matrix(mpg ~ 0 + ., data = mtcars) y = as.matrix(mtcars["mpg"]) coef(glmnet(X, y, lambda = 0)) lm(y ~ X) Their regression coefficients agree to at most 2 significant figures, perhaps because of slightly different termination
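The usual explanation (the excerpt is cut off before the answer, so this is hedged) is that glmnet's coordinate-descent solver stops at its default convergence threshold while lm solves the least-squares problem exactly, so tightening glmnet's tolerance brings the coefficients together. A rough Python analogue of that effect compares closed-form OLS with coordinate descent at loose versus tight tolerances; the data and tolerance values are made up for illustration.

```python
# Sketch: closed-form OLS vs coordinate-descent with a near-zero penalty.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X[:, 1] = X[:, 0] + 0.01 * rng.normal(size=100)   # near-collinear columns slow coordinate descent
y = X @ np.array([1.5, -2.0, 0.0, 3.0, 0.5]) + 0.1 * rng.normal(size=100)

ols = LinearRegression().fit(X, y)
lasso_loose = Lasso(alpha=1e-8, tol=1e-2, max_iter=100).fit(X, y)       # coarse tolerance
lasso_tight = Lasso(alpha=1e-8, tol=1e-10, max_iter=100000).fit(X, y)   # tight tolerance

print(ols.coef_)
print(lasso_loose.coef_)   # typically agrees with OLS to fewer digits
print(lasso_tight.coef_)   # typically agrees with OLS much more closely
```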

MATLAB: Piecewise function in curve fitting toolbox using fittype

早过忘川 submitted on 2019-12-10 23:42:38
Question: Ignore the red fitted curve for now; I'd like to fit a curve to the blue data points. I know the first part (up to y ~ 200 in this case) is linear, then comes a different curve (a combination of two logarithmic curves, though it could also be approximated differently), and then it saturates at about 250 or 255. I tried it like this: func = fittype('(x>=0 & x<=xTrans1).*(A*x+B)+(x>=xTrans1 & x<=xTrans2).*(C*x+D)+(x>=xTrans2).*E*255'); freg = fit(foundData(:,1), foundData(:,2), func); plot(freg, foundData(:,1),
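A rough Python analogue of the same idea as the fittype string above: write the piecewise model as an ordinary function and fit its coefficients by least squares. For simplicity the sketch treats the transition points as known; the synthetic data, starting values, and the exact shape of each piece are assumptions.

```python
# Sketch: piecewise least-squares fit with fixed transition points.
import numpy as np
from scipy.optimize import curve_fit

T1, T2 = 40.0, 70.0   # transition points (xTrans1, xTrans2), treated as known here

def piecewise(x, A, B, C, D, E):
    # Mirrors the structure of the fittype string: two linear pieces, then E*255.
    return ((x <= T1) * (A * x + B)
            + ((x > T1) & (x <= T2)) * (C * x + D)
            + (x > T2) * (E * 255.0))

# Synthetic data shaped roughly like the description: rise, softer rise, saturation.
x = np.linspace(1.0, 100.0, 200)
y = np.minimum(np.minimum(5.0 * x, 150.0 + 1.5 * x), 255.0)

popt, _ = curve_fit(piecewise, x, y, p0=[5.0, 0.0, 1.5, 150.0, 1.0])
print(popt)   # fitted A, B, C, D, E
```

Estimating the transition points jointly (as the fittype string attempts) is possible but makes the objective non-smooth in those parameters, which is often why such fits behave badly.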

R script - NLS not working

不羁岁月 submitted on 2019-12-10 19:36:56
Question: I have 5 (x, y) data points and I'm trying to find a best-fit solution consisting of two lines that intersect at a point (x0, y0) and follow these equations: y1 = m1*(x1 - x0) + y0 and y2 = m2*(x2 - x0) + y0. Specifically, I require that the intersection occur between x = 2 and x = 3. Have a look at the code: #Initialize x1, y1, x2, y2 x1 <- c(1,2) y1 <- c(10,10) x2 <- c(3,4,5) y2 <- c(20,30,40) g <- c(TRUE, TRUE, FALSE, FALSE, FALSE) q <- nls(c(y1, y2) ~ ifelse(g == TRUE, m1 * (x1 - x0)
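A Python sketch of the same segmented fit, using the data from the question (the SciPy call is the swapped-in tool, not the R nls code): both lines share the breakpoint (x0, y0), and bounds keep x0 between 2 and 3 as required.

```python
# Sketch: two lines with a common breakpoint, with the breakpoint constrained to [2, 3].
import numpy as np
from scipy.optimize import least_squares

x1, y1 = np.array([1.0, 2.0]), np.array([10.0, 10.0])
x2, y2 = np.array([3.0, 4.0, 5.0]), np.array([20.0, 30.0, 40.0])

def residuals(p):
    m1, m2, x0, y0 = p
    r1 = m1 * (x1 - x0) + y0 - y1
    r2 = m2 * (x2 - x0) + y0 - y2
    return np.concatenate([r1, r2])

sol = least_squares(
    residuals,
    x0=[0.0, 10.0, 2.5, 15.0],                                        # initial guess
    bounds=([-np.inf, -np.inf, 2.0, -np.inf], [np.inf, np.inf, 3.0, np.inf]),
)
print(sol.x)   # m1, m2, x0, y0
```

With this data the fit is exact: m1 = 0, m2 = 10, and the intersection lands at (2, 10), on the boundary of the allowed interval.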

Get Durbin-Watson and Jarque-Bera statistics from OLS Summary in Python

醉酒当歌 submitted on 2019-12-10 09:55:58
Question: I am running the OLS summary for a column of values. Part of the OLS output is the Durbin-Watson and Jarque-Bera (JB) statistics, and I want to pull those values out directly, since they have already been calculated, rather than recomputing them in extra steps as I currently do with durbinwatson. Here is the code I have: import pandas as pd import statsmodels.api as sm csv = 'mydata.csv' df = pd.read_csv(csv) var = df[variable] year = df['Year'] model = sm.OLS(var,year) results = model.fit() summary =
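As far as I can tell, the fitted results object does not expose these values as attributes, but the statistics the summary prints are available from statsmodels.stats.stattools and can be computed in one line each from the stored residuals. A sketch following the question's setup (the CSV path is from the question; the column name held in `variable` is a placeholder):

```python
# Sketch: compute Durbin-Watson and Jarque-Bera directly from the OLS residuals.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson, jarque_bera

variable = 'value'                 # placeholder for the column name used in the question
df = pd.read_csv('mydata.csv')     # file name taken from the question
var = df[variable]
year = df['Year']
results = sm.OLS(var, year).fit()  # same model as in the question (no constant)

dw = durbin_watson(results.resid)                               # single float
jb_stat, jb_pvalue, skew, kurtosis = jarque_bera(results.resid)
print(dw, jb_stat, jb_pvalue)
```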

Linear regression using lm() - surprised by the result

为君一笑 submitted on 2019-12-09 03:11:14
Question: I used linear regression on data I have, using the lm function. Everything works (no error message), but I'm somewhat surprised by the result: I am under the impression that R "misses" a group of points, i.e. the intercept and slope are not the best fit. For instance, I am referring to the group of points at coordinates x = 15-25, y = 0-20. My questions: Is there a function to compare a fit with "expected" coefficients against the "lm-calculated" coefficients? Have I made a silly mistake when coding, leading
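On the first question: one simple way to compare an "expected" line against the least-squares one is to compute the residual sum of squares under both sets of coefficients; by construction, the least-squares fit can never have a larger RSS. A small NumPy sketch with made-up data and made-up "expected" coefficients:

```python
# Sketch: compare RSS of the least-squares fit against a hand-chosen "expected" line.
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 30, 60)
y = 1.0 + 0.8 * x + rng.normal(scale=3.0, size=x.size)

# Ordinary least-squares fit (equivalent in spirit to R's lm(y ~ x))
slope, intercept = np.polyfit(x, y, 1)

def rss(intercept, slope):
    return np.sum((y - (intercept + slope * x)) ** 2)

print(rss(intercept, slope))   # RSS of the least-squares fit
print(rss(0.0, 1.0))           # RSS of the "expected" coefficients (placeholder values)
# The least-squares line minimizes RSS, so the first number can never be larger.
```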