least-squares

Linear Least Squares: scipy.optimize.curve_fit() throws "Result from function call is not a proper array of floats."

旧城冷巷雨未停 submitted on 2019-12-25 04:07:08
Question: I am trying to implement Python code that computes the least-squares error for a matrix equation. First, I have some 2-dimensional data XDATA (an array of arrays of floats), shape (100, 9). Second, I have some 2-dimensional YDATA, shape (100, 3). My function creates an array of 3 entries out of an array of 9 entries and 3 unknown parameters, so I am trying to estimate these parameters via linear least squares:

#linear regression
#Measured data
xdata = np.array(data[0:9])
xdata = xdata.astype(np
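The question is truncated, but that particular error usually means the model function handed to curve_fit did not return a 1-D float array matching ydata. A minimal sketch (the model and data below are made up, since the real ones are cut off) of flattening a (100, 3) output with .ravel() so curve_fit sees matching 1-D arrays:

import numpy as np
from scipy.optimize import curve_fit

# Hypothetical model: each 9-entry row is reduced to 3 entries using
# three scalar parameters a, b, c.  curve_fit expects a 1-D float
# array back, so the (100, 3) output is flattened with .ravel().
def model(x, a, b, c):
    out = np.column_stack([a * x[:, 0:3].sum(axis=1),
                           b * x[:, 3:6].sum(axis=1),
                           c * x[:, 6:9].sum(axis=1)])
    return out.ravel()

xdata = np.random.rand(100, 9)
ydata = np.random.rand(100, 3)
popt, pcov = curve_fit(model, xdata, ydata.ravel(), p0=[1.0, 1.0, 1.0])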

Faster way of finding the least-squares solution for a large matrix

橙三吉。 submitted on 2019-12-24 17:43:29
Question: I want to find the least-squares solution of a matrix, and I am using numpy's linalg.lstsq function:

weights = np.linalg.lstsq(semivariance, prediction, rcond=None)

The dimensions of my variables are: semivariance is a float array of size 5030 x 5030; prediction is a 1-D array of length 5030. The problem I have is that it takes approximately 80 s to return the value of weights, and I have to repeat the calculation of weights about 10000 times, so the computational time is just elevated. Is there a faster
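If the square semivariance matrix is nonsingular and stays the same across the repeated solves (only the right-hand side changing), factorizing it once and reusing the factorization is the standard speed-up: each subsequent solve is O(n^2) instead of the O(n^3) work that np.linalg.lstsq redoes on every call. A sketch with placeholder data:

import numpy as np
from scipy.linalg import lu_factor, lu_solve

n = 5030
semivariance = np.random.rand(n, n)   # placeholder for the real matrix
rhs_list = np.random.rand(10, n)      # placeholders for the many prediction vectors

# Factor once (expensive), then reuse for every right-hand side (cheap).
lu, piv = lu_factor(semivariance)
all_weights = [lu_solve((lu, piv), rhs) for rhs in rhs_list]

If the matrix is symmetric positive definite, scipy.linalg.cho_factor / cho_solve is roughly twice as fast again.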

Using optim to find a minimum while forcing the parameters to sum to 1

江枫思渺然 submitted on 2019-12-24 07:17:32
Question: I am trying to use R and optim to calculate mixing proportions. So, for example, let's say I have a rock composition of 60% SiO2 and 40% CaO, and I want to know how much of two different phases I have to mix to produce my rock. Let's say Phase 1 is 35% SiO2 and 65% CaO, and Phase 2 is 80% SiO2 and 20% CaO. ***Edits: I have updated the code to include a 3rd phase and my attempt at using the compositions package. I have also attempted to set bounds on the optim search range.

#Telling R the composition of both
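The question uses R's optim, but the underlying set-up is ordinary constrained least squares; here is a Python sketch of the same idea with the compositions from the example (two phases shown; a third phase adds one more column and one more bound):

import numpy as np
from scipy.optimize import minimize

# Columns are phases, rows are oxides (SiO2, CaO); values from the example.
phases = np.array([[0.35, 0.80],
                   [0.65, 0.20]])
rock = np.array([0.60, 0.40])   # target rock composition

# Least-squares misfit between the mixed composition and the rock.
def misfit(p):
    return np.sum((phases @ p - rock) ** 2)

res = minimize(misfit, x0=[0.5, 0.5], method="SLSQP",
               bounds=[(0, 1), (0, 1)],
               constraints={"type": "eq", "fun": lambda p: p.sum() - 1})
print(res.x)  # mixing proportions that sum to 1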

Least-squares optimization in R

房东的猫 submitted on 2019-12-24 04:08:24
Question: I am wondering how one could solve the following problem in R. We have a vector v (of n elements) and a matrix B (of dimension m x n). E.g.:

> v
[1] 2 4 3 1 5 7
> B
     [,1] [,2] [,3] [,4] [,5] [,6]
[1,]    2    1    5    5    3    4
[2,]    4    5    6    3    2    5
[3,]    3    7    5    1    7    6

I am looking for the m-long vector u such that sum( ( v - ( u %*% B) )^2 ) is minimized (i.e. it minimizes the sum of squares).

Answer 1: You are describing linear regression, which can be done with the lm function:

coefficients(lm(v ~ t(B) + 0))
# t(B)1 t(B)2 t
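Equivalently, u is the ordinary least-squares solution of the overdetermined system t(B) u = v, so any least-squares solver works; a numpy sketch with the numbers from the question:

import numpy as np

v = np.array([2, 4, 3, 1, 5, 7], dtype=float)
B = np.array([[2, 1, 5, 5, 3, 4],
              [4, 5, 6, 3, 2, 5],
              [3, 7, 5, 1, 7, 6]], dtype=float)

# u minimizing sum((v - u @ B)**2) solves B.T @ u ≈ v in the
# least-squares sense.
u, *_ = np.linalg.lstsq(B.T, v, rcond=None)
print(u)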

High-performance calculation of the least-squares difference over all possible combinations (n lists)

廉价感情. submitted on 2019-12-24 02:07:21
Question: I'm looking for a very efficient way to calculate all possible combinations from n lists and then keep the combination with the smallest least-squares difference. I already have code that does this, but when it gets to several million combinations things get slow. candidates_len contains a list of lists of lengths, e.g.:

[[500, 490, 510, 600], [300, 490, 520], [305, 497, 515]]

candidates_name contains a list of lists of names, e.g.:

[['a', 'b', 'c', 'd'], ['mi', 'mu', 'ma'], ['pi', 'pu', 'pa']]

Both
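A vectorized sketch of the brute-force search, assuming the objective is the squared spread of each combination around its own mean (the question does not say exactly which least-squares difference is meant):

import itertools
import numpy as np

candidates_len = [[500, 490, 510, 600], [300, 490, 520], [305, 497, 515]]
candidates_name = [['a', 'b', 'c', 'd'], ['mi', 'mu', 'ma'], ['pi', 'pu', 'pa']]

# Build the full Cartesian product as one array and score every
# combination at once instead of looping in pure Python.
combos = np.array(list(itertools.product(*candidates_len)), dtype=float)
names = list(itertools.product(*candidates_name))

# Assumed objective: squared deviation of each combination from its mean.
scores = ((combos - combos.mean(axis=1, keepdims=True)) ** 2).sum(axis=1)
best = scores.argmin()
print(names[best], combos[best], scores[best])

For several million combinations, iterating itertools.product in chunks keeps memory bounded while preserving the vectorized scoring.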

Linear regression line in MATLAB scatter plot

左心房为你撑大大i submitted on 2019-12-23 09:12:05
Question: I am trying to get the residuals for a scatter plot of two variables. I can get the least-squares linear regression line using MATLAB's lsline function; however, I also want the residuals. How can I get these in MATLAB? For that I need to know the parameters a and b of the linear regression line ax + b.

Answer 1: Use the function polyfit to obtain the regression parameters. You can then evaluate the fitted values and calculate your residuals accordingly. Basically polyfit performs least
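In MATLAB the answer amounts to p = polyfit(x, y, 1) followed by residuals = y - polyval(p, x). For completeness, the same computation in numpy (with made-up data), since the rest of this page is Python-flavoured:

import numpy as np

x = np.random.rand(50)
y = 2.0 * x + 1.0 + 0.1 * np.random.randn(50)

# Degree-1 least-squares fit: slope a and intercept b of a*x + b.
a, b = np.polyfit(x, y, 1)
residuals = y - (a * x + b)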

LASSO with $\lambda = 0$ and OLS produce different results in R glmnet

送分小仙女□ submitted on 2019-12-23 07:02:14
Question: I expect LASSO with no penalization ($\lambda = 0$) to yield the same (or very similar) coefficient estimates as an OLS fit. However, I get different coefficient estimates in R when I put the same data (x, y) into glmnet(x, y, alpha=1, lambda=0) for a LASSO fit with no penalization and lm(y ~ x) for an OLS fit. Why is that?

Answer 1: You're using the function wrong. The x should be the model matrix, not the raw predictor value. When you do that, you get exactly the same results:

x <- rnorm(500)
y <- rnorm(500
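The lasso has no closed form, so a direct check is awkward, but the "zero penalty reproduces OLS" intuition is easy to verify numerically with the analogous ridge closed form, (X^T X + \lambda I)^{-1} X^T y, which at \lambda = 0 is exactly ordinary least squares. A numpy sketch with simulated data:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))   # model matrix: one column per predictor
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(size=500)

lam = 0.0
ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
ols, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(ridge, ols))  # True: zero penalty reproduces OLS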

Convergence of a very large non-linear least squares optimization

谁说我不能喝 submitted on 2019-12-22 10:52:45
Question: I'm trying to solve the following problem: I have a lot (~80000) of surface patches of an organ that is growing. I measure each patch's area over time (18 time points) and want to fit a growth curve to it (a bi-logistic model, i.e. just the sum of two logistic functions, because there are two 'growth spurts' happening in the observed period). I have box constraints to ensure that the exponential terms don't explode, and a linear constraint that one growth spurt has to happen after the other. Also, in
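A sketch of the bi-logistic set-up for a single patch with scipy.optimize.least_squares (all parameter names, bounds, and data below are illustrative). The "one spurt after the other" constraint is folded into the parameterization by fitting the non-negative gap dt between the two midpoints instead of the second midpoint itself:

import numpy as np
from scipy.optimize import least_squares

t = np.linspace(0, 17, 18)                                       # 18 time points
area = 1 / (1 + np.exp(-(t - 4))) + 2 / (1 + np.exp(-(t - 12)))  # toy data

# Parameters: amplitudes A1, A2, rates k1, k2, first midpoint t1,
# and the non-negative gap dt so that t2 = t1 + dt is always later.
def residuals(p):
    A1, A2, k1, k2, t1, dt = p
    t2 = t1 + dt
    model = (A1 / (1 + np.exp(-k1 * (t - t1)))
             + A2 / (1 + np.exp(-k2 * (t - t2))))
    return model - area

fit = least_squares(residuals, x0=[1, 1, 1, 1, 3, 5],
                    bounds=([0, 0, 0.01, 0.01, 0, 0],
                            [10, 10, 5, 5, 17, 17]))
print(fit.x)

Since the ~80000 patch fits are independent, they also parallelize trivially across processes.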

How to set the initial value for curve_fit so it finds the global optimum, not just a local optimum?

跟風遠走 submitted on 2019-12-21 06:57:52
Question: I am trying to fit a power-law function in order to find the best-fit parameters. However, I find that if the initial guess of the parameters is different, the "best fit" output is different: unless I start from the right initial guess, I get a local optimum instead of the global one. Is there any way to find the appropriate initial guess? My code is listed below. Please feel free to make any input. Thanks!

import numpy as np
import pandas as pd
from scipy.optimize import curve_fit
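One common remedy (sketched below with a made-up power law, since the posted code is truncated) is a multi-start: run curve_fit from a grid of initial guesses and keep the fit with the smallest residual. A global optimizer such as scipy.optimize.differential_evolution can also supply p0.

import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b):
    return a * np.power(x, b)

x = np.linspace(1, 10, 50)
y = 2.5 * x ** 1.3 + 0.1 * np.random.randn(50)

# Multi-start: try several initial guesses and keep the best fit.
best = None
for p0 in [(0.1, 0.1), (1, 1), (10, 2), (100, 3)]:
    try:
        popt, _ = curve_fit(power_law, x, y, p0=p0, maxfev=5000)
    except RuntimeError:   # this start failed to converge; skip it
        continue
    sse = np.sum((power_law(x, *popt) - y) ** 2)
    if best is None or sse < best[0]:
        best = (sse, popt)
print(best[1])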

Splines inside nonlinear least squares in R

自闭症网瘾萝莉.ら submitted on 2019-12-21 05:12:07
Question: Consider a nonlinear least-squares model in R, for example of the following form:

y ~ theta / ( 1 + exp( -( alpha + beta * x) ) )

(My real problem has several variables and the outer function is not logistic but a bit more involved; this one is simpler, but I think if I can do this, my case should follow almost immediately.) I'd like to replace the term "alpha + beta * x" with (say) a natural cubic spline. Here's some code to create some example data with a nonlinear function inside the
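A Python sketch of the same idea (the question is in R, and a plain cubic B-spline basis stands in for the natural cubic spline for brevity): build the spline basis once, then fit theta and the spline coefficients together with scipy.optimize.least_squares. All data and knot choices below are made up:

import numpy as np
from scipy.interpolate import splev
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(0, 1, 200))
y = 3 / (1 + np.exp(-(4 * x - 2))) + 0.05 * rng.normal(size=200)  # toy data

# Cubic B-spline basis on [0, 1]: evaluate each basis function by
# giving splev a unit coefficient vector.
k = 3
knots = np.concatenate([np.zeros(4), [1/3, 2/3], np.ones(4)])
n_coef = len(knots) - k - 1
basis = np.column_stack([splev(x, (knots, np.eye(n_coef)[j], k))
                         for j in range(n_coef)])

# Model: y ~ theta / (1 + exp(-s(x))), where s(x) is the spline that
# replaces the linear term alpha + beta * x.
def residuals(p):
    theta, coef = p[0], p[1:]
    return theta / (1 + np.exp(-(basis @ coef))) - y

fit = least_squares(residuals, x0=np.ones(1 + n_coef))
print(fit.x)

In R itself, the same trick works by generating the basis with splines::ns(x, df = 4) and giving nls one coefficient per basis column.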