least-squares

Two-stage least squares in R

家住魔仙堡 submitted on 2019-12-20 14:19:14
Question: I want to run a two-stage probit least-squares regression in R. Does anyone know how to do this? Is there a package for it? I know it's possible in Stata, so I imagine it's possible in R.

Answer 1: You might want to be more specific when you say 'two-stage probit least squares'. Since you refer to a Stata program that implements this, I am guessing you are talking about the CDSIMEQ package, which implements the Amemiya (1978) procedure for the Heckit model (a.k.a. …
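
A minimal Python sketch of the general two-stage idea behind such procedures — first stage: OLS of the endogenous regressor on the instrument; second stage: probit on the first-stage fitted values — assuming statsmodels is available. The data and variable names are made up for illustration; this shows the mechanics only, not the full Amemiya (1978) estimator:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
z = rng.normal(size=n)                            # instrument
u = rng.normal(size=n)                            # shared error -> endogeneity
x = 0.8*z + u + rng.normal(size=n)                # endogenous regressor
y = (x - u + rng.normal(size=n) > 0).astype(int)  # binary outcome

# Stage 1: OLS of the endogenous regressor on the instrument.
stage1 = sm.OLS(x, sm.add_constant(z)).fit()
x_hat = stage1.fittedvalues

# Stage 2: probit of the binary outcome on the first-stage fitted values.
stage2 = sm.Probit(y, sm.add_constant(x_hat)).fit(disp=0)
print(stage2.params)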

How to use least squares with a weight matrix?

喜夏-厌秋 submitted on 2019-12-20 10:48:16
Question: I know how to solve A.X = B by least squares using Python:

import numpy

A = [[1,1,1,1],[1,1,1,1],[1,1,1,1],[1,1,1,1],[1,1,0,0]]
B = [1,1,1,1,1]
X = numpy.linalg.lstsq(A, B)
print(X[0])
# [ 5.00000000e-01  5.00000000e-01 -1.66533454e-16 -1.11022302e-16]

But what about solving the same equation with a weight matrix that is not the identity?

A.X = B (W)

A = [[1,1,1,1],[1,1,1,1],[1,1,1,1],[1,1,1,1],[1,1,0,0]]
B = [1,1,1,1,1]
W = [1,2,3,4,5]

Answer 1: I don't know how you have defined your weights, but you could try …
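
One standard approach, assuming each entry of W weights one equation (row): scale each row of A and the matching entry of B by the square root of its weight, then solve the ordinary problem. A minimal sketch:

import numpy as np

A = np.array([[1,1,1,1],[1,1,1,1],[1,1,1,1],[1,1,1,1],[1,1,0,0]], dtype=float)
B = np.array([1,1,1,1,1], dtype=float)
W = np.array([1,2,3,4,5], dtype=float)  # one weight per equation (row)

# Scaling row i of A and entry i of B by sqrt(W[i]) turns the weighted
# problem (minimize sum_i W[i]*r_i**2) into an ordinary least-squares one.
sw = np.sqrt(W)
X, *_ = np.linalg.lstsq(A * sw[:, np.newaxis], B * sw, rcond=None)
print(X)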

Function for weighted least squares estimates

◇◆丶佛笑我妖孽 submitted on 2019-12-20 10:46:32
Question: Does R have a function for weighted least squares? Specifically, I am looking for something that computes the intercept and slope.

Data sets:

Dataset 1: 1 3 5 7 9 11 14 17 19 25 29
Dataset 2: 17 31 19 27 31 62 58 35 29 21 18
Dataset 3: 102153 104123 96564 125565 132255 115454 114555 132255 129564 126455 124578

The dependent variable is dataset 3; datasets 1 and 2 are the independent variables.

Answer 1: Yes, of course: there is a weights= option to lm(), the basic linear model fitting function. Quick example:

R> df <- data.frame(x…
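
For comparison, a Python sketch of the same fit using statsmodels' WLS (the answer above uses R's lm(..., weights=)). The uniform weights here are placeholders, since the excerpt is cut off before any actual weights are given:

import numpy as np
import statsmodels.api as sm

x1 = np.array([1, 3, 5, 7, 9, 11, 14, 17, 19, 25, 29], dtype=float)
x2 = np.array([17, 31, 19, 27, 31, 62, 58, 35, 29, 21, 18], dtype=float)
y = np.array([102153, 104123, 96564, 125565, 132255, 115454,
              114555, 132255, 129564, 126455, 124578], dtype=float)

X = sm.add_constant(np.column_stack([x1, x2]))  # intercept + two regressors
w = np.ones_like(y)                             # placeholder weights
fit = sm.WLS(y, X, weights=w).fit()
print(fit.params)                               # intercept and the two slopes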

How to solve a least-squares problem (underdetermined system) quickly?

前提是你 submitted on 2019-12-20 10:36:47
Question: I have a program in R that computes a large number of least-squares solutions (>10,000; typically 100,000+), and after profiling, these solves are the current bottleneck of the program. I have a matrix A whose column vectors correspond to spanning vectors, and a solution b. I am attempting to solve for the least-squares solution x of Ax=b. The matrices are typically 4×j in size; many of them are not square (j < 4), so general solutions to under-determined systems are what I am looking …
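
When the bottleneck is per-call overhead on many tiny systems, one option is to batch them: recent NumPy broadcasts pinv over a stack of matrices, giving the minimum-norm least-squares solution x = A⁺b for all systems in one vectorized call. A Python sketch (the question is in R, so this only illustrates the linear algebra; the shapes and counts are made up):

import numpy as np

rng = np.random.default_rng(0)
num_systems = 100_000
A_stack = rng.normal(size=(num_systems, 4, 3))  # 4 x j systems with j = 3
b_stack = rng.normal(size=(num_systems, 4, 1))

# pinv broadcasts over the leading axis, so all tiny systems are solved
# without a Python-level loop; x = pinv(A) @ b is the minimum-norm solution.
x_stack = np.linalg.pinv(A_stack) @ b_stack
print(x_stack.shape)  # (100000, 3, 1)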

What is the difference between numpy.linalg.lstsq and scipy.linalg.lstsq?

。_饼干妹妹 submitted on 2019-12-20 09:29:40
Question: lstsq tries to solve Ax=b by minimizing ‖b − Ax‖. Both scipy and numpy provide a linalg.lstsq function with a very similar interface. The documentation does not mention which kind of algorithm is used, neither for scipy.linalg.lstsq nor for numpy.linalg.lstsq, but both seem to do pretty much the same thing. The implementations seem to be different, yet both seem to use LAPACK and both algorithms seem to use an SVD. Where is the difference? Which one should I use?
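
A quick check that the two agree on a well-behaved problem (a sketch; which LAPACK driver each one calls, e.g. gelsd, can vary between versions):

import numpy as np
import scipy.linalg

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
b = rng.normal(size=50)

x_np, *_ = np.linalg.lstsq(A, b, rcond=None)
x_sp, *_ = scipy.linalg.lstsq(A, b)  # scipy also exposes a lapack_driver argument
print(np.allclose(x_np, x_sp))       # True on this well-conditioned problem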

python scipy leastsq fit with complex numbers

前提是你 submitted on 2019-12-20 05:36:31
Question: I have a data set of complex numbers, and I'd like to be able to find parameters that best fit the data. Can you fit complex-valued data using leastsq as implemented by scipy in Python? For example, my code is something like this:

import cmath
import numpy as np
from scipy.optimize import leastsq

def residuals(p, y, x):
    L, Rs, R1, C = p
    denominator = 1 + (x**2)*(C**2)*(R1**2)
    sim = complex(Rs + R1/denominator, x*L - (R1**2)*x*C/denominator)
    return (y - sim)

z = <read in data, store as complex numbers>
x0 = np.array([1, 2, 3, 4])
res = …
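
The usual workaround (sketched here, since the excerpt is cut off before any answer): leastsq minimizes a real residual vector, so return the real and imaginary parts of the complex residuals stacked together. The sample points, true parameters, and starting values below are made up:

import numpy as np
from scipy.optimize import leastsq

def model(p, x):
    L, Rs, R1, C = p
    denom = 1 + (x**2)*(C**2)*(R1**2)
    return (Rs + R1/denom) + 1j*(x*L - (R1**2)*x*C/denom)

def residuals(p, y, x):
    r = y - model(p, x)
    # leastsq needs a real 1-D residual vector: stack Re and Im parts.
    return np.concatenate([r.real, r.imag])

x = np.linspace(1.0, 10.0, 50)      # made-up sample points
y = model([1.0, 2.0, 3.0, 0.5], x)  # synthetic, noise-free complex data
p_fit, flag = leastsq(residuals, x0=[0.5, 1.5, 2.0, 1.0], args=(y, x))
print(p_fit)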

Is mldivide always the same as OLS in MATLAB?

孤人 submitted on 2019-12-19 09:46:36
Question: I am doing a comparison of some alternative linear regression techniques. Clearly these will be benchmarked relative to OLS (ordinary least squares). But I just want a pure OLS method, with no preconditioning of the data to uncover ill-conditioning, as you find when you use regress(). I had hoped to simply use the classic (X'X)^-1 X'Y expression. However, this would necessitate using the inv() function, and the MATLAB guide page for inv() recommends that you use mldivide when doing …
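
A numerical illustration of the trade-off (the question is about MATLAB's mldivide; this Python sketch only shows why solvers prefer an orthogonal-factorization solve over forming inv(X'X)):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
Y = X @ np.array([1.0, -2.0, 0.5]) + 0.1*rng.normal(size=100)

# Textbook OLS via the normal equations: beta = (X'X)^-1 X'Y.
beta_inv = np.linalg.inv(X.T @ X) @ (X.T @ Y)

# What backslash-style solvers effectively do: a factorization-based
# least-squares solve that avoids forming X'X (which squares the
# condition number of the problem).
beta_ls, *_ = np.linalg.lstsq(X, Y, rcond=None)

print(np.allclose(beta_inv, beta_ls))  # True here; may fail when X is ill-conditioned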

Weighted least squares - fit a plane to a 3D point set

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-19 03:37:40
Question: I am fitting a plane to a 3D point set with the least-squares method. I already have an algorithm to do that, but I want to modify it to use weighted least squares, meaning I have a weight for each point (the bigger the weight, the closer the plane should be to the point). The current algorithm (without weights) looks like this:

Compute the sums:

for (Point3D p3d : pointCloud) {
    pos = p3d.getPosition();
    fSumX += pos[0];
    fSumY += pos[1];
    fSumZ += pos[2];
    fSumXX += pos[0]*pos[0];
    fSumXY += pos[0]*pos[1];
…
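
For the weighted version, each point's contribution to every sum is simply multiplied by its weight. A minimal Python sketch of the same idea, using the z = a·x + b·y + c parameterization and weighted normal equations (function and variable names are illustrative; note this parameterization fails for near-vertical planes, unlike covariance/eigenvector methods):

import numpy as np

def fit_plane_weighted(points, weights):
    # Weighted least-squares fit of z = a*x + b*y + c: each point enters
    # the normal equations scaled by its weight, so heavier points pull
    # the plane closer to themselves.
    pts = np.asarray(points, dtype=float)
    w = np.asarray(weights, dtype=float)
    A = np.column_stack([pts[:, 0], pts[:, 1], np.ones(len(pts))])
    z = pts[:, 2]
    AtW = A.T * w  # multiplies column i of A.T (i.e. row i of A) by w[i]
    return np.linalg.solve(AtW @ A, AtW @ z)  # (a, b, c)

pts = [(0, 0, 1), (1, 0, 2), (0, 1, 3), (1, 1, 4.1)]
print(fit_plane_weighted(pts, [1, 1, 1, 5]))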