least-squares

Compute least squares using Java

Submitted by 女生的网名这么多〃 on 2019-12-05 12:09:43
I am trying to find Java code to compute the least-squares solution x of the equation Ax = b. Suppose that A = [1 0 0; 1 0 0] and b = [1; 2]; then x = A\b returns x = [1.5000 0 0]. I found the class LeastSquares, with constructor public LeastSquares(double[] a, double[] b, int degree), but there both a and b are one-dimensional arrays, whereas in the example above A is a matrix and b is a vector. In the class NonNegativeLeastSquares, public NonNegativeLeastSquares(int M, int N, double a[][], double b[]), A is a matrix and b is an array, but the description of the class says that it finds an approximate solution to the…
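As an illustration of what a matrix-capable least-squares routine should return for this exact example (sketched in Python/NumPy rather than Java, purely to show the expected numerics): np.linalg.lstsq computes the minimum-norm least-squares solution via the SVD, which matches the MATLAB result quoted above for this rank-deficient A.

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
b = np.array([1.0, 2.0])

# lstsq returns the minimum-norm least-squares solution (via SVD);
# for this rank-deficient A that is x = [1.5, 0, 0]
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
```

A Java library offering an SVD-based solver should give the same vector.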

How to compute minimal but fast linear regressions on each column of a response matrix?

Submitted by 回眸只為那壹抹淺笑 on 2019-12-05 11:56:58
I want to compute ordinary least squares (OLS) estimates in R without using lm, for several reasons. First, lm also computes lots of things I don't need (such as the fitted values), and data size is an issue in my case. Second, I want to be able to implement OLS myself in R before doing it in another language (e.g. in C with the GSL). As you may know, the model is Y = Xb + E, with E ~ N(0, sigma^2). As detailed below, b is a vector with two parameters, the mean (b0) and another coefficient (b1). At the end, for each linear regression I run, I want the estimate for b1…
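The closed-form slope estimate can be vectorised over every response column at once, which is the usual way to make this "minimal but fast". A sketch in Python/NumPy (for consistency with the rest of this page; the same arithmetic ports directly to R or C/GSL), with simulated data standing in for the real response matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 5                     # n observations, m response columns (invented sizes)
x = rng.normal(size=n)            # single predictor
Y = 2.0 + 3.0 * x[:, None] + rng.normal(size=(n, m))   # simulated responses

xc = x - x.mean()
# slope of every column at once: b1_j = sum(xc * yc_j) / sum(xc^2)
b1 = xc @ (Y - Y.mean(axis=0)) / (xc @ xc)
b0 = Y.mean(axis=0) - b1 * x.mean()
```

Nothing beyond the centred cross-products is computed, so no fitted values or diagnostics are materialised.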

Simultaneous data fitting in Python with leastsq

Submitted by 流过昼夜 on 2019-12-05 07:21:42
Question: I haven't programmed in a long time and was never good at it, but this is an important task I am struggling with. I am trying to fit two sets of data (x: time; y1 and y2: different columns of values, read from a text file). For each dataset (y1 and y2) I have a function that should fit it. Inside both functions I have several parameters to be fitted. For some time values the data for y is absent, so the task is to handle the case where y is missing and fit the…
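One common pattern for simultaneous fitting with leastsq is to concatenate the residuals of both models into a single vector and drop the entries where y is missing. A minimal sketch with two hypothetical model functions (stand-ins for the real ones) that share the parameter a:

```python
import numpy as np
from scipy.optimize import leastsq

# two hypothetical models sharing the parameter a
def f1(t, a, b1):
    return a * np.exp(-b1 * t)

def f2(t, a, b2):
    return a * t / (t + b2)

def residuals(p, t, y1, y2):
    a, b1, b2 = p
    r = np.concatenate([y1 - f1(t, a, b1), y2 - f2(t, a, b2)])
    return r[~np.isnan(r)]          # drop entries where y is missing

t = np.linspace(0.1, 5.0, 50)
y1 = f1(t, 2.0, 0.7)
y1[10:15] = np.nan                  # simulate missing measurements
y2 = f2(t, 2.0, 1.3)

popt, _ = leastsq(residuals, [1.5, 1.0, 1.0], args=(t, y1, y2))
```

Because both residual vectors are minimised together, the shared parameter is constrained by both datasets at once.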

Fitting a 2D sum of Gaussians with scipy.optimize.leastsq (Answer: use curve_fit!)

Submitted by 若如初见. on 2019-12-05 05:08:41
Question: I want to fit a 2D sum of Gaussians to this data. After failing to fit a sum to the data directly, I instead sampled each peak separately (image) and returned a fit by finding its moments (essentially using this code). Unfortunately, this results in an incorrect peak-position measurement, due to the overlapping signal of the neighbouring peaks. Below is a plot of the sum of the separate fits. Obviously their peaks all lean toward the centre. I need to account for this in order to return the…
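A hedged sketch of the curve_fit approach suggested in the title: define the sum of two 2D Gaussians (circular here, for brevity), flatten it, and fit all peak parameters simultaneously so that overlapping tails are accounted for rather than biasing each peak separately. The grid and "true" parameters below are invented for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(xy, a1, x1, y1, s1, a2, x2, y2, s2):
    """Sum of two circular 2D Gaussians, flattened for curve_fit."""
    x, y = xy
    g1 = a1 * np.exp(-((x - x1) ** 2 + (y - y1) ** 2) / (2 * s1 ** 2))
    g2 = a2 * np.exp(-((x - x2) ** 2 + (y - y2) ** 2) / (2 * s2 ** 2))
    return (g1 + g2).ravel()

x, y = np.meshgrid(np.linspace(0, 10, 50), np.linspace(0, 10, 50))
true = (2.0, 3.0, 5.0, 1.0, 1.5, 7.0, 5.0, 1.2)    # invented peak parameters
data = two_gaussians((x, y), *true)

p0 = (1.8, 2.5, 4.5, 1.2, 1.2, 7.5, 5.5, 1.0)      # rough per-peak guesses
popt, _ = curve_fit(two_gaussians, (x, y), data, p0=p0)
```

The per-peak moment estimates from the original approach make good entries for p0.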

get the R^2 value from scipy.linalg.lstsq

Submitted by 自闭症网瘾萝莉.ら on 2019-12-05 02:28:10
Question: I have fitted a 3D data set using the scipy.linalg.lstsq function. I was using: # best-fit quadratic curve A = np.c_[np.ones(data.shape[0]), data[:,:2], np.prod(data[:,:2], axis=1), data[:,:2]**2]; C,_,_,_ = scipy.linalg.lstsq(A, data[:,2]); # evaluating on grid Z = np.dot(np.c_[np.ones(XX.shape), XX, YY, XX*YY, XX**2, YY**2], C).reshape(X.shape). How can I get the R^2 value for the fitted surface from this? Is there any way to check the significance of the fitting result? Any…
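lstsq does not return R^2 directly, but it can be computed from the residual and total sums of squares of the fitted surface. A self-contained sketch, with simulated points standing in for the original data array but using the same design matrix as the question:

```python
import numpy as np
from scipy.linalg import lstsq

rng = np.random.default_rng(1)
data = rng.normal(size=(50, 3))                    # stand-in for the real data array
data[:, 2] = (1 + 2 * data[:, 0] + 3 * data[:, 1]
              + data[:, 0] * data[:, 1]
              + rng.normal(scale=0.1, size=50))    # quadratic-ish surface + noise

# same quadratic design matrix as in the question
A = np.c_[np.ones(data.shape[0]), data[:, :2],
          np.prod(data[:, :2], axis=1), data[:, :2] ** 2]
C, _, _, _ = lstsq(A, data[:, 2])

# R^2 = 1 - SS_res / SS_tot, evaluated at the fitted points
z_pred = A @ C
ss_res = np.sum((data[:, 2] - z_pred) ** 2)
ss_tot = np.sum((data[:, 2] - data[:, 2].mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot
```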

How to set a weighted least-squares in r for heteroscedastic data?

Submitted by 风流意气都作罢 on 2019-12-05 00:10:04
Question: I'm running a regression on census data where my dependent variable is life expectancy and I have eight independent variables. The data is aggregated by cities, so I have many thousands of observations. My model is somewhat heteroscedastic, though. I want to run a weighted least-squares regression in which each observation is weighted by the city's population. In this case, it would mean weighting the observations by the inverse of the square root of the population. It's unclear to me, however, what…
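Although the question is about R (where lm(..., weights = pop) does this directly), the weighting idea itself is language-agnostic. Below is a minimal sketch in Python with invented populations and coefficients, assuming the error variance is inversely proportional to population; WLS is then equivalent to multiplying each row by the square root of its weight and running OLS:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
pop = rng.integers(1_000, 1_000_000, size=n).astype(float)  # invented city populations
x = rng.normal(size=n)                                      # one predictor for brevity
# heteroscedastic noise: variance proportional to 1/population
y = 70.0 + 2.0 * x + rng.normal(size=n) / np.sqrt(pop / pop.mean())

# WLS via the square-root-weight transform:
# scaling rows by sqrt(w) and running OLS is equivalent to weighted LS
w = pop
X = np.c_[np.ones(n), x]
sw = np.sqrt(w)
beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
```

Note that the weights here are the populations themselves (inverse variances), not their square roots; the square root only appears in the row-scaling trick.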

Plane fit of 3D points with Singular Value Decomposition

Submitted by 岁酱吖の on 2019-12-04 21:41:46
Dear fellow Stack Overflow users, I am trying to calculate the normal vectors over an arbitrary (but smooth) surface defined by a set of 3D points. For this, I am using a plane-fitting algorithm that finds the local least-squares plane based on the 10 nearest neighbours of the point at which I'm calculating the normal vector. However, it does not always find what seems to be the best plane, so I'm wondering whether there is a flaw in my implementation or in my algorithm. I'm using Singular Value Decomposition, as recommended in several links on the subject of plane fitting. Here…
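For reference, a minimal sketch of the SVD-based plane fit described above: centre the neighbourhood on its centroid, take the SVD, and read the normal off the right singular vector with the smallest singular value. The test points (an exact plane) are invented for illustration:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane through 3D points: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    # the right singular vector with the smallest singular value is the
    # direction of least variance, i.e. the plane normal
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]

# invented check: 10 points lying exactly on the plane z = 1 + 2x + 3y
rng = np.random.default_rng(3)
xy = rng.normal(size=(10, 2))
pts = np.c_[xy, 1 + 2 * xy[:, 0] + 3 * xy[:, 1]]
centroid, n = fit_plane(pts)
```

A common bug in implementations of this scheme is forgetting to subtract the centroid before the SVD, which fits a plane through the origin instead.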

pseudo inverse of sparse matrix in python

Submitted by 时光毁灭记忆、已成空白 on 2019-12-04 21:10:41
Question: I am working with neuroimaging data, and because of its large size I would like to use sparse matrices in my code (scipy.sparse.lil_matrix or csr_matrix). In particular, I need to compute the pseudo-inverse of my matrix to solve a least-squares problem. I have found the method sparse.linalg.lsqr, but it is not very efficient. Is there a method to compute the Moore-Penrose pseudo-inverse (corresponding to pinv for dense matrices)? The size of my matrix A is about 600'000x2000…
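One way to avoid forming a dense pseudo-inverse is to solve the normal equations with a sparse direct solver, which is valid when A has full column rank (here 600'000x2000 gives a small 2000x2000 system). A sketch with a small random matrix standing in for the real one:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

rng = np.random.default_rng(4)
# small random matrix standing in for the real 600'000 x 2000 one;
# adding a rectangular identity guarantees full column rank
A = (sp.random(200, 20, density=0.05, random_state=4, format='csr')
     + sp.eye(200, 20, format='csr'))
b = rng.normal(size=200)

# solve the normal equations A^T A x = A^T b with a sparse solver,
# instead of materialising a dense Moore-Penrose pseudo-inverse
AtA = (A.T @ A).tocsc()
x = spsolve(AtA, A.T @ b)
```

The caveat is conditioning: the normal equations square the condition number of A, so for ill-conditioned problems an iterative solver such as lsqr/lsmr remains the safer choice.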

How to set the initial values for curve_fit to find the global optimum, not just a local one?

Submitted by 心不动则不痛 on 2019-12-04 19:33:40
I am trying to fit a power-law function in order to find the best-fit parameters. However, I find that if the initial guess of the parameters is different, the "best fit" output is different: unless I find the right initial guess, I get a local optimum instead of the global one. Is there any way to find the appropriate initial guess? My code is listed below. Please feel free to make any suggestions. Thanks! import numpy as np; import pandas as pd; from scipy.optimize import curve_fit; import matplotlib.pyplot as plt; %matplotlib inline; # power law function def func_powerlaw(x,a,b,c):…
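There is no general recipe for a single good initial guess, but a common workaround is a multi-start strategy: run curve_fit from several random starting points inside plausible parameter bounds and keep the fit with the smallest residual sum of squares. A sketch on synthetic noiseless data (the true parameters and bounds are invented for illustration):

```python
import numpy as np
from scipy.optimize import curve_fit

def func_powerlaw(x, a, b, c):
    return a * np.power(x, b) + c

rng = np.random.default_rng(5)
xdata = np.linspace(1.0, 10.0, 50)
ydata = func_powerlaw(xdata, 2.0, -1.5, 0.5)       # synthetic, noiseless target

best = None
for _ in range(30):
    # random starting point inside plausible (invented) parameter bounds
    p0 = rng.uniform([0.1, -3.0, -1.0], [5.0, 0.0, 2.0])
    try:
        popt, _ = curve_fit(func_powerlaw, xdata, ydata, p0=p0, maxfev=5000)
    except RuntimeError:
        continue                                   # this start did not converge
    rss = np.sum((ydata - func_powerlaw(xdata, *popt)) ** 2)
    if best is None or rss < best[0]:
        best = (rss, popt)
```

For harder landscapes, a global optimiser such as scipy.optimize.differential_evolution can be used to produce the starting point for a final curve_fit polish.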

How to use the leastsq function from scipy.optimize in Python to fit both a straight line and a quadratic to data sets x and y

Submitted by 馋奶兔 on 2019-12-04 18:32:03
Question: How would I fit a straight line and a quadratic to the data set below using the leastsq function from scipy.optimize? I know how to do it with polyfit, but I need to use the leastsq function. Here are the x and y data sets: x: 1.0, 2.5, 3.5, 4.0, 1.1, 1.8, 2.2, 3.7; y: 6.008, 15.722, 27.130, 33.772, 5.257, 9.549, 11.098, 28.828. Can someone help me out, please? Answer 1: The leastsq() method finds the set of parameters that minimize the error function (the difference between yExperimental and yFit). I used a tuple to…
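A sketch of the leastsq approach with the exact data from the question: write one residual function per model and let leastsq minimise each (the initial guesses are arbitrary, since both problems are linear in the parameters):

```python
import numpy as np
from scipy.optimize import leastsq

x = np.array([1.0, 2.5, 3.5, 4.0, 1.1, 1.8, 2.2, 3.7])
y = np.array([6.008, 15.722, 27.130, 33.772, 5.257, 9.549, 11.098, 28.828])

def res_line(p):            # residuals for y = a*x + b
    a, b = p
    return y - (a * x + b)

def res_quad(p):            # residuals for y = a*x**2 + b*x + c
    a, b, c = p
    return y - (a * x ** 2 + b * x + c)

line, _ = leastsq(res_line, [1.0, 0.0])
quad, _ = leastsq(res_quad, [1.0, 1.0, 0.0])
```

Both results agree with np.polyfit of degree 1 and 2 respectively, which is a convenient cross-check.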