least-squares

Solving an overdetermined constraint system

荒凉一梦 submitted on 2019-12-08 19:12:02
Question: I have n real-number variables (don't know their values, don't really care); let's call them X[n]. I also have m >> n relationships between them; let's call them R[m]. Each is of the form X[i] = alpha*X[j], where alpha is a nonzero positive real number and i and j are distinct, but the (i, j) pair is not necessarily unique (i.e. there can be two relationships between the same variables with different alpha factors). What I'm trying to do is find an assignment of the X values that solves the overdetermined system in some …
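Because every relationship is multiplicative with a positive alpha, one standard approach (a suggestion, not part of the excerpt above) is to take logarithms so each constraint becomes linear, then solve the resulting system in the least-squares sense. A minimal Python sketch, where `relations` is a hypothetical list of (i, j, alpha) triples:

```python
import numpy as np

def solve_log_least_squares(n, relations):
    """Each relation (i, j, alpha) encodes X[i] = alpha * X[j].
    Taking logs gives log X[i] - log X[j] = log(alpha): a linear system."""
    m = len(relations)
    A = np.zeros((m + 1, n))
    b = np.zeros(m + 1)
    for row, (i, j, alpha) in enumerate(relations):
        A[row, i], A[row, j] = 1.0, -1.0
        b[row] = np.log(alpha)
    A[m, 0] = 1.0                 # pin log X[0] = 0: the relations only fix ratios
    y, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.exp(y)              # back to the multiplicative scale

X = solve_log_least_squares(3, [(0, 1, 2.0), (1, 2, 3.0), (0, 2, 5.0)])
```

Working in log space also guarantees the recovered X stay positive, which the multiplicative form implicitly assumes.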

Eigen's LeastSquaresConjugateGradient solver: using Incomplete Cholesky preconditioner and specifying coefficient starting values

↘锁芯ラ submitted on 2019-12-08 07:26:23
Question: To solve a rectangular sparse linear system of equations I would like to use Eigen's LeastSquaresConjugateGradient. Aside from the default Jacobi and Identity preconditioners, I was wondering whether it is also possible to use Incomplete Cholesky as a preconditioner within LeastSquaresConjugateGradient. The Rcpp code I have, which uses the default Jacobi (LeastSquareDiagonalPreconditioner) preconditioner, is:

```r
library(inline)
library(RcppEigen)
solve_sparse_lsconjgrad <- cxxfunction(…
```
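The excerpt stops before the C++ body, so no Eigen answer is shown here. As a loose analogue (not Eigen's API), the same idea — conjugate gradient on the normal equations with an incomplete factorization as preconditioner — can be sketched in Python with SciPy, using incomplete LU as a stand-in for Incomplete Cholesky; all names and sizes below are illustrative:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg, spilu

rng = np.random.default_rng(0)
A = sp.random(200, 50, density=0.05, random_state=rng, format="csr")  # toy rectangular system
b = rng.standard_normal(200)

# Normal equations (A^T A) x = A^T b; the tiny ridge keeps A^T A nonsingular.
AtA = (A.T @ A + 1e-8 * sp.eye(50)).tocsc()
Atb = A.T @ b

ilu = spilu(AtA, drop_tol=1e-4)                   # incomplete factorization
M = LinearOperator(AtA.shape, matvec=ilu.solve)   # applied as the preconditioner
x, info = cg(AtA, Atb, M=M)                       # info == 0 signals convergence
```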

NumPy ValueError with leastsq: "The truth value of an array with more than one element is ambiguous. Use a.any() or a.all()"

冷暖自知 submitted on 2019-12-08 06:33:49
Question:

```python
from sympy import *
from scipy import *
from scipy.integrate import quad
import scipy.optimize as optimize
import numpy as np
import collections
import math
from scipy.optimize import leastsq

file = DATA + 'Union21.dat'   # DATA is defined elsewhere in the original
with open(file, "r") as f:
    data0 = [(float(v[1]), float(v[2]), float(v[3]))
             for v in [x.split() for x in f.readlines()][1:]]
#print data0
z = np.array([float(t[0]) for t in data0])
mu = np.array([float(t[1]) for t in data0])
dmu = np.array([float(t[2]) for t in data0])
c = 3*10^8   # bug: ^ is XOR in Python, so this is 3 XOR 8 = 11, not 3e8
def calka…
```
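The error in the title arises whenever a whole NumPy array is used where Python expects a single True/False, e.g. in an `if` test inside the residual function passed to leastsq (the wildcard imports above, which let sympy shadow scipy names, make this kind of clash easy to hit). A minimal reproduction and the usual fix:

```python
import numpy as np

a = np.array([1.0, -2.0, 3.0])
# `if a > 0:` (or bool(a)) raises:
#   ValueError: The truth value of an array with more than one element is ambiguous.
# Reduce the array to one boolean explicitly instead:
if (a > 0).any():   # True if at least one element is positive
    print("some positive entries")
if (a > 0).all():   # True only if every element is positive
    print("all positive entries")
```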

Out of memory when using `outer` in solving my big normal equation for least squares estimation

主宰稳场 submitted on 2019-12-08 06:11:16
Question: Consider the following example in R:

```r
x1 <- rnorm(100000)
x2 <- rnorm(100000)
g <- cbind(x1, x2, x1^2, x2^2)
gg <- t(g) %*% g
gginv <- solve(gg)
bigmatrix <- outer(x1, x2, "<=")
Gw <- t(g) %*% bigmatrix
beta <- gginv %*% Gw
w1 <- bigmatrix - g %*% beta
```

If I try to run this on my computer, it throws a memory error (because bigmatrix is too big). Do you know how I can achieve the same result without running into this problem?

Answer 1: This is a least squares problem with 100,000 responses. Your bigmatrix is the response (matrix), beta is the coefficient (matrix), while w1 is the residual (matrix…
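The answer is truncated above. One common way to finish this kind of computation without materializing the 100,000 × 100,000 matrix (a sketch of the general chunking idea in Python/NumPy, not necessarily the answerer's exact code) is to build and regress bigmatrix one block of columns at a time:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
G = np.column_stack([x1, x2, x1**2, x2**2])   # n x 4 design matrix
GtG_inv = np.linalg.inv(G.T @ G)              # 4 x 4, cheap to form once

chunk = 100                                   # block width; tune to available memory
for start in range(0, n, chunk):
    cols = x1[:, None] <= x2[None, start:start + chunk]  # n x chunk slice of bigmatrix
    beta = GtG_inv @ (G.T @ cols)                        # 4 x chunk block of coefficients
    resid = cols - G @ beta                              # corresponding block of w1
    # reduce/consume resid here (accumulate whatever statistic is needed)
    # instead of storing every block
```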

How to compute minimal but fast linear regressions on each column of a response matrix?

巧了我就是萌 submitted on 2019-12-07 08:53:38
Question: I want to compute ordinary least squares (OLS) estimates in R without using lm, and this for several reasons. First, lm also computes lots of things I don't need (such as the fitted values), and data size is an issue in my case. Second, I want to be able to implement OLS myself in R before doing it in another language (e.g. in C with the GSL). As you may know, the model is Y = Xb + E, with E ~ N(0, sigma^2). As detailed below, b is a vector with 2 parameters, the mean (b0) and …
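The excerpt is cut off, but the computation named in the title — minimal, fast OLS against every column of a response matrix — reduces to the closed form B = (XᵀX)⁻¹XᵀY, which handles all columns at once. A sketch in Python/NumPy under that assumption (in R itself, `solve(crossprod(X), crossprod(X, Y))` is the usual analogue):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 1_000, 50
x = rng.standard_normal(n)
X = np.column_stack([np.ones(n), x])        # design: intercept b0 and slope b1
Y = rng.standard_normal((n, k))             # one regression per column of Y

# A single call fits all k regressions; columns of B are the (b0, b1) pairs.
B, *_ = np.linalg.lstsq(X, Y, rcond=None)   # 2 x k coefficient matrix
```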

Compute least squares using Java

天大地大妈咪最大 submitted on 2019-12-07 06:11:55
Question: I am trying to find Java code to compute the least squares solution x of the equation Ax = b. Suppose that

```matlab
A = [1 0 0; 1 0 0];
b = [1; 2];
x = A\b
```

returns x = [1.5; 0; 0]. I found the class LeastSquares,

```java
public LeastSquares(double[] a, double[] b, int degree)
```

but there both a and b are one-dimensional arrays, whereas in the example above A is a matrix and b is an array. In the class NonNegativeLeastSquares,

```java
public NonNegativeLeastSquares(int M, int N, double a[][], double b[])
```

A is a matrix and …
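As a quick cross-check of the expected result (independent of any particular Java library), NumPy's SVD-based solver reproduces the minimum-norm least-squares solution that MATLAB's backslash returns for this rank-deficient example:

```python
import numpy as np

A = np.array([[1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0]])
b = np.array([1.0, 2.0])

x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)   # [1.5 0.  0. ] -- the x = 1.5, 0, 0 from the question
```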

Plane fit of 3D points with Singular Value Decomposition

流过昼夜 submitted on 2019-12-06 12:20:28
Question: Dear fellow Stack Overflow users, I am trying to calculate the normal vectors over an arbitrary (but smooth) surface defined by a set of 3D points. For this, I am using a plane-fitting algorithm that finds the local least-squares plane based on the 10 nearest neighbors of the point at which I'm calculating the normal vector. However, it does not always find what seems to be the best plane, so I'm wondering whether there is a flaw in my implementation or a flaw in my algorithm. I'm using …
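The question's own code is truncated, but the SVD recipe named in the title is short enough to state in full. A minimal Python sketch (the function name is mine): center the neighborhood on its centroid, and the right singular vector belonging to the smallest singular value is the least-squares plane normal:

```python
import numpy as np

def fit_plane_svd(points):
    """Least-squares plane through an (m, 3) point array.
    Returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]   # last right singular vector = direction of least variance
```

One subtlety worth checking when "the best plane" seems off: SVD determines the normal only up to sign, so normals from neighboring fits may need to be flipped into a consistent orientation.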

Get Durbin-Watson and Jarque-Bera statistics from OLS Summary in Python

一曲冷凌霜 submitted on 2019-12-05 21:17:02
I am running the OLS summary for a column of values. Part of the OLS summary is the Durbin-Watson and Jarque-Bera (JB) statistics, and I want to pull those values out directly, since they have already been calculated, rather than recomputing them in extra steps as I do now with durbinwatson. Here is the code I have:

```python
import pandas as pd
import statsmodels.api as sm

csv = 'mydata.csv'     # original read `csv = mydata.csv`, which is not valid Python
df = pd.read_csv(csv)
var = df[variable]     # `variable` is a column name defined elsewhere
year = df['Year']
model = sm.OLS(var, year)
results = model.fit()
summary = results.summary()
print summary          # Python 2 print statement
#print dir(results)
residuals = results.resid
durbinwatson = statsmodels…
```
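The excerpt ends mid-line, but both statistics are available as plain functions in statsmodels, so they can be obtained from the fitted residuals without parsing the summary text; they match the values printed by results.summary(). Continuing from the `results` object above:

```python
from statsmodels.stats.stattools import durbin_watson, jarque_bera

dw = durbin_watson(results.resid)                                # Durbin-Watson statistic
jb_stat, jb_pvalue, skew, kurtosis = jarque_bera(results.resid)  # Jarque-Bera and extras
```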

Plot 3D line, MATLAB

为君一笑 submitted on 2019-12-05 19:54:20
My question is pretty standard, but I can't find a solution to it. I have points = [x, y, z] and want to plot the best-fit line. I am using the function given below (thanks to I M Smith):

```matlab
% LS3DLINE.M   Least-squares line in 3 dimensions.
%
% Version 1.0
% Last amended   I M Smith   27 May 2002.
% Created        I M Smith   08 Mar 2002
% ---------------------------------------------------------------------
% Input
%   X    Array [x y z] where x = vector of x-coordinates,
%        y = vector of y-coordinates and z = vector of
%        z-coordinates.
%        Dimension: m x 3.
%
% Output
%   x0   Centroid of the data = point on the best-fit line.
%   …
```
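The listing above stops at the header, but the underlying fit is only a few lines in any language: the least-squares 3D line passes through the centroid along the first right singular vector of the centered data. A Python sketch (function name mine), after which evaluating `x0 + t * a` over a range of t gives points to plot (e.g. with matplotlib, or MATLAB's plot3):

```python
import numpy as np

def ls3dline(X):
    """Least-squares line through an (m, 3) point array.
    Returns (x0, a): a point on the line and its unit direction."""
    x0 = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - x0)
    return x0, vt[0]   # first right singular vector = direction of largest variance
```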