Gradient descent and normal equation method for solving linear regression give different solutions


I finally had time to get back to this. There is no "bug".

If the matrix is singular, then there are infinitely many solutions. You can choose any solution from that set, and get equally as good an answer. The pinv(X)*y solution is a good one that many like because it is the minimum norm solution.

There is NEVER a good reason to use inv(X)*y. Even worse is to use the inverse on the normal equations: inv(X'*X)*X'*y is simply numerical crap. I don't care who told you to use it, they are guiding you to the wrong place. (Yes, it will work acceptably for problems that are well-conditioned, but most of the time you don't know when it is about to give you crap. So why use it?)

The normal equations are in general a bad thing to do, EVEN if you are solving a regularized problem. There are ways to do that which avoid squaring the condition number of the system, although I won't explain them unless asked, as this answer has gotten long enough.
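To make "squaring the condition number" concrete, here is a small sketch (the matrix A is my own, chosen only so the effect is visible in double precision):

A = [1 1; 1 1+1e-8; 1 1-1e-8; 1 1+2e-8];   % two nearly collinear columns
cond(A)       % on the order of 1e8
cond(A'*A)    % roughly the square of that, near 1/eps -- about half of the
              % significant digits are gone before you even start solving

Anything built on A'*A, including inv(A'*A)*A'*y, starts from that squared condition number; A\y and pinv(A)*y work on A directly.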

X\y will also yield a result that is reasonable.

There is ABSOLUTELY no good reason to throw an unconstrained optimizer at the problem, as this will yield results that are unstable, completely dependent on your starting values.

As an example, I'll start with a singular problem.

X = repmat([1 2],5,1);   % every row is [1 2], so rank(X) = 1: a singular problem
y = rand(5,1);

>> X\y
Warning: Rank deficient, rank = 1, tol =  2.220446e-15. 
ans =
                         0
         0.258777984694222

>> pinv(X)*y
ans =
         0.103511193877689
         0.207022387755377

pinv and backslash return different solutions. As it turns out, there is a basic solution, to which we can add ANY multiple of the null space vector of X.

>> null(X)
ans =
         0.894427190999916
        -0.447213595499958

pinv generates the minimum norm solution. Of all of the solutions that might have resulted, this one has minimum 2-norm.

In contrast, backslash generates a solution that will have one or more variables set to zero.

But if you use an unconstrained optimizer, it will generate a solution that is completely dependent on your starting values. Again, ANY amount of that null vector can be added to your solution, and you still have an entirely valid solution, with the same value of the sum of squares of errors.
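A quick sanity check of that claim, reusing the X and y from the singular example above (a small sketch):

theta0 = pinv(X)*y;            % the minimum norm solution
v = null(X);                   % the null space direction, so X*v = 0
theta1 = theta0 + 3*v;         % shift by ANY multiple of v
norm(X*theta0 - y)             % residual of the minimum norm solution
norm(X*theta1 - y)             % identical residual -- the fit is unchanged
[norm(theta0), norm(theta1)]   % but the shifted solution has a larger 2-norm

An unconstrained optimizer started from different points will land on different members of this family, which is why its answer looks unstable.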

Note that even though no singularity warning is returned, this does not mean your matrix is far from singular. You have changed little about the problem, so it is STILL close to singular, just not enough to trigger the warning.

As others mentioned, an ill-conditioned Hessian matrix is likely the cause of your problem.

The number of steps that standard gradient descent algorithms take to reach a local optimum grows with the ratio of the largest eigenvalue of the Hessian to the smallest (this ratio is the condition number of the Hessian). So, if your matrix is ill-conditioned, it could take an extremely large number of iterations for gradient descent to converge to an optimum. (In the singular case, it could converge to many different points, of course.)
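As a rough illustration of that dependence (a sketch with my own toy data and step size, not the code from the question):

% min 0.5*||X*theta - y||^2 by fixed-step gradient descent
X = [ones(100,1), linspace(0,1,100)'];   % toy design matrix
y = X*[1; 2] + 0.01*randn(100,1);
H = X'*X;                                % Hessian of the objective
alpha = 1/max(eig(H));                   % safe fixed step size (< 2/lambda_max)
theta = zeros(2,1);
for k = 1:100000
    g = X'*(X*theta - y);                % gradient
    theta = theta - alpha*g;
    if norm(g) < 1e-10, break; end       % stop once the gradient is tiny
end
k, theta                                 % iterations used, and the estimate

The iteration count k grows roughly in proportion to cond(H); make the two columns of X nearly collinear and it explodes.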

I would suggest trying three different things to verify that an unconstrained optimization algorithm works for your problem (which it should):

1) Generate some synthetic data by computing the result of a known linear function for random inputs and adding a small amount of Gaussian noise. Make sure that you have many more data points than dimensions. This should produce a non-singular Hessian.

2) Add a regularization term to your error function to improve the conditioning of the Hessian (a sketch of one way to do this follows this list).

3) Use a more sophisticated optimizer, such as conjugate gradient or L-BFGS, rather than plain gradient descent, to reduce the number of steps needed for the algorithm to converge. (You will probably need to do this in conjunction with #2.)
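For point 2, one way to add the ridge term without ever forming X'*X (a sketch; lambda is just an assumed tuning value):

lambda = 1e-3;                       % assumed regularization strength
n = size(X,2);
Xaug = [X; sqrt(lambda)*eye(n)];     % stack sqrt(lambda)*I below X
yaug = [y; zeros(n,1)];
theta_ridge = Xaug \ yaug;           % minimizes ||X*t - y||^2 + lambda*||t||^2

Because the augmented system is solved with backslash (QR), the conditioning is not squared the way it is in inv(X'*X + lambda*eye(n))*X'*y.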

Could you post a little more about what your X looks like? You're using pinv(), which is the Moore-Penrose pseudoinverse. If the matrix is ill-conditioned this could cause problems with obtaining the inverse. I would bet that the gradient-descent method is closer to the mark.

You should see which method is actually giving you the smallest error. That will indicate which method is struggling. I suspect that the normal equation method is the troubled solution because if X is ill-conditioned then you can have some problems there.
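A quick way to do that comparison (a sketch; theta_gd and theta_ne are assumed names for your gradient descent and normal equation estimates):

norm(X*theta_gd - y)    % error of the gradient descent solution
norm(X*theta_ne - y)    % error of the normal equation solution
norm(X*(X\y) - y)       % backslash (QR) solution, for reference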

You should probably replace your normal equation solution with theta = X\y which will use a QR-decomposition method to solve it.
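For a full-rank X, that is roughly equivalent to the explicit economy-size QR route (a sketch):

[Q, R] = qr(X, 0);      % economy-size QR factorization of X
theta = R \ (Q'*y);     % triangular back-substitution; X'*X is never formed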
