Regularized logistic regression code in MATLAB


I'm trying my hand at regularized logistic regression, starting simple with these formulas in MATLAB:

The cost function:

J(theta) = (1/m) * sum_i[ -y_i*log(h(x_i)) - (1 - y_i)*log(1 - h(x_i)) ] + (lambda/(2*m)) * sum_j[ theta_j^2 ],   j = 1..n (theta_0 is not regularized)
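
and the gradient (presumably the second formula, lost with the images; this is the standard one for regularized logistic regression, with the bias term theta_0 left unpenalized):

dJ/dtheta_0 = (1/m) * sum_i[ (h(x_i) - y_i) * x_i0 ]
dJ/dtheta_j = (1/m) * sum_i[ (h(x_i) - y_i) * x_ij ] + (lambda/m) * theta_j,   j >= 1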


        
4 Answers
  • 2021-01-31 06:46

    Here is an answer that eliminates the loops:

    m = length(y); % number of training examples

    predictions = sigmoid(X*theta);

    % regularized cost; theta(1), the bias term, is excluded from the penalty
    reg_term = (lambda/(2*m)) * sum(theta(2:end).^2);
    calcErrors = -y.*log(predictions) - (1 - y).*log(1 - predictions);
    J = (1/m)*sum(calcErrors) + reg_term;

    % prepend a 0 to our reg_term row vector so we can use simple matrix addition
    reg_term = [0 (lambda*theta(2:end)/m)'];

    % X.*(predictions - y) relies on implicit expansion (R2016b+);
    % on older versions use bsxfun(@times, X, predictions - y).
    % Note that grad comes back as a row vector here.
    grad = sum(X.*(predictions - y)) / m + reg_term;
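
    All of these answers call a sigmoid helper, which is not a MATLAB built-in; if you don't already have one on your path, a minimal version is:

    function g = sigmoid(z)
    % SIGMOID computes the logistic function element-wise
    g = 1 ./ (1 + exp(-z));
    end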
    
  • 2021-01-31 06:48

    Vectorized:

    function [J, grad] = costFunctionReg(theta, X, y, lambda)

    m = size(X, 1); % length(X) returns the largest dimension, which is wrong whenever n > m
    hx = sigmoid(X * theta);

    J = (-y' * log(hx) - (1 - y') * log(1 - hx)) / m + lambda * sum(theta(2:end).^2) / (2*m);
    % the [0; ones(...)] mask keeps theta(1) out of the regularization term
    grad = ((hx - y)' * X / m)' + lambda .* theta .* [0; ones(length(theta)-1, 1)] ./ m;

    end
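
    A quick way to validate the analytic gradient from any of these implementations is a numerical gradient check; a minimal sketch (the epsilon value and loop are mine, not from the answer):

    epsilon = 1e-4;
    numgrad = zeros(size(theta));
    for j = 1:length(theta)
        e = zeros(size(theta)); e(j) = epsilon;
        % calling with one output returns just the cost J
        numgrad(j) = (costFunctionReg(theta + e, X, y, lambda) ...
                      - costFunctionReg(theta - e, X, y, lambda)) / (2*epsilon);
    end
    % numgrad should agree with grad to several decimal places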
    
  • 2021-01-31 06:59

    Finally got it after rewriting it for the fourth time; this is the correct code:

    function [J, grad] = costFunctionReg(theta, X, y, lambda)
    J = 0;
    grad = zeros(size(theta));
    m = length(y); % number of training examples (was missing in the original)

    % regularization term: sum of theta(jj)^2 for jj >= 2 (the bias term is skipped)
    temp_theta = [];
    for jj = 2:length(theta)
        temp_theta(jj) = theta(jj)^2;
    end
    theta_reg = lambda/(2*m)*sum(temp_theta);

    % unregularized cost, one training example at a time
    temp_sum = [];
    for ii = 1:m
        temp_sum(ii) = -y(ii)*log(sigmoid(theta'*X(ii,:)')) - (1-y(ii))*log(1-sigmoid(theta'*X(ii,:)'));
    end
    tempo = sum(temp_sum);

    J = (1/m)*tempo + theta_reg;

    % regularization of the gradient
    % theta 0 (grad(1)) gets no penalty term
    reg_theta0 = zeros(1, m);
    for i = 1:m
        reg_theta0(i) = (sigmoid(theta'*X(i,:)') - y(i))*X(i,1);
    end
    grad(1) = (1/m)*sum(reg_theta0);

    % the remaining components get the (lambda/m)*theta(j) penalty;
    % note length(theta), not size(theta), which returns a vector
    sum_thetas = [];
    thetas_sum = [];
    for j = 2:length(theta)
        for i = 1:m
            sum_thetas(i) = (sigmoid(theta'*X(i,:)') - y(i))*X(i,j);
        end
        thetas_sum(j) = (1/m)*sum(sum_thetas) + (lambda/m)*theta(j);
        sum_thetas = [];
    end

    for z = 2:length(theta)
        grad(z) = thetas_sum(z);
    end

    % =============================================================

    end
    

    Hope it helps someone; any comments on how I can do it better are welcome. :)
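
    A minimal smoke test for the function above, with made-up toy data (not from the original answer):

    X = [ones(5,1), (1:5)', (5:-1:1)'];   % 5 examples: intercept + 2 features
    y = [0; 0; 1; 1; 1];
    theta = zeros(3, 1);
    [J, grad] = costFunctionReg(theta, X, y, 1);
    % with theta = 0 every prediction is 0.5, the penalty vanishes,
    % and J should come out to -log(0.5), about 0.6931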

  • 2021-01-31 07:10

    I used more variables so you can see clearly what comes from the regular formula and what comes from the added regularization cost. It is also good practice to use vectorization instead of loops in MATLAB/Octave; this gives you a more efficient solution.

     function [J, grad] = costFunctionReg(theta, X, y, lambda)

        m = length(y); % number of training examples (was missing in the original)

        %Hypothesis
        hx = sigmoid(X * theta);

        %%The cost without regularization
        J_partial = (-y' * log(hx) - (1 - y)' * log(1 - hx)) ./ m;

        %%Regularization cost added (theta(1) is excluded)
        J_regularization = (lambda/(2*m)) * sum(theta(2:end).^2);

        %%Cost when we add regularization
        J = J_partial + J_regularization;

        %Gradient without regularization
        grad_partial = (1/m) * (X' * (hx - y));

        %%Gradient regularization added
        grad_regularization = (lambda/m) .* theta(2:end);

        %prepend a 0 so the bias gradient is left unpenalized
        grad_regularization = [0; grad_regularization];

        grad = grad_partial + grad_regularization;

     end
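
    Once costFunctionReg works, you can hand it straight to an optimizer. A sketch using fminunc with the older optimset interface (assumes the Optimization Toolbox; the MaxIter value is illustrative):

    initial_theta = zeros(size(X, 2), 1);
    options = optimset('GradObj', 'on', 'MaxIter', 400);
    [theta, cost] = fminunc(@(t) costFunctionReg(t, X, y, lambda), initial_theta, options);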
    