Logistic regression in Julia using Optim.jl

攒了一身酷 2021-02-09 10:55

I'm trying to implement a simple regularized logistic regression algorithm in Julia. I'd like to use the Optim.jl library to minimize my cost function, but I can't get it to work.

2 Answers
  • 2021-02-09 11:29

    Here is an example of unregularized logistic regression that uses the autodifferentiation functionality of Optim.jl. It might help you with your own implementation.

    using Optim
    
    # Synthetic data: the "true" parameters and the corresponding probabilities
    const X = rand(100, 3)
    const true_β = [5, 2, 4]
    const true_y = 1 ./ (1 .+ exp.(-X * true_β))
    
    function objective(β)
        y = 1 ./ (1 .+ exp.(-X * β))
        return sum((y - true_y) .^ 2)  # SSE, non-standard for logistic regression
    end
    
    # In current Optim.jl the method is passed positionally and autodiff = :forward
    println(optimize(objective, [3.0, 3.0, 3.0], LBFGS(); autodiff = :forward))
    

    Which gives me

    Results of Optimization Algorithm
     * Algorithm: L-BFGS
     * Starting Point: [3.0,3.0,3.0]
     * Minimizer: [4.999999945789497,1.9999999853962256,4.0000000047769495]
     * Minimum: 0.000000
     * Iterations: 14
     * Convergence: true
       * |x - x'| < 1.0e-32: false
       * |f(x) - f(x')| / |f(x)| < 1.0e-08: false
       * |g(x)| < 1.0e-08: true
       * Exceeded Maximum Number of Iterations: false
     * Objective Function Calls: 53
     * Gradient Call: 53
    
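    Since the question asks for regularized logistic regression, here is a hedged sketch (not from the original answer) that combines the autodiff approach above with the cross-entropy loss and an L2 penalty. The data, λ, and starting point are placeholders; substitute your own problem:

    using Optim
    
    # Hypothetical synthetic data; replace X, y, and λ with your own problem
    X = [ones(100) rand(100, 2)]   # first column is the intercept
    y = rand(0:1, 100)             # 0/1 labels
    λ = 1.5
    
    sigmoid(z) = 1 ./ (1 .+ exp.(-z))
    
    function regularized_cost(β)
        m = length(y)
        h = sigmoid(X * β)
        # cross-entropy plus L2 penalty; the intercept β[1] is not penalized
        return (1 / m) * sum(-y .* log.(h) .- (1 .- y) .* log.(1 .- h)) +
               λ / (2m) * sum(abs2, β[2:end])
    end
    
    res = optimize(regularized_cost, zeros(size(X, 2)), LBFGS(); autodiff = :forward)
    β_hat = Optim.minimizer(res)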
  • 2021-02-09 11:36

    Below are my cost and gradient computation functions for logistic regression, using closures and currying (a version for those who are used to a function that returns both the cost and the gradient):

    function cost_gradient(θ, X, y, λ)
        m = length(y)
        # Returns a (cost, gradient!) pair of closures; note that the θ passed
        # here is shadowed by each closure's own θ argument.
        return (θ::Array) -> begin
            h = sigmoid(X * θ)
            J = (1 / m) * sum(-y .* log.(h) .- (1 .- y) .* log.(1 .- h)) +
                λ / (2 * m) * sum(θ[2:end] .^ 2)
        end, (θ::Array, storage::Array) -> begin
            h = sigmoid(X * θ)
            storage[:] = (1 / m) * (X' * (h .- y)) .+ (λ / m) * [0; θ[2:end]]
        end
    end
    

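    For reference (my restatement of what the code above computes), these closures implement the standard L2-regularized cross-entropy cost and its gradient, with the bias coefficient θ₁ (the first entry of θ) left unpenalized:

        J(\theta) = \frac{1}{m}\sum_{i=1}^{m}\Big[-y_i\log h_\theta(x_i) - (1-y_i)\log\big(1-h_\theta(x_i)\big)\Big] + \frac{\lambda}{2m}\sum_{j=2}^{d+1}\theta_j^2

        \frac{\partial J}{\partial \theta_j} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x_i)-y_i\big)\,x_{ij} + \frac{\lambda}{m}\theta_j \quad (j \ge 2),
        \qquad
        \frac{\partial J}{\partial \theta_1} = \frac{1}{m}\sum_{i=1}^{m}\big(h_\theta(x_i)-y_i\big)\,x_{i1}

    where h_\theta(x_i) = \mathrm{sigmoid}(x_i^\top \theta).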
    Sigmoid function implementation:

    sigmoid(z) = 1.0 ./ (1.0 .+ exp.(-z))
    
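    One caveat (my note, not part of the original answer): for large |z| the floating-point sigmoid saturates to exactly 0.0 or 1.0, which makes log(h) or log(1 - h) in the cost return -Inf. A common workaround is to clamp the output slightly away from the boundaries:

    # Hedged variant: clamp away from 0 and 1 so log(h) and log(1 - h) stay finite
    safe_sigmoid(z) = clamp.(1.0 ./ (1.0 .+ exp.(-z)), eps(), 1 - eps())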

    To apply cost_gradient with Optim.jl, do the following:

    using Optim
    #...
    # Prerequisites:
    # X has size (m, d), where d is the number of features in the training set
    # y has size (m, 1)
    # λ is the regularization parameter, e.g. 1.5
    # ITERATIONS is the maximum number of iterations, e.g. 1000
    X = [ones(size(X, 1)) X]          # add the x_0 = 1.0 column; X now has size (m, d+1)
    initialθ = zeros(size(X, 2), 1)   # initialθ has size (d+1, 1)
    cost, gradient! = cost_gradient(initialθ, X, y, λ)
    # In current Optim.jl the method and options are passed positionally
    res = optimize(cost, gradient!, initialθ, ConjugateGradient(),
                   Optim.Options(iterations = ITERATIONS));
    θ = Optim.minimizer(res);
    

    Now you can predict easily (e.g., validate on the training set):

    predictions = sigmoid(X * θ) #X size is (m,d+1)
    
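    To turn these probabilities into class labels (a small addition of mine, assuming y holds 0/1 labels), threshold at 0.5 and compare against y:

    predicted_labels = predictions .>= 0.5              # Bool predictions at the 0.5 threshold
    accuracy = sum(predicted_labels .== y) / length(y)  # fraction of correct training-set labels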

    Either try my approach or compare it with your implementation.
