Gradient descent algorithm won't converge

轻奢々 2021-02-06 05:42

I'm trying to write out a bit of code for the gradient descent algorithm explained in the Stanford Machine Learning lecture (Lecture 2, at around 25:00). Below is the implementation.

6 Answers
  •  名媛妹妹
    2021-02-06 06:29

    It's not clear from your description exactly what problem you're solving. Also, it's risky to put the substance of a question behind links to external resources; questions that depend on them can be closed on Stack Overflow.

    In any case: gradient descent (and subgradient descent as well) with a fixed step size (which the ML community calls the learning rate) is not guaranteed to converge.
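A quick illustration of my own (not part of the original answer): for the quadratic f(x) = k*x*x, one gradient step gives x_next = x - alpha*2*k*x = (1 - 2*alpha*k)*x, so the iterates contract iff |1 - 2*alpha*k| < 1, i.e. alpha < 1/k. With alpha = 0.1 this explains the three values of k tried in the code below:

```python
# For f(x) = k*x*x the gradient step multiplies x by (1 - 2*alpha*k),
# so convergence requires |1 - 2*alpha*k| < 1.
alpha = 0.1
for k in (10.0, 20.0, 0.001):
    factor = 1.0 - 2.0 * alpha * k
    print("k=%g: contraction factor = %g" % (k, factor))
# k=10:    factor -1      -> jumps between x0 and -x0 forever
# k=20:    factor -3      -> diverges
# k=0.001: factor  0.9998 -> converges, but very slowly
```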

    P.S. The machine learning community is generally not interested in "convergence conditions" or "convergence to what"; it is interested in building something that performs well under cross-validation.

    If you're curious about optimization, start by looking into convex optimization. Unfortunately it's hard to find a job in that area, but it gives you a clear view of what is going on inside the various mathematical optimization methods.

    Here is source code that demonstrates this for a simple quadratic objective:

    #!/usr/bin/env python
    # Gradient descent with a fixed step size need not converge,
    # even for a convex objective.

    alpha = 0.1

    #k = 10.0   # jumps back and forth around the minimum
    k = 20.0    # diverges
    #k = 0.001  # converges, but after 50 iterations the gap to the optimum is still large

    def f(x): return k * x * x
    def g(x): return 2 * k * x

    x0 = 12.0
    xNext = x0
    i = 0
    threshold = 0.01

    while True:
        i += 1
        xNext = xNext - alpha * g(xNext)   # step along the negative gradient
        obj = f(xNext)                     # objective value at the new iterate
        print("Iteration: %i, Iterate: %f, Objective: %f, Optimality Gap: %f"
              % (i, xNext, obj, obj - f(0.0)))

        if abs(g(xNext)) < threshold:
            break
        if i > 50:
            break

    print("\nYou launched the application with x0=%f, threshold=%f" % (x0, threshold))
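For contrast, here is a sketch of my own (not from the original answer): the same quadratic minimized with a backtracking (Armijo) line search in place of the fixed alpha. With an adaptive step the iteration converges even for the "diverging" case k = 20:

```python
# Backtracking (Armijo) line search on the same quadratic f(x) = k*x*x.
# The trial step is halved until a sufficient-decrease condition holds,
# so the iteration converges even where the fixed step diverged.

k = 20.0

def f(x): return k * x * x
def g(x): return 2 * k * x

x = 12.0
threshold = 0.01

for i in range(1, 51):
    d = -g(x)          # descent direction
    t = 1.0            # initial trial step
    # Armijo sufficient-decrease condition: f(x + t*d) <= f(x) + c*t*g(x)*d
    while f(x + t * d) > f(x) + 1e-4 * t * g(x) * d:
        t *= 0.5       # backtrack: halve the step
    x = x + t * d
    print("Iteration: %i, Iterate: %f, Objective: %f" % (i, x, f(x)))
    if abs(g(x)) < threshold:
        break
```

For this problem the accepted step turns out to be t = 1/32, so each iteration multiplies x by -0.25 and the stopping criterion is met after a handful of iterations.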
    
