Question
(Sorry if this sounds a bit naive)
I want to have a look at the meat of the TensorFlow implementation of gradient descent and see for myself how it handles the termination condition, step-size adaptiveness, etc. I traced the code down to training_ops.apply_gradient_descent, but I can't find the actual implementation :(
Answer 1:
The TensorFlow Optimizer interface (which GradientDescentOptimizer implements) defines a single step of minimization. Termination conditions and step-size adjustment are left to the user. In the MNIST for Beginners tutorial, the termination condition is "stop after 1000 steps," which you can see in the for i in range(1000) loop.
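For concreteness, here is a minimal sketch of that pattern using the TF 1.x-era API the answer refers to, with a toy quadratic loss standing in for the MNIST model (the variable and loss below are illustrative, not taken from the tutorial):

```python
import tensorflow as tf  # TF 1.x-style API, matching the era of this answer

# Toy problem: minimize x^2, which has its minimum at x == 0.
x = tf.Variable(5.0)
loss = tf.square(x)

# The optimizer only defines what one step does; it has no stopping logic.
train_step = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # The termination condition lives in user code: stop after 1000 steps.
    for i in range(1000):
        sess.run(train_step)
    print(sess.run(x))  # close to 0 after 1000 steps
```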
apply_gradient_descent(a, b, c) is a fused op that multiplies c by b and subtracts the result from a. There are some extra levels of indirection to go from the Python wrapper to the C++ implementation, detailed in the "Adding a New Op" HowTo, but as a shortcut you can usually find the C++ implementation by converting the op name from snake_case to CamelCase and searching for that, so ApplyGradientDescent in this case. That leads to the implementation in tensorflow/core/kernels/training_ops.cc
Source: https://stackoverflow.com/questions/35724469/where-can-i-have-a-look-at-tensorflow-gradient-descent-main-loop