What's different about the momentum gradient update in TensorFlow and Theano?

Submitted by 三世轮回 on 2019-12-05 08:10:46

If you look at the implementation of the momentum optimizer in TensorFlow [link], the update is computed as follows:

accum = accum * momentum() + grad;   // accumulate the raw, unscaled gradient
var -= accum * lr();                 // apply the learning rate only when updating the variable

As you can see, the formulas differ slightly: TensorFlow accumulates the raw gradient and multiplies the accumulator by the learning rate only when it updates the variable, whereas the Theano snippet folds the learning rate into the velocity itself. Scaling the momentum term by the learning rate should resolve the differences.
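To make the equivalence explicit (a quick check of my own, assuming a constant learning rate and writing the Theano-style velocity as vel):

vel = momentum * vel - lr * grad;   // Theano-style: lr folded into the velocity
var += vel;

Substituting vel = -lr * accum into TensorFlow's formula gives -lr * accum_new = momentum * (-lr * accum) - lr * grad, which is exactly the recursion above, so the two schemes produce the same variable updates and differ only in how the stored state is scaled.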

It is also very easy to implement such an optimizer yourself. The resulting code would look similar to the Theano snippet you included.
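For illustration, here is a minimal sketch of such a hand-rolled classical-momentum step in TensorFlow 1.x graph mode; the function name classical_momentum_step and the hyperparameter defaults are my own, and it assumes a single variable and a scalar loss:

import tensorflow as tf

def classical_momentum_step(loss, var, lr=0.01, momentum=0.9):
    # Velocity variable holds the lr-scaled update, as in the Theano convention.
    vel = tf.Variable(tf.zeros_like(var), trainable=False)
    grad = tf.gradients(loss, [var])[0]
    new_vel = momentum * vel - lr * grad      # learning rate folded into the velocity
    # Apply both state updates as a single training op.
    return tf.group(vel.assign(new_vel), var.assign_add(new_vel))

Running the returned op in a session once per step then mirrors the Theano update instead of TensorFlow's built-in momentum optimizer.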
