Tensorflow batch_norm does not work properly when testing (is_training=False)

Asked 2021-02-15 11:31 by 臣服心动

I am training the following model:

with slim.arg_scope(inception_arg_scope(is_training=True)):
    logits_v, endpoints_v = ...  # (code truncated in the original post)
1 Answer
  • Answered 2021-02-15 11:46

    I ran into the same problem and solved it. When you use slim.batch_norm, be sure to build the training step with slim.learning.create_train_op instead of tf.train.GradientDescentOptimizer(lr).minimize(loss) (or another optimizer's minimize). create_train_op makes the train step depend on the batch-norm update ops, so the moving mean and variance are actually updated during training; a bare minimize skips them, leaving the moving statistics at their initial values, which is why inference with is_training=False misbehaves. Try it and see if it works!
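    To make the mechanism concrete, here is a minimal sketch (assuming TensorFlow 1.x with tf.contrib.slim; total_loss, optimizer, and lr are placeholders for whatever the question's model defines). slim.batch_norm registers its moving-mean/variance updates in the tf.GraphKeys.UPDATE_OPS collection, and those ops only run if the train op depends on them:

    ```python
    import tensorflow as tf  # TensorFlow 1.x
    slim = tf.contrib.slim

    # Option 1: create_train_op wires the UPDATE_OPS dependency in for you.
    train_op = slim.learning.create_train_op(total_loss, optimizer)

    # Option 2: with a plain optimizer, add the dependency yourself so the
    # batch-norm moving statistics are updated on every training step.
    update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
    with tf.control_dependencies(update_ops):
        train_op = tf.train.GradientDescentOptimizer(lr).minimize(total_loss)
    ```

    Either option ensures the moving statistics that is_training=False relies on are kept current; without one of them, training appears fine but evaluation uses stale statistics.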
