tf.layers.batch_normalization large test error
Question: I'm trying to use batch normalization. I tried tf.layers.batch_normalization on a simple conv net for MNIST. I get high accuracy on the training step (>98%) but very low test accuracy. My code:

# Input placeholders
x = tf.placeholder(tf.float32, [None, 784], name='x-input')
y_ = tf.placeholder(tf.float32, [None, 10], name='y-input')
is_training = tf.placeholder(tf.bool)

# Input layer
input_layer = tf.reshape(x, [-1, 28, 28, 1])

with tf.name_scope('conv1'):
    # Convolution #1 ([5,5] :
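For context, the train/test gap described here is the classic symptom of tf.layers.batch_normalization when its moving-mean/variance update ops never run, so inference falls back on uninitialized statistics. Below is a minimal sketch (not the asker's full network; layer sizes, names, and the optimizer are illustrative assumptions) of how the layer is typically wired in TF 1.x, with a training placeholder and an explicit dependency on the ops in tf.GraphKeys.UPDATE_OPS.

import tensorflow as tf

# Minimal illustrative sketch (TF 1.x API), not the asker's exact model.
x = tf.placeholder(tf.float32, [None, 784], name='x-input')
y_ = tf.placeholder(tf.float32, [None, 10], name='y-input')
is_training = tf.placeholder(tf.bool, name='is_training')

net = tf.reshape(x, [-1, 28, 28, 1])
net = tf.layers.conv2d(net, 32, [5, 5], padding='same', activation=None)
# training=is_training selects batch statistics during training and the
# accumulated moving averages at test time.
net = tf.layers.batch_normalization(net, training=is_training)
net = tf.nn.relu(net)
net = tf.layers.max_pooling2d(net, [2, 2], 2)
net = tf.layers.flatten(net)
logits = tf.layers.dense(net, 10)

loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits_v2(labels=y_, logits=logits))

# The moving mean/variance are updated by ops in GraphKeys.UPDATE_OPS;
# without this control dependency they never run, and test-time accuracy
# collapses even though training accuracy looks fine.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

At train time the model would be fed is_training=True; at evaluation time is_training=False, so the layer uses the moving averages accumulated by the update ops above.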