TensorFlow always converging to same output for all items after training

無奈伤痛 2021-01-15 17:30

This is the piece of code that I am working with:

import tensorflow as tf
import numpy as np
from PIL import Image
from os import listdir

nodes_l1 = 500
nod
1 Answer
  • 2021-01-15 18:05

    There are a few possible issues I see with this. The first is that you are using densely connected layers to process large ImageNet-style images. You should be using convolutional networks for images; I think this is your biggest problem. Only after applying a pyramid of convolutional / pooling layers to reduce the spatial dimensions into "features" should you add a dense layer.

    https://www.tensorflow.org/versions/r0.11/tutorials/deep_cnn/index.html
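    The conv/pool pyramid followed by a dense head can be sketched like this. This uses the modern tf.keras API as an assumption (the question's TensorFlow version is unknown), and the input shape, filter counts, and 10-class output are illustrative placeholders, not values from the question:

```python
import tensorflow as tf

# Hypothetical architecture: two conv/pool stages shrink the spatial
# dimensions before any dense layer sees the data.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),          # assumed image size
    tf.keras.layers.Conv2D(32, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(2),                   # 64x64 -> 32x32
    tf.keras.layers.Conv2D(64, 3, padding="same", activation="relu"),
    tf.keras.layers.MaxPooling2D(2),                   # 32x32 -> 16x16
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(500, activation="relu"),     # dense only after convs
    tf.keras.layers.Dense(10, activation="softmax"),   # softmax on output only
])

print(model.output_shape)
```

    The key point is that the dense layer operates on learned features of reduced spatial size, not on raw pixels.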

    Secondly, even if you were going to use dense layers, you should not apply the softmax function as an activation between hidden layers (with some exceptions, such as in attention models, but that is a more advanced concept). Softmax forces the sum of the activations in a layer to one, which you probably don't want. I would change the activation between hidden layers to relu, or at least tanh.
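    A small NumPy sketch makes the problem concrete: softmax squashes any hidden layer onto the probability simplex, discarding the magnitude information that relu preserves (the input values here are arbitrary examples):

```python
import numpy as np

def softmax(z):
    # subtract the max for numerical stability
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def relu(z):
    return np.maximum(z, 0.0)

small = np.array([0.1, 0.2, 0.3])
large = small * 100  # same direction, 100x the magnitude

# softmax forces both onto the simplex: each output sums to exactly 1,
# so downstream layers cannot tell the two inputs' scales apart
print(softmax(small).sum())  # 1.0
print(softmax(large).sum())  # 1.0

# relu keeps the scale, which is what hidden layers usually need
print(relu(large).max())     # 30.0
```

    This is why softmax belongs on the output layer (to produce class probabilities), not between hidden layers.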

    Finally, I have found that when a network's outputs collapse to a constant value, it can help to lower the learning rate. I don't think this is your issue, though; my first two comments are what you should focus on.
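    The learning-rate effect can be seen even on a toy problem. This NumPy sketch (not the asker's network, just an illustration) runs plain gradient descent on f(x) = x², whose gradient is 2x; a moderate step size converges while an oversized one diverges:

```python
import numpy as np

def gradient_descent(lr, steps=50, x0=1.0):
    """Minimise f(x) = x**2 by gradient descent; grad f(x) = 2x."""
    x = x0
    for _ in range(steps):
        x -= lr * 2.0 * x
    return x

# moderate learning rate: iterates shrink toward the minimum at 0
print(abs(gradient_descent(lr=0.1)))
# oversized learning rate: each step overshoots and the iterates blow up
print(abs(gradient_descent(lr=1.1)))
```

    In a real model the same idea applies through the optimizer's learning-rate argument, e.g. `tf.keras.optimizers.Adam(learning_rate=1e-4)`.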
