Deep-Learning NaN loss reasons

执念已碎 2020-11-28 02:12

Perhaps too general a question, but can anyone explain what would cause a Convolutional Neural Network to diverge?

Specifics:

I am using Tensorflow's iris_tra

9 answers
  • 2020-11-28 02:43

    There are lots of things I have seen make a model diverge.

    1. Too high of a learning rate. You can often tell if this is the case if the loss begins to increase and then diverges to infinity.

    2. I am not too familiar with the DNNClassifier, but I am guessing it uses the categorical cross-entropy cost function. This involves taking the log of the prediction, which diverges as the prediction approaches zero. That is why people usually add a small epsilon value to the prediction to prevent this divergence. I am guessing the DNNClassifier probably does this or uses the TensorFlow op for it. Probably not the issue.

    3. Other numerical stability issues can exist, such as division by zero, where adding the epsilon can help. Another less obvious one is the square root, whose derivative can diverge if it is not properly simplified when dealing with finite-precision numbers. Yet again, I doubt this is the issue in the case of the DNNClassifier.

    4. You may have an issue with the input data. Try calling assert not np.any(np.isnan(x)) on the input data to make sure you are not introducing a NaN. Also make sure all of the target values are valid. Finally, make sure the data is properly normalized. You probably want to have the pixels in the range [-1, 1] and not [0, 255] (see the sketch after this list).

    5. The labels must be in the domain of the loss function, so if you are using a log-based loss function, all labels must be non-negative (as noted by evan pu and the comments below).
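
    A minimal NumPy sketch of points 2–4; the array names x, y, and predictions are illustrative stand-ins, not code from the original question:

    import numpy as np

    # Illustrative stand-ins: `x` holds input features (e.g. pixels in [0, 255]),
    # `y` holds integer class labels.
    x = np.random.randint(0, 256, size=(32, 784)).astype(np.float32)
    y = np.random.randint(0, 10, size=(32,))

    # Point 4: check the inputs and labels before training.
    assert not np.any(np.isnan(x)), "input features contain NaN"
    assert np.all(y >= 0), "labels must be non-negative for log-based losses"

    # Point 4: normalize pixels from [0, 255] to [-1, 1].
    x = x / 127.5 - 1.0

    # Points 2-3: clip predictions away from 0 before taking the log, so that
    # log(0) can never produce -inf/NaN in a cross-entropy-style loss.
    eps = 1e-7
    predictions = np.full((32, 10), 0.1)  # stand-in for the model's softmax output
    predictions = np.clip(predictions, eps, 1.0 - eps)
    loss = -np.mean(np.log(predictions[np.arange(len(y)), y]))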

  • 2020-11-28 02:47

    Although most of the points have already been discussed, I would like to highlight one more reason for NaN that is missing.

    tf.estimator.DNNClassifier(
        hidden_units, feature_columns, model_dir=None, n_classes=2, weight_column=None,
        label_vocabulary=None, optimizer='Adagrad', activation_fn=tf.nn.relu,
        dropout=None, config=None, warm_start_from=None,
        loss_reduction=losses_utils.ReductionV2.SUM_OVER_BATCH_SIZE, batch_norm=False
    )
    

    By default the activation function is ReLU. It is possible that an intermediate layer generates negative values and ReLU converts them all to 0, which gradually stops training.

    I observed that LeakyReLU is able to solve such problems.
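
    For example, a minimal sketch of swapping the default activation for leaky ReLU in the estimator above; the feature column and layer sizes are illustrative:

    import tensorflow as tf

    # Illustrative feature column and layer sizes; the point is only activation_fn.
    feature_columns = [tf.feature_column.numeric_column("x", shape=[4])]

    classifier = tf.estimator.DNNClassifier(
        hidden_units=[10, 20, 10],
        feature_columns=feature_columns,
        n_classes=3,
        activation_fn=tf.nn.leaky_relu,  # keeps a small gradient for negative inputs
    )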

  • 2020-11-28 02:49

    If you'd like to gather more information on the error and if the error occurs in the first few iterations, I suggest you run the experiment in CPU-only mode (no GPUs). The error message will be much more specific.

    Source: https://github.com/tensorflow/tensor2tensor/issues/574
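
    For example, a minimal way to force a CPU-only run is to hide the GPUs before importing TensorFlow:

    import os

    # Hide all GPUs so TensorFlow falls back to CPU kernels, which typically
    # report the failing op with a clearer error message and stack trace.
    os.environ["CUDA_VISIBLE_DEVICES"] = "-1"

    import tensorflow as tf  # import only after setting the environment variable

    Equivalently, launch the training script with CUDA_VISIBLE_DEVICES="" on the command line (the script name is up to you).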
