Why “softmax_cross_entropy_with_logits_v2” backprops into labels

后悔当初 · 2021-01-18 10:45

I am wondering why, in TensorFlow 1.5.0 and later, softmax_cross_entropy_with_logits_v2 defaults to backpropagating into both labels and logits. What are some applications where backpropagating into the labels is useful?

1 Answer
  •  夕颜 · 2021-01-18 11:08

    I saw the GitHub issue below asking the same question; you might want to follow it for future updates.

    https://github.com/tensorflow/minigo/issues/37

    I don't speak for the developers who made this decision, but I would surmise that they made it the default because it is indeed used often, and for most applications where you aren't backpropagating into the labels, the labels are constants anyway and won't be adversely affected.
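    To make that concrete, here is a minimal sketch, assuming the TF 1.x graph-mode API (the tensors and values are made up for illustration): by default the v2 op lets gradients flow into both arguments, and if you don't want gradients flowing into your labels you can pass them through tf.stop_gradient to recover the old behaviour.

        import tensorflow as tf

        logits = tf.Variable([[2.0, 0.5, -1.0]])      # trainable scores
        soft_labels = tf.Variable([[0.7, 0.2, 0.1]])  # labels produced by another part of the graph

        # Default v2 behaviour: gradients flow into BOTH logits and soft_labels.
        loss_v2 = tf.nn.softmax_cross_entropy_with_logits_v2(
            labels=soft_labels, logits=logits)

        # Old v1-style behaviour: block the gradient path into the labels.
        loss_v1_style = tf.nn.softmax_cross_entropy_with_logits_v2(
            labels=tf.stop_gradient(soft_labels), logits=logits)

        grads_v2 = tf.gradients(loss_v2, [logits, soft_labels])        # both entries are tensors
        grads_v1 = tf.gradients(loss_v1_style, [logits, soft_labels])  # label gradient is None

    If your labels really are constants (e.g. one-hot targets fed from a dataset), the extra gradient path simply has nowhere to go, which is why the default is harmless in the common case.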

    Two common use cases for backpropagating into the labels are:

    • Creating adversarial examples

    There is a whole field of study around building adversarial examples that fool a neural network. Many of the approaches involve training a network, then holding the network fixed and backpropagating into the labels (the original image) to tweak it, usually under some constraints, so that it fools the network into misclassifying the image; a rough sketch of this kind of input-gradient step appears after these two examples.

    • Visualizing the internals of a neural network

    I also recommend watching the deepviz toolkit video on YouTube; you'll learn a ton about the internal representations learned by a neural network.

    https://www.youtube.com/watch?v=AgkfIQ4IGaM

    If you continue digging into that and find the original paper, you'll find that they also backpropagate into the labels to generate images that highly activate certain filters in the network, in order to understand them.
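    As a rough sketch of the "hold the network fixed and backpropagate into the image" idea, in the same TF 1.x style as above: everything here is made up for illustration (the one-layer stand-in "network", the target class, epsilon, the number of steps), it is a generic FGSM-style targeted step rather than the exact procedure from any particular paper, and the gradient is taken with respect to the image while the weights stay fixed.

        import numpy as np
        import tensorflow as tf

        # Hypothetical frozen "network": one dense layer standing in for a trained classifier.
        rng = np.random.RandomState(0)
        W = tf.constant(rng.randn(784, 10).astype(np.float32))
        b = tf.constant(np.zeros(10, dtype=np.float32))

        image = tf.Variable(rng.rand(1, 784).astype(np.float32))  # the input we perturb
        target = tf.one_hot([3], depth=10)                        # class we want to force

        logits = tf.matmul(image, W) + b
        loss = tf.nn.softmax_cross_entropy_with_logits_v2(labels=target, logits=logits)

        # Gradient of the loss w.r.t. the IMAGE; W and b are never updated.
        grad_image, = tf.gradients(loss, [image])

        # One FGSM-style targeted step: move the image against the gradient so the
        # frozen network becomes more confident in the chosen target class.
        epsilon = 0.01
        adversarial_step = image.assign(image - epsilon * tf.sign(grad_image))

        with tf.Session() as sess:
            sess.run(tf.global_variables_initializer())
            for _ in range(10):
                sess.run(adversarial_step)
            print(sess.run(tf.argmax(logits, axis=1)))  # class the frozen network now predicts

    The feature-visualization case is the same mechanic with a different objective: instead of a cross-entropy loss, you do gradient ascent on the activation of a chosen filter and update the image rather than the weights.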
