Easy way to clamp Neural Network outputs between 0 and 1?


Question


So I'm working on writing a GAN, and I want to clamp my network's output: set it to 0 if it is less than 0, to 1 if it is greater than 1, and leave it unchanged otherwise. I'm pretty new to TensorFlow, and I don't know of a TensorFlow function or activation that does this without unwanted side effects. So I wrote my loss function so that it calculates the loss as if the output were clamped, with this code:

def discriminator_loss(real_output, fake_output):
    real_output_clipped = min(max(real_output.numpy()[0], 0), 1)
    fake_output_clipped = min(max(fake_output.numpy()[0], 0), 1)

    real_clipped_tensor = tf.Variable([[real_output_clipped]], dtype="float32")
    fake_clipped_tensor = tf.Variable([[fake_output_clipped]], dtype="float32")

    real_loss = cross_entropy(tf.ones_like(real_output), real_clipped_tensor)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_clipped_tensor)

    total_loss = real_loss + fake_loss
    return total_loss

but I get this error:

ValueError: No gradients provided for any variable: ['dense_50/kernel:0', 'dense_50/bias:0', 'dense_51/kernel:0', 'dense_51/bias:0', 'dense_52/kernel:0', 'dense_52/bias:0', 'dense_53/kernel:0', 'dense_53/bias:0'].

Does anyone know a better way to do this, or a way to fix this error?

Thanks!


Answer 1:


You can apply a ReLU layer from Keras as your final layer and set max_value=1.0. For example:

model = tf.keras.Sequential()
model.add(tf.keras.layers.Dense(32, input_shape=(16,)))
model.add(tf.keras.layers.Dense(32))
model.add(tf.keras.layers.ReLU(max_value=1.0))

You can read more about it here: https://www.tensorflow.org/api_docs/python/tf/keras/layers/ReLU
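
If the discriminator ends in a single output unit (an assumption about the asker's model, which isn't shown), a clamped version might look like this sketch:

import tensorflow as tf

# Hypothetical discriminator: the layer sizes and input shape are placeholders.
# The point is the final ReLU(max_value=1.0), which maps values below 0 to 0,
# values above 1 to 1, and leaves everything in between unchanged.
discriminator = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(16,)),
    tf.keras.layers.Dense(1),
    tf.keras.layers.ReLU(max_value=1.0),
])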




Answer 2:


TF probably does not know how to update your network weights based on this loss. The inputs to the cross entropy are tensors (variables) created directly from NumPy arrays, so they are not connected to your actual network outputs.

If you want to perform operations on tensors that will remain within the graph and (hopefully) be differentiable, use the available TF operations. There's a "clip_by_value" operation described here: https://www.tensorflow.org/versions/r1.15/api_docs/python/tf/clip_by_value.

E.g. real_output_clipped = tf.clip_by_value(real_output, clip_value_min=0, clip_value_max=1)
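
As a minimal sketch, the asker's loss function could be rewritten to clip inside the graph, assuming cross_entropy is the same loss object used in the question (e.g. a tf.keras.losses.BinaryCrossentropy instance, which is not shown there):

def discriminator_loss(real_output, fake_output):
    # Clip with a TF op so the result stays connected to the network outputs
    # and gradients can flow back to the weights.
    real_output_clipped = tf.clip_by_value(real_output, clip_value_min=0.0, clip_value_max=1.0)
    fake_output_clipped = tf.clip_by_value(fake_output, clip_value_min=0.0, clip_value_max=1.0)

    real_loss = cross_entropy(tf.ones_like(real_output), real_output_clipped)
    fake_loss = cross_entropy(tf.zeros_like(fake_output), fake_output_clipped)
    return real_loss + fake_loss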



Source: https://stackoverflow.com/questions/62072838/easy-way-to-clamp-neural-network-outputs-between-0-and-1
