DCGANs: discriminator getting too strong too quickly to allow generator to learn [closed]


Question


I am trying to use this version of the DCGAN code (implemented in TensorFlow) with some of my data. I run into the problem of the discriminator becoming too strong way too quickly for the generator to learn anything.

Now there are some tricks typically recommended for that problem with GANs:

  • batch normalisation (already present in the DCGAN code)

  • giving the generator a head start.

I did some version of the latter by allowing 10 iterations of the generator per 1 of the discriminator (not just at the beginning, but throughout the entire training); a rough sketch of that update schedule is shown below.
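Roughly, the schedule looks like this (sess, d_optim, g_optim, inputs_real, inputs_z, next_batch, sample_noise and num_steps are stand-ins for the corresponding session, optimizer ops, placeholders and data helpers in the DCGAN-tensorflow code, not its actual names):

K_GEN = 10  # generator updates per discriminator update

for step in range(num_steps):
    batch_images = next_batch()   # real images, already scaled to [-1, 1]
    batch_z = sample_noise()      # latent vectors for the generator

    # one discriminator update
    sess.run(d_optim, feed_dict={inputs_real: batch_images, inputs_z: batch_z})

    # several generator updates on fresh noise
    for _ in range(K_GEN):
        sess.run(g_optim, feed_dict={inputs_z: sample_noise()})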

Adding more generator iterations in this case helps only by slowing down the inevitable: the discriminator grows too strong and suppresses the generator's learning.

Hence I would like to ask for advice: is there another way to deal with the problem of a too-strong discriminator?


Answer 1:


To summarise this topic, the generic advice would be:

  • try playing with the model parameters (learning rates, for instance; a sketch of slowing the discriminator down this way follows this list)
  • try adding more variety to the input data
  • try adjusting the architecture of both the generator and discriminator networks.
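One concrete version of the first point is giving the discriminator a smaller learning rate than the generator, so it cannot outpace the generator as quickly. A minimal sketch, assuming d_loss and g_loss are already defined and the two sub-networks live under 'discriminator'/'generator' variable scopes (the 0.0001/0.0004 values are illustrative, not from the linked code):

import tensorflow as tf

# Collect the variables of each sub-network by variable scope prefix.
t_vars = tf.trainable_variables()
d_vars = [v for v in t_vars if v.name.startswith('discriminator')]
g_vars = [v for v in t_vars if v.name.startswith('generator')]

# Slower discriminator, faster generator (illustrative values).
d_optim = tf.train.AdamOptimizer(learning_rate=0.0001, beta1=0.5).minimize(d_loss, var_list=d_vars)
g_optim = tf.train.AdamOptimizer(learning_rate=0.0004, beta1=0.5).minimize(g_loss, var_list=g_vars)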

However, in my case the issue was the data scaling: I changed the format of the input data from the initial .jpg to .npy and lost the rescaling on the way. Please note that this DCGAN-tensorflow code rescales the input data to the [-1, 1] range, and the model is tuned to work with this range.
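If you load raw .npy arrays yourself, the missing rescaling step might look roughly like this (the file name data.npy and the uint8 [0, 255] pixel range are assumptions for the sake of the example):

import numpy as np

images = np.load('data.npy').astype(np.float32)  # e.g. shape (N, H, W, C), values in [0, 255]
images = images / 127.5 - 1.0                    # rescale to [-1, 1], matching the generator's tanh output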




Answer 2:


I think there are several ways to weaken the discriminator:

  1. Try leaky_relu and dropout in the discriminator function:

    def leaky_relu(x, alpha, name="leaky_relu"): return tf.maximum(x, alpha * x, name=name)

Here is the entire definition:

def discriminator(images, reuse=False):

    # Implement a separate leaky_relu function
    def leaky_relu(x, alpha, name="leaky_relu"):
        return tf.maximum(x, alpha * x, name=name)

    # Leak parameter alpha
    alpha = 0.2

    # Add batch normalization, kernel initializer, the LeakyReLU activation function, etc. to the layers accordingly
    with tf.variable_scope('discriminator', reuse=reuse):
        # 1st conv with Xavier weight initialization to break symmetry and, in turn, help converge faster and avoid local minima
        conv = tf.layers.conv2d(images, 64, 5, strides=2, padding="same",
                                kernel_initializer=tf.contrib.layers.xavier_initializer())
        # Batch normalization
        bn = tf.layers.batch_normalization(conv, training=True)
        # Leaky ReLU activation function
        relu = leaky_relu(bn, alpha, name="leaky_relu")
        # Dropout: rate=0.2 drops 20% of the input units (rate is the opposite of keep_prob)
        drop = tf.layers.dropout(relu, rate=0.2)

        # 2nd conv with Xavier weight initialization, 128 filters
        conv = tf.layers.conv2d(drop, 128, 5, strides=2, padding="same",
                                kernel_initializer=tf.contrib.layers.xavier_initializer())
        bn = tf.layers.batch_normalization(conv, training=True)
        relu = leaky_relu(bn, alpha, name="leaky_relu")
        drop = tf.layers.dropout(relu, rate=0.2)

        # 3rd conv with Xavier weight initialization, 256 filters, strides=1 (no further downsampling)
        conv = tf.layers.conv2d(drop, 256, 5, strides=1, padding="same",
                                kernel_initializer=tf.contrib.layers.xavier_initializer())
        bn = tf.layers.batch_normalization(conv, training=True)
        relu = leaky_relu(bn, alpha, name="leaky_relu")
        drop = tf.layers.dropout(relu, rate=0.2)

        # Flatten the 7x7x256 feature map (256 to match the 3rd conv) and classify
        flatten = tf.reshape(drop, (-1, 7 * 7 * 256))
        logits = tf.layers.dense(flatten, 1)
        output = tf.sigmoid(logits)

        return output, logits
  2. Add label smoothing in the discriminator loss to prevent the discriminator from becoming too strong. Increase the smooth value according to the d_loss behaviour.

    d_loss_real = tf.reduce_mean( tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)*(1.0 - smooth)))
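For context, here is a sketch of how that smoothed real-label term might be combined with the fake-label and generator losses. The smooth value of 0.1 and the d_model_*/d_logits_* names follow the snippet above; the rest is illustrative rather than taken from the linked code:

smooth = 0.1  # increase if d_loss still collapses too fast

# Real images should be classified as 1, smoothed down to 0.9.
d_loss_real = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_real, labels=tf.ones_like(d_model_real) * (1.0 - smooth)))

# Generated images should be classified as 0.
d_loss_fake = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))

d_loss = d_loss_real + d_loss_fake

# The generator tries to make the discriminator output 1 on fake images.
g_loss = tf.reduce_mean(
    tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))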



Source: https://stackoverflow.com/questions/44313306/dcgans-discriminator-getting-too-strong-too-quickly-to-allow-generator-to-learn
