Reusing layer weights in Tensorflow
Question: I am using tf.slim to implement an autoencoder. It's fully convolutional, with the following architecture:

[conv, outputs = 1] => [conv, outputs = 15] => [conv, outputs = 25] => [conv_transpose, outputs = 25] => [conv_transpose, outputs = 15] => [conv_transpose, outputs = 1]

It has to be fully convolutional and I cannot do pooling (limitations of the larger problem). I want to use tied weights, so encoder_W_3 = decoder_W_1_Transposed (so the weights of the first decoder layer are the transposed weights of the last encoder layer).
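Conceptually, tying weights means the decoder reuses the encoder's kernel rather than allocating a second trainable variable. A minimal NumPy sketch of the idea using dense layers as a stand-in for the conv/conv_transpose pair (all shapes here are made up for illustration):

```python
import numpy as np

# Tied weights, sketched with dense layers (the conv / conv_transpose
# case is analogous): the decoder reuses the encoder's weight matrix,
# transposed, instead of creating its own parameters.

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 15))            # batch of 4, 15 input features

encoder_W_3 = rng.standard_normal((15, 25))  # hypothetical encoder weights
code = x @ encoder_W_3                       # encode: shape (4, 25)

decoder_W_1 = encoder_W_3.T                  # tied: no new parameters
recon = code @ decoder_W_1                   # decode back to shape (4, 15)
```

In the TensorFlow graph, the equivalent is to build the decoder op from the encoder's filter variable (e.g. via `tf.transpose`) instead of letting the layer create a fresh variable.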