Unsupervised pre-training for convolutional neural network in Theano


I would like to design a deep net with one (or more) convolutional layers (CNN) and one or more fully connected hidden layers on top.
For deep networks with fully connected layers, Theano offers methods for unsupervised pre-training, e.g., denoising autoencoders or RBMs. Is there an analogous way to pre-train the convolutional layers?

1 Answer
  • 2021-01-30 21:59

    This paper describes an approach for building a stacked convolutional autoencoder. Based on that paper and some Google searches, I was able to implement the described network. Basically, everything you need is described in the Theano convolutional network and denoising autoencoder tutorials, with one crucial exception: how to reverse the max-pooling step in the convolutional network. I was able to work that out using a method from this discussion; the trickiest part is figuring out the right dimensions for W_prime, as these depend on the feed-forward filter sizes and the pooling ratio. Here is my inverting function:

        import numpy as np
        import theano.tensor as T
        from theano.tensor.nnet import conv
        from theano.sandbox.neighbours import neibs2images

        def get_reconstructed_input(self, hidden):
            """ Computes the reconstructed input given the values of the hidden layer """
            # 'full'-mode convolution with the decoding filters restores the
            # spatial size that the 'valid' feed-forward convolution removed
            repeated_conv = conv.conv2d(input=hidden, filters=self.W_prime, border_mode='full')

            # make prod(poolsize) copies of every value ...
            multiple_conv_out = [repeated_conv.flatten()] * np.prod(self.poolsize)

            # ... and arrange them one neighbourhood per row, as neibs2images expects
            stacked_conv_neibs = T.stack(*multiple_conv_out).T

            # inverse of images2neibs: each row fills one poolsize block, i.e. the
            # maps are un-pooled back to the input shape (self.pl is the pooling shape)
            stretch_unpooling_out = neibs2images(stacked_conv_neibs, self.pl, self.x.shape)

            # add the decoding bias per feature map, then a rectified linear activation
            rectified_linear_activation = lambda x: T.maximum(0.0, x)
            return rectified_linear_activation(stretch_unpooling_out + self.b_prime.dimshuffle('x', 0, 'x', 'x'))
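
    On the W_prime dimensions: here is a minimal sketch of one common tied-weights choice, assuming forward filters W of shape (n_filters, n_channels, filter_h, filter_w). The helper below is hypothetical and not part of the class above, but it yields a W_prime whose 'full' convolution maps the n_filters feature maps back onto the n_channels input planes:

        # hypothetical tied-weights helper, not from the answer above
        def make_W_prime(W):
            # W: forward filters, shape (n_filters, n_channels, filter_h, filter_w)
            # returns filters of shape (n_channels, n_filters, filter_h, filter_w),
            # with each kernel flipped, for the 'full' decoding convolution
            return W[:, :, ::-1, ::-1].dimshuffle(1, 0, 2, 3)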
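
    To see the un-pooling trick in isolation, here is a small self-contained sketch (my own example with hard-coded shapes, not part of the original answer) that tiles each value of a 2x2 map over a 2x2 block:

        import numpy as np
        import theano
        import theano.tensor as T
        from theano.sandbox.neighbours import neibs2images

        pool = (2, 2)
        x = T.tensor4('x')  # (batch, channels, rows, cols)
        # one copy of every value per cell of the pooling block ...
        copies = [x.flatten()] * np.prod(pool)
        # ... arranged one neighbourhood per row, as neibs2images expects
        neibs = T.stack(*copies).T
        # each row fills one 2x2 block of the 4x4 output
        upsample = theano.function([x], neibs2images(neibs, pool, (1, 1, 4, 4)))

        inp = np.arange(4, dtype=theano.config.floatX).reshape(1, 1, 2, 2)
        print(upsample(inp))  # every value repeated over its 2x2 block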
    