autoencoder

How to initialise weights of an MLP using an autoencoder #2nd part - Deep autoencoder #3rd part - Stacked autoencoder

耗尽温柔 submitted on 2019-12-24 00:46:06
Question: I have built an autoencoder (1 encoder 8:5, 1 decoder 5:8) which takes the Pima-Indian-Diabetes dataset (https://raw.githubusercontent.com/jbrownlee/Datasets/master/pima-indians-diabetes.data.csv) and reduces its dimension from 8 to 5. I would now like to use these reduced features to classify the data using an MLP. Here I have some problems with my basic understanding of the architecture: how do I take the weights of the autoencoder and feed them into the MLP? I have checked these
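A common pattern for this question (all names and numbers below are illustrative, not from the post): after training the 8:5 encoder, either push the data through it and train an MLP on the 5-dimensional output, or copy the encoder's kernel and bias into the MLP's first layer as its initial weights. A NumPy sketch of both options:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Stand-in for the trained encoder's parameters: 8 inputs -> 5 hidden units
enc_W = rng.normal(size=(8, 5))
enc_b = rng.normal(size=(5,))

x = rng.normal(size=(3, 8))            # 3 samples with 8 raw features

# Option A: use the encoder as a fixed feature extractor
features = relu(x @ enc_W + enc_b)     # shape (3, 5), fed to a new MLP

# Option B: initialise the MLP's first layer with the encoder's weights
# (in Keras: mlp_layer.set_weights([enc_W, enc_b])) and fine-tune from there
mlp_W1, mlp_b1 = enc_W.copy(), enc_b.copy()
hidden = relu(x @ mlp_W1 + mlp_b1)     # identical to `features` before fine-tuning
```

With Option B the classifier layers stacked on top then train end-to-end, so the copied weights are only a starting point.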

Specifying a seq2seq autoencoder. What does RepeatVector do? And what is the effect of batch learning on predicting output?

柔情痞子 submitted on 2019-12-23 22:34:14
Question: I am building a basic seq2seq autoencoder, but I'm not sure if I'm doing it correctly.

model = Sequential()
# Encoder
model.add(LSTM(32, activation='relu', input_shape=(timesteps, n_features), return_sequences=True))
model.add(LSTM(16, activation='relu', return_sequences=False))
model.add(RepeatVector(timesteps))
# Decoder
model.add(LSTM(16, activation='relu', return_sequences=True))
model.add(LSTM(32, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(n_features)))
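On the RepeatVector part of the question: RepeatVector(n) simply copies its 2-D input n times along a new time axis, turning the encoder's final state vector into a constant-per-timestep sequence the decoder LSTM can unroll over. A minimal NumPy equivalent:

```python
import numpy as np

def repeat_vector(x, n):
    # Keras RepeatVector(n): (batch, features) -> (batch, n, features)
    return np.repeat(x[:, None, :], n, axis=1)

z = np.array([[1.0, 2.0, 3.0]])   # one sample, 3-dim latent vector
seq = repeat_vector(z, 4)         # shape (1, 4, 3): same vector at each timestep
```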

How to create an autoencoder where each layer of encoder should represent the same as a layer of the decoder

泪湿孤枕 submitted on 2019-12-23 21:18:12
Question: I want to build an autoencoder where each layer in the encoder has the same meaning as a corresponding layer in the decoder. So if the autoencoder is perfectly trained, the values of those layers should be roughly the same. Let's say the autoencoder consists of e1 -> e2 -> e3 -> d2 -> d1, where e1 is the input and d1 the output. A normal autoencoder trains d1 to reproduce e1, but I want the additional constraint that e2 and d2 are the same. Therefore I want an additional
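The extra constraint described here is usually implemented as an additional penalty term: train on the reconstruction loss plus a weighted distance between the matched layers. A NumPy sketch of the combined objective (the activations and the weight lam are made up for illustration):

```python
import numpy as np

def mse(a, b):
    return float(np.mean((a - b) ** 2))

# Hypothetical activations from one forward pass e1 -> e2 -> e3 -> d2 -> d1
e1 = np.array([1.0, 2.0, 3.0, 4.0])   # input
d1 = np.array([1.1, 1.9, 3.2, 3.8])   # reconstruction
e2 = np.array([0.5, -0.5])            # encoder's middle layer
d2 = np.array([0.6, -0.4])            # decoder's matching layer

lam = 0.5                              # weight of the layer-matching constraint
loss = mse(d1, e1) + lam * mse(d2, e2)
```

In a framework this would be added as a second loss term (or an add_loss-style penalty) on the captured intermediate tensors.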

Keras LSTM autoencoder with embedding layer

我是研究僧i submitted on 2019-12-23 15:49:27
Question: I am trying to build a text LSTM autoencoder in Keras. I want to use an embedding layer but I'm not sure how to implement this. The code looks like this.

inputs = Input(shape=(timesteps, input_dim))
embedding_layer = Embedding(numfeats + 1,
                            EMBEDDING_DIM,
                            weights=[data_gen.get_embedding_matrix()],
                            input_length=maxlen,
                            trainable=False)
embedded_sequence = embedding_layer(inputs)
encoded = LSTM(num_units)(inputs)
decoded = RepeatVector(timesteps)(encoded)
decoded = LSTM(???, return_sequences
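One thing worth noting for a setup like this (toy numbers, a sketch only): the encoder LSTM has to consume the embedded sequence rather than the raw token ids, and since an embedding lookup is not invertible, the decoder typically either reconstructs the embedded vectors or ends in a TimeDistributed softmax over the vocabulary. The lookup itself is just indexing:

```python
import numpy as np

# Toy frozen embedding matrix: vocabulary of 5 tokens, 4-dim embeddings
embedding_matrix = np.arange(20.0).reshape(5, 4)

token_ids = np.array([[1, 3, 0]])       # (batch=1, timesteps=3) integer ids
embedded = embedding_matrix[token_ids]  # (1, 3, 4): what the encoder LSTM should see
```

In the quoted code that would mean passing embedded_sequence, not inputs, into the encoder LSTM.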

Keras autoencoder “Error when checking target”

痴心易碎 submitted on 2019-12-23 03:32:15
Question: I'm trying to adapt the 2D convolutional autoencoder example from the Keras website (https://blog.keras.io/building-autoencoders-in-keras.html) to my own case, where I use 1D inputs:

from keras.layers import Input, Dense, Conv1D, MaxPooling1D, UpSampling1D
from keras.models import Model
from keras import backend as K
import scipy.io
import numpy as np

mat = scipy.io.loadmat('edata.mat')
emat = mat['edata']

input_img = Input(shape=(64, 1))  # adapt this if using `channels_first` image data
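An "Error when checking target" in this kind of model usually means the decoder's output shape no longer matches the input being used as the target. With a length-64 input and three 2x pool/upsample stages, the length bookkeeping looks like this (a sketch assuming padding='same' convolutions, which preserve length):

```python
# Sequence length through a 1-D conv autoencoder with three 2x pooling stages
def pool(n, k=2):  # MaxPooling1D
    return n // k

def up(n, k=2):    # UpSampling1D
    return n * k

n = 64
for _ in range(3):
    n = pool(n)        # Conv1D with padding='same' leaves n unchanged
bottleneck = n         # 64 -> 32 -> 16 -> 8

for _ in range(3):
    n = up(n)          # 8 -> 16 -> 32 -> 64, matching the (64, 1) input target
```

If any convolution uses padding='valid' instead, the lengths stop lining up and the target check fails.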

How can I implement KL-divergence regularization for Keras?

眉间皱痕 submitted on 2019-12-23 03:00:31
Question: This is a follow-up to the question Keras backend mean function: " 'float' object has no attribute 'dtype' "? I am trying to make a new regularizer for Keras. Here is my code:

import keras
from keras import initializers
from keras.models import Model, Sequential
from keras.layers import Input, Dense, Activation
from keras import regularizers
from keras import optimizers
from keras import backend as K

kullback_leibler_divergence = keras.losses.kullback_leibler_divergence

def kl
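For reference, the quantity such a regularizer is built around is KL(p‖q) = Σᵢ pᵢ·log(pᵢ/qᵢ), which is zero exactly when the two distributions match. A plain NumPy version (the eps clipping is added here to avoid log(0)):

```python
import numpy as np

def kl_divergence(p, q, eps=1e-10):
    # KL(p || q) = sum_i p_i * log(p_i / q_i); zero iff p == q
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.5, 0.5])
same = kl_divergence(p, p)                      # 0.0
diff = kl_divergence(p, np.array([0.9, 0.1]))   # strictly positive
```

A Keras regularizer would compute the same expression with backend ops (K.sum, K.log) so it returns a tensor rather than a Python float, which is what the "'float' object has no attribute 'dtype'" error hints at.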

Keras fit_generator producing exception: output of generator should be a tuple(x, y, sample_weight) or (x, y). Found: [[[[ 0.86666673

南笙酒味 submitted on 2019-12-22 18:04:03
Question: I am trying to build an autoencoder for non-MNIST, non-ImageNet data, using https://blog.keras.io/building-autoencoders-in-keras.html as my base. However, I am getting the following error:

Exception: output of generator should be a tuple (x, y, sample_weight) or (x, y). Found: [[[[ 0.86666673 0.86666673 0.86666673 ..., 0.62352943 0.627451 0.63137257] [ 0.86666673 0.86666673 0.86666673 ..., 0.63137257 0.627451 0.627451 ] [ 0.86666673 0.86666673 0.86666673 ..., 0.63137257 0.627451 0.62352943] .
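The exception says the generator yielded a bare array instead of an (x, y) tuple. For an autoencoder the target is the input itself, so the generator should yield (batch, batch). A minimal sketch (names invented):

```python
import numpy as np

def autoencoder_generator(data, batch_size=2):
    # fit_generator expects (x, y) tuples; for an autoencoder, y is x itself,
    # so yield a pair rather than the bare batch array from the error message
    while True:
        idx = np.random.randint(0, len(data), size=batch_size)
        batch = data[idx]
        yield batch, batch

data = np.random.rand(10, 4)
x, y = next(autoencoder_generator(data))   # a well-formed (x, y) pair
```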

How do I split a convolutional autoencoder?

时光总嘲笑我的痴心妄想 submitted on 2019-12-22 08:54:46
Question: I have compiled an autoencoder (full code is below), and after training it I would like to split it into two separate models: an encoder (layers e1...encoded) and a decoder (all other layers), into which I can feed manually modified images that had been encoded by the encoder. I have succeeded in creating the encoder as a separate model with:

encoder = Model(input_img, autoencoder.layers[6].output)

But the same approach fails when I try to make the decoder:

encoded_input = Input(shape=(4,4,8))
decoder =
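The usual fix for the decoder half is to create a fresh Input of the bottleneck shape and re-apply each trained decoder layer to it in sequence, rather than wiring the new Input straight into Model. Functionally it is just composition, sketched here with plain functions standing in for the trained layers (the layer index 7 is illustrative):

```python
import numpy as np

# Stand-ins for the trained decoder layers; in Keras this loop would be:
#   t = encoded_input
#   for layer in autoencoder.layers[7:]:   # every layer after `encoded`
#       t = layer(t)
#   decoder = Model(encoded_input, t)
decoder_layers = [lambda t: t * 2.0, lambda t: t + 1.0]

def decode(encoded):
    t = encoded
    for layer in decoder_layers:
        t = layer(t)
    return t

z = np.ones((1, 4, 4, 8))   # a manually modified (4, 4, 8) latent code
img = decode(z)             # same composition the rebuilt decoder applies
```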

How to inject values into the middle of TensorFlow graph?

北城余情 submitted on 2019-12-22 06:44:36
Question: Consider the following code:

x = tf.placeholder(tf.float32, (), name='x')
z = x + tf.constant(5.0)
y = tf.mul(z, tf.constant(0.5))

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: 30}))

The resulting graph is x -> z -> y. Sometimes I'm interested in computing y all the way from x, but sometimes I have z to start with and would like to inject this value into the graph, so z needs to behave like a partial placeholder. How can I do that? (For anyone interested why I need this. I am
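It may help to know that in TF1-style graphs, sess.run's feed_dict accepts any feedable tensor as a key, not just placeholders, so feeding {z: 12.0} overrides z and skips the x branch entirely. A dependency-free sketch of the two entry points:

```python
def run_y(x=None, z=None):
    # Mirrors the graph x -> z = x + 5 -> y = z * 0.5, with two entry points:
    # TensorFlow allows sess.run(y, feed_dict={z: 12.0}) to inject z directly
    if z is None:
        z = x + 5.0
    return z * 0.5

from_x = run_y(x=30.0)   # full path: (30 + 5) * 0.5
from_z = run_y(z=12.0)   # injected mid-graph: 12 * 0.5
```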

Tying Autoencoder Weights in a Dense Keras Layer

大兔子大兔子 submitted on 2019-12-21 05:26:26
Question: I am attempting to create a custom Dense layer in Keras to tie weights in an autoencoder. I have tried following an example for doing this with convolutional layers here, but it seemed like some of the steps did not apply to the Dense layer (also, the code is from over two years ago). By tying weights, I want the decode layer to use the transposed weight matrix of the encode layer. This approach is also taken in this article (page 5). Below is the relevant quote from the article:

Here, we
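The core of the tied-weights idea can be stated in a few lines: the decoder reuses the transpose of the encoder's kernel and keeps only its own bias, halving the number of weight matrices to learn. A NumPy sketch (shapes invented):

```python
import numpy as np

rng = np.random.default_rng(1)

W = rng.normal(size=(8, 4))    # encoder kernel: 8 inputs -> 4 latent units
b_enc = np.zeros(4)
b_dec = np.zeros(8)            # the tied decoder still has its own bias

x = rng.normal(size=(2, 8))
h = np.tanh(x @ W + b_enc)     # encode
x_hat = h @ W.T + b_dec        # decode with the *transposed* encoder kernel
```

A custom Dense layer implementing this would take the encoder layer in its constructor and use the transpose of that layer's kernel in its own call, creating only a bias variable of its own in build.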