autoencoder

Why do predictions differ for Autoencoder vs. Encoder + Decoder?

Submitted by 有些话、适合烂在心里 on 2020-01-05 06:07:09
Question: I built a 1D CNN autoencoder in Keras, following the advice in this SO question, where the encoder and decoder are separated. My goal is to re-use the decoder once the autoencoder has been trained. The central layer of my autoencoder is a Dense layer, because I would like to learn it afterwards. My problem is that if I compile and fit the whole autoencoder, written as Decoder()(Encoder()(x)) where x is the input, I get a different prediction from autoencoder.predict(training_set) than if I
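The usual resolution is that the composed autoencoder and the standalone encoder/decoder only agree when they share the same layer objects (and therefore the same weights); freshly instantiated layers give different predictions. A minimal numpy sketch of that idea, with hypothetical linear maps standing in for the Keras layers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shared weights: one encoder matrix, one decoder matrix.
W_enc = rng.normal(size=(8, 3))   # 8-dim input -> 3-dim code
W_dec = rng.normal(size=(3, 8))   # 3-dim code  -> 8-dim reconstruction

def encoder(x):
    return x @ W_enc

def decoder(z):
    return z @ W_dec

def autoencoder(x):
    # Built by composing the SAME encoder/decoder, as in Decoder()(Encoder()(x)).
    return decoder(encoder(x))

x = rng.normal(size=(5, 8))
# Because the weights are shared, the two prediction paths are identical.
assert np.allclose(autoencoder(x), decoder(encoder(x)))
```

In Keras the analogue is building `encoder = Model(inp, code)`, `decoder = Model(code_in, out)`, and `autoencoder = Model(inp, decoder(encoder(inp)))` from one set of layer instances; if the standalone models are built from new layer instances, their random initializations differ and so do the predictions.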

Save and load keras autoencoder

Submitted by 左心房为你撑大大i on 2020-01-05 05:26:09
Question: Look at this strange load/save model situation. I saved a variational autoencoder model along with its encoder and decoder: autoencoder.save("autoencoder_save", overwrite=True) encoder.save("encoder_save", overwrite=True) decoder.save("decoder_save", overwrite=True) After that I loaded all of them from disk: autoencoder_disk = load_model("autoencoder_save", custom_objects={'KLDivergenceLayer': KLDivergenceLayer, 'nll': nll}) encoder_disk = load_model("encoder_save", custom_objects={'KLDivergenceLayer':
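As the excerpt shows, models containing custom layers or losses must be reloaded with those symbols passed via `custom_objects`. Independent of the framework, the standard way to validate any save/load path is to compare weights and predictions before and after the roundtrip; a numpy/pickle sketch of that check (the file name and weight layout are hypothetical):

```python
import os
import pickle
import tempfile

import numpy as np

rng = np.random.default_rng(1)
weights = {"W": rng.normal(size=(4, 2)), "b": np.zeros(2)}

def predict(w, x):
    # A stand-in for model.predict: one linear layer.
    return x @ w["W"] + w["b"]

# Save the weights, then reload them from disk.
path = os.path.join(tempfile.mkdtemp(), "model.pkl")
with open(path, "wb") as f:
    pickle.dump(weights, f)
with open(path, "rb") as f:
    weights_disk = pickle.load(f)

# The roundtrip should reproduce both weights and predictions exactly.
x = rng.normal(size=(3, 4))
assert np.allclose(predict(weights, x), predict(weights_disk, x))
```

In Keras itself the analogous sanity check is `np.allclose(autoencoder.predict(x), autoencoder_disk.predict(x))` after `load_model(..., custom_objects={...})`.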

Deep autoencoder in Keras converting one dimension to another

Submitted by て烟熏妆下的殇ゞ on 2020-01-05 04:19:33
Question: I am doing an image-captioning task using vectors to represent both images and captions. The caption vectors have a length/dimension of 128. The image vectors have a length/dimension of 2048. What I want to do is train an autoencoder to get an encoder that can convert a text vector into an image vector, and a decoder that can convert an image vector back into a text vector. Encoder: 128 -> 2048. Decoder: 2048 -> 128. I followed this tutorial to implement a shallow
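In this setup the "autoencoder" is really two dense mappings between spaces of different width: 128 → 2048 and 2048 → 128. A shape-only numpy sketch of the two halves (the single-layer weights are hypothetical stand-ins for the Keras Dense layers):

```python
import numpy as np

rng = np.random.default_rng(0)
TEXT_DIM, IMAGE_DIM = 128, 2048  # caption and image vector sizes from the question

# Hypothetical single dense layers: encoder 128 -> 2048, decoder 2048 -> 128.
W_enc = rng.normal(size=(TEXT_DIM, IMAGE_DIM)) * 0.01
W_dec = rng.normal(size=(IMAGE_DIM, TEXT_DIM)) * 0.01

captions = rng.normal(size=(10, TEXT_DIM))   # a batch of caption vectors
image_pred = captions @ W_enc                # encoder output: predicted image vectors
caption_rec = image_pred @ W_dec             # decoder output: reconstructed captions

assert image_pred.shape == (10, IMAGE_DIM)
assert caption_rec.shape == (10, TEXT_DIM)
```

The training signal differs from a classic autoencoder: the 2048-dim middle layer is supervised against the real image vectors rather than left as a free latent code.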

How does binary cross entropy loss work on autoencoders?

Submitted by ≡放荡痞女 on 2019-12-31 10:45:15
Question: I wrote a vanilla autoencoder using only Dense layers. Below is my code: iLayer = Input((784,)) layer1 = Dense(128, activation='relu')(iLayer) layer2 = Dense(64, activation='relu')(layer1) layer3 = Dense(28, activation='relu')(layer2) layer4 = Dense(64, activation='relu')(layer3) layer5 = Dense(128, activation='relu')(layer4) layer6 = Dense(784, activation='softmax')(layer5) model = Model(iLayer, layer6) model.compile(loss='binary_crossentropy', optimizer='adam') (trainX, trainY),
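Binary cross-entropy on an autoencoder is applied elementwise: each of the 784 outputs is treated as an independent Bernoulli probability and compared against the (typically [0,1]-scaled) input pixel, then averaged. Note that for this per-pixel interpretation the output activation is normally sigmoid, not the softmax used in the excerpt. A numpy sketch of the loss:

```python
import numpy as np

def binary_crossentropy(y_true, y_pred, eps=1e-7):
    # Per-pixel Bernoulli log-loss, averaged over pixels and batch,
    # matching how Keras applies 'binary_crossentropy' elementwise.
    y_pred = np.clip(y_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([[0.0, 1.0, 1.0, 0.0]])   # four "pixels"
y_pred = np.array([[0.1, 0.9, 0.8, 0.2]])   # reconstructed probabilities

loss = binary_crossentropy(y_true, y_pred)
assert loss > 0
# A perfect reconstruction drives the loss toward zero.
assert binary_crossentropy(y_true, y_true) < 1e-5
```

For grayscale inputs that are not strictly binary, BCE still works as a reconstruction loss (it is minimized when the output equals the input), though mean squared error is the more literal choice.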

Getting error while adding embedding layer to lstm autoencoder

Submitted by 江枫思渺然 on 2019-12-31 01:52:10
Question: I have a seq2seq model which is working fine. I want to add an embedding layer to this network, but I ran into an error. This is my architecture using pretrained word embeddings, which works fine (the code is almost the same as the code available here, but I want to include the Embedding layer in the model rather than using the pretrained embedding vectors): LATENT_SIZE = 20 inputs = Input(shape=(SEQUENCE_LEN, EMBED_SIZE), name="input") encoded = Bidirectional(LSTM(LATENT_SIZE), merge
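An Embedding layer is a trainable lookup table: it maps integer token ids of shape (batch, SEQUENCE_LEN) to vectors of shape (batch, SEQUENCE_LEN, EMBED_SIZE). The usual source of errors when adding it is that the model's Input must then carry integer ids, not precomputed embedding vectors. A numpy sketch of the lookup (all sizes are hypothetical):

```python
import numpy as np

VOCAB_SIZE, SEQUENCE_LEN, EMBED_SIZE = 50, 6, 4

rng = np.random.default_rng(0)
E = rng.normal(size=(VOCAB_SIZE, EMBED_SIZE))  # the embedding matrix

# Integer token ids, shape (batch, SEQUENCE_LEN) -- what the Input must now carry.
token_ids = rng.integers(0, VOCAB_SIZE, size=(2, SEQUENCE_LEN))

# Fancy indexing performs the table lookup, just as an Embedding layer does.
embedded = E[token_ids]

assert embedded.shape == (2, SEQUENCE_LEN, EMBED_SIZE)
```

In Keras terms, the fix is typically `Input(shape=(SEQUENCE_LEN,))` followed by `Embedding(VOCAB_SIZE, EMBED_SIZE)`; keeping the old `Input(shape=(SEQUENCE_LEN, EMBED_SIZE))` produces a shape mismatch.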

keras autoencoder vs PCA

Submitted by 纵然是瞬间 on 2019-12-30 10:18:11
Question: I am playing with a toy example to understand PCA vs. a Keras autoencoder. I have the following code for understanding PCA: import numpy as np import matplotlib.pyplot as plt from mpl_toolkits.mplot3d import Axes3D from sklearn import decomposition from sklearn import datasets iris = datasets.load_iris() X = iris.data pca = decomposition.PCA(n_components=3) pca.fit(X) pca.explained_variance_ratio_ array([ 0.92461621, 0.05301557, 0.01718514]) pca.components_ array([[ 0.36158968, -0.08226889, 0
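PCA itself reduces to an SVD of the centered data, and the explained-variance ratios printed in the excerpt come straight from the squared singular values. A numpy-only sketch of the same computation (synthetic anisotropic data in place of iris):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic data with deliberately unequal variance per axis.
X = rng.normal(size=(150, 4)) @ np.diag([3.0, 1.0, 0.5, 0.1])

Xc = X - X.mean(axis=0)                      # center, as PCA does internally
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained_variance_ratio = s**2 / np.sum(s**2)
components = Vt                              # rows are the principal directions

assert np.isclose(explained_variance_ratio.sum(), 1.0)
# Ratios come out sorted, like pca.explained_variance_ratio_.
assert np.all(np.diff(explained_variance_ratio) <= 0)
```

The connection the question is probing: a linear autoencoder trained with MSE learns the same subspace as PCA (up to rotation within it); nonlinear activations are what let an autoencoder go beyond PCA.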

Python/Keras/Theano wrong dimensions for Deep Autoencoder

Submitted by 有些话、适合烂在心里 on 2019-12-30 04:25:10
Question: I'm trying to follow the Deep Autoencoder Keras example. I'm getting a dimension mismatch exception, but for the life of me, I can't figure out why. It works when I use only one encoded dimension, but not when I stack them. Exception: Input 0 is incompatible with layer dense_18: expected shape=(None, 128), found shape=(None, 32) The error is on the line decoder = Model(input=encoded_input, output=decoder_layer(encoded_input)) from keras.layers import Dense, Input from keras.models import
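With a stacked autoencoder the decoder is several layers, so grabbing only the last layer (which maps 128 → 784) and feeding it the 32-dim code produces exactly the (None, 128) vs (None, 32) mismatch in the exception; the standalone decoder must chain all of the decoder layers. A shape-only numpy sketch of the bug and the fix (weights are hypothetical stand-ins for the Dense layers):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical decoder weights mirroring the stacked example: 32 -> 64 -> 128 -> 784.
W1 = rng.normal(size=(32, 64))
W2 = rng.normal(size=(64, 128))
W3 = rng.normal(size=(128, 784))

code = rng.normal(size=(1, 32))  # the 32-dim encoded input

# Wrong: applying only the last decoder layer, as decoder_layer(encoded_input) does.
try:
    _ = code @ W3                # (1, 32) x (128, 784): dimension mismatch
    raise AssertionError("expected a shape mismatch")
except ValueError:
    pass                         # the numpy analogue of the Keras exception

# Right: chain every decoder layer in order.
out = code @ W1 @ W2 @ W3
assert out.shape == (1, 784)
```

In Keras terms, loop over the decoder layers (e.g. the last three of `autoencoder.layers`) and apply each one to the growing tensor, rather than calling a single `decoder_layer`.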

Difference between Autoencoder Network and Fully Convolutional Network

Submitted by 断了今生、忘了曾经 on 2019-12-24 20:19:23
Question: What is the main difference between autoencoder networks and fully convolutional networks? Please help me understand the difference between the architectures of these two networks. Answer 1: 1] Autoencoder: An autoencoder is a dimensionality-reduction technique. It has two parts, an encoder and a decoder. The encoder maps the raw data to a hidden representation (the latent-space representation). The decoder maps the hidden representation back to the raw data. The network automatically learns this hidden representation, and it

Issues Training CNN with Prime number input dimensions

Submitted by 三世轮回 on 2019-12-24 06:07:51
Question: I am currently developing a CNN model (an autoencoder) with Keras. This time my inputs are of shape (47, 47, 3), that is, a 47x47 image with 3 (RGB) channels. I have worked with some CNNs in the past, but this time my input dimensions are prime numbers (47 pixels). I think this is causing issues with my implementation, specifically when using MaxPooling2D and UpSampling2D in my model. I noticed that some dimensions are lost when max pooling and then upsampling. Using model.summary() I can see
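The dimension loss is just integer division: MaxPooling2D with pool size 2 floors odd sizes, so 47 → 23, and UpSampling2D then gives 23 × 2 = 46 — the lost row and column can never be recovered by upsampling alone. A quick check of the arithmetic in plain Python:

```python
def pool_then_upsample(size, pool=2):
    # MaxPooling2D floors odd spatial sizes; UpSampling2D multiplies back.
    pooled = size // pool
    return pooled * pool

assert pool_then_upsample(48) == 48   # even sizes survive the roundtrip
assert pool_then_upsample(47) == 46   # odd (here prime) sizes lose a pixel
```

Common workarounds are padding the input up to an even size such as 48x48 before the network, or adding ZeroPadding2D/Cropping2D layers inside the model to restore the expected shape.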