autoencoder

Value error with dimensions in designing a simple autoencoder

怎甘沉沦 submitted on 2019-12-13 06:29:36
Question: Hi, I am trying out a simple autoencoder in Python 3.5 using the Keras library. The issue I face is:

ValueError: Error when checking input: expected input_40 to have 2 dimensions, but got array with shape (32, 256, 256, 3)

My dataset is very small (60 RGB images of dimension 256*256, plus one image of the same type for validation). I am a bit new to Python. Please help.

import matplotlib.pyplot as plt
from keras.layers import Input, Dense
from keras.models import Model

# Declaring the model
encoding
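
A likely cause of this error, shown as a minimal sketch (not the asker's exact code): a Dense-only autoencoder expects flat 2-D input of shape (samples, features), so the 256*256*3 images must be reshaped before fitting. The array names and sizes below are placeholders.

import numpy as np
from keras.layers import Input, Dense
from keras.models import Model

# stand-in for the 60 RGB training images
x_train = np.random.rand(60, 256, 256, 3).astype("float32")
x_train = x_train.reshape(len(x_train), -1)        # -> (60, 196608)

input_dim = x_train.shape[1]
encoding_dim = 32

inp = Input(shape=(input_dim,))
encoded = Dense(encoding_dim, activation="relu")(inp)
decoded = Dense(input_dim, activation="sigmoid")(encoded)

autoencoder = Model(inp, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=10, batch_size=32)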

Stacked Autoencoder

时光毁灭记忆、已成空白 submitted on 2019-12-13 04:07:30
Question: I have a basic autoencoder structure and I want to change it to a stacked autoencoder. From what I know, a stacked AE differs in two ways: it is made up of layers of sparse vanilla AEs, and it does layer-wise training. I want to know whether sparsity is a necessity for stacked AEs, or whether just increasing the number of hidden layers in a vanilla AE structure makes it a stacked AE.

class Autoencoder(Chain):
    def __init__(self):
        super().__init__()
        with self.init_scope():
            # encoder part
            self.l1 = L.Linear(1308608, 500)
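
For context, a minimal sketch (Chainer, with made-up layer sizes) of what "stacked" usually means architecturally: several encoder/decoder layers chained together. Sparsity is an optional regularizer on the hidden activations rather than a structural requirement.

import chainer
import chainer.functions as F
import chainer.links as L

class StackedAutoencoder(chainer.Chain):
    def __init__(self):
        super().__init__()
        with self.init_scope():
            # encoder part
            self.enc1 = L.Linear(784, 500)
            self.enc2 = L.Linear(500, 250)
            # decoder part (mirror of the encoder)
            self.dec1 = L.Linear(250, 500)
            self.dec2 = L.Linear(500, 784)

    def __call__(self, x):
        h = F.relu(self.enc1(x))
        h = F.relu(self.enc2(h))
        h = F.relu(self.dec1(h))
        return self.dec2(h)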

InvalidType: Invalid operation is performed

风流意气都作罢 submitted on 2019-12-13 03:58:33
Question: I am trying to write a stacked autoencoder. Since this is a stacked autoencoder, we need to train the first autoencoder and pass the weights to the second autoencoder, so during training we need to define train_data_for_next_layer. Here I am getting the error:

InvalidType: Invalid operation is performed in: LinearFunction (Forward)
Expect: x.shape[1] == W.shape[1]
Actual: 784 != 250

I am having an issue with the last line. Is this problem due to an incorrect model layer? I want to know what the issue is.
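
A minimal sketch (hypothetical class and variable names) of the usual layer-wise setup in Chainer: the second autoencoder has to receive the encoded 250-dimensional output of the first one, not the raw 784-dimensional input, which is what the "784 != 250" shape check is complaining about.

import numpy as np
import chainer
import chainer.functions as F
import chainer.links as L

class AE(chainer.Chain):
    def __init__(self, n_in, n_hidden):
        super().__init__()
        with self.init_scope():
            self.enc = L.Linear(n_in, n_hidden)
            self.dec = L.Linear(n_hidden, n_in)

    def encode(self, x):
        return F.relu(self.enc(x))

    def __call__(self, x):
        return self.dec(self.encode(x))

x = np.random.rand(32, 784).astype(np.float32)        # stand-in batch

ae1 = AE(784, 250)
# ... train ae1 to reconstruct x ...

# data for the next layer: encode the inputs with the trained first encoder
with chainer.no_backprop_mode():
    train_data_for_next_layer = ae1.encode(x).data     # shape (32, 250)

ae2 = AE(250, 100)
recon = ae2(train_data_for_next_layer)                 # shapes now agree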

Tensorflow Autoencoder with custom training examples from binary file

若如初见. submitted on 2019-12-12 23:06:03
Question: I'm trying to adapt the TensorFlow autoencoder code found here (https://github.com/aymericdamien/TensorFlow-Examples/blob/master/examples/3_NeuralNetworks/autoencoder.py) to use my own training examples. My training examples are single-channel 29*29 (gray-level) images saved as UINT8 values contiguously in a binary file. I have created a module which creates data_batches that will guide the training. This is the module:

import tensorflow as tf

# various initialization variables
BATCH_SIZE =
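
A minimal sketch (TF 1.x queue-based input pipeline; the file name "train.bin" is hypothetical) of reading fixed-length 29*29 uint8 records from a raw binary file and batching them for the autoencoder:

import tensorflow as tf

IMG_BYTES = 29 * 29          # one gray-level image per record
BATCH_SIZE = 32

filename_queue = tf.train.string_input_producer(["train.bin"])
reader = tf.FixedLengthRecordReader(record_bytes=IMG_BYTES)
_, raw_record = reader.read(filename_queue)

image = tf.decode_raw(raw_record, tf.uint8)     # 841 uint8 values
image = tf.reshape(image, [IMG_BYTES])
image = tf.cast(image, tf.float32) / 255.0      # scale to [0, 1]

data_batch = tf.train.shuffle_batch(
    [image], batch_size=BATCH_SIZE, capacity=1000, min_after_dequeue=100)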

Getting the learned representation of the data from the unsupervised learning in pylearn2

余生长醉 submitted on 2019-12-11 08:13:00
Question: We can train an autoencoder in pylearn2 using the YAML file below (along with pylearn2/scripts/train.py):

!obj:pylearn2.train.Train {
    dataset: &train !obj:pylearn2.datasets.mnist.MNIST {
        which_set: 'train',
        start: 0,
        stop: 50000
    },
    model: !obj:pylearn2.models.autoencoder.DenoisingAutoencoder {
        nvis : 784,
        nhid : 500,
        irange : 0.05,
        corruptor: !obj:pylearn2.corruption.BinomialCorruptor {
            corruption_level: .2,
        },
        act_enc: "tanh",
        act_dec: null,    # Linear activation on the decoder side.
    },
    algorithm:
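
Once train.py has finished, the learned representation can be pulled out of the saved model. A minimal sketch (assuming the trained model was pickled to "dae.pkl" and the data is a (n_examples, 784) NumPy array; file name and shapes are placeholders):

import numpy as np
import theano
from pylearn2.utils import serial

model = serial.load("dae.pkl")                      # the trained DenoisingAutoencoder
X = model.get_input_space().make_theano_batch()     # symbolic input batch
encode = theano.function([X], model.encode(X))      # compile only the encoder

data = np.random.rand(10, 784).astype(theano.config.floatX)   # stand-in data
hidden = encode(data)                                # -> (10, 500) learned representation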

Unsure about the result my autoencoder neural network is giving me from Keras predict

三世轮回 submitted on 2019-12-11 04:08:52
Question: I'm trying to build an autoencoder neural network for finding outliers in a single-column list of text. My input has 138 lines and they look like this:

amaze_header_2.png
amaze_header.png
circle_shape.xml
disableable_ic_edit_24dp.xml
fab_label_background.xml
fab_shadow_black.9.png
fab_shadow_dark.9.png

I've built an autoencoder network using Keras, and I use a Python function to convert my text input into an array with the ASCII representation of each character, padded by zeroes so they all
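
A minimal sketch (hypothetical helper, not the asker's exact function) of turning each line into a fixed-length vector of ASCII codes, zero-padded and scaled to [0, 1], so it can be fed to a Dense autoencoder:

import numpy as np

lines = [
    "amaze_header_2.png",
    "circle_shape.xml",
    "fab_shadow_black.9.png",
]   # in practice, the 138 input lines

max_len = max(len(s) for s in lines)

def to_ascii_vector(s, length):
    codes = [ord(c) for c in s] + [0] * (length - len(s))
    return np.array(codes, dtype="float32") / 255.0

X = np.stack([to_ascii_vector(s, max_len) for s in lines])   # shape (n_lines, max_len)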

Is there a scatter_update() for placeholder in tensorflow

这一生的挚爱 submitted on 2019-12-11 03:47:42
Question: I am coding a denoising autoencoder function with TensorFlow (which is a little long, so I won't post the entire code) and everything is working well except when I add masking noise to a batch. Masking noise just sets a random proportion of the features to 0, so the problem is simply setting some values of a matrix to 0 (trivial if I had a np.array, for example). I see that, if it's a tf.Variable, one can modify an element of a matrix thanks to tf.scatter_update(), but then when I try with a
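
A minimal sketch (TF 1.x graph style, made-up shapes) of masking noise that sidesteps scatter_update entirely: build a random 0/1 mask of the same shape as the placeholder and multiply, which works on any tensor, not just a tf.Variable.

import tensorflow as tf

mask_prob = 0.3                                    # fraction of features to zero out
x = tf.placeholder(tf.float32, shape=[None, 784])

keep = tf.cast(tf.random_uniform(tf.shape(x)) >= mask_prob, tf.float32)
x_corrupted = x * keep                             # corrupted input fed to the encoder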

How to simplify DataLoader for Autoencoder in Pytorch

℡╲_俬逩灬. submitted on 2019-12-11 02:44:01
Question: Is there any easier way to set up the DataLoader, because the input and target data are the same for an autoencoder, and to load the data during training? The DataLoader always requires two inputs. Currently I define my dataloader like this:

X_train = rnd.random((300,100))
X_val = rnd.random((75,100))
train = data_utils.TensorDataset(torch.from_numpy(X_train).float(), torch.from_numpy(X_train).float())
val = data_utils.TensorDataset(torch.from_numpy(X_val).float(), torch.from_numpy(X_val)
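
A minimal sketch (made-up array sizes) of a simpler setup: TensorDataset can wrap a single tensor, and the same batch can serve as both input and target inside the training loop, so the data does not have to be stored twice.

import numpy as np
import torch
from torch.utils.data import DataLoader, TensorDataset

X_train = np.random.random((300, 100)).astype("float32")
train_ds = TensorDataset(torch.from_numpy(X_train))        # a single tensor
train_loader = DataLoader(train_ds, batch_size=32, shuffle=True)

for (batch,) in train_loader:                               # each item is a 1-tuple
    inputs, targets = batch, batch                          # autoencoder: target == input
    # ... forward pass, compute loss(output, targets), backward, optimizer step ...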

Keras shape mismatch while UpSampling

拥有回忆 submitted on 2019-12-11 00:56:21
Question: I'm trying to run this convolutional autoencoder sample, but with my own data, so I modified its InputLayer according to my images. However, there is a dimension problem at the output layer. I'm sure the problem is with UpSampling, but I'm not sure why this is happening. Here goes the code:

N, H, W = X_train.shape
input_img = Input(shape=(H, W, 1))  # adapt this if using `channels_first` image data format
x = Conv2D(16, (3, 3), activation='relu', padding='same')(input_img)
x = MaxPooling2D
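
The usual culprit, shown as a sketch with a hypothetical image size: each MaxPooling2D((2, 2)) rounds the spatial size down, while UpSampling2D((2, 2)) only doubles it, so an input whose height or width is not divisible by the total pooling factor comes back smaller than it went in. Padding the images up to a multiple of that factor (or cropping the decoder output) restores the symmetry.

import numpy as np

H, W = 100, 100            # example input size
n_pool = 3                 # three pooling/upsampling stages -> factor 2**3 = 8

def round_trip(size, n_pool):
    # MaxPooling2D((2, 2)) floors, UpSampling2D((2, 2)) doubles
    for _ in range(n_pool):
        size //= 2
    for _ in range(n_pool):
        size *= 2
    return size

print(round_trip(H, n_pool))                 # 96, no longer matches the 100-pixel input

factor = 2 ** n_pool
H_pad = int(np.ceil(H / factor)) * factor    # 104: pad the images to this size before training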

ValueError: Error when checking target: expected lstm_27 to have 2 dimensions, but got array with shape (1, 11, 1)

冷暖自知 submitted on 2019-12-11 00:16:05
Question: I am trying to incorporate the simple LSTM autoencoder mentioned on the keras.io website with a sequence input. It is throwing an error at the LSTM layer input.

from keras.layers import Input, LSTM, RepeatVector
from keras.models import Model
import numpy as np

def autoencoder(timesteps, input_dim):
    inputs = Input(shape=(timesteps, input_dim))
    encoded = LSTM(300)(inputs)
    decoded = RepeatVector(timesteps)(encoded)
    decoded = LSTM(input_dim, return_sequences=True)(decoded)
    encoder = Model(inputs,
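
A minimal sketch (timesteps and input_dim chosen to match the (1, 11, 1) target in the error) of the full sequence-to-sequence LSTM autoencoder that the keras.io example describes: the model that gets fit must be the one whose last layer returns sequences, so a 3-D target of shape (batch, timesteps, input_dim) matches; fitting the 2-D-output encoder instead raises exactly this kind of error.

import numpy as np
from keras.layers import Input, LSTM, RepeatVector
from keras.models import Model

timesteps, input_dim = 11, 1

inputs = Input(shape=(timesteps, input_dim))
encoded = LSTM(300)(inputs)                                 # (batch, 300)
decoded = RepeatVector(timesteps)(encoded)                  # (batch, 11, 300)
decoded = LSTM(input_dim, return_sequences=True)(decoded)   # (batch, 11, 1)

sequence_autoencoder = Model(inputs, decoded)               # train this model
encoder = Model(inputs, encoded)                            # use this one for features

sequence_autoencoder.compile(optimizer="adam", loss="mse")
X = np.random.rand(1, timesteps, input_dim)
sequence_autoencoder.fit(X, X, epochs=5)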