autoencoder

Reusing layer weights in TensorFlow

半城伤御伤魂 submitted on 2019-12-21 04:51:26
Question: I am using tf.slim to implement an autoencoder. It's fully convolutional with the following architecture: [conv, outputs = 1] => [conv, outputs = 15] => [conv, outputs = 25] => [conv_transpose, outputs = 25] => [conv_transpose, outputs = 15] => [conv_transpose, outputs = 1]. It has to be fully convolutional and I cannot do pooling (limitations of the larger problem). I want to use tied weights, so encoder_W_3 = decoder_W_1_Transposed (that is, the weights of the first decoder layer are the transposed weights of the last encoder layer).
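The excerpt cuts off there, but on the tied-weights question itself: one common pattern (not from the question; a minimal TF 1.x sketch with illustrative names and sizes) is to create each encoder kernel explicitly and hand the very same variable to tf.nn.conv2d_transpose. Conveniently, conv2d_transpose expects its filter as [height, width, out_channels, in_channels], which is exactly a forward conv kernel read in the decoding direction, so no explicit transpose op is needed:

```python
import tensorflow as tf  # TF 1.x graph-style code, matching the question's era

x = tf.placeholder(tf.float32, [None, 64, 64, 1])

# Encoder kernel created by hand so it can be shared with the decoder.
enc_w = tf.get_variable('enc_w', shape=[3, 3, 1, 15])
enc_b = tf.get_variable('enc_b', shape=[15])
h = tf.nn.relu(tf.nn.conv2d(x, enc_w, strides=[1, 1, 1, 1], padding='SAME') + enc_b)

# Decoder reuses the *same* kernel variable: for conv2d_transpose the filter
# layout [h, w, out_channels, in_channels] matches the encoder kernel as-is.
dec_b = tf.get_variable('dec_b', shape=[1])
out_shape = tf.stack([tf.shape(x)[0], 64, 64, 1])
x_hat = tf.nn.conv2d_transpose(h, enc_w, output_shape=out_shape,
                               strides=[1, 1, 1, 1], padding='SAME') + dec_b
```

Because both ops reference one variable, gradients from the encoder and decoder paths accumulate on it, which is what tied weights require.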

How to load trained autoencoder weights for decoder?

余生颓废 submitted on 2019-12-20 03:49:06
Question: I have a 1D CNN autoencoder with a dense central layer. I would like to train this autoencoder and save its model. I would also like to save the decoder part, with this goal: feed some central features (calculated independently) to the trained and loaded decoder, to see what images these independently calculated features produce when passed through the decoder.

    ## ENCODER
    encoder_input = Input(batch_shape=(None,501,1))
    x = Conv1D(256,3, activation='tanh', padding='valid')(encoder_input)
    x =
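A minimal sketch of one way to achieve this (a toy Dense version with illustrative sizes, not the asker's exact convolutional architecture): build the decoder as its own Model over the same layer objects used inside the full autoencoder, so it can be saved and reloaded on its own and then fed the independently calculated features:

```python
import numpy as np
from keras.layers import Input, Dense
from keras.models import Model, load_model

input_dim, latent_dim = 501, 32   # illustrative sizes

# Shared layer objects: reusing the same Dense instances in both models
# means the standalone decoder automatically carries the trained weights.
enc_layer = Dense(latent_dim, activation='tanh')
dec_layer = Dense(input_dim, activation='linear')

# Full autoencoder, used for training.
ae_in = Input(shape=(input_dim,))
autoencoder = Model(ae_in, dec_layer(enc_layer(ae_in)))
autoencoder.compile(optimizer='adam', loss='mse')

# Standalone decoder wired through the same dec_layer.
z_in = Input(shape=(latent_dim,))
decoder = Model(z_in, dec_layer(z_in))

x = np.random.rand(100, input_dim).astype('float32')
autoencoder.fit(x, x, epochs=1, verbose=0)

decoder.save('decoder.h5')                  # persists architecture + weights
decoder = load_model('decoder.h5')
features = np.random.rand(5, latent_dim)    # "independently calculated" features
print(decoder.predict(features).shape)      # (5, 501)
```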

Implementing an autoencoder in Keras

流过昼夜 submitted on 2019-12-20 01:39:52
Keras makes building neural networks remarkably simple; previously we used Sequential to build an LSTM (see: keras实现LSTM). Here we will use Keras's functional API to build more flexible network structures, such as this article's autoencoder; an introduction to autoencoders can be found here: deep autoencoder. Now let's begin.

Step 0: import the required packages

    import keras
    from keras.layers import Dense, Input
    from keras.datasets import mnist
    from keras.models import Model
    import numpy as np

Step 1: data preprocessing

One note here: the raw data is loaded with shape (60000, 28, 28), while the autoencoder uses (60000, 28*28); moreover, an autoencoder is unsupervised learning, so we only need to import x_train and x_test.

    (x_train, _), (x_test, _) = mnist.load_data()
    x_train = x_train.astype('float32')/255.0
    x_test = x_test.astype('float32')/255.0
    #print(x_train
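The excerpt breaks off above; for completeness, a minimal sketch (layer sizes are illustrative, not necessarily the original article's) of how such an autoencoder continues with the functional API, reusing the imports from Step 0:

```python
# Flatten to (60000, 28*28), as the preprocessing note requires.
x_train = x_train.reshape((x_train.shape[0], -1))
x_test = x_test.reshape((x_test.shape[0], -1))

# Encoder -> bottleneck -> decoder, wired with the functional API.
inp = Input(shape=(784,))
encoded = Dense(64, activation='relu')(inp)
decoded = Dense(784, activation='sigmoid')(encoded)

autoencoder = Model(inp, decoded)
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
autoencoder.fit(x_train, x_train, epochs=10, batch_size=256,
                validation_data=(x_test, x_test))
```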

Variational auto-encoder: implementing warm-up in Keras

守給你的承諾、 submitted on 2019-12-19 07:12:11
Question: I recently read this paper, which introduces a process called "Warm-Up" (WU) that consists of multiplying the KL-divergence term of the loss by a variable whose value depends on the epoch number (it evolves linearly from 0 to 1). I was wondering if this is the right way to do it:

    beta = K.variable(value=0.0)

    def vae_loss(x, x_decoded_mean):
        # cross entropy
        xent_loss = K.mean(objectives.categorical_crossentropy(x, x_decoded_mean))
        # kl divergence
        for k in range(n_sample):
            epsilon = K.random
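Not part of the question, but one workable warm-up pattern is to keep beta as a backend variable and update it from a callback, so the compiled loss picks up the new value every epoch. A self-contained toy sketch (the VAE architecture and sizes are illustrative, not the asker's):

```python
import numpy as np
import keras.backend as K
from keras.layers import Input, Dense, Lambda
from keras.models import Model
from keras.callbacks import Callback

input_dim, latent_dim = 20, 2          # illustrative sizes

x_in = Input(shape=(input_dim,))
h = Dense(16, activation='relu')(x_in)
z_mean = Dense(latent_dim)(h)
z_log_var = Dense(latent_dim)(h)

def sample(args):
    mu, log_var = args
    eps = K.random_normal(shape=K.shape(mu))
    return mu + K.exp(0.5 * log_var) * eps

z = Lambda(sample)([z_mean, z_log_var])
x_out = Dense(input_dim, activation='sigmoid')(z)
vae = Model(x_in, x_out)

beta = K.variable(0.0)                 # warm-up weight on the KL term

def vae_loss(x, x_decoded_mean):
    xent = K.mean(K.binary_crossentropy(x, x_decoded_mean), axis=-1)
    kl = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return xent + beta * kl            # beta is a variable, so updates take effect

class WarmUp(Callback):
    """Linearly anneal beta from 0 to 1 over n_epochs."""
    def __init__(self, n_epochs):
        super(WarmUp, self).__init__()
        self.n_epochs = n_epochs
    def on_epoch_begin(self, epoch, logs=None):
        K.set_value(beta, min(1.0, epoch / float(self.n_epochs)))

vae.compile(optimizer='adam', loss=vae_loss)
x = np.random.rand(256, input_dim).astype('float32')
vae.fit(x, x, epochs=3, batch_size=32, callbacks=[WarmUp(n_epochs=10)])
```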

TensorFlow: how is dataset.train.next_batch defined?

…衆ロ難τιáo~ submitted on 2019-12-18 11:28:28
Question: I am trying to learn TensorFlow and am studying the example at https://github.com/aymericdamien/TensorFlow-Examples/blob/master/notebooks/3_NeuralNetworks/autoencoder.ipynb. I have some questions about the code below:

    for epoch in range(training_epochs):
        # Loop over all batches
        for i in range(total_batch):
            batch_xs, batch_ys = mnist.train.next_batch(batch_size)
            # Run optimization op (backprop) and cost op (to get loss value)
            _, c = sess.run([optimizer, cost], feed_dict={X: batch_xs})
        # Display
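For reference, next_batch is defined on the DataSet helper that ships with the TF 1.x MNIST tutorials (tensorflow/contrib/learn/python/learn/datasets/mnist.py). In essence it serves consecutive slices of the data and reshuffles once an epoch is exhausted; a simplified sketch of that behavior (images only, labels are handled the same way):

```python
import numpy as np

class DataSet(object):
    """Stripped-down sketch of the tutorial helper behind mnist.train."""

    def __init__(self, images):
        self._images = images
        self._index_in_epoch = 0
        self._num_examples = images.shape[0]

    def next_batch(self, batch_size):
        if self._index_in_epoch + batch_size > self._num_examples:
            # Epoch finished: reshuffle the data and start over.
            perm = np.random.permutation(self._num_examples)
            self._images = self._images[perm]
            self._index_in_epoch = 0
        start = self._index_in_epoch
        self._index_in_epoch += batch_size
        return self._images[start:self._index_in_epoch]

# Usage: every call returns the next batch_size rows.
ds = DataSet(np.arange(10).reshape(5, 2))
print(ds.next_batch(2))
print(ds.next_batch(2))
```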

Extract encoder and decoder from trained autoencoder

纵然是瞬间 submitted on 2019-12-18 05:06:38
Question: I want to divide autoencoder learning and application into two parts, following https://blog.keras.io/building-autoencoders-in-keras.html and using the fashion-mnist data for testing purposes: load the images, do the fitting (which may take some hours or days), and use a callback to save the best autoencoder model. That process can be some weeks before the following part. Then use this best model (manually selected by filename) and plot the original image, the encoded representation made by the encoder of
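The excerpt ends mid-sentence, but the usual recipe for the second part is to reload the saved model and rebuild the encoder and decoder from its layer list. A sketch, assuming a plain sequential stack of layers with the bottleneck in the middle (the filename and the bottleneck index are illustrative):

```python
from keras.layers import Input
from keras.models import Model, load_model

autoencoder = load_model('best_autoencoder.h5')     # saved by the callback

# Encoder: from the autoencoder's input up to the bottleneck layer.
mid = len(autoencoder.layers) // 2                  # adjust to your architecture
bottleneck = autoencoder.layers[mid]
encoder = Model(autoencoder.input, bottleneck.output)

# Decoder: feed a fresh Input through the remaining (already trained) layers.
z = Input(shape=bottleneck.output_shape[1:])
y = z
for layer in autoencoder.layers[mid + 1:]:
    y = layer(y)                                    # reuses the trained weights
decoder = Model(z, y)

# encoded = encoder.predict(x_test); decoded = decoder.predict(encoded)
```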

Layer conv2d_3 was called with an input that isn't a symbolic tensor

一世执手 submitted on 2019-12-17 20:59:51
Question: Hi, I am building an image classifier for one-class classification in which I've used an autoencoder. While running this model I am getting this error: (ValueError: Layer conv2d_3 was called with an input that isn't a symbolic tensor. Received type: . Full input: [(128, 128, 3)]. All inputs to the layer should be tensors.)

    num_of_samples = img_data.shape[0]
    labels = np.ones((num_of_samples,),dtype='int64')
    labels[0:376]=0
    names = ['cat']
    Y = np_utils.to_categorical(labels, num_class)
    input_shape=img
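For context, this ValueError typically means a layer was called on a plain shape tuple, here [(128, 128, 3)] (e.g. img_data[0].shape), instead of on a symbolic Keras tensor. A minimal illustration of the failing and working patterns (filter counts are illustrative):

```python
from keras.layers import Input, Conv2D

input_shape = (128, 128, 3)

# Wrong: calling a layer on the shape tuple raises exactly this ValueError.
# x = Conv2D(32, (3, 3), activation='relu')(input_shape)

# Right: create a symbolic Input tensor first, then call the layer on it.
inp = Input(shape=input_shape)
x = Conv2D(32, (3, 3), activation='relu')(inp)
```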

Intermediate layer makes TensorFlow optimizer stop working

我的未来我决定 submitted on 2019-12-17 18:29:36
Question: This graph trains a simple signal identity encoder, and in fact shows that the weights are being evolved by the optimizer:

    import tensorflow as tf
    import numpy as np

    initia = tf.random_normal_initializer(0, 1e-3)

    DEPTH_1 = 16
    OUT_DEPTH = 1

    I = tf.placeholder(tf.float32, shape=[None,1], name='I') # input
    W = tf.get_variable('W', shape=[1,DEPTH_1], initializer=initia, dtype=tf.float32, trainable=True) # weights
    b = tf.get_variable('b', shape=[DEPTH_1], initializer=initia, dtype=tf.float32,
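Not part of the question, but when an optimizer appears to stall after an extra layer is inserted, one quick diagnostic is to evaluate the gradients of the loss with respect to the early weights directly. A standalone TF 1.x sketch of that check (the tiny graph here is illustrative, not the asker's full model):

```python
import numpy as np
import tensorflow as tf

# Tiny stand-in graph in the spirit of the question's identity encoder.
I = tf.placeholder(tf.float32, shape=[None, 1])
W = tf.get_variable('W_check', shape=[1, 16],
                    initializer=tf.random_normal_initializer(0, 1e-3))
out = tf.reduce_mean(tf.matmul(I, W))
loss = tf.square(out - 1.0)

# If this comes back ~0 everywhere, the layer in front is starving the
# optimizer of gradient signal (e.g. because of very small initial weights).
grad_W = tf.gradients(loss, [W])[0]

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    g = sess.run(grad_W, feed_dict={I: np.ones((4, 1), dtype=np.float32)})
    print('mean |dL/dW|:', np.abs(g).mean())
```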

ValueError: Error when checking target: expected model_2 to have shape (None, 252, 252, 1) but got array with shape (300, 128, 128, 3)

一世执手 submitted on 2019-12-17 05:14:22
Question: Hi, I am building an image classifier for one-class classification in which I've used an autoencoder. While running this model I am getting this error from the line (autoencoder_model.fit): (ValueError: Error when checking target: expected model_2 to have shape (None, 252, 252, 1) but got array with shape (300, 128, 128, 3).)

    num_of_samples = img_data.shape[0]
    labels = np.ones((num_of_samples,),dtype='int64')
    labels[0:376]=0
    names = ['cats']
    input_shape=img_data[0].shape
    X_train, X_test = train_test
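For context, this error says the model's output, shaped (None, 252, 252, 1), does not match the target array of images, shaped (300, 128, 128, 3). A sketch (illustrative layer sizes) of an autoencoder whose output matches 128x128x3 inputs, using padding='same' throughout and a 3-channel final layer, trained with the images themselves as the target:

```python
from keras.layers import Input, Conv2D, MaxPooling2D, UpSampling2D
from keras.models import Model

inp = Input(shape=(128, 128, 3))

# padding='same' keeps spatial dimensions aligned so the decoder can
# mirror the encoder exactly; 'valid' padding is what drifts the shapes.
x = Conv2D(16, (3, 3), activation='relu', padding='same')(inp)
x = MaxPooling2D((2, 2), padding='same')(x)                       # 128 -> 64
x = Conv2D(16, (3, 3), activation='relu', padding='same')(x)
x = UpSampling2D((2, 2))(x)                                       # 64 -> 128
out = Conv2D(3, (3, 3), activation='sigmoid', padding='same')(x)  # 3 channels, like the input

autoencoder_model = Model(inp, out)
autoencoder_model.compile(optimizer='adam', loss='mse')
# autoencoder_model.fit(X_train, X_train, ...)  # target = the images themselves
```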

How to have an LSTM autoencoder predict over the whole vocabulary while presenting words as embeddings

社会主义新天地 submitted on 2019-12-13 20:50:15
Question: So I have been working on an LSTM autoencoder model. I have also created various versions of this model. 1. Create the model using already-trained word embeddings: in this scenario, I used the weights of pre-trained GloVe vectors as the weights of the features (text data). This is the structure:

    inputs = Input(shape=(SEQUENCE_LEN, EMBED_SIZE), name="input")
    encoded = Bidirectional(LSTM(LATENT_SIZE), merge_mode="sum", name="encoder_lstm")(inputs)
    encoded = Lambda(rev_entropy)(encoded)
    decoded =
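The excerpt stops mid-definition, but for the title question itself, one common approach is to keep embeddings on the input side while letting the decoder project every timestep onto the vocabulary with a TimeDistributed softmax, trained against integer word ids. A sketch with illustrative sizes (the rev_entropy Lambda from the question is omitted):

```python
from keras.layers import (Input, LSTM, Bidirectional, RepeatVector,
                          TimeDistributed, Dense)
from keras.models import Model

SEQUENCE_LEN, EMBED_SIZE, LATENT_SIZE, VOCAB_SIZE = 20, 100, 64, 5000  # illustrative

inputs = Input(shape=(SEQUENCE_LEN, EMBED_SIZE), name='input')
encoded = Bidirectional(LSTM(LATENT_SIZE), merge_mode='sum',
                        name='encoder_lstm')(inputs)
decoded = RepeatVector(SEQUENCE_LEN)(encoded)
decoded = LSTM(LATENT_SIZE, return_sequences=True, name='decoder_lstm')(decoded)

# Project each timestep onto the whole vocabulary rather than back onto
# the embedding space, so the model predicts actual words.
outputs = TimeDistributed(Dense(VOCAB_SIZE, activation='softmax'))(decoded)

model = Model(inputs, outputs)
# Targets: integer word ids of shape (batch, SEQUENCE_LEN, 1).
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy')
```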