tf.keras

What is the difference between keras and tf.keras?

你说的曾经没有我的故事 submitted on 2020-01-22 15:25:14
Question: I'm learning TensorFlow and Keras. I'd like to try https://www.amazon.com/Deep-Learning-Python-Francois-Chollet/dp/1617294438/, and it seems to be written in Keras. Would it be fairly straightforward to convert the code to tf.keras? I'm not so much interested in the portability of the code as in the true difference between the two. Answer 1: At this point TensorFlow has pretty much entirely adopted the Keras API, and for a good reason: it's simple, easy to use and easy to learn, whereas "pure"
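Below is a minimal sketch (not part of the quoted answer) of how code written against standalone Keras typically ports to tf.keras: in most cases only the import lines change. The toy model, layer sizes and compile settings are illustrative placeholders.

```python
# Standalone Keras, as used in the book:
#   from keras.models import Sequential
#   from keras.layers import Dense
# tf.keras equivalent -- usually only the imports need to change:
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model = Sequential([
    Dense(32, activation='relu', input_shape=(784,)),   # placeholder layer sizes
    Dense(10, activation='softmax'),
])
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```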

CUDNN_STATUS_BAD_PARAM when trying to perform inference on a LSTM Seq2Seq with masked inputs

末鹿安然 submitted on 2020-01-22 15:00:20
Question: I'm using Keras layers on TensorFlow 2.0 to build a simple LSTM-based Seq2Seq model for text generation. The versions I'm using: Python 3.6.9, TensorFlow 2.0.0, CUDA 10.0, CUDNN 7.6.1, Nvidia driver version 410.78. I'm aware of the criteria TF needs in order to delegate to CUDNNLstm when a GPU is present (I do have a GPU and my model/data meet all these criteria). Training goes smoothly (with a warning message, see the end of this post) and I can verify that CUDNNLstm is being used. However, when I
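As a point of reference, here is a hedged sketch of an LSTM configured with the settings the TF 2.0 documentation lists as required for the cuDNN kernel; the embedding size, units and masking setup are placeholders, not the asker's actual model.

```python
import tensorflow as tf

encoder_inputs = tf.keras.Input(shape=(None,), dtype='int32')
x = tf.keras.layers.Embedding(input_dim=10000, output_dim=128,
                              mask_zero=True)(encoder_inputs)  # mask must be strictly right-padded
encoder_outputs, state_h, state_c = tf.keras.layers.LSTM(
    units=256,
    activation='tanh',               # required for the cuDNN path
    recurrent_activation='sigmoid',  # required for the cuDNN path
    recurrent_dropout=0.0,
    unroll=False,
    use_bias=True,
    return_sequences=True,
    return_state=True)(x)
```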

Keras ImageDataGenerator for multiple inputs and image based target output

谁说胖子不能爱 submitted on 2020-01-22 02:31:11
Question: I have a model which takes two images as inputs and generates a single image as a target output. All of my training image data is in the following sub-folders: input1, input2, target. Can I use the ImageDataGenerator class with methods like flow_from_directory and model.fit_generator in Keras to train the network? How can I do this, given that most examples I have come across deal with a single input and a label-based target output? In my case, I have non-categorical target output data and
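One common workaround (a sketch of my own, not an accepted answer) is to zip three ImageDataGenerator streams that share the same seed, so corresponding files stay aligned, and feed the resulting generator to fit_generator. The paths, image size and batch size below are assumptions; flow_from_directory expects images inside a subdirectory, hence classes=[...] pointing at the parent folder.

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

def make_flow(subdir, seed=42):
    gen = ImageDataGenerator(rescale=1.0 / 255)
    return gen.flow_from_directory(
        'data/',                 # hypothetical parent folder holding input1/, input2/, target/
        classes=[subdir],        # restrict the stream to one sub-folder
        class_mode=None,         # yield images only, no labels
        target_size=(256, 256),
        batch_size=16,
        seed=seed,               # identical seed keeps the three streams in step
        shuffle=True)

def three_way_generator():
    a, b, t = make_flow('input1'), make_flow('input2'), make_flow('target')
    while True:
        yield [next(a), next(b)], next(t)   # ([input1, input2], target image)

# model.fit_generator(three_way_generator(), steps_per_epoch=..., epochs=...)
```

This relies on the three folders containing matching files that list in the same order.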

Memory leak when running universal-sentence-encoder-large itterating on dataframe

人走茶凉 submitted on 2020-01-16 09:10:12
Question: I have 140K sentences I want to get embeddings for. I am using the TF_HUB Universal Sentence Encoder and am iterating over the sentences (I know it's not the best way, but when I try to feed over 500 sentences into the model it crashes). My environment is: Ubuntu 18.04, Python 3.7.4, TF 1.14, RAM: 16 GB, processor: i5. My code is:

version 1 — I iterate inside the tf.Session context manager

embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-large/3")
df = pandas_repository.get
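The usual TF 1.x fix for this kind of growth (sketched below, with an assumed batch size and column name) is to build the hub graph once with a placeholder and reuse a single session, instead of calling embed() inside the loop, since every embed(sentences) call adds new ops to the graph.

```python
import tensorflow as tf
import tensorflow_hub as hub

embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder-large/3")
sentences_ph = tf.placeholder(dtype=tf.string, shape=[None])
embeddings_op = embed(sentences_ph)          # the graph is built exactly once

sentences = df['sentence'].tolist()          # df is the asker's dataframe; the column name is a guess
batch_size = 256                             # assumed; small enough to avoid the crash
all_embeddings = []

with tf.Session() as sess:
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    for start in range(0, len(sentences), batch_size):
        batch = sentences[start:start + batch_size]
        all_embeddings.extend(sess.run(embeddings_op, feed_dict={sentences_ph: batch}))
```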

Create tf.keras callback to save model predictions and targets for each batch during training in tf 2.0

笑着哭i submitted on 2020-01-15 06:18:09
Question: In TensorFlow 2, fetches and assign are no longer supported. Accessing batch results in TF 1.x from a custom Keras callback is possible following the answer provided in https://stackoverflow.com/a/47081613/9949099. In tf.keras and TF 2.0 under eager execution, fetches are not supported, therefore the solution provided for TF 1.x does not work. Is there a way to get y_true and y_pred inside the on_batch_end callback of a tf.keras custom callback? I have tried to modify the answer working in
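One workaround that has been discussed for eager-mode tf.keras (a sketch, not an official API, and it may need adjusting across TF versions) is to stash y_true and y_pred into tf.Variables from a dummy metric, which runs inside the compiled train step, and read them back in the callback. The toy model, shapes and names below are assumptions.

```python
import numpy as np
import tensorflow as tf

y_true_var = tf.Variable(0.0, dtype=tf.float32, trainable=False, shape=tf.TensorShape(None))
y_pred_var = tf.Variable(0.0, dtype=tf.float32, trainable=False, shape=tf.TensorShape(None))

def capture_batch(y_true, y_pred):
    # Runs inside the train function, so it sees the per-batch tensors.
    y_true_var.assign(tf.cast(y_true, tf.float32))
    y_pred_var.assign(tf.cast(y_pred, tf.float32))
    return tf.constant(0.0)                      # dummy metric value

class BatchLogger(tf.keras.callbacks.Callback):
    def on_train_batch_end(self, batch, logs=None):
        self.last_y_true = y_true_var.numpy()
        self.last_y_pred = y_pred_var.numpy()

model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer='adam', loss='mse', metrics=[capture_batch])
model.fit(np.random.rand(64, 4), np.random.rand(64, 1),
          batch_size=16, callbacks=[BatchLogger()])
```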

Keras that does not support TensorFlow 2.0. We recommend using `tf.keras`, or alternatively, downgrading to TensorFlow 1.14

南笙酒味 submitted on 2020-01-05 08:27:09
Question: I am getting an error ("Keras that does not support TensorFlow 2.0. We recommend using tf.keras, or alternatively, downgrading to TensorFlow 1.14."). Any recommendations? Thanks.

import keras  # For building the Neural Network layer by layer
from keras.models import Sequential  # To randomly initialize the weights to small numbers close to 0 (but not 0)
from keras.layers import Dense
classifier = tf.keras.Sequential()
classifier.add(Dense(output_dim = 6, init = 'uniform', activation = 'relu',
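The usual fix is to stick to tf.keras throughout instead of mixing import keras with tf.keras.Sequential, and to use the current argument names (units / kernel_initializer rather than the Keras 1 output_dim / init). A sketch, where input_dim=11 and the second layer are placeholders for whatever the real network needs:

```python
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

classifier = Sequential()
classifier.add(Dense(units=6, kernel_initializer='uniform',
                     activation='relu', input_dim=11))        # input_dim is a placeholder
classifier.add(Dense(units=1, kernel_initializer='uniform', activation='sigmoid'))
classifier.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
```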

AttributeError: The layer has never been called and thus has no defined input shape

ぃ、小莉子 submitted on 2020-01-04 04:08:25
Question: I'm trying to build an autoencoder in TensorFlow 2.0 by creating three classes: Encoder, Decoder and AutoEncoder. Since I don't want to set input shapes manually, I'm trying to infer the decoder's output shape from the encoder's input_shape.

import os
import shutil
import numpy as np
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense, Layer

def mse(model, original):
    return tf.reduce_mean(tf.square(tf.subtract(model(original), original)))
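For context: a subclassed layer or model only knows its input shape after it has been built or called, which is what the error in the title is complaining about. One way around it (a sketch with illustrative layer sizes, not the asker's exact classes) is to pass the original dimension down explicitly and build the model once on dummy data:

```python
import tensorflow as tf
from tensorflow.keras import Model
from tensorflow.keras.layers import Dense

class Encoder(Model):
    def __init__(self, latent_dim=32):
        super().__init__()
        self.hidden = Dense(64, activation='relu')
        self.latent = Dense(latent_dim, activation='relu')

    def call(self, x):
        return self.latent(self.hidden(x))

class Decoder(Model):
    def __init__(self, original_dim):
        super().__init__()
        self.hidden = Dense(64, activation='relu')
        self.out = Dense(original_dim, activation='sigmoid')

    def call(self, z):
        return self.out(self.hidden(z))

class AutoEncoder(Model):
    def __init__(self, original_dim, latent_dim=32):
        super().__init__()
        self.encoder = Encoder(latent_dim)
        self.decoder = Decoder(original_dim)      # output dim passed in explicitly

    def call(self, x):
        return self.decoder(self.encoder(x))

auto = AutoEncoder(original_dim=784)
_ = auto(tf.zeros((1, 784)))   # one call builds every sub-layer and creates the weights
```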

How to deactivate a dropout layer called with training=True in a Keras model?

十年热恋 submitted on 2019-12-31 04:46:05
Question: I wish to view the final output of training a tf.keras model. In this case it would be an array of predictions from the softmax function, e.g. [0,0,0,1,0,1]. Other threads on here have suggested using model.predict(training_data), but this won't work for my situation, since I am using dropout at training and validation time, so neurons are randomly dropped and predicting again with the same data will give a different result.

def get_model():
    inputs = tf.keras.layers.Input(shape=(input_dims,))
    x =
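One way to keep stochastic (Monte-Carlo-style) dropout available while still being able to switch it off is not to hard-code training=True when wiring the Dropout layer, and instead let the training flag propagate from the model call. A sketch; input_dims, the layer sizes and the softmax width are placeholders:

```python
import tensorflow as tf

def get_model(input_dims=20):
    inputs = tf.keras.layers.Input(shape=(input_dims,))
    x = tf.keras.layers.Dense(64, activation='relu')(inputs)
    x = tf.keras.layers.Dropout(0.5)(x)           # no training=True hard-coded here
    outputs = tf.keras.layers.Dense(6, activation='softmax')(x)
    return tf.keras.Model(inputs, outputs)

model = get_model()
x = tf.random.normal((4, 20))
stochastic_preds    = model(x, training=True)    # dropout active (MC-dropout style)
deterministic_preds = model(x, training=False)   # dropout disabled for a stable final output
```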

__init__() got an unexpected keyword argument 'inputs'

孤者浪人 submitted on 2019-12-25 01:39:02
Question:

class Model:
    def __init__(self):
        self.model = Sequential()
        self.model.add(Conv2D(24, 3, 2, 'valid', input_shape=(75, 75, 3)))
        self.model.add(BatchNormalization())
        self.model.add(Conv2D(24, 3, 2))
        self.model.add(BatchNormalization())
        self.model.add(Conv2D(24, 3, 2))
        self.model.add(BatchNormalization())
        self.model.add(Conv2D(24, 3, 2))
        self.model.add(BatchNormalization())
        self.model.add(Flatten())

    def get_model(self):
        return self.model

class CNN_MLP:
    def __init__(self):
        model = Model()
        self
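The snippet is cut off, but the error itself means that some __init__ which accepts no inputs= keyword is being called with one; a plausible culprit here is that the user-defined class named Model shadows tf.keras.Model, so a later Model(inputs=..., outputs=...) call hits the custom constructor. Below is a sketch of the visible part made runnable and combined through the fully qualified tf.keras.Model (the Dense head is hypothetical):

```python
import tensorflow as tf
from tensorflow.keras.layers import BatchNormalization, Conv2D, Dense, Flatten, Input

def build_cnn():
    model = tf.keras.Sequential()
    model.add(Conv2D(24, 3, 2, 'valid', input_shape=(75, 75, 3)))
    model.add(BatchNormalization())
    for _ in range(3):
        model.add(Conv2D(24, 3, 2))
        model.add(BatchNormalization())
    model.add(Flatten())
    return model

cnn = build_cnn()
image_in = Input(shape=(75, 75, 3))
features = cnn(image_in)
out = Dense(1, activation='sigmoid')(features)            # hypothetical head
combined = tf.keras.Model(inputs=image_in, outputs=out)   # fully qualified, no name clash
```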

How to apply Layer Normalisation in LSTMCell

妖精的绣舞 submitted on 2019-12-24 19:00:25
Question: I want to apply layer normalisation to a recurrent neural network while using tf.compat.v1.nn.rnn_cell.LSTMCell. There is a LayerNormalization class, but how should I apply it in LSTMCell? I am using tf.compat.v1.nn.rnn_cell.LSTMCell because I want to use a projection layer. How should I achieve normalisation in this case?

class LM(tf.keras.Model):
    def __init__(self, hidden_size=2048, num_layers=2):
        super(LM, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        self.lstm
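One alternative (a sketch under the assumption that TensorFlow Addons is available) is tfa.rnn.LayerNormLSTMCell wrapped in tf.keras.layers.RNN. It has no num_proj-style projection, so the projection is approximated here with a Dense layer on the outputs; the vocabulary size, projection size and embedding are placeholders, not a drop-in replacement for tf.compat.v1.nn.rnn_cell.LSTMCell:

```python
import tensorflow as tf
import tensorflow_addons as tfa

class LM(tf.keras.Model):
    def __init__(self, hidden_size=2048, num_layers=2, proj_size=512, vocab_size=10000):
        super().__init__()
        cells = [tfa.rnn.LayerNormLSTMCell(hidden_size) for _ in range(num_layers)]
        self.rnn = tf.keras.layers.RNN(cells, return_sequences=True)   # stacked, layer-normalised
        self.embed = tf.keras.layers.Embedding(vocab_size, proj_size)
        self.proj = tf.keras.layers.Dense(proj_size)                   # stand-in for num_proj
        self.logits = tf.keras.layers.Dense(vocab_size)

    def call(self, token_ids):
        x = self.embed(token_ids)
        x = self.rnn(x)
        x = self.proj(x)
        return self.logits(x)
```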