tf.keras

model.summary() can't print output shape while using subclass model

痞子三分冷 submitted on 2020-04-09 19:05:27
Question: These are two methods for creating a Keras model, but the output shapes shown by summary() differ between the two methods. Obviously, the former prints more information and makes it easier to check the correctness of the network.

    import tensorflow as tf
    from tensorflow.keras import Input, layers, Model

    class subclass(Model):
        def __init__(self):
            super(subclass, self).__init__()
            self.conv = layers.Conv2D(28, 3, strides=1)

        def call(self, x):
            return self.conv(x)

    def func_api():
        x = Input
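A minimal sketch of a common workaround (not part of the excerpt above; the input shape (24, 24, 1) is an assumption): a subclassed model has no static graph until it is called, so summary() cannot infer per-layer output shapes. Wrapping the subclassed model's call in a functional Model built from a symbolic Input restores them:

    import tensorflow as tf
    from tensorflow.keras import Input, Model

    sub = subclass()                      # the subclassed model defined above
    x = Input(shape=(24, 24, 1))          # symbolic input with a known shape (assumed)
    wrapped = Model(inputs=x, outputs=sub.call(x))
    wrapped.summary()                     # now prints an output shape per layer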

saved_model.prune() in TF2.0

梦想与她 submitted on 2020-03-26 05:13:05
Question: I am trying to prune nodes of a SavedModel that was generated with tf.keras. The pruning script is as follows:

    svmod = tf.saved_model.load(fn)  # version 1
    #svmod = tfk.experimental.load_from_saved_model(fn)  # version 2
    feeds = ['foo:0']
    fetches = ['bar:0']
    svmod2 = svmod.prune(feeds=feeds, fetches=fetches)
    tf.saved_model.save(svmod2, '/tmp/saved_model/')  # version 1
    #tfk.experimental.export_saved_model(svmod2, '/tmp/saved_model/')  # version 2

If I use version #1, pruning works but gives ValueError
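A minimal sketch of a possible fix (an assumption, not taken from the excerpt): prune() returns a wrapped concrete function rather than a trackable object, and tf.saved_model.save() raises a ValueError for non-trackable arguments. Attaching the pruned function to a tf.Module and exporting it as a signature usually satisfies the saver:

    import tensorflow as tf

    module = tf.Module()
    module.pruned = svmod2  # keep a trackable reference to the pruned function
    tf.saved_model.save(module, '/tmp/saved_model/',
                        signatures={'serving_default': svmod2})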

Why does my model work with `tf.GradientTape()` but fail when using `keras.models.Model.fit()`?

白昼怎懂夜的黑 submitted on 2020-03-23 12:03:53
Question: After much effort, I managed to build a TensorFlow 2 implementation of an existing PyTorch style-transfer project. Then I wanted to get all the nice extra features that are available through standard Keras training, e.g. model.fit(). But the same model fails when learning through model.fit(): it seems to learn the content features, but is unable to learn the style features. This is the diagram of the model in question:

    def vgg_layers19(content_layers, style_layers, input_shape=(256,256,3)
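A minimal sketch of why the two training paths can diverge (an assumption, not from the excerpt; requires TF >= 2.2, where Model.train_step can be overridden): a manual GradientTape loop can apply arbitrary losses to intermediate feature tensors, while fit() only applies the compiled loss to declared model outputs. Overriding train_step lets fit() reuse the tape-based logic:

    import tensorflow as tf

    class StyleTransferModel(tf.keras.Model):
        def train_step(self, data):
            x, y = data
            with tf.GradientTape() as tape:
                y_pred = self(x, training=True)
                loss = self.compiled_loss(y, y_pred)  # style terms could be added here
            grads = tape.gradient(loss, self.trainable_variables)
            self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
            self.compiled_metrics.update_state(y, y_pred)
            return {m.name: m.result() for m in self.metrics}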

“UserWarning: An input could not be retrieved. It could be because a worker has died. We do not have any information on the lost sample.”

眉间皱痕 submitted on 2020-03-22 03:57:07
Question: While training a model I got this warning: "UserWarning: An input could not be retrieved. It could be because a worker has died. We do not have any information on the lost sample." After showing this warning, the model starts training. What does this warning mean? Is it something that will affect my training that I need to worry about?

Answer 1: This is just a user warning that is usually thrown when you try to fetch the inputs and targets during training. This is because a timeout is set for the
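A minimal sketch of the fit() parameters this warning usually relates to (the values below are illustrative assumptions, not from the excerpt): the warning fires when the queue that prefetches batches from a data-loading worker times out, and it can often be silenced by tuning the loader:

    # TF 2.x Model.fit data-loading knobs (values are assumptions):
    model.fit(
        train_sequence,             # e.g. a tf.keras.utils.Sequence
        epochs=10,
        workers=4,                  # number of parallel batch-fetching workers
        use_multiprocessing=False,  # thread-based workers avoid some worker deaths
        max_queue_size=10,          # size of the prefetch queue
    )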

Keras-tuner search function throws Failed to create a NewWriteableFile error

自作多情 submitted on 2020-03-15 07:36:13
Question: The relatively new keras-tuner module for TensorFlow 2 is causing the error 'Failed to create a NewWriteableFile'. The tuner.search function is working; it is only after a trial completes that the error is thrown. This is from a tutorial on the sentdex YouTube channel. Here is the code:

    from tensorflow import keras
    from tensorflow.keras.datasets import fashion_mnist
    from tensorflow.keras.layers import Dense, Conv2D, MaxPooling2D, Activation, Flatten
    from kerastuner.tuners import RandomSearch
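A minimal sketch of a common fix (an assumption, not from the excerpt; build_model is a hypothetical hypermodel function): on Windows this error is often caused by the tuner writing trial checkpoints to an overly long or invalid results path, so pass a short directory and project_name to the tuner explicitly:

    from kerastuner.tuners import RandomSearch

    tuner = RandomSearch(
        build_model,               # hypothetical function returning a compiled model
        objective='val_accuracy',
        max_trials=10,
        directory='C:/kt',         # short path keeps checkpoint file names legal
        project_name='fashion')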

States argument missing in custom Model using custom RNN layer

安稳与你 submitted on 2020-03-03 09:59:27
Question: I'm building my own layer in TensorFlow 2.1 and using it in a custom model. In the example below I copied the MinimalRNNCell code from the TensorFlow website (https://www.tensorflow.org/api_docs/python/tf/keras/layers/RNN) and I'm trying to use this layer in my model. However, when trying to fit the model I get an error saying that the call method of a cell requires a "states" argument and I'm not providing it. How should I correct my model to provide that argument? My code:

    import tensorflow as tf
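A minimal sketch of the likely fix (not from the excerpt; the input shape is an assumption): a cell only defines the per-timestep computation, and its call signature is call(inputs, states), so it is not meant to be called directly as a layer. Wrapping the cell in tf.keras.layers.RNN makes Keras supply the states across timesteps:

    import tensorflow as tf

    cell = MinimalRNNCell(32)                  # the cell copied from the TF docs
    rnn = tf.keras.layers.RNN(cell)            # the wrapper passes `states` each step
    inputs = tf.keras.Input(shape=(None, 8))   # (timesteps, features); shape assumed
    outputs = rnn(inputs)
    model = tf.keras.Model(inputs, outputs)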

KerasLayer vs tf.keras.applications performances

て烟熏妆下的殇ゞ submitted on 2020-02-05 03:36:41
Question: I've trained some networks with ResNetV2 50 (https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/4) and it works very well for my datasets. Then I tried tf.keras.applications.ResNet50, and the accuracy is much lower than the other. Here are the two models. The first (with hub):

    base_model = hub.KerasLayer('https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/4',
                                input_shape=(IMAGE_H, IMAGE_W, 3))
    base_model.trainable = False
    model = tf.keras.Sequential([
        base_model,
        Dense(num_classes,
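A minimal sketch of a closer comparison (assumptions, not from the excerpt): the hub module is the V2 architecture and expects inputs scaled to [0, 1], while tf.keras.applications.ResNet50 is the V1 architecture with its own preprocessing, so architecture and preprocessing are both mismatched. Using ResNet50V2 with the matching preprocess_input usually narrows the gap:

    import tensorflow as tf

    base_model = tf.keras.applications.ResNet50V2(
        include_top=False, weights='imagenet',
        input_shape=(IMAGE_H, IMAGE_W, 3), pooling='avg')
    base_model.trainable = False

    inputs = tf.keras.Input(shape=(IMAGE_H, IMAGE_W, 3))
    x = tf.keras.applications.resnet_v2.preprocess_input(inputs)  # scales to [-1, 1]
    x = base_model(x)
    outputs = tf.keras.layers.Dense(num_classes, activation='softmax')(x)
    model = tf.keras.Model(inputs, outputs)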

Why does my Keras model train after I load it, even though I have not actually supplied any new training data?

瘦欲@ submitted on 2020-01-30 09:17:48
Question: I am trying to train and make predictions with an LSTM model using tf.keras. I have written code in two different files: LSTMTraining.py, which trains the Keras model (and saves it to a file), and Predict.py, which is supposed to load the Keras model and use it to make predictions. For some reason, when I load the model in Predict.py, it starts training, even though I have not used the model.fit() command in that file. Why is this happening? I have saved the model into multiple different
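A minimal sketch of one common cause (an assumption, not from the excerpt): if Predict.py imports anything from LSTMTraining.py, every top-level statement in that file, including model.fit(), runs at import time. Guarding the training entry point keeps the import side-effect free:

    # LSTMTraining.py
    def train_and_save(path='model.h5'):
        ...  # build the LSTM, call model.fit(), and save to `path`

    if __name__ == '__main__':   # only train when run directly, not on import
        train_and_save()

    # Predict.py
    from tensorflow import keras
    model = keras.models.load_model('model.h5')  # loading no longer triggers training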

add LSTM/GRU to BERT embeddings in keras tensorflow

℡╲_俬逩灬. submitted on 2020-01-24 11:34:10
Question: I am experimenting with BERT embeddings following this code: https://github.com/strongio/keras-bert/blob/master/keras-bert.py. These are the important bits of the code (lines 265-267):

    bert_output = BertLayer(n_fine_tune_layers=3)(bert_inputs)
    dense = tf.keras.layers.Dense(256, activation="relu")(bert_output)
    pred = tf.keras.layers.Dense(1, activation="sigmoid")(dense)

I want to add a GRU between BertLayer and the Dense layer:

    bert_output = BertLayer(n_fine_tune_layers=3)(bert_inputs)
    gru_out =
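A minimal sketch of the shape handling involved (assumptions, not from the excerpt; 768 is BERT-base's hidden size, and 128 GRU units is arbitrary): a GRU expects a 3-D input of (batch, timesteps, features), so a 2-D pooled BERT output needs a time axis before it can feed a recurrent layer:

    bert_output = BertLayer(n_fine_tune_layers=3)(bert_inputs)
    seq = tf.keras.layers.Reshape((1, 768))(bert_output)  # add a timesteps axis
    gru_out = tf.keras.layers.GRU(128)(seq)               # 128 units (assumed)
    pred = tf.keras.layers.Dense(1, activation="sigmoid")(gru_out)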

In TensorFlow 2.0 with eager-execution, how to compute the gradients of a network output wrt a specific layer?

◇◆丶佛笑我妖孽 submitted on 2020-01-23 10:49:06
Question: I have a network made with InceptionNet, and for an input sample bx I want to compute the gradients of the model output w.r.t. a hidden layer. I have the following code:

    bx = tf.reshape(x_batch[0, :, :, :], (1, 299, 299, 3))
    with tf.GradientTape() as gtape:
        #gtape.watch(x)
        preds = model(bx)
        print(preds.shape, end=' ')
        class_idx = np.argmax(preds[0])
        print(class_idx, end=' ')
        class_output = model.output[:, class_idx]
        print(class_output, end=' ')
        last_conv_layer = model.get_layer('inception
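A minimal sketch of a fix (assumptions, not from the excerpt; 'mixed10' is a guess at the target layer name): under eager execution, model.output is a symbolic tensor and cannot be mixed with values computed inside the tape. Building a second model that also exposes the hidden layer lets the tape track it directly:

    import numpy as np
    import tensorflow as tf

    grad_model = tf.keras.Model(
        inputs=model.inputs,
        outputs=[model.get_layer('mixed10').output, model.output])

    with tf.GradientTape() as gtape:
        conv_out, preds = grad_model(bx)       # hidden activations and predictions
        class_idx = int(np.argmax(preds[0]))
        class_score = preds[:, class_idx]

    grads = gtape.gradient(class_score, conv_out)  # d(score)/d(hidden activations)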