tf.keras

Predictions from a model become very small. The loss is either 0 or a positive constant

Submitted by 一曲冷凌霜 on 2020-05-17 06:06:39

Question: I am implementing the following architecture in TensorFlow: Dual Encoder LSTM (https://i.stack.imgur.com/ZmcsX.png). During the first few iterations the loss stays at 0.6915, but after that, as you can see in the output below, no matter how many iterations I run, the loss keeps varying between -0.0 and a positive constant, depending on the hyperparameters. This happens because my model's predictions become very small (close to zero) or very large (close to 1), so the model cannot be trained.
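A common cause of this saturation is computing the binary cross-entropy from sigmoid probabilities instead of from raw logits: once a prediction rounds to exactly 0.0 or 1.0 in floating point, the log term blows up or collapses to a constant. A minimal pure-Python sketch of the difference (the function names here are illustrative; in Keras the equivalent fix is `tf.keras.losses.BinaryCrossentropy(from_logits=True)` with no sigmoid on the final layer):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def naive_bce(logit, label):
    # squash to a probability first, then take logs: for a large logit,
    # sigmoid(logit) rounds to exactly 1.0 and log(1 - p) is log(0)
    p = sigmoid(logit)
    return -(label * math.log(p) + (1 - label) * math.log(1 - p))

def stable_bce(logit, label):
    # numerically stable form computed directly from the logit,
    # as in tf.nn.sigmoid_cross_entropy_with_logits
    return max(logit, 0) - logit * label + math.log1p(math.exp(-abs(logit)))
```

With a saturated logit of 40, `naive_bce` fails outright (math domain error) while `stable_bce` returns the correct loss of about 40.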

tf.keras plot_model: add_node() received a non node class object

Submitted by 狂风中的少年 on 2020-05-16 05:53:10

Question: I'm getting back into Python and have been trying out some things with TensorFlow and Keras. I wanted to use the plot_model function, and after sorting out some Graphviz issues I am now getting this error: TypeError: add_node() received a non node class object. I've tried to find an answer myself but have come up short, as the only answer I found with this error didn't seem to be related to TensorFlow. Any suggestions or alternative ideas would be greatly appreciated. Here's the code and error message

NotImplementedError: Cannot convert a symbolic Tensor (truediv_2:0) to a numpy array

Submitted by 南笙酒味 on 2020-05-14 13:40:06

Question: If you execute the following TensorFlow 2.1 code: import tensorflow as tf import tensorflow_probability as tfp tf.config.experimental_run_functions_eagerly(True) def get_mnist_data(normalize=True, categorize=True): img_rows, img_cols = 28, 28 (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data() if tf.keras.backend.image_data_format() == 'channels_first': x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols) x_test = x_test.reshape(x_test.shape[0], 1, img_rows
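This error typically appears when a NumPy function is applied to a graph-mode (symbolic) tensor, e.g. inside a `tf.function` or a Keras loss, where the tensor has no concrete value to convert. A hedged sketch of the failure mode and the usual fix, which is to replace the NumPy call with the equivalent TensorFlow op:

```python
import numpy as np
import tensorflow as tf

@tf.function
def mean_with_numpy(x):
    # NumPy tries to materialize the symbolic tensor via __array__,
    # which raises "Cannot convert a symbolic Tensor ... to a numpy array"
    return np.mean(x)

@tf.function
def mean_with_tf(x):
    # TensorFlow ops stay symbolic, so the function traces cleanly
    return tf.reduce_mean(x)
```

Running functions eagerly (as the question's code does) can mask the problem; the robust fix is to keep only TensorFlow ops inside traced code.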

How to train a parameter outside the model?

Submitted by 喜你入骨 on 2020-04-30 07:19:31

Question: I am implementing the following architecture in TensorFlow 2.0: Dual Encoder LSTM. C and R are sentences encoded into a fixed dimension by the two LSTMs. They are then passed through the function sigmoid(CMR). We can assume that R and C are both 256-dimensional vectors and M is a 256 × 256 matrix. The matrix M is learned during training. Since I want to train M, I declared M = tf.Variable(shape, trainable=True). But after fitting the model, the values of M still do not change. How to tell
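The usual reason a bare `tf.Variable` never updates is that Keras only trains variables it tracks, i.e. variables owned by a layer or model. One common fix is to register M with `add_weight` inside a custom layer, so the optimizer sees its gradient. A sketch under that assumption (the layer name `SigmoidCMR` is made up for illustration):

```python
import tensorflow as tf

class SigmoidCMR(tf.keras.layers.Layer):
    """Computes sigmoid(c M r^T) per batch element with a trainable M."""

    def __init__(self, dim=256, **kwargs):
        super().__init__(**kwargs)
        self.dim = dim

    def build(self, input_shape):
        # add_weight registers M with the layer, so Keras tracks,
        # saves, and trains it along with the rest of the model
        self.M = self.add_weight(
            name='M', shape=(self.dim, self.dim),
            initializer='glorot_uniform', trainable=True)

    def call(self, inputs):
        c, r = inputs                            # each (batch, dim)
        cm = tf.matmul(c, self.M)                # (batch, dim)
        logits = tf.reduce_sum(cm * r, axis=-1)  # batched c M r^T
        return tf.sigmoid(logits)
```

Using this layer on the two encoder outputs puts M inside the model's `trainable_weights`, which is what `fit()` actually updates.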

How do I calculate the matthews correlation coefficient in tensorflow

Submitted by 旧时模样 on 2020-04-13 17:01:11

Question: So I made a model with TensorFlow Keras and it seems to work OK. However, my supervisor said it would be useful to calculate the Matthews correlation coefficient, in addition to the accuracy and loss it already calculates. My model is very similar to the code in the tutorial here (https://www.tensorflow.org/tutorials/keras/basic_classification), except with a much smaller dataset. Is there a prebuilt function, or would I have to get the prediction for each test example and calculate it by hand? Answer 1:
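As far as I know, `tf.keras.metrics` has no built-in MCC (TensorFlow Addons provides `tfa.metrics.MatthewsCorrelationCoefficient`), so one option is to compute it by hand from the confusion-matrix counts of the test predictions. A plain-Python sketch of the formula:

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """MCC from binary confusion-matrix counts; returns 0.0 when undefined."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        # convention: MCC is 0 when any marginal count is zero
        return 0.0
    return (tp * tn - fp * fn) / denom
```

For a fitted Keras classifier, the counts come from comparing `model.predict(x_test)` thresholded at 0.5 against `y_test`; `sklearn.metrics.matthews_corrcoef` computes the same quantity in one call.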

ValueError: None is only supported in the 1st dimension. Tensor 'flatbuffer_data' has invalid shape '[None, None, 1, 512]'

Submitted by 妖精的绣舞 on 2020-04-12 02:15:23

Question: I am trying to convert my TensorFlow (2.0) model into the TensorFlow Lite format. My model has two input layers, as follows: import tensorflow as tf from tensorflow import keras from tensorflow.keras.models import load_model from tensorflow.keras.layers import Lambda, Input, add, Dot, multiply, dot from tensorflow.keras.backend import dot, transpose, expand_dims from tensorflow.keras.models import Model r1 = Input(shape=[None, 1, 512], name='flatbuffer_data') # I want to take a variable amount of
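As the error message says, TFLite only supports `None` in the first (batch) dimension, so the inner `None` in `[None, 1, 512]` must be replaced with a concrete length before conversion. A sketch under the assumption that a fixed sequence length is acceptable (the length 5 and the single-input model are illustrative, not the question's actual architecture):

```python
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Input
from tensorflow.keras.models import Model

SEQ_LEN = 5  # hypothetical fixed length replacing the inner None

r1 = Input(shape=[SEQ_LEN, 1, 512], name='flatbuffer_data')
out = Dense(1, activation='sigmoid')(Flatten()(r1))
model = Model(inputs=r1, outputs=out)

# With every non-batch dimension concrete, conversion can proceed:
# converter = tf.lite.TFLiteConverter.from_keras_model(model)
# tflite_model = converter.convert()
```

If truly variable-length inputs are required, padding or bucketing to a small set of fixed lengths is the usual workaround.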

model.summary() can't print output shape while using subclass model

Submitted by こ雲淡風輕ζ on 2020-04-09 19:07:14

Question: These are two methods for creating a Keras model, but the output shapes in the summary results of the two methods are different. Obviously, the former prints more information and makes it easier to check the correctness of the network. import tensorflow as tf from tensorflow.keras import Input, layers, Model class subclass(Model): def __init__(self): super(subclass, self).__init__() self.conv = layers.Conv2D(28, 3, strides=1) def call(self, x): return self.conv(x) def func_api(): x = Input
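A subclassed model has no static graph, so `summary()` cannot infer per-layer output shapes until concrete input shapes are known. One common workaround is to wrap the subclass in a functional `Model` built from an explicit `Input`; a sketch (the `model()` helper name and the 24×24×1 input shape are illustrative):

```python
import tensorflow as tf
from tensorflow.keras import Input, Model, layers

class Subclass(Model):
    def __init__(self):
        super().__init__()
        self.conv = layers.Conv2D(28, 3, strides=1)

    def call(self, x):
        return self.conv(x)

    def model(self):
        # wrap the subclass in a functional Model with a concrete Input,
        # so summary() can report every layer's output shape
        x = Input(shape=(24, 24, 1))
        return Model(inputs=x, outputs=self.call(x))

Subclass().model().summary()
```

Calling `model.build(input_shape)` before `summary()` also works, but the functional wrapper gives the fuller per-layer listing the question is after.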