tf.keras

Keras - Validation Loss and Accuracy stuck at 0

二次信任 submitted on 2021-01-20 19:11:10
Question: I am trying to train a simple 2-layer fully connected neural net for binary classification in TensorFlow Keras. I have split my data into training and validation sets with an 80-20 split using sklearn's train_test_split(). When I call model.fit(X_train, y_train, validation_data=[X_val, y_val]), it shows 0 validation loss and accuracy for all epochs, but it trains just fine. Also, when I try to evaluate it on the validation set, the output is non-zero. Can someone please explain why I am…
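A minimal sketch of the setup described above, with made-up data and layer sizes. One commonly reported fix is to pass validation_data as a tuple rather than a list, which is the form the Keras documentation specifies:

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8)).astype("float32")
y = (rng.random(100) > 0.5).astype("float32")

# 80-20 split (sklearn's train_test_split would do the same job)
X_train, X_val = X[:80], X[80:]
y_train, y_val = y[:80], y[80:]

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Tuple, not list: validation_data=(X_val, y_val)
history = model.fit(X_train, y_train,
                    validation_data=(X_val, y_val),
                    epochs=2, verbose=0)
print(history.history["val_loss"])
```

With the tuple form, history.history should contain non-zero val_loss and val_accuracy entries for every epoch.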

How to get intermediate outputs in TF 2.3 Eager with learning_phase?

两盒软妹~` submitted on 2021-01-20 09:46:18
Question: The example below works in 2.2; K.function is changed significantly in 2.3, now building a Model in Eager execution, so we're passing Model(inputs=[learning_phase, ...]). I do have a workaround in mind, but it's hackish and a lot more complex than K.function; if no one can show a simple approach, I'll post mine. from tensorflow.keras.layers import Input, Dense from tensorflow.keras.models import Model from tensorflow.python.keras import backend as K import numpy as np ipt = Input((16,)) x = Dense…
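A sketch of one simple approach in TF 2.x eager mode (not necessarily the asker's workaround): build a sub-model ending at the layer of interest and pass the learning phase via the `training` argument instead of a learning_phase input tensor. The Dropout layer is added here as an assumption, to make the two phases observable:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Dropout
from tensorflow.keras.models import Model

ipt = Input((16,))
x = Dense(32, activation="relu")(ipt)
x = Dropout(0.5)(x)          # behaves differently per learning phase
out = Dense(1)(x)
model = Model(ipt, out)

# Sub-model that exposes the Dropout layer's output
probe = Model(ipt, model.layers[2].output)

data = np.random.randn(4, 16).astype("float32")
train_out = probe(data, training=True)   # dropout active
infer_out = probe(data, training=False)  # dropout disabled
print(train_out.shape, infer_out.shape)
```

Calling the sub-model directly in eager mode returns the intermediate activations as tensors, with the learning phase controlled per call.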

keras.layers.TimeDistributed with hub.KerasLayer NotImplementedError

时光总嘲笑我的痴心妄想 submitted on 2021-01-20 09:36:32
Question: I want to use the tf.keras.TimeDistributed() layer with the tf.hub inception_v3 CNN model from the latest TensorFlow v2 version (tf-nightly-gpu-2.0-preview). The output is shown below. It seems that tf.keras.TimeDistributed() is not fully implemented to work with tf.hub models. Somehow, the shape of the input layer cannot be computed. My question: is there a workaround for this problem? tf.keras.TimeDistributed with a regular tf.keras layer works fine. I just would like to apply the CNN model to each…
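The usual workaround when TimeDistributed cannot infer the wrapped layer's shapes is to fold the time axis into the batch axis, apply the inner model once, and unfold again. The reshape logic is sketched below with NumPy and a stand-in for the CNN (the real version would call the hub.KerasLayer between two tf.reshape ops); the 2048-dim feature size matches inception_v3 but the batch and time sizes are made up:

```python
import numpy as np

def cnn_stub(batch):
    # Stand-in for inception_v3: maps (N, 299, 299, 3) -> (N, 2048)
    return np.tile(batch.mean(axis=(1, 2, 3))[:, None], (1, 2048))

frames = np.random.rand(2, 5, 299, 299, 3)   # (batch, time, H, W, C)
b, t = frames.shape[:2]

merged = frames.reshape((b * t,) + frames.shape[2:])  # (10, 299, 299, 3)
feats = cnn_stub(merged)                              # (10, 2048)
unfolded = feats.reshape(b, t, -1)                    # (2, 5, 2048)
print(unfolded.shape)
```

Because the CNN treats every frame independently, merging batch and time is mathematically equivalent to TimeDistributed, without requiring the wrapper to compute the hub model's output shape.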

Compatibility between keras and tf.keras models

怎甘沉沦 submitted on 2020-12-29 18:17:54
Question: I am interested in training a model in tf.keras and then loading it with keras. I know this is not highly advised, but I am interested in using tf.keras to train the model because tf.keras makes it easier to build input pipelines and I want to take advantage of the tf.data API; and I am interested in loading it with keras because I want to use Core ML to deploy the model to iOS. I want to use coremltools to convert my model, and coremltools only works with keras, not tf.keras. I have run into…
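One common transfer path between the two packages is to serialize the architecture to JSON and the weights separately, then rebuild on the other side. The round trip below stays inside tf.keras for illustration; loading the same JSON and weights with standalone keras (for coremltools) is only expected to work when every layer type and config exists identically in both packages:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation="relu", input_shape=(3,)),
    tf.keras.layers.Dense(1),
])

config = model.to_json()          # architecture only, no weights
weights = model.get_weights()     # list of NumPy arrays

# On the loading side: rebuild the architecture, then restore weights
rebuilt = tf.keras.models.model_from_json(config)
rebuilt.set_weights(weights)

x = np.ones((1, 3), dtype="float32")
assert np.allclose(model.predict(x, verbose=0),
                   rebuilt.predict(x, verbose=0))
```

Keeping the architecture and weights in separate files sidesteps format differences in the two packages' full-model save files.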

how to see tensor value of a layer output in keras

点点圈 submitted on 2020-12-13 09:27:21
Question: I have a Seq2Seq model. I would like to print out the matrix value of the output of the encoder per iteration. For example, as the dimension of the matrix in the encoder is (?, 20), the number of epochs is 5, and each epoch has 10 iterations, I would like to see 10 matrices of dimension (?, 20) per epoch. I have gone to several links, such as here, but it still does not print out the value of the matrix. With this code, as mentioned in the above link: import keras.backend as K k_value = K.print_tensor…
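A sketch of one way to capture a layer's output once per training iteration, instead of K.print_tensor: a callback that runs a probe sub-model on a fixed batch after every batch. The tiny "encoder" Dense layer and all sizes are made up; the same pattern would target the Seq2Seq encoder's output layer by name:

```python
import numpy as np
import tensorflow as tf

ipt = tf.keras.Input((8,))
enc = tf.keras.layers.Dense(20, name="encoder")(ipt)
out = tf.keras.layers.Dense(1)(enc)
model = tf.keras.Model(ipt, out)
model.compile(optimizer="adam", loss="mse")

# Sub-model exposing the encoder output (shape (?, 20))
probe = tf.keras.Model(ipt, model.get_layer("encoder").output)

X = np.random.randn(40, 8).astype("float32")
y = np.random.randn(40, 1).astype("float32")

seen = []

class EncoderLogger(tf.keras.callbacks.Callback):
    def on_train_batch_end(self, batch, logs=None):
        # One encoder matrix per iteration, using current weights
        seen.append(probe(X[:4]).numpy())

# 40 samples / batch_size 4 = 10 iterations per epoch
model.fit(X, y, batch_size=4, epochs=1, verbose=0,
          callbacks=[EncoderLogger()])
print(len(seen), seen[0].shape)
```

Each entry of `seen` is the encoder matrix as of that iteration, so one epoch of 10 iterations yields 10 matrices, matching the question's setup.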

Unable to understand the behavior of method `build` in tensorflow keras layers (tf.keras.layers.Layer)

拈花ヽ惹草 submitted on 2020-12-12 04:36:18
Question: Layers in TensorFlow Keras have a method build that is used to defer the weight creation to a time when you have seen what the input is going to be. a layer's build method I have a few questions I have not been able to find the answers to: Here it is said that "If you assign a Layer instance as attribute of another Layer, the outer layer will start tracking the weights of the inner layer." What does it mean to track the weights of a layer? The same link also mentions that "We recommend creating…
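A minimal custom layer illustrating what build defers: the kernel's shape depends on the input's last dimension, so the weight is only created on the first call, when that dimension is finally known. The class and weight names here are illustrative:

```python
import numpy as np
import tensorflow as tf

class MyDense(tf.keras.layers.Layer):
    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        # Runs once, the first time the input shape is known
        self.kernel = self.add_weight(
            name="kernel", shape=(input_shape[-1], self.units))

    def call(self, x):
        return tf.matmul(x, self.kernel)

layer = MyDense(3)
print(layer.built, len(layer.weights))   # no weights created yet

_ = layer(np.ones((2, 5), dtype="float32"))
print(layer.built, layer.kernel.shape)   # kernel now exists, shape (5, 3)
```

"Tracking" means the outer layer's `weights` list automatically includes any weights created by layers assigned as its attributes, so they are trained and saved together.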