tf.keras

AttributeError: 'numpy.ndarray' object has no attribute 'id'

Submitted by 大憨熊 on 2021-02-11 13:42:08
Question: I am creating an sklearn pipeline that consists of 3 steps:

1. Transform a pandas DataFrame into a 3D array
2. Transform the 3D array into a recurrence plot (image)
3. Train an image classification model using Keras

This is my initial data set:

    train_df - pandas DataFrame
    id  cycle  s1
    1   1      0.05
    1   2      0.04
    1   3      0.05
    1   4      0.05
    2   1      0.02
    2   2      0.03

    y_train
    array([[1., 0., 0.],
           [1., 0., 0.],
           ...
           [1., 0., 0.]], dtype=float32)

When I run my current code (see below), I get the following error: AttributeError: 'numpy.ndarray' …
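This error usually means that a pipeline step which expects a pandas DataFrame (so that it can access the id column) is instead receiving a plain NumPy array, because an earlier step or a train/test split has already reduced the DataFrame to its values. A minimal sketch of a first pipeline step that keeps the DataFrame and groups by id to build the 3D array could look like the transformer below; the column names, the zero padding and the type check are illustrative assumptions, not the asker's actual code.

    import numpy as np
    import pandas as pd
    from sklearn.base import BaseEstimator, TransformerMixin

    class DataFrameTo3D(BaseEstimator, TransformerMixin):
        """Stack the feature columns of each id into a 3D array of shape (ids, cycles, features)."""

        def __init__(self, id_col="id", feature_cols=("s1",)):
            self.id_col = id_col
            self.feature_cols = list(feature_cols)

        def fit(self, X, y=None):
            return self

        def transform(self, X):
            # Guard against the failure mode above: X must still be a DataFrame at this point.
            if not isinstance(X, pd.DataFrame):
                raise TypeError("Expected a pandas DataFrame with an '%s' column, got %s"
                                % (self.id_col, type(X).__name__))
            # One (cycles, features) block per id, zero-padded to a common length.
            groups = [g[self.feature_cols].to_numpy() for _, g in X.groupby(self.id_col)]
            max_len = max(len(g) for g in groups)
            padded = [np.pad(g, ((0, max_len - len(g)), (0, 0))) for g in groups]
            return np.stack(padded)  # shape: (n_ids, max_len, n_features)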

Tensorflow @tf.function - Cannot get session inside Tensorflow graph function

Submitted by 两盒软妹~` on 2021-02-11 02:50:30
Question: I'm trying to use the @tf.function decorator with the Keras functional API to create a TF graph in the training step of a simple neural network. I'm using TensorFlow 2.1.0 installed with Python 3.7. However, I get the runtime error given in the title and would appreciate any hint as to its cause. The code is the following:

    import tensorflow as tf
    import numpy as np

    # import the CIFAR10 dataset and normalise the feature distributions
    (train_images, train_labels), (test_images, …
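This error typically shows up in TF 2.1 when a Keras-level call that still needs a session (for example model.fit() or train_on_batch()) ends up inside a @tf.function. A common workaround, sketched below for a small CIFAR-10 classifier that only stands in for the asker's network, is to compile just the forward/backward pass into the graph function and drive it from an eager Python loop:

    import tensorflow as tf

    (train_images, train_labels), _ = tf.keras.datasets.cifar10.load_data()
    train_images = train_images.astype("float32") / 255.0

    # A small functional-API model (placeholder for the actual network).
    inputs = tf.keras.Input(shape=(32, 32, 3))
    x = tf.keras.layers.Conv2D(32, 3, activation="relu")(inputs)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(10)(x)
    model = tf.keras.Model(inputs, outputs)

    optimizer = tf.keras.optimizers.Adam()
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

    @tf.function  # only the low-level step is traced into a graph
    def train_step(images, labels):
        with tf.GradientTape() as tape:
            logits = model(images, training=True)
            loss = loss_fn(labels, logits)
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss

    dataset = tf.data.Dataset.from_tensor_slices((train_images, train_labels)).batch(64)
    for images, labels in dataset.take(10):  # the eager loop drives the compiled step
        loss = train_step(images, labels)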

I'm struggling to implement tensorboard monitoring into the Mask_RCNN training process

Submitted by 你。 on 2021-02-10 14:33:42
Question: I've been using the balloon.py example script in the Matterport Mask R-CNN repo [https://github.com/matterport/Mask_RCNN/blob/master/samples/balloon/balloon.py] to learn how to use TensorBoard to monitor the training process. The training itself is going fine, but I've completely failed to get TensorBoard working. So far I've added:

    # create TensorBoard callback
    logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
    tensorboard_callback = tf.keras.callbacks…
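One possible route, assuming your copy of mrcnn/model.py matches the current master branch, is to pass the callback through the custom_callbacks argument of MaskRCNN.train() inside balloon.py's train() function; note also that the stock train() already creates a keras.callbacks.TensorBoard writing to model.log_dir, so pointing tensorboard --logdir at the logs directory may already be enough. The sketch below omits the dataset and config setup, and since the Matterport code uses standalone Keras with TF1, keras.callbacks may be needed instead of tf.keras.callbacks:

    import os
    import datetime
    import keras

    def train(model, dataset_train, dataset_val, config):
        """Adapted from balloon.py's train(): attach an extra TensorBoard callback."""
        logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
        tensorboard_callback = keras.callbacks.TensorBoard(log_dir=logdir)
        # MaskRCNN.train() appends custom_callbacks to its own built-in callback list.
        model.train(dataset_train, dataset_val,
                    learning_rate=config.LEARNING_RATE,
                    epochs=30,
                    layers='heads',
                    custom_callbacks=[tensorboard_callback])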

What's the difference between the attributes 'trainable' and 'training' in the BatchNormalization layer in Keras/TensorFlow?

Submitted by 回眸只為那壹抹淺笑 on 2021-02-10 12:52:46
Question: According to the official documents from TensorFlow, about setting layer.trainable = False on a BatchNormalization layer:

    The meaning of setting layer.trainable = False is to freeze the layer, i.e. its internal state
    will not change during training: its trainable weights will not be updated during fit() or
    train_on_batch(), and its state updates will not be run. Usually, this does not necessarily mean
    that the layer is run in inference mode (which is normally controlled by the training …
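To make the distinction concrete, here is a small standalone sketch (not taken from the question): the training argument controls, per call, whether the layer normalises with the current batch statistics and updates its moving averages, while trainable = False freezes gamma and beta and, as a special case for BatchNormalization since TF 2.0, also makes the layer run in inference mode when called by fit().

    import numpy as np
    import tensorflow as tf

    bn = tf.keras.layers.BatchNormalization()
    x = np.random.normal(loc=5.0, scale=2.0, size=(32, 4)).astype("float32")

    # training=True: normalise with the batch mean/variance and update the moving averages.
    y_train_mode = bn(x, training=True)

    # training=False: normalise with the stored moving mean/variance; no state update happens.
    y_infer_mode = bn(x, training=False)

    print(bn.moving_mean.numpy())         # no longer all zeros: the training=True call nudged it towards the batch mean

    # trainable=False freezes gamma/beta; for BatchNormalization it also switches fit() to inference behaviour.
    bn.trainable = False
    print(bn.trainable_weights)           # [] - gamma and beta are frozen
    print(len(bn.non_trainable_weights))  # 4: gamma, beta, moving_mean, moving_variance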

Use TensorBoard with Keras Tuner

Submitted by 醉酒当歌 on 2021-02-06 09:21:46
Question: I ran into an apparent circular dependency trying to use log data for TensorBoard during a hyper-parameter search done with Keras Tuner, for a model built with TF2. The typical setup for the latter requires setting up the TensorBoard callback in the tuner's search() method, which wraps the model's fit() method.

    from kerastuner.tuners import RandomSearch
    tuner = RandomSearch(build_model,  # this method builds the model
                         hyperparameters=hp,
                         objective='val_accuracy')
    tuner.search(x=train_x, y=train_y, …
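In recent versions of Keras Tuner this circle can be avoided by simply passing a TensorBoard callback to search(): the tuner clones the callback list for every trial and, for the TensorBoard callback, redirects the log directory to a per-trial subfolder (which also feeds the HParams dashboard). The data loading and build_model below are placeholders rather than the asker's code, and behaviour may differ in older kerastuner releases.

    import tensorflow as tf
    from kerastuner.tuners import RandomSearch  # package name as used in the question

    (train_x, train_y), (val_x, val_y) = tf.keras.datasets.mnist.load_data()
    train_x = train_x.reshape(-1, 784).astype("float32") / 255
    val_x = val_x.reshape(-1, 784).astype("float32") / 255

    def build_model(hp):
        # Placeholder model; one tunable hyper-parameter for illustration.
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(hp.Int("units", 32, 256, step=32), activation="relu"),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    tuner = RandomSearch(build_model,
                         objective="val_accuracy",
                         max_trials=10,
                         directory="tuner_dir",
                         project_name="demo")

    # The callback passed here is copied per trial; its logs land in per-trial subfolders of "tb_logs".
    tuner.search(x=train_x, y=train_y,
                 validation_data=(val_x, val_y),
                 epochs=5,
                 callbacks=[tf.keras.callbacks.TensorBoard("tb_logs")])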

Keras input shape throws ValueError: expected 4D but got array with shape (60000, 28, 28)

Submitted by 孤者浪人 on 2021-02-05 11:37:19
Question:

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
    x_train = x_train.astype('float32') / 255
    x_test = x_test.astype('float32') / 255
    x_train.shape  # Shape is (60000, 28, 28)

Then in the model I made sure the input shape is (28, 28, 1), since 60000 is the number of samples.

    model2 = tf.keras.Sequential()
    # Must define the input shape in the first layer of the neural network
    model2.add(tf.keras.layers.Conv2D(filters=64, kernel_size=2, padding='same', activation='relu', input_shape=(28,28,1) …
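The usual fix for this message: Conv2D expects 4D input of shape (samples, height, width, channels), so the (60000, 28, 28) arrays need an explicit channel axis before being passed to fit(). A minimal sketch follows; the layers after the first Conv2D are placeholders rather than the asker's full model.

    import numpy as np
    import tensorflow as tf

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
    x_train = x_train.astype("float32") / 255
    x_test = x_test.astype("float32") / 255

    # Add the missing channel axis: (60000, 28, 28) -> (60000, 28, 28, 1)
    x_train = x_train[..., np.newaxis]
    x_test = x_test[..., np.newaxis]

    model2 = tf.keras.Sequential([
        tf.keras.layers.Conv2D(filters=64, kernel_size=2, padding="same", activation="relu",
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model2.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    model2.fit(x_train, y_train, epochs=1, batch_size=64, validation_data=(x_test, y_test))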

Yet another “Error when checking target: expected dense_2 to have shape (4,) but got array with shape (1,)”

Submitted by 三世轮回 on 2021-01-29 19:30:35
Question: I'm using Keras in Python 3. The issue I'm having seems similar to many others, and as best I can tell I might need to use Flatten(), though I'm not seeing how to set its parameters correctly. I get the error:

    ValueError: Error when checking target: expected dense_2 to have shape (4,) but got array with shape (1,)

My data is not images (yet), but sequences I've turned into data frames.

    model = Sequential()
    model.add(Dense(30, input_dim=16, activation='relu'))
    model.add …
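For readers with the same mismatch: when the last layer is Dense(4) with loss='categorical_crossentropy', Keras expects one-hot targets of shape (4,), so integer class labels of shape (1,) produce exactly this error, and Flatten() is usually not what is missing for non-image 2D input. Two common fixes, shown with made-up data and tf.keras imports since the asker's model is truncated above:

    import numpy as np
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.utils import to_categorical

    X = np.random.rand(100, 16).astype("float32")
    y = np.random.randint(0, 4, size=(100,))       # integer class labels 0..3

    model = Sequential()
    model.add(Dense(30, input_dim=16, activation='relu'))
    model.add(Dense(4, activation='softmax'))      # presumably dense_2 in the error message

    # Fix 1: one-hot encode the labels so the targets have shape (4,).
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(X, to_categorical(y, num_classes=4), epochs=1)

    # Fix 2: keep integer labels and switch to a sparse loss instead.
    model.compile(loss='sparse_categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    model.fit(X, y, epochs=1)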