Output of Keras predict method has the wrong shape when using Google Colab's TPU strategy


Question


I made the following architecture

Layer (type)                 Output Shape              Param #   
=================================================================
embedding_7 (Embedding)      (None, 50, 64)            512000    
_________________________________________________________________
bidirectional_5 (Bidirection (None, 200)               132000    
_________________________________________________________________
dense_9 (Dense)              (None, 1)                 201       
=================================================================
Total params: 644,201
Trainable params: 644,201
Non-trainable params: 0

With this code:

with tpu_strategy.scope():

  model = Sequential()
  model.add(Embedding(MAX_NB_WORDS, EMBEDDING_DIM, input_length=X.shape[1]))
  model.add(Bidirectional(LSTM(HIDDEN_DIM)))
  model.add(Dense(1, activation='sigmoid'))
  model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy',f1_m])

  print(model.summary())
  history = model.fit(X_train, y_train, epochs=EPOCHS,validation_data=(X_val, y_val),
                      callbacks=[EarlyStopping(monitor='val_f1_m', patience=5, min_delta=0.001, mode = 'max')],
                      class_weight=class_weight)

I can train the model and call model.evaluate(X_test, y_test) with no errors. But when I call model.predict(X_test), the resulting array has the shape (24256, 1), while X_test has the shape (24255, 50). Why does this happen? Why am I getting one extra prediction? Shouldn't the resulting array of predictions have the shape (24255, 1)?


EDIT

I was using Google Colab for this one. I wrote this small piece of code to replicate the problem:

import os

import numpy as np
import tensorflow as tf

#Random numbers
X_fake = np.array([[1]*50]*6+[[0]*50]*6)
y_fake = np.array([1]*6+[0]*6)

def create_tpu_strategy():
  try:
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
    print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
  except ValueError:
    raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for instructions!')

  tf.config.experimental_connect_to_cluster(tpu)
  tf.tpu.experimental.initialize_tpu_system(tpu)
  tpu_strategy = tf.distribute.experimental.TPUStrategy(tpu)
  return tpu_strategy

tpu_strategy = create_tpu_strategy()

with tpu_strategy.scope():
  model = tf.keras.Sequential([
    tf.keras.layers.Embedding(8000, 64, input_length=X_fake.shape[1]),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
    ])

  model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),optimizer=tf.keras.optimizers.Adam(1e-4),metrics=['accuracy'])

print(model.summary())

model.fit(X_fake, y_fake, epochs=1)

preds = model.predict_classes(X_fake)

print(preds.shape,X_fake.shape)

And this is the printed output of the shapes:

(16, 1) (12, 50)

When I stopped using the TPU, the output was what I expected from the beginning:

(12, 1) (12, 50)

Now I'm not using the TPU for my original code and it works fine. But still, why does this happen? Am I initializing my TPU strategy incorrectly?


Answer 1:


I believe model.predict and model.predict_classes expect your input size to be a multiple of the number of TPU cores (8 in this case). Try making your input size a multiple of 8 and it should work as expected.

  • For a small input size you can directly call preds = model(X_fake).
  • For a large input size you can pad it so that its size is a multiple of 8 (see the sketch after this list).
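
For illustration, here is a minimal sketch of both workarounds, assuming model and X_fake are the ones defined in the replication snippet above (the padding values themselves do not matter, since the padded rows are dropped afterwards):

import numpy as np

# Option 1: for a small batch, call the model directly instead of model.predict.
preds_direct = model(X_fake).numpy()          # shape (12, 1)

# Option 2: pad the batch up to the next multiple of 8 (the TPU core count),
# run model.predict, then drop the rows that correspond to the padding.
n = len(X_fake)                               # 12 real examples
pad = (-n) % 8                                # 4 padded rows needed
X_padded = np.concatenate(
    [X_fake, np.zeros((pad, X_fake.shape[1]), dtype=X_fake.dtype)])
preds = model.predict(X_padded)[:n]           # keep only the 12 real predictions

print(preds_direct.shape, preds.shape)        # (12, 1) (12, 1)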

This issue is already resolved in tf-nightly. If you install TensorFlow Nightly and switch the TPU runtime to the matching version, it will work:

!pip install cloud-tpu-client
!pip install tf-nightly

import os

import tensorflow as tf
from cloud_tpu_client import Client
import numpy as np

# Change the TPU runtime version to match the Colab TensorFlow version
c = Client()
c.configure_tpu_version(tf.__version__, restart_type='ifNeeded')


#Random numbers
X_fake = np.array([[1]*50]*6+[[0]*50]*6)
y_fake = np.array([1]*6+[0]*6)

def create_tpu_strategy():
  try:
    tpu = tf.distribute.cluster_resolver.TPUClusterResolver(tpu='grpc://' + os.environ['COLAB_TPU_ADDR'])
    print('Running on TPU ', tpu.cluster_spec().as_dict()['worker'])
  except ValueError:
    raise BaseException('ERROR: Not connected to a TPU runtime; please see the previous cell in this notebook for instructions!')

  tf.config.experimental_connect_to_cluster(tpu)
  tf.tpu.experimental.initialize_tpu_system(tpu)
  tpu_strategy = tf.distribute.experimental.TPUStrategy(tpu)
  return tpu_strategy

tpu_strategy = create_tpu_strategy()

with tpu_strategy.scope():
  model = tf.keras.Sequential([
    tf.keras.layers.Embedding(8000, 64, input_length=X_fake.shape[1]),
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
    ])

  model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),optimizer=tf.keras.optimizers.Adam(1e-4),metrics=['accuracy'])

print(model.summary())

model.fit(X_fake, y_fake, epochs=1)

preds = model.predict_classes(X_fake)

print(preds.shape, X_fake.shape)

Then the output shapes are (12, 1) and (12, 50).



Source: https://stackoverflow.com/questions/62379562/output-of-keras-predict-method-has-the-wrong-shape-when-using-google-colabs-tpu
