TimeDistributed of a KerasLayer in TensorFlow 2.0

Submitted by 醉酒当歌 on 2020-05-15 19:22:05

Question


I'm trying to build a CNN + RNN using a pre-trained model from tensorflow-hub:

import tensorflow as tf
import tensorflow_hub as hub
from tensorflow.keras import Sequential
from tensorflow.keras.layers import TimeDistributed, LSTM, Dense
from tensorflow.keras.optimizers import Adam

base_model = hub.KerasLayer('https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/4', input_shape=(244, 244, 3))
base_model.trainable = False

model = Sequential()
model.add(TimeDistributed(base_model, input_shape=(15, 244, 244, 3)))
model.add(LSTM(512))
model.add(Dense(256, activation='relu'))
model.add(Dense(3, activation='softmax'))

adam = Adam(learning_rate=learning_rate)
model.compile(loss='categorical_crossentropy', optimizer=adam, metrics=['accuracy'])
model.summary()

and this is what I get:

2020-01-29 16:16:37.585888: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2494000000 Hz
2020-01-29 16:16:37.586205: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x3b553f0 executing computations on platform Host. Devices:
2020-01-29 16:16:37.586231: I tensorflow/compiler/xla/service/service.cc:175]   StreamExecutor device (0): Host, Default Version
Traceback (most recent call last):
  File "./RNN.py", line 45, in <module>
    model.add(TimeDistributed(base_model, input_shape=(None, 244, 244, 3)))
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/tracking/base.py", line 457, in _method_wrapper
    result = method(self, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/sequential.py", line 178, in add
    layer(x)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py", line 842, in __call__
    outputs = call_fn(cast_inputs, *args, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/layers/wrappers.py", line 256, in call
    output_shape = self.compute_output_shape(input_shape).as_list()
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/layers/wrappers.py", line 210, in compute_output_shape
    child_output_shape = self.layer.compute_output_shape(child_input_shape)
  File "/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/keras/engine/base_layer.py", line 639, in compute_output_shape
    raise NotImplementedError
NotImplementedError

Any suggestions? Is it possible to convert a KerasLayer into Conv2D, ... layers?


Answer 1:


It doesn't seem like you can use the TimeDistributed wrapper for this problem: TimeDistributed calls the wrapped layer's compute_output_shape, which hub.KerasLayer does not implement, hence the NotImplementedError in the traceback. However, since you don't want ResNet to train and only need its output, you can avoid TimeDistributed altogether.

Instead of model.add(TimeDistributed(base_model, input_shape=(15, 244, 244, 3))), do one of the following.

Option 1

# Requires: import tensorflow as tf; from tensorflow.keras.layers import Lambda
# 2048 is the size of this ResNet module's feature vector
model.add(
    Lambda(
        lambda x: tf.reshape(base_model(tf.reshape(x, [-1, 244, 244, 3])),
                             [-1, 15, 2048]),
        input_shape=(15, 244, 244, 3)
    )
)

Option 2

If you don't want to hard-code the 2048 output size (though this sacrifices some performance):

model.add(
    Lambda(
        lambda x: tf.stack([base_model(frame) for frame in tf.unstack(x, axis=1)], axis=1),
        input_shape=(15, 244, 244, 3)
    )
)


Source: https://stackoverflow.com/questions/59970196/timedistributed-of-a-keraslayer-in-tensorflow-2-0
