Keras pretrain CNN with TimeDistributed


Question


Here is my problem: I want to use one of the pretrained CNN networks from Keras in a TimeDistributed layer, but I am having trouble implementing it.

Here is my model:

import keras.backend as K
import keras.applications.resnet50 as ResNet50
from keras.layers import (Input, TimeDistributed, Lambda, Bidirectional,
                          LSTM, Dropout, Dense)
from keras.models import Model
from keras.optimizers import Adam

def bnn_model(max_len):
    # sequence length and ResNet input size
    x = Input(shape=(max_len, 224, 224, 3))

    base_model = ResNet50.ResNet50(weights='imagenet', include_top=False)

    # freeze the pretrained CNN
    for layer in base_model.layers:
        layer.trainable = False

    # apply the CNN to every frame of the sequence
    som = TimeDistributed(base_model)(x)

    # the output of the CNN is (1, 1, 2048) per frame; squeeze the spatial dims
    som = Lambda(lambda t: K.squeeze(K.squeeze(t, 2), 2))(som)

    bnn = Bidirectional(LSTM(300))(som)
    bnn = Dropout(0.5)(bnn)

    pred = Dense(1, activation='sigmoid')(bnn)

    model = Model(input=x, output=pred)

    model.compile(optimizer=Adam(lr=1.0e-5), loss="mse", metrics=["accuracy"])

    return model

The model compiles without error, but when I start training I get the following:

tensorflow/core/framework/op_kernel.cc:975] Invalid argument: You must feed a value for placeholder tensor 'input_2' with dtype float
[[Node: input_2 = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/gpu:0"]()]]

I checked, and I do feed float32 data, but only to input_1; input_2 is the input created inside the pretrained ResNet.

Just for an overview, here is the model summary. (Note: strangely, it doesn't show what happens inside the ResNet, but never mind.)

____________________________________________________________________________________________________
Layer (type)                     Output Shape          Param #     Connected to                     
====================================================================================================
input_1 (InputLayer)             (None, 179, 224, 224, 0                                            
____________________________________________________________________________________________________
timedistributed_1 (TimeDistribut (None, 179, 1, 1, 204 23587712    input_1[0][0]                    
____________________________________________________________________________________________________
lambda_1 (Lambda)                (None, 179, 2048)     0           timedistributed_1[0][0]          
____________________________________________________________________________________________________
bidirectional_1 (Bidirectional)  (None, 600)           5637600     lambda_1[0][0]                   
____________________________________________________________________________________________________
dropout_1 (Dropout)              (None, 600)           0           bidirectional_1[0][0]            
____________________________________________________________________________________________________
dense_1 (Dense)                  (None, 1)             601         dropout_1[0][0]                  
====================================================================================================
Total params: 29,225,913
Trainable params: 5,638,201
Non-trainable params: 23,587,712
____________________________________________________________________________________________________

I am guessing that I am not using TimeDistributed correctly, and I have not seen anyone else try this. I hope someone can guide me.

EDIT:

The problem comes from the fact that ResNet50.ResNet50(weights='imagenet', include_top=False) creates its own input placeholder in the graph.

So I guess I need something like ResNet50.ResNet50(weights='imagenet', input_tensor=x, include_top=False), but I do not see how to couple it with TimeDistributed.

I tried

base_model = Lambda(lambda x : ResNet50.ResNet50(weights='imagenet',  input_tensor=x, include_top=False))
som = TimeDistributed(base_model)(in_ten)

But it does not work.
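For reference, the direction this edit points at can be sketched as follows: give the application model an explicit input_shape so its internal input tensor is fully defined, then pass the resulting model straight to TimeDistributed. This is a sketch under the assumption of a recent Keras release (newer versions accept a whole Model inside TimeDistributed); seq_len is a made-up name matching the 179 in the summary above.

from keras.layers import Input, TimeDistributed
from keras.models import Model
import keras.applications.resnet50 as ResNet50

seq_len = 179  # hypothetical sequence length

# An explicit input_shape means the sub-model's own input tensor is
# created and consumed inside the wrapper, leaving no unfed placeholder.
base_model = ResNet50.ResNet50(weights='imagenet', include_top=False,
                               input_shape=(224, 224, 3))
for layer in base_model.layers:
    layer.trainable = False

frames = Input(shape=(seq_len, 224, 224, 3))
features = TimeDistributed(base_model)(frames)  # (None, seq_len, 1, 1, 2048)
extractor = Model(inputs=frames, outputs=features)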


Answer 1:


My quick solution is a little ugly.

I just copied the ResNet code, wrapped every layer in TimeDistributed, and then loaded the weights from a "basic" ResNet into my customized ResNet.
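To make that concrete, here is a minimal sketch of the pattern (not the answerer's actual code): rebuild each layer with a TimeDistributed wrapper, keep the original layer names, and copy the weights layer by layer. The wrapper shares the inner layer's weights across time steps, so the weight shapes match. The layer name 'conv1' is from the Keras 2 keras.applications ResNet50; other versions may name it differently.

import keras.applications.resnet50 as ResNet50
from keras.layers import Input, TimeDistributed, Conv2D
from keras.models import Model

# One wrapped layer as an example; the real thing repeats this for
# every layer in the copied ResNet definition.
inp = Input(shape=(10, 224, 224, 3))  # (time steps, H, W, C)
out = TimeDistributed(Conv2D(64, (7, 7), strides=(2, 2), padding='same'),
                      name='conv1')(inp)
td_model = Model(inp, out)

# Transfer the pretrained weights into the wrapped copy.
base_model = ResNet50.ResNet50(weights='imagenet', include_top=False)
td_model.get_layer('conv1').set_weights(
    base_model.get_layer('conv1').get_weights())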

Note:

Analyzing sequences of images like this takes a huge amount of GPU memory.




Answer 2:


Since you are using a pre-trained network from Keras, you can also replace it with a pre-trained network of your own.

Here's a simple solution:

import keras
from keras.layers import Input, TimeDistributed
from keras.models import Model

model_vgg = keras.applications.VGG16(input_shape=(256, 256, 3),
                                     include_top=False,
                                     weights='imagenet')
model_vgg.trainable = False
model_vgg.summary()

If you want to use an intermediate layer, pick it by name as below; otherwise replace 'block2_pool' with the last layer's name:

intermediate_model = Model(inputs=model_vgg.input, outputs=model_vgg.get_layer('block2_pool').output)
intermediate_model.summary()

Finally, wrap it in a TimeDistributed layer:

input_tensor = Input(shape=(time_steps, height, width, channels))
timeDistributed_layer = TimeDistributed(intermediate_model)(input_tensor)

Now you can simply do:

my_time_model = Model(inputs=input_tensor, outputs=timeDistributed_layer)
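To sanity-check the wiring, you can run a dummy batch through it (a hypothetical smoke test; the time_steps value is made up, and height/width/channels match the VGG input_shape above):

import numpy as np

time_steps, height, width, channels = 5, 256, 256, 3

# One all-zeros sequence of 5 frames; the output keeps the time axis.
dummy = np.zeros((1, time_steps, height, width, channels), dtype='float32')
print(my_time_model.predict(dummy).shape)  # e.g. (1, 5, 64, 64, 128)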


Source: https://stackoverflow.com/questions/42313412/keras-pretrain-cnn-with-timedistributed
