Error in element-wise weighted averaging between two layers in a Keras CNN


Question


I am getting an error when computing an element-wise weighted average between two layers in a CNN. My base model is:

from keras.models import Sequential
from keras import layers

# input_shape is assumed to be defined elsewhere, e.g. (height, width, channels)
model_base = Sequential()
# Conv Layer 1
model_base.add(layers.SeparableConv2D(32, (9, 9), activation='relu', input_shape=input_shape))
model_base.add(layers.MaxPooling2D(2, 2))
# model.add(layers.Dropout(0.25))

# Conv Layer 2
model_base.add(layers.SeparableConv2D(64, (9, 9), activation='relu'))
model_base.add(layers.MaxPooling2D(2, 2))
# model.add(layers.Dropout(0.25))

# Conv Layer 3
model_base.add(layers.SeparableConv2D(128, (9, 9), activation='relu'))
model_base.add(layers.MaxPooling2D(2, 2))
# model.add(layers.Dropout(0.25))

model_base.add(layers.Conv2D(256, (9, 9), activation='relu'))
# model.add(layers.MaxPooling2D(2, 2))
# Flatten the data for upcoming dense layer
#model_base.add(layers.Flatten())
#model_base.add(layers.Dropout(0.5))
#model_base.add(layers.Dense(512, activation='relu'))

model_base.summary()

I am taking the outputs of layers 2, 4, and 6, computing a dot product, followed by a softmax activation and reshaping. Now I would like to compute the element-wise weighted average of a1 and l1, but I can't, since the dimensions don't match. Can anyone help?



from keras.layers import GlobalAveragePooling2D, Dense, Lambda, Flatten, Activation, Reshape
from keras import backend as K

l1 = model_base.layers[2].output
l1 = GlobalAveragePooling2D()(l1) 
c2 = model_base.layers[4].output
c2 = GlobalAveragePooling2D()(c2) 
c3 = model_base.layers[6].output
#c3 = GlobalAveragePooling2D()(c3) 
#c=c3.shape[-1]

l1 = Dense(512)(l1)
c2 = Dense(512)(c2) 

c13 = Lambda(lambda lam: K.squeeze(K.map_fn(lambda xy: K.dot(xy[0], xy[1]), elems=(lam[0], K.expand_dims(lam[1], -1)), dtype='float32'), 3), name='cdp1')([l1, c3])  # batch*x*y
c23 = Lambda(lambda lam: K.squeeze(K.map_fn(lambda xy: K.dot(xy[0], xy[1]), elems=(lam[0], K.expand_dims(lam[1], -1)), dtype='float32'), 3), name='cdp1')([c2, c3])  # batch*x*y

flatc13 = Flatten(name='flatc1')(c13)  # batch*xy
flatc23 = Flatten(name='flatc1')(c23)  # batch*xy

a1 = Activation('softmax', name='softmax1')(flatc13)
a2 = Activation('softmax', name='softmax1')(flatc23)



#a1 = Activation('softmax', name='softmax1')(c13)
#a2 = Activation('softmax', name='softmax1')(c23)

reshaped1 = Reshape((-1,512), name='reshape1')(l1)  # batch*xy*512
reshaped2 = Reshape((-1,512), name='reshape2')(c2)  # batch*xy*512
                                                                          
g1 = Lambda(lambda lam: K.squeeze(K.batch_dot(K.expand_dims(lam[0], 1), lam[1]), 1), name='g1')([reshaped1, a1])  # batch*512
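
For reference, here is a quick way to see the mismatch (a sketch, assuming the TensorFlow backend):

print(model_base.layers[6].output.shape)  # (None, x, y, 256) -- c3 comes from Conv2D(256, ...)
print(l1.shape)                           # (None, 512) -- after Dense(512); 512 != 256, so K.dot fails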

Answer 1:


Given your base_model, this is the correct way to build the code block below...

l1 = model_base.layers[2].output
l1 = GlobalAveragePooling2D()(l1) 
c2 = model_base.layers[4].output
c2 = GlobalAveragePooling2D()(c2) 
c3 = model_base.layers[6].output

c = c3.shape[-1] ### this is important for the dimensionality
l1 = Dense(c)(l1)
c2 = Dense(c)(c2) 

c13 = Lambda(lambda lam: K.squeeze(K.map_fn(lambda xy: K.dot(xy[0], xy[1]), 
                                            elems=(lam[0], K.expand_dims(lam[1], -1)), dtype='float32'), 3), name='cdp1')([c3, l1])  # batch*x*y

c23 = Lambda(lambda lam: K.squeeze(K.map_fn(lambda xy: K.dot(xy[0], xy[1]), 
                                            elems=(lam[0], K.expand_dims(lam[1], -1)), dtype='float32'), 3), name='cdp2')([c3, c2])  # batch*x*y

flatc13 = Flatten(name='flatc1')(c13)  # batch*xy
flatc23 = Flatten(name='flatc2')(c23)  # batch*xy

a1 = Activation('softmax', name='softmax1')(flatc13) # batch*xy
a2 = Activation('softmax', name='softmax2')(flatc23) # batch*xy

reshaped = Reshape((-1,c), name='reshape1')(c3)  # batch*xy*c

g1 = Lambda(lambda lam: K.squeeze(K.batch_dot(K.expand_dims(lam[0], 1), lam[1]), 1), 
            name='g1')([a1,reshaped])  # batch*c
g2 = Lambda(lambda lam: K.squeeze(K.batch_dot(K.expand_dims(lam[0], 1), lam[1]), 1), 
            name='g2')([a2,reshaped])  # batch*c

Pay attention to the dimensionality: in your case you can't operate with 512 but with 256 (the channel count of c3); this is handled automatically by the c variable. Pay attention also to the order of the layers used in the Lambda operations (for example, in c13 it's ([c3, l1]) and not ([l1, c3])).
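
To double-check the shapes, you can wrap the intermediate tensors in a probe model and run dummy data through it (a minimal sketch, assuming the base model above and its input_shape):

import numpy as np
from keras.models import Model

# Probe model exposing the attention weights and the weighted averages
probe = Model(inputs=model_base.input, outputs=[a1, g1, g2])
dummy = np.random.rand(4, *input_shape).astype('float32')  # batch of 4
att, vec1, vec2 = probe.predict(dummy)
print(att.shape)   # (4, x*y) -- softmax weights over spatial positions
print(vec1.shape)  # (4, c)   -- c = 256 for this base model
print(vec2.shape)  # (4, c)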

Here is the running notebook: https://colab.research.google.com/drive/1m0pB5GlYRtIsOnHUTz6LxRQblcvtVU3Y?usp=sharing
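
If the end goal is a single vector feeding a classifier, one possible continuation (my own sketch, not part of the answer; the Average merge and the 10-class head are assumptions) is:

from keras.layers import Average, Dense
from keras.models import Model

avg = Average(name='weighted_avg')([g1, g2])            # batch*c, element-wise mean of the two attended vectors
out = Dense(10, activation='softmax', name='cls')(avg)  # hypothetical 10-class head
model = Model(inputs=model_base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy')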



Source: https://stackoverflow.com/questions/63041081/error-in-element-wise-weighted-averaging-between-2-layers-in-keras-cnn
