Can't replicate a MatConvNet CNN architecture in Keras

Submitted by 心已入冬 on 2020-01-04 01:51:41

Question


I have the following Convolutional Neural Network architecture in MatConvNet, which I use to train on my own data:

function net = cnn_mnist_init(varargin)
% CNN_MNIST_LENET Initialize a LeNet-like CNN for MNIST
opts.batchNormalization = false ;
opts.networkType = 'simplenn' ;
opts = vl_argparse(opts, varargin) ;

f= 0.0125 ;
net.layers = {} ;
net.layers{end+1} = struct('name','conv1',...
                           'type', 'conv', ...
                           'weights', {{f*randn(3,3,1,64, 'single'), zeros(1, 64, 'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','pool1',...
                           'type', 'pool', ...
                           'method', 'max', ...
                           'pool', [3 3], ...
                           'stride', 1, ...
                           'pad', 0);
net.layers{end+1} = struct('name','conv2',...
                           'type', 'conv', ...
                           'weights', {{f*randn(5,5,64,128, 'single'),zeros(1,128,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','pool2',...
                           'type', 'pool', ...
                           'method', 'max', ...
                           'pool', [2 2], ...
                           'stride', 2, ...
                           'pad', 0) ;
net.layers{end+1} = struct('name','conv3',...
                           'type', 'conv', ...
                           'weights', {{f*randn(3,3,128,256, 'single'),zeros(1,256,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','pool3',...
                           'type', 'pool', ...
                           'method', 'max', ...
                           'pool', [3 3], ...
                           'stride', 1, ...
                           'pad', 0) ;
net.layers{end+1} = struct('name','conv4',...
                           'type', 'conv', ...
                           'weights', {{f*randn(5,5,256,512, 'single'),zeros(1,512,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','pool4',...
                           'type', 'pool', ...
                           'method', 'max', ...
                           'pool', [2 2], ...
                           'stride', 1, ...
                           'pad', 0) ;
net.layers{end+1} = struct('name','ip1',...
                           'type', 'conv', ...
                           'weights', {{f*randn(1,1,512,256, 'single'),  zeros(1,256,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','relu',...
                           'type', 'relu');
net.layers{end+1} = struct('name','classifier',...
                           'type', 'conv', ...
                           'weights', {{f*randn(1,1,256,2, 'single'), zeros(1,2,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0,...
                           'learningRate', [1 2]) ;
net.layers{end+1} = struct('name','loss',...
                           'type', 'softmaxloss') ;

% optionally switch to batch normalization
if opts.batchNormalization
  net = insertBnorm(net, 1) ;
  net = insertBnorm(net, 4) ;
  net = insertBnorm(net, 7) ;
  net = insertBnorm(net, 10) ;
  net = insertBnorm(net, 13) ;
end

% Meta parameters
net.meta.inputSize = [28 28 1] ;
net.meta.trainOpts.learningRate = [0.01*ones(1,10) 0.001*ones(1,10) 0.0001*ones(1,10)];
disp(net.meta.trainOpts.learningRate);
pause;
net.meta.trainOpts.numEpochs = length(net.meta.trainOpts.learningRate) ;
net.meta.trainOpts.batchSize = 256 ;
net.meta.trainOpts.momentum = 0.9 ;
net.meta.trainOpts.weightDecay = 0.0005 ;

% --------------------------------------------------------------------
function net = insertBnorm(net, l)
% --------------------------------------------------------------------
assert(isfield(net.layers{l}, 'weights'));
ndim = size(net.layers{l}.weights{1}, 4);
layer = struct('type', 'bnorm', ...
               'weights', {{ones(ndim, 1, 'single'), zeros(ndim, 1, 'single')}}, ...
               'learningRate', [1 1], ...
               'weightDecay', [0 0]) ;
net.layers{l}.biases = [] ;
net.layers = horzcat(net.layers(1:l), layer, net.layers(l+1:end)) ;

What I want to do is build the same architecture in Keras. This is what I have tried so far:

import keras
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Activation, Flatten, Dense

model = Sequential()

model.add(Conv2D(64, (3, 3), strides=1, input_shape=input_shape))
model.add(MaxPooling2D(pool_size=(3, 3), strides=1))

model.add(Conv2D(128, (5, 5), strides=1))
model.add(MaxPooling2D(pool_size=(2, 2), strides=2))

model.add(Conv2D(256, (3, 3), strides=1))
model.add(MaxPooling2D(pool_size=(3, 3), strides=1))

model.add(Conv2D(512, (5, 5), strides=1))
model.add(MaxPooling2D(pool_size=(2, 2), strides=1))

model.add(Conv2D(256, (1, 1)))
convout1=Activation('relu')
model.add(convout1)

model.add(Flatten())
model.add(Dense(num_classes, activation='softmax'))

opt = keras.optimizers.rmsprop(lr=0.0001, decay=0.0005)  
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['binary_accuracy'])

However, when I run the MatConvNet network I get 87% accuracy, while the Keras version gets 77%. If they are supposed to be the same network and the data is the same, where does the difference come from? What is wrong in my Keras architecture?


Answer 1:


In your MatConvNet version you use SGD with momentum, while in Keras you use RMSprop.

With a different update rule you should also try different learning rates. Momentum is also often helpful when training a CNN.

Could you try SGD with momentum in Keras and let me know what happens? A minimal sketch of what that could look like, assuming the Keras 2 optimizer API and the training options from your MatConvNet code:
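
import keras

# Mirror the MatConvNet trainOpts above: initial learning rate 0.01,
# momentum 0.9. Note that Keras' `decay` argument is a per-update
# learning-rate decay, not an L2 weight decay like MatConvNet's
# weightDecay, so it is left out here rather than set to 0.0005.
opt = keras.optimizers.SGD(lr=0.01, momentum=0.9)
model.compile(loss='categorical_crossentropy', optimizer=opt, metrics=['accuracy'])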

Another thing that might be different is the initialization. For example, in MatConvNet you use Gaussian initialization with f = 0.0125 as the standard deviation. I'm not sure about the default initialization in Keras, but you could set it explicitly to match; a rough sketch for the first layer (using keras.initializers.RandomNormal, and assuming the input_shape from your model above):
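
from keras.initializers import RandomNormal
from keras.layers import Conv2D

# Match the MatConvNet init: weights drawn from N(0, 0.0125^2), biases zero.
gaussian_init = RandomNormal(mean=0.0, stddev=0.0125)
model.add(Conv2D(64, (3, 3), strides=1,
                 kernel_initializer=gaussian_init,
                 bias_initializer='zeros',
                 input_shape=input_shape))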

In general, if you don't use batch normalization, the network is prone to many numerical issues. If you used batch normalization in both networks, I bet the results would be similar. Is there any reason you don't want to use batch normalization? On the Keras side it would look roughly like the sketch below, mirroring what insertBnorm does in your MatConvNet code:
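
from keras.layers import BatchNormalization, Conv2D, MaxPooling2D

# Insert a BatchNormalization layer after each convolution, as insertBnorm
# does above. Shown for the first block only; the remaining blocks follow
# the same pattern. `input_shape` is assumed to be defined as in your model.
model.add(Conv2D(64, (3, 3), strides=1, input_shape=input_shape))
model.add(BatchNormalization())
model.add(MaxPooling2D(pool_size=(3, 3), strides=1))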



Source: https://stackoverflow.com/questions/44863192/cant-replicate-a-matconvnet-cnn-architecture-in-keras
