How to replace the softmax layer with regression in matconvnet

Posted by 喜你入骨 on 2020-01-05 07:33:37

Question


I am trying to train on the MNIST data set with a single output. That means when I give the model a 28*28 input (image), it returns just one number. For example, if I give it a '5', the model should return 4.9, 5, 5.002 or something close to 5. I have read some documents, and people say the softmax layer has to be replaced with a regression layer. To do this, I am using the matconvnet library and its mnist example. I have changed my network and written a regression loss function. Here is my code:

f = 1/100 ;  % weight-initialization scale, as in the MatConvNet MNIST example
net.layers = {} ;
net.layers{end+1} = struct('type', 'conv', ...
                           'weights', {{f*randn(5,5,1,20, 'single'), zeros(1, 20, 'single')}}, ...
                           'stride', 1, ...
                           'pad', 0) ;
net.layers{end+1} = struct('type', 'pool', ...
                           'method', 'max', ...
                           'pool', [2 2], ...
                           'stride', 2, ...
                           'pad', 0) ;
net.layers{end+1} = struct('type', 'conv', ...
                           'weights', {{f*randn(5,5,20,50, 'single'),zeros(1,50,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0) ;
net.layers{end+1} = struct('type', 'pool', ...
                           'method', 'max', ...
                           'pool', [2 2], ...
                           'stride', 2, ...
                           'pad', 0) ;
net.layers{end+1} = struct('type', 'conv', ...
                           'weights', {{f*randn(4,4,50,500, 'single'),  zeros(1,500,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0) ;
net.layers{end+1} = struct('type', 'relu') ;
net.layers{end+1} = struct('type', 'conv', ...
                           'weights', {{f*randn(1,1,500,1, 'single'), zeros(1,1,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0) ;                         
net.layers{end+1} = struct('type', 'normloss');
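The last conv layer uses 1x1x500x1 filters, so for a batch the network outputs a 1x1x1xN array, i.e. one scalar prediction per image rather than ten class scores. A quick size check (just a small sketch; it only assumes MatConvNet's vl_nnconv is on the path and f is defined as above):

x = randn(1,1,500,100,'single') ;   % activations entering the last conv layer (batch of 100)
w = f*randn(1,1,500,1,'single') ;   % same filter shape as in the layer above
b = zeros(1,1,'single') ;
y = vl_nnconv(x, w, b) ;
size(y)                             % 1 1 1 100: one regression value per image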

This is the regression loss function:

function Y = vl_normloss(X,c,dzdy)
% X is 1 x 1 x 1 x 100 (predictions), c is 1 x 100 (targets)
if nargin <= 2
  % forward pass: sum of squared errors over the batch (e.g. 1.7361e+03)
  Y = 0.5*sum((squeeze(X)'-c).^2) ;
else
  % backward pass: gradient of the loss with respect to X
  Y = (squeeze(X)'-c)*dzdy ;
  Y = reshape(Y, size(X)) ;
end
end
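To sanity-check the loss on random data (a quick sketch; it only assumes the vl_normloss function above is on the MATLAB path):

X = randn(1,1,1,100,'single') ;        % fake network predictions, one scalar per image
c = randn(1,100,'single') ;            % fake regression targets
loss = vl_normloss(X, c)               % forward pass: a single scalar loss
dzdx = vl_normloss(X, c, single(1)) ;  % backward pass: gradient with respect to X
size(dzdx)                             % 1 1 1 100, same size as X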

I changed opts.errorFunction = 'multiclass' ; to 'none'. I also added

case 'normloss'
      res(i+1).x = vl_normloss(res(i).x,l.class) ;

to the vl_simplenn script.

But when I run training, this error occurs:

Error using vl_nnconv
DEROUTPUT dimensions are incompatible with X and FILTERS.

Error in vl_simplenn (line 415)
[res(i).dzdx, dzdw{1}, dzdw{2}] = ...

What do I have to do to solve this problem? Thank you.


Answer 1:


I found the solution. I had made a mistake: there are two different lines that have to be changed in the vl_simplenn script, but I had changed only one of them. With both changed, the code above works.
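For anyone hitting the same error: the second change is the backward-pass case for the new layer type. A minimal sketch, assuming the backward pass of vl_simplenn dispatches on the layer type the same way the forward pass does (exact variable names may differ between MatConvNet versions):

case 'normloss'
      res(i).dzdx = vl_normloss(res(i).x, l.class, res(i+1).dzdx) ;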




Answer 2:


I have a question.

net.layers{end+1} = struct('type', 'conv', ...
                           'weights', {{f*randn(1,1,500,1, 'single'), zeros(1,1,'single')}}, ...
                           'stride', 1, ...
                           'pad', 0) ;                         
net.layers{end+1} = struct('type', 'normloss');

In the conv layer, does the output [1 1 500 1] represent a 1x500 descriptor? How are you using that value in the loss layer? Shouldn't the softmax loss layer predict the class probabilities, after which you pick the class with the highest probability? Or is the [1 1 500 1] output the probability in this case?



Source: https://stackoverflow.com/questions/41297783/how-to-change-softmaxlayer-with-regression-in-matconvnet
