Does it call forward() in nn.Module? I thought the forward method is invoked when we call the model. Why do we need to call train()?
Here is the code of module.train():
def train(self, mode=True):
    r"""Sets the module in training mode."""
    self.training = mode
    for module in self.children():
        module.train(mode)
    return self
And here is module.eval():
def eval(self):
    r"""Sets the module in evaluation mode."""
    return self.train(False)
train and eval are the only two modes we can set the module to, and they are exact opposites.
That's just a self.training flag, and currently only dropout and batch norm layers care about it. By default, the flag is set to True.
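The default value of the flag and how eval() propagates it to child modules can be checked directly (the two-layer model here is just an illustration):

```python
import torch.nn as nn

# A toy model; only the Dropout layer actually reacts to the flag.
model = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))

print(model.training)     # True: modules start in training mode
model.eval()
print(model.training)     # False
print(model[1].training)  # False: eval() recursed into the children
```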
There are two ways of letting the model know your intention, i.e., whether you want to train the model or use it for evaluation. With model.train(), the model knows it has to learn its layers; with model.eval(), it knows that nothing new is to be learnt and that it is being used for testing. model.eval() is also necessary because, in PyTorch, if we are using batch norm and want to pass just a single image at test time, PyTorch throws an error if model.eval() is not specified.
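To illustrate that error (a minimal sketch; the layer size is arbitrary): in training mode, batch norm needs more than one value per channel to estimate a variance, so a batch of size 1 fails until eval() switches it to the running statistics.

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(3)
x = torch.randn(1, 3)  # a single sample

try:
    bn(x)  # training mode: variance over a batch of 1 is undefined
except ValueError as e:
    print("train mode:", e)

bn.eval()   # use the running statistics collected during training
y = bn(x)   # a single image now passes through without error
```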
model.train() tells your model that you are training it. That way, layers like dropout and batch norm, which behave differently during training and testing, know what is going on and can behave accordingly.
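The difference is easy to see with dropout (a minimal sketch; the tensor size and the manual seed are just for a repeatable printout):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)  # only to make the printout repeatable
drop = nn.Dropout(p=0.5)
x = torch.ones(6)

drop.train()
print(drop(x))  # some entries zeroed, the rest scaled by 1/(1-p) = 2

drop.eval()
print(drop(x))  # identity: dropout does nothing in eval mode
```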
More details:
It sets the mode to train (see the source code above). You can call either model.eval() or model.train(mode=False) to indicate that you are testing. It is somewhat intuitive to expect the train function to train the model, but it does not do that. It just sets the mode.
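A quick check that train() only flips the flag (the Linear layer is arbitrary): no parameters change, and model.train(mode=False) has the same effect as model.eval().

```python
import torch
import torch.nn as nn

model = nn.Linear(2, 2)
before = model.weight.clone()

model.train(mode=False)   # same effect as model.eval()
print(model.training)     # False

model.train()             # back to training mode
print(model.training)     # True
print(torch.equal(model.weight, before))  # True: no weights were updated
```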