I am not very familiar with Torch, and I primarily use Tensorflow. I, however, need to use a retrained inception model that was retrained in Torch. Due to the large amount of co
You basically need to do the same as in TensorFlow. That is, when you store a network, only the parameters (i.e. the trainable tensors in your network) are saved, but not the "glue", that is, all the logic you need to use a trained model.
So if you have a .pth.tar file, you can load it, thereby overriding the parameter values of a model that is already defined.
That means that the general procedure of saving/loading a model is as follows:

1. Define your network (an nn.Module object).
2. Save its parameters with torch.save.
3. To restore, first instantiate a PyTorch network of the same architecture (an nn.Module object), then load the saved parameters with torch.load and load_state_dict.
Here's a discussion with some references on how to do this: pytorch forums
And here's a super short MWE (minimal working example):
# to store
torch.save({
    'state_dict': model.state_dict(),
    'optimizer': optimizer.state_dict(),
}, 'filename.pth.tar')

# to load (model and optimizer must already be instantiated
# with the same architecture as when they were saved)
checkpoint = torch.load('filename.pth.tar')
model.load_state_dict(checkpoint['state_dict'])
optimizer.load_state_dict(checkpoint['optimizer'])
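To make the loading half concrete, here is a self-contained sketch. The Net class below is a hypothetical stand-in, not the actual Inception architecture; the key point is that the checkpoint only contains parameter values, so you must first instantiate the matching architecture (for you, the retrained Inception model's class) before calling load_state_dict:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in network; replace with the actual Inception
# model class the checkpoint was created from.
class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return self.fc(x)

model = Net()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# store: only the parameter tensors (and optimizer state) go into
# the file, not the class definition itself
torch.save({
    'state_dict': model.state_dict(),
    'optimizer': optimizer.state_dict(),
}, 'filename.pth.tar')

# load: instantiate the architecture first, then override its
# freshly initialized parameters with the saved ones
restored = Net()
restored_optimizer = torch.optim.SGD(restored.parameters(), lr=0.01)
checkpoint = torch.load('filename.pth.tar')
restored.load_state_dict(checkpoint['state_dict'])
restored_optimizer.load_state_dict(checkpoint['optimizer'])
```

If the class definition is not available (e.g. the model came from someone else's code), you need their model definition file as well; the .pth.tar alone is not enough to reconstruct the network.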