Question
I have a problem classifying the MNIST dataset with a fully connected deep neural network with two hidden layers in PyTorch.
I want to use tanh as the activation in both hidden layers, but at the end I think I should use softmax.
For the loss, I am choosing nn.CrossEntropyLoss() in PyTorch, which (as I have found out) does not take one-hot encoded labels as true labels, but takes a LongTensor of class indices instead.
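For example, a minimal sketch of what the loss expects (shapes and values are illustrative):

import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
scores = torch.randn(4, 10)          # batch of 4 samples, 10 classes
labels = torch.tensor([3, 0, 9, 1])  # LongTensor of class indices, not one-hot
loss = criterion(scores, labels)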
My model is nn.Sequential(), and when I use softmax at the end, it gives me worse results in terms of accuracy on the test data. Why?
import torch
import torch.nn as nn

inputs, n_hidden0, n_hidden1, out = 784, 128, 64, 10
n_epochs = 500

model = nn.Sequential(
    nn.Linear(inputs, n_hidden0, bias=True),
    nn.Tanh(),
    nn.Linear(n_hidden0, n_hidden1, bias=True),
    nn.Tanh(),
    nn.Linear(n_hidden1, out, bias=True),
    nn.Softmax(dim=1)  # SHOULD THIS BE THERE?
)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.5)

# X_train: (N, 784) float tensor of flattened MNIST images
# Y_train: (N,) LongTensor of class indices
for epoch in range(n_epochs):
    y_pred = model(X_train)
    loss = criterion(y_pred, Y_train)
    print('epoch: ', epoch + 1, ' loss: ', loss.item())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
Thanks for any help :)
Answer 1:
As stated in the torch.nn.CrossEntropyLoss() doc:
This criterion combines nn.LogSoftmax() and nn.NLLLoss() in one single class.
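A quick sketch verifying that equivalence numerically (names are illustrative):

import torch
import torch.nn as nn

logits = torch.randn(4, 10)           # raw model outputs for a batch of 4
targets = torch.tensor([3, 0, 9, 1])  # class indices

ce = nn.CrossEntropyLoss()(logits, targets)
nll = nn.NLLLoss()(nn.LogSoftmax(dim=1)(logits), targets)
print(torch.allclose(ce, nll))  # True: CrossEntropyLoss = LogSoftmax + NLLLoss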
Therefore, you should not apply softmax yourself before the loss. nn.CrossEntropyLoss() expects raw, unnormalized logits; with the extra nn.Softmax() layer, the loss takes a log-softmax of outputs that have already been softmaxed. That squashes the score range, shrinks the gradients, and makes training slower and less accurate, which is exactly the drop you observe. Remove the nn.Softmax() from the end of the model and pass the logits straight to the criterion; if you need probabilities at inference time, apply softmax then (for accuracy, argmax over the logits gives the same predictions anyway).
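A minimal corrected sketch of the model from the question (training data assumed as before):

import torch
import torch.nn as nn

inputs, n_hidden0, n_hidden1, out = 784, 128, 64, 10

# No softmax at the end: the model outputs raw logits,
# which is exactly what nn.CrossEntropyLoss() expects.
model = nn.Sequential(
    nn.Linear(inputs, n_hidden0, bias=True),
    nn.Tanh(),
    nn.Linear(n_hidden0, n_hidden1, bias=True),
    nn.Tanh(),
    nn.Linear(n_hidden1, out, bias=True),
)

criterion = nn.CrossEntropyLoss()  # applies log-softmax internally

# At inference time, apply softmax only if you need probabilities;
# predictions from argmax are unchanged by softmax.
x = torch.randn(4, inputs)  # illustrative batch
probs = torch.softmax(model(x), dim=1)
preds = probs.argmax(dim=1)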
Source: https://stackoverflow.com/questions/55675345/should-i-use-softmax-as-output-when-using-cross-entropy-loss-in-pytorch