Question
For a binary classification problem with batch_size = 1, I have logit and label values from which I need to calculate the loss.
logit: tensor([0.1198, 0.1911], device='cuda:0', grad_fn=<AddBackward0>)
label: tensor([1], device='cuda:0')
# calculate loss
loss_criterion = nn.CrossEntropyLoss()
loss_criterion.cuda()
loss = loss_criterion( b_logits, b_labels )
However, this always results in the following error,
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
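For completeness, a self-contained CPU sketch of the same failing call (the CUDA parts are dropped; the exact error text can depend on the PyTorch version):
import torch
import torch.nn as nn

b_logits = torch.tensor([0.1198, 0.1911], requires_grad=True)  # shape ([2])
b_labels = torch.tensor([1])                                   # shape ([1])
loss_criterion = nn.CrossEntropyLoss()
# On PyTorch versions around the time of the question this raises:
#   IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
loss = loss_criterion(b_logits, b_labels)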
What input dimensions is the CrossEntropyLoss actually asking for?
Answer 1:
You are passing tensors of the wrong shape. The shapes should be (from the docs):
- Input: (N, C), where C = number of classes
- Target: (N), where each value satisfies 0 ≤ targets[i] ≤ C−1
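As a quick illustration of that contract, here is a small sketch with a hypothetical batch of N = 3 samples and C = 2 classes (the names and values are made up for illustration):
import torch
import torch.nn as nn

logits = torch.randn(3, 2)          # Input:  (N, C) = (3, 2)
targets = torch.tensor([0, 1, 1])   # Target: (N,)   = (3,), each value in {0, 1}
loss = nn.CrossEntropyLoss()(logits, targets)
print(loss)                         # a scalar, since the default reduction is 'mean'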
So here, the b_logits shape should be ([1, 2]) instead of ([2]); to get the right shape you can use torch.view, e.g. b_logits.view(1, -1). And the b_labels shape should be ([1]).
Ex.:
import torch
import torch.nn as nn

b_logits = torch.tensor([0.1198, 0.1911], requires_grad=True)  # shape ([2])
b_labels = torch.tensor([1])                                   # shape ([1])
loss_criterion = nn.CrossEntropyLoss()
loss = loss_criterion( b_logits.view(1,-1), b_labels )  # view(1,-1) -> shape ([1, 2])
loss
tensor(0.6581, grad_fn=<NllLossBackward>)
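An equivalent way to add the missing batch dimension (a minor variation, not part of the original answer) is unsqueeze:
loss = loss_criterion( b_logits.unsqueeze(0), b_labels )  # ([2]) -> ([1, 2]), same result as view(1,-1)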
Source: https://stackoverflow.com/questions/61501417/input-dimension-for-crossentropy-loss-in-pytorch