Question
So I'm struggling to understand some terminology about collections in PyTorch. I keep running into the same kinds of errors about the range of my tensors being incorrect, and when I Google for a solution, the explanations often confuse me further.
Here is an example:
import torch

m = torch.nn.LogSoftmax(dim=1)
input = torch.tensor([0.3300, 0.3937, -0.3113, -0.2880])
output = m(input)
I don't see anything wrong with the above code, and I've defined my LogSoftmax to accept a one-dimensional input. Based on my experience with other programming languages, the collection [0.3300, 0.3937, -0.3113, -0.2880] is a single dimension.
The above triggers the following error for m(input):
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
What does that mean? I passed in a one-dimensional tensor, but it tells me it was expecting a range of [-1, 0], but got 1.
- A range of what?
- Why is the error comparing a dimension of 1 to [-1, 0]?
- What do the two numbers [-1, 0] mean?
I searched for an explanation of this error, and I found things like this link, which makes no sense to me as a programmer:
https://github.com/pytorch/pytorch/issues/5554#issuecomment-370456868
So I was able to fix the above code by adding another dimension to my tensor data.
m = torch.nn.LogSoftmax(dim=1)
input = torch.tensor([[-0.3300, 0.3937, -0.3113, -0.2880]])
output = m(input)
So that works, but I don't understand how [-1, 0] explains a nested collection.
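An equivalent fix (a sketch I added for illustration, not from the original post) is to add the extra dimension with unsqueeze instead of nesting the list by hand:

```python
import torch

input = torch.tensor([-0.3300, 0.3937, -0.3113, -0.2880])  # 1D tensor, shape (4,)
input2d = input.unsqueeze(0)  # insert a new leading dimension -> shape (1, 4)
output = torch.nn.LogSoftmax(dim=1)(input2d)
print(input2d.shape)  # torch.Size([1, 4])
print(output.shape)   # torch.Size([1, 4])
```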
Further experiments showed that the following also works:
m = torch.nn.LogSoftmax(dim=1)
input = torch.tensor([[0.0, 0.1], [1.0, 0.1], [2.0, 0.1]])
output = m(input)
So dim=1 means a collection of collections, but I don't understand how that means [-1, 0].
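One experiment I added for illustration: for this 2D input, dim=1 and dim=-1 name the same axis, so both calls produce identical output:

```python
import torch

input = torch.tensor([[0.0, 0.1], [1.0, 0.1], [2.0, 0.1]])  # shape (3, 2)
out_pos = torch.nn.LogSoftmax(dim=1)(input)
out_neg = torch.nn.LogSoftmax(dim=-1)(input)  # negative dims count from the end
print(torch.allclose(out_pos, out_neg))  # True
```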
When I try using LogSoftmax(dim=2):
m = torch.nn.LogSoftmax(dim=2)
input = torch.tensor([[0.0, 0.1], [1.0, 0.1], [2.0, 0.1]])
output = m(input)
The above gives me the following error:
IndexError: Dimension out of range (expected to be in range of [-2, 1], but got 2)
Confusion again that dim=2 corresponds to [-2, 1], because where did the 1 value come from?
I can fix the error above by nesting the collections another level, but at this point I don't understand what values LogSoftmax is expecting.
m = torch.nn.LogSoftmax(dim=2)
input = torch.tensor([[[0.0, 0.1]], [[1.0, 0.1]], [[2.0, 0.1]]])
output = m(input)
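A quick way to check which dim values a tensor will accept (a sketch, assuming the same nested input) is to inspect its number of dimensions:

```python
import torch

input = torch.tensor([[[0.0, 0.1]], [[1.0, 0.1]], [[2.0, 0.1]]])
print(input.dim())   # 3
print(input.shape)   # torch.Size([3, 1, 2])
# so the valid dim arguments are 0, 1, 2 or, equivalently, -3, -2, -1
```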
I am super confused by this terminology: what do [-1, 0] and [-2, 1] mean? If the first value is the nesting depth, then why is it negative, and what could the second number mean?
There is no error code associated with this error, so it's been difficult to find documentation on the subject. It appears to be an extremely common source of confusion, yet I can't find anything in the PyTorch documentation that talks specifically about it.
Answer 1:
When specifying a tensor's dimension as an argument to a function (e.g. m = torch.nn.LogSoftmax(dim=1)), you can use positive dimension indexing, starting with 0 for the first dimension, 1 for the second, etc.
Alternatively, you can use negative dimension indexing, counting from the last dimension back to the first: -1 indicates the last dimension, -2 the second-to-last, etc.
Example: if you have a 4D tensor of dimensions b-by-c-by-h-by-w, then:
- The "batch" dimension (the first) can be accessed as either dim=0 or dim=-4.
- The "channel" dimension (the second) can be accessed as either dim=1 or dim=-3.
- The "height"/"vertical" dimension (the third) can be accessed as either dim=2 or dim=-2.
- The "width"/"horizontal" dimension (the fourth) can be accessed as either dim=3 or dim=-1.
Therefore, for a 4D tensor the dim argument can take values in the range [-4, 3].
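A minimal sketch verifying this equivalence on a random 4D tensor (the shape is illustrative):

```python
import torch

x = torch.randn(2, 3, 4, 5)  # b-by-c-by-h-by-w
# dim=1 and dim=-3 both select the "channel" axis of a 4D tensor
out_pos = torch.nn.LogSoftmax(dim=1)(x)
out_neg = torch.nn.LogSoftmax(dim=-3)(x)
print(torch.allclose(out_pos, out_neg))  # True
```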
In your case you have a 1D tensor, so the dim argument can be either 0 or -1 (which in this degenerate case amount to the same dimension).
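So the original 1D example works once dim is 0 (or, equivalently, -1); a sketch:

```python
import torch

m = torch.nn.LogSoftmax(dim=0)  # dim=-1 would be equivalent for a 1D tensor
input = torch.tensor([0.3300, 0.3937, -0.3113, -0.2880])
output = m(input)
print(output.exp().sum())  # the exponentiated log-probabilities sum to 1
```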
Source: https://stackoverflow.com/questions/59704538/what-is-a-dimensional-range-of-1-0-in-pytorch