Trying to understand PyTorch's implementation of LSTM

Posted by 不羁的心 on 2020-07-21 04:59:37

Question


I have a dataset containing 1000 examples, where each example has 5 features (a, b, c, d, e). I want to feed 7 consecutive examples to an LSTM so that it predicts feature (a) of the 8th example.
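For context, a common way to set this up is with sliding windows over the dataset. Here is a minimal sketch, assuming the 1000 examples live in a hypothetical tensor called data of shape (1000, 5):

import torch

# Hypothetical dataset: 1000 examples, 5 features (a, b, c, d, e)
data = torch.randn(1000, 5)

# Build (window, target) pairs: 7 consecutive examples -> feature (a) of the 8th
windows, targets = [], []
for i in range(len(data) - 7):
    windows.append(data[i:i + 7])    # shape (7, 5)
    targets.append(data[i + 7, 0])   # feature (a) of the following example
windows = torch.stack(windows)       # shape (993, 7, 5)
targets = torch.stack(targets)       # shape (993,)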

Reading PyTorch's documentation of nn.LSTM(), I came up with the following:

input_size = 5
hidden_size = 10
num_layers = 1
output_size = 1

lstm = nn.LSTM(input_size, hidden_size, num_layers)
fc = nn.Linear(hidden_size, output_size)

out, hidden = lstm(X)  # Where X's shape is ([7,1,5])
output = fc(out[-1])

output  # output's shape is ([7,1])

According to the docs:

The input of the nn.LSTM is "input of shape (seq_len, batch, input_size)" with "input_size – The number of expected features in the input x",

And the output is: "output of shape (seq_len, batch, num_directions * hidden_size): tensor containing the output features (h_t) from the last layer of the LSTM, for each t."

In this case, I thought seq_len would be the sequence of 7 examples, batch would be 1, and input_size would be 5. So the LSTM would consume each example's 5 features at every step, feeding the hidden state back in at every iteration.
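As a side note on that reading: for a single-layer, unidirectional LSTM, the last time step of out coincides with the final hidden state h_n. A quick check with random placeholder data:

import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=5, hidden_size=10, num_layers=1)
X = torch.randn(7, 1, 5)             # (seq_len, batch, input_size)
out, (h_n, c_n) = lstm(X)            # second return value is the tuple (h_n, c_n)
print(out.shape)                     # torch.Size([7, 1, 10])
print(torch.equal(out[-1], h_n[0]))  # True: out[-1] is the last step's h_t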

What am I missing?


Answer 1:


When I extend your code to a full example -- I also added some comments that may help -- I get the following:

import torch
import torch.nn as nn

input_size = 5
hidden_size = 10
num_layers = 1
output_size = 1

lstm = nn.LSTM(input_size, hidden_size, num_layers)
fc = nn.Linear(hidden_size, output_size)

X = [
    [[1,2,3,4,5]],
    [[1,2,3,4,5]],
    [[1,2,3,4,5]],
    [[1,2,3,4,5]],
    [[1,2,3,4,5]],
    [[1,2,3,4,5]],
    [[1,2,3,4,5]],
]

X = torch.tensor(X, dtype=torch.float32)

print(X.shape)         # (seq_len, batch_size, input_size) = (7, 1, 5)
out, hidden = lstm(X)  # hidden is the tuple (h_n, c_n)
print(out.shape)       # (seq_len, batch_size, hidden_size) = (7, 1, 10)
out = out[-1]          # Get output of last step
print(out.shape)       # (batch, hidden_size) = (1, 10)
out = fc(out)          # Push through linear layer
print(out.shape)       # (batch_size, output_size) = (1, 1)

This makes sense to me, given your batch_size = 1 and output_size = 1 (I assume you're doing regression). I don't know where your output.shape = (7, 1) comes from.
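If it is indeed regression, a minimal training-step sketch on top of this model could look as follows (the target y is a hypothetical placeholder, and MSE loss is just one reasonable choice):

import torch.optim as optim

y = torch.tensor([[0.5]])   # hypothetical target, shape (batch_size, output_size) = (1, 1)
optimizer = optim.Adam(list(lstm.parameters()) + list(fc.parameters()), lr=1e-3)
criterion = nn.MSELoss()

optimizer.zero_grad()
out, hidden = lstm(X)
pred = fc(out[-1])          # shape (1, 1)
loss = criterion(pred, y)
loss.backward()
optimizer.step()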

Are you sure that your X has the correct dimensions? Did you maybe create the nn.LSTM with batch_first=True? There are a lot of little things that can sneak in.
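To illustrate how such little things look in practice, here are two hedged variations (not necessarily what happened in your code): batch_first=True changes the expected layout to (batch, seq_len, input_size), and indexing the wrong dimension of out happens to reproduce a (7, 1) result:

# Variation 1: batch_first=True reinterprets the same tensor
lstm_bf = nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
out_bf, hidden_bf = lstm_bf(X)  # X (7, 1, 5) is now read as 7 sequences of length 1
print(out_bf.shape)             # (batch, seq_len, hidden_size) = (7, 1, 10)

# Variation 2: taking dim 1 instead of dim 0 as the time axis
out_full, _ = lstm(X)           # (7, 1, 10) as before
wrong = fc(out_full[:, -1])     # out_full[:, -1] has shape (7, 10), so fc gives (7, 1)
print(wrong.shape)              # torch.Size([7, 1]) -- the shape reported in the question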



Source: https://stackoverflow.com/questions/55408365/trying-to-understand-pytorchs-implementation-of-lstm
