Question
I'm trying to convert the following Python code into its equivalent libtorch:
tfm = np.float32([[A[0, 0], A[1, 0], A[2, 0]],
                  [A[0, 1], A[1, 1], A[2, 1]]])
In PyTorch we could simply use torch.stack, or just torch.tensor(), like below:
tfm = torch.tensor([[A_tensor[0, 0], A_tensor[1, 0], 0],
                    [A_tensor[0, 1], A_tensor[1, 1], 0]])
However, this doesn't work in libtorch; that is, I cannot simply do:
auto tfm = torch::tensor({{A.index({0, 0}), A.index({1, 0}), A.index({2, 0})},
                          {A.index({0, 1}), A.index({1, 1}), A.index({2, 1})}});
Using a std::vector doesn't work either, and the same goes for torch::stack with a nested list. I'm currently using three torch::stack calls to get this done:
auto x = torch::stack({A.index({0, 0}), A.index({1, 0}), A.index({2, 0})});
auto y = torch::stack({A.index({0, 1}), A.index({1, 1}), A.index({2, 1})});
tfm = torch::stack({x, y});
So is there a better way of doing this? Can it be done as a one-liner?
Answer 1:
Indeed, C++ libtorch does not allow tensor construction from a nested list of tensors the way PyTorch does (as far as I know), but you can still achieve this result with torch::stack (implemented here, if you're interested) and view:
auto tfm = torch::stack({A[0][0], A[1][0], A[2][0], A[0][1], A[1][1], A[2][1]}).view({2, 3});
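For reference, here is a minimal self-contained sketch of that approach. It assumes A is a 3x2 float tensor, matching the indices used in the question; note that view() takes an IntArrayRef, hence the braces around the shape:

#include <torch/torch.h>
#include <iostream>

int main() {
    // Placeholder for A: assumed here to be a 3x2 float tensor, as implied by the question.
    auto A = torch::arange(6, torch::kFloat32).view({3, 2});

    // Stack the six 0-dim tensors into a flat 1-D tensor of length 6,
    // then reshape it to 2x3 (this yields the transpose of A).
    auto tfm = torch::stack({A[0][0], A[1][0], A[2][0],
                             A[0][1], A[1][1], A[2][1]}).view({2, 3});

    std::cout << tfm << std::endl;
}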
Source: https://stackoverflow.com/questions/63546039/how-to-convert-a-list-of-tensors-into-a-torchtensor