Achieving batch matrix multiply using tensordot

野的像风 2020-12-21 16:47

I'm trying to achieve the same behaviour as np.matmul's parallel (batch) matrix multiplication using just tensordot, dot, reshaping, etc.

The library I am translating this t

1 Answer
  • 2020-12-21 17:09

    We need to keep one axis aligned between the two inputs and keep that axis in the output as well. So, tensordot/dot won't work here, since they contract the shared axes rather than aligning them. The tensordot documentation explains this in more detail. But we can use np.einsum, which in most cases (in my experience) is marginally faster than np.matmul.
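    To see why tensordot falls short here, consider a minimal sketch (shapes are assumptions for illustration): contracting the last axis of a stack of matrices against the last axis of a stack of vectors produces an outer product over the two batch axes instead of pairing them up.

    ```python
    import numpy as np

    rotations = np.ones((4, 3, 3))  # hypothetical batch of 4 matrices
    vectors = np.ones((4, 3))       # hypothetical batch of 4 vectors

    # tensordot contracts axis 2 of rotations with axis 1 of vectors,
    # but leaves BOTH batch axes in the result as separate dimensions:
    res = np.tensordot(rotations, vectors, axes=([2], [1]))
    print(res.shape)  # (4, 3, 4) -- batch axes not aligned
    ```

    The batched result we want has shape (4, 3), with the two batch axes identified, which is exactly what einsum's repeated index expresses.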

    The implementation would look something like this -

    np.einsum('ijk,ik->ij', rotations, vectors)
    

    Also, it seems the desired output has one trailing singleton dim. So, append a new axis there with None/np.newaxis, like so -

    np.einsum('ijk,ik->ij', rotations, vectors)[..., None]
    
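    A quick self-contained check of the approach above, with assumed shapes (a batch of 4 rotation matrices applied to 4 vectors), verifying that the einsum call matches what np.matmul produces:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    rotations = rng.standard_normal((4, 3, 3))  # (batch, 3, 3)
    vectors = rng.standard_normal((4, 3))       # (batch, 3)

    # Batched matrix-vector product: batch axis i stays aligned
    # across both inputs and appears in the output.
    out = np.einsum('ijk,ik->ij', rotations, vectors)

    # Reference via np.matmul: promote vectors to column matrices,
    # multiply, then drop the trailing singleton axis.
    ref = np.matmul(rotations, vectors[..., None])[..., 0]
    assert np.allclose(out, ref)

    # With the trailing singleton dim appended, shape is (4, 3, 1).
    out_col = np.einsum('ijk,ik->ij', rotations, vectors)[..., None]
    print(out_col.shape)  # (4, 3, 1)
    ```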