Python tensor product

Submitted by 末鹿安然 on 2019-12-22 10:37:07

Question


I have the following problem. For performance reasons I use numpy.tensordot and thus have my values stored in tensors and vectors. One of my calculations looks like this:

<sigma_i> = sum_{j,k} R_ijk <w_j> <w_k>

<w_j> is the expected value of w_j and <sigma_i> is the expected value of sigma_i. (Perhaps I should have called it s instead of sigma, because it has nothing to do with standard deviation.) Now for further calculations I also need the variance. To get the variance I need to calculate:

sum_{j,k,l,m} R_ijk R_ilm <w_j> <w_k> <w_l> <w_m>
Now when I implemented the first formula in Python with numpy.tensordot, I was really happy when it worked, because this is quite abstract and I am not used to tensors. The code looks like this:

erc = numpy.tensordot(numpy.tensordot(re, ewp, axes=1), ewp, axes=1)

Now this works, and my problem is to write down the correct form of the second formula. One of my attempts was:

serc = numpy.tensordot(numpy.tensordot(numpy.tensordot(numpy.tensordot(
    numpy.tensordot(re, re, axes=1), ewp, axes=1), ewp, axes=1),
    ewp, axes=1), ewp, axes=1)

But this gives me a scalar instead of a vector. Another try was:

serc = numpy.einsum('m, m', numpy.einsum('lm, l -> m',
numpy.einsum('klm, k -> lm', numpy.einsum('jklm, j -> klm',
numpy.einsum('ijk, ilm -> jklm', re, re), ewp), ewp), ewp), ewp)
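A quick check on small random inputs (array names follow the question; sizes are arbitrary) confirms that both attempts contract everything away, including the i axis, and so return a scalar:

```python
import numpy as np

rng = np.random.default_rng(0)
l = 3
re = rng.random((l, l, l))   # tensor R, shape (l, l, l)
ewp = rng.random(l)          # vector of expected values <w>

# the chained-tensordot attempt from above
serc = np.tensordot(np.tensordot(np.tensordot(np.tensordot(
    np.tensordot(re, re, axes=1), ewp, axes=1), ewp, axes=1),
    ewp, axes=1), ewp, axes=1)

# the nested-einsum attempt: the outermost 'm, m' is a full dot product
serc2 = np.einsum('m, m', np.einsum('lm, l -> m',
    np.einsum('klm, k -> lm', np.einsum('jklm, j -> klm',
    np.einsum('ijk, ilm -> jklm', re, re), ewp), ewp), ewp), ewp)

print(np.ndim(serc), np.ndim(serc2))  # 0 0 -- scalars, not vectors of length l
```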

The vectors have length l and the tensor has shape l * l * l. I hope my problem is understandable. Thank you in advance!

Edit: In Python the first formula can also be written as: erc2 = numpy.einsum('ik, k -> i', numpy.einsum('ijk, k -> ij', re, ewp), ewp)
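As a sanity check, the tensordot and einsum versions of the first formula can be compared on small random inputs (sizes are arbitrary; names follow the question):

```python
import numpy as np

rng = np.random.default_rng(0)
l = 4
re = rng.random((l, l, l))   # tensor R, shape (l, l, l)
ewp = rng.random(l)          # vector of expected values <w>

# tensordot version from the question
erc = np.tensordot(np.tensordot(re, ewp, axes=1), ewp, axes=1)

# einsum version from the edit
erc2 = np.einsum('ik, k -> i', np.einsum('ijk, k -> ij', re, ewp), ewp)

print(erc.shape, np.allclose(erc, erc2))  # (4,) True
```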


Answer 1:


You could do that with a series of reductions, like so -

p1 = np.tensordot(re,ewp,axes=(1,0))
p2 = np.tensordot(p1,ewp,axes=(1,0))
out = p2**2

Explanation

First off, we could separate it out into two groups of operations :

Group 1: R(i,j,k) , < wj > , < wk > 
Group 2: R(i,l,m) , < wl > , < wm > 

The operations performed within these two groups are identical. So, one could compute for one group and derive the final output based off it.

Now, to compute R(i,j,k) , < wj > , < wk > and end up with (i), we need to perform element-wise multiplication along the second and third axes of R with w and then perform sum-reduction along those axes. Here, we do it in two steps with two tensordots -

[1] R(i,j,k) , < wj > to get p1(i,k)
[2] p1(i,k) , < wk > to get p2(i)

Thus, we end up with a vector p2. Similarly with the second group, the result would be an identical vector. So, to get the final output, we just need to square that vector, i.e. p2**2.
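The two-step reduction can be verified against a single brute-force einsum that performs the full quadruple contraction directly (random inputs for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
l = 3
re = rng.random((l, l, l))   # tensor R(i, j, k)
ewp = rng.random(l)          # vector of expected values <w>

# the two reductions described above, then the square
p1 = np.tensordot(re, ewp, axes=(1, 0))   # contract j: shape (i, k)
p2 = np.tensordot(p1, ewp, axes=(1, 0))   # contract k: shape (i,)
out = p2**2

# reference: sum over j, k, l, m of R(i,j,k) R(i,l,m) <wj><wk><wl><wm>
ref = np.einsum('ijk, j, k, ilm, l, m -> i', re, ewp, ewp, re, ewp, ewp)
print(np.allclose(out, ref))  # True
```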



Source: https://stackoverflow.com/questions/40044714/python-tensor-product
