numpy-einsum

Outer subtraction with Numpy

Submitted by 放肆的年华 on 2021-02-05 07:45:27
Question: I simply want to compute $C_i = \sum_k (A_i - B_k)^2$. I saw that this calculation is faster with a simple for loop than with numpy.subtract.outer! In any case, I suspect numpy.einsum would be the fastest, but I do not understand numpy.einsum that well. Can anyone please help me out? Additionally, it would be great if someone could explain how a general summation expression over vectors/matrices can be written with numpy.einsum. I did not find a solution to this particular problem on the web. Sorry
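
A minimal sketch of one fast approach (an editorial illustration; the array names simply follow the question). Expanding the square gives $C_i = N_B A_i^2 - 2 A_i \sum_k B_k + \sum_k B_k^2$, so the (len(A), len(B)) outer-difference array never has to be built:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal(1000)
B = rng.standard_normal(500)

# Direct approach: broadcast the outer difference, square, sum over k.
C_direct = ((A[:, None] - B[None, :]) ** 2).sum(axis=1)

# Expanded form: no (len(A), len(B)) intermediate is ever materialized.
C_fast = len(B) * A**2 - 2 * A * B.sum() + (B**2).sum()

np.testing.assert_allclose(C_direct, C_fast)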

tensorflow einsum vs. matmul vs. tensordot

Submitted by ≯℡__Kan透↙ on 2020-01-19 13:33:59
Question: In tensorflow, the functions tf.einsum, tf.matmul, and tf.tensordot can all be used for the same tasks. (I realize that tf.einsum and tf.tensordot have more general definitions; I also realize that tf.matmul has batch functionality.) In a situation where any of the three could be used, does one function tend to be fastest? Are there other rules of thumb? For example, suppose that A is a rank-2 tensor, b is a rank-1 tensor, and you want to compute the product $c_i = \sum_j A_{ij} b_j$. Of the
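
For the matrix-vector case in the question, a minimal sketch of the three spellings (assuming TensorFlow 2.x; tf.matmul itself wants both operands at rank >= 2, so its tf.linalg.matvec companion is used here):

import tensorflow as tf

A = tf.random.normal((4, 3))   # rank-2 tensor
b = tf.random.normal((3,))     # rank-1 tensor

c1 = tf.einsum('ij,j->i', A, b)               # index notation
c2 = tf.linalg.matvec(A, b)                   # matmul-family matrix-vector product
c3 = tf.tensordot(A, b, axes=[[1], [0]])      # contract A's axis 1 with b's axis 0

# All three produce the same vector up to floating-point rounding.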

numpy.einsum for Julia? (2)

Submitted by 混江龙づ霸主 on 2019-12-23 10:53:20
Question: Coming from this question, I wonder if a more generalized einsum is possible in Julia. Let us assume I had the following problem:

using PyCall
@pyimport numpy as np

a = rand(10, 10, 10)
b = rand(10, 10)
c = rand(10, 10, 10)
Q = np.einsum("imk,ml,lkj->ij", a, b, c)

Or something similar: how would I solve this problem without looping through the sums? With best regards.

Answer 1: Edit/Update: This is now a registered package, so you can Pkg.add("Einsum") and you should be good to go (see the example below to get started).
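
For reference, a numpy sketch (an editorial aside, not part of the answer) of the same contraction decomposed into pairwise tensordot calls; the registered Einsum package for Julia exposes similar index notation through an @einsum macro:

import numpy as np

a = np.random.rand(10, 10, 10)
b = np.random.rand(10, 10)
c = np.random.rand(10, 10, 10)

# One-shot contraction, as in the question.
Q = np.einsum('imk,ml,lkj->ij', a, b, c)

# Equivalent pairwise contractions: contract m first, then l and k.
t = np.tensordot(a, b, axes=([1], [0]))         # 'imk,ml->ikl'
Q2 = np.tensordot(t, c, axes=([2, 1], [0, 1]))  # 'ikl,lkj->ij'
np.testing.assert_allclose(Q, Q2)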

Python tensor product

Submitted by 末鹿安然 on 2019-12-22 10:37:07
Question: I have the following problem. For performance reasons I use numpy.tensordot and thus have my values stored in tensors and vectors. One of my calculations looks like this: [formula image missing] where <w_j> is the expectation value of w_j and <sigma_i> is the expectation value of sigma_i. (Perhaps I should not have called it sigma, because it has nothing to do with the standard deviation.) Now for further calculations I also need the variance. To get the variance I need to calculate: [formula image missing] Now when I implemented the first formula into
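
The question's formulas were images and are lost, so the following is only a hypothetical sketch of an expectation and a variance computed with einsum; the weight tensor p and all names below are assumptions, not the asker's actual setup:

import numpy as np

# Hypothetical stand-ins: p_ij are normalized weights, sigma_i and w_j values.
p = np.random.rand(4, 5)
p /= p.sum()
sigma = np.random.rand(4)
w = np.random.rand(5)

# <sigma w> = sum_ij p_ij * sigma_i * w_j
mean = np.einsum('ij,i,j->', p, sigma, w)

# Var(sigma w) = <(sigma w)^2> - <sigma w>^2
second_moment = np.einsum('ij,i,j->', p, sigma**2, w**2)
var = second_moment - mean**2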

Pure NumPy 2D mean convolution derivative of input image

Submitted by 别来无恙 on 2019-12-18 09:24:02
Question: I have b 2D m x n greyscale images that I'm convolving with a p x q filter and then mean-pooling. With pure numpy, I'd like to compute the derivatives with respect to the input image and the filter, but I'm having trouble computing the derivative of the input image:

def conv2d_derivatives(x, f, dy):
    """
    dimensions:
        b = batch size
        m = input image height
        n = input image width
        p = filter height
        q = filter width
        r = output height
        s = output width
    input:
        x = input image (b x m x n)
        f = filter (p x q)
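
Since the code above is cut off, here is a minimal sketch of the standard result, assuming the forward pass is a valid (unpadded) cross-correlation per image: the filter gradient correlates the input windows with dy, and the input gradient is a 'full' correlation of dy with the flipped filter.

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

b, m, n, p, q = 2, 6, 7, 3, 3
x = np.random.rand(b, m, n)
f = np.random.rand(p, q)

windows = sliding_window_view(x, (p, q), axis=(1, 2))  # (b, r, s, p, q)
y = np.einsum('brspq,pq->brs', windows, f)             # forward pass

dy = np.random.rand(*y.shape)                          # upstream gradient

# Gradient w.r.t. the filter: correlate the input windows with dy.
df = np.einsum('brspq,brs->pq', windows, dy)

# Gradient w.r.t. the input: 'full' correlation of dy with the flipped filter.
dy_pad = np.pad(dy, ((0, 0), (p - 1, p - 1), (q - 1, q - 1)))
dy_windows = sliding_window_view(dy_pad, (p, q), axis=(1, 2))  # (b, m, n, p, q)
dx = np.einsum('bmnpq,pq->bmn', dy_windows, f[::-1, ::-1])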

Element-wise matrix multiplication for multi-dimensional array

Submitted by 有些话、适合烂在心里 on 2019-12-18 09:08:56
Question: I want to implement component-wise matrix multiplication in MATLAB, which can be done using numpy.einsum in Python as below:

import numpy as np

M = 2
N = 4
I = 2000
J = 300

A = np.random.randn(M, M, I)
B = np.random.randn(M, M, N, J, I)
C = np.random.randn(M, J, I)

# using einsum
D = np.einsum('mki,klnji,lji->mnji', A, B, C)

# naive for-loop
E = np.zeros((M, N, J, I))
for i in range(I):
    for j in range(J):
        for n in range(N):
            E[:, n, j, i] = A[:, :, i] @ B[:, :, n, j, i] @ C[:, j, i]

print(np.sum(np.abs(D
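
The same contraction can also be phrased as a batched matrix product, the form that maps onto page-wise multiplication elsewhere (e.g. MATLAB's pagemtimes). A numpy sketch with smaller sizes than the question, names unchanged:

import numpy as np

M, N, J, I = 2, 4, 30, 50
A = np.random.randn(M, M, I)
B = np.random.randn(M, M, N, J, I)
C = np.random.randn(M, J, I)

D = np.einsum('mki,klnji,lji->mnji', A, B, C)

# Batched matmul: move (n, j, i) to the front so the trailing two axes
# are the matrix dimensions, then let @ broadcast over the batch axes.
A2 = A.transpose(2, 0, 1)[None, None]               # (1, 1, I, M, M)
B2 = B.transpose(2, 3, 4, 0, 1)                     # (N, J, I, M, M)
C2 = C.transpose(1, 2, 0)[None, ..., None]          # (1, J, I, M, 1)
D2 = (A2 @ B2 @ C2)[..., 0].transpose(3, 0, 1, 2)   # (M, N, J, I)
np.testing.assert_allclose(D, D2)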

Processing upper triangular elements only with NumPy einsum

Submitted by 自古美人都是妖i on 2019-12-12 11:27:11
Question: I'm using numpy einsum to calculate the dot products of an array of column vectors pts, of shape (3, N), with itself, resulting in a matrix dotps, of shape (N, N), containing all the dot products. This is the code I use:

dotps = np.einsum('ij,ik->jk', pts, pts)

This works, but I only need the values above the main diagonal, i.e. the upper triangular part of the result without the diagonal. Is it possible to compute only these values with einsum? Or in any other way that is faster than using einsum to
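
einsum always fills its full output array, so it cannot emit just a triangle. A sketch of two alternatives: slice the strict upper triangle out afterwards, or gather the column pairs first so only the needed products are computed:

import numpy as np

N = 500
pts = np.random.rand(3, N)

# Full Gram matrix, then take the strict upper triangle.
dotps = np.einsum('ij,ik->jk', pts, pts)
iu, ju = np.triu_indices(N, k=1)
upper = dotps[iu, ju]

# Only the needed pairs: one dot product per (iu[t], ju[t]) column pair.
upper2 = np.einsum('ij,ij->j', pts[:, iu], pts[:, ju])
np.testing.assert_allclose(upper, upper2)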

summing outer product of multiple vectors in einsum

Submitted by 六眼飞鱼酱① on 2019-12-11 13:20:14
Question: I have read through the einsum manual and ajcr's basic introduction. I have zero experience with Einstein summation in a non-coding context, although I have tried to remedy that with some internet research (I would provide links, but I don't yet have the reputation for more than two). I've also tried experimenting in Python with einsum to see if I could get a better handle on things. And yet I'm still unclear on whether it is both possible and efficient to do as follows: on two arrays of arrays (a
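
The excerpt is cut off, but going by the title, a minimal sketch of summing the outer products of paired vectors in a single einsum call (array names are illustrative):

import numpy as np

u = np.random.rand(5, 3)   # 5 vectors of length 3
v = np.random.rand(5, 4)   # 5 vectors of length 4

# S = sum_k outer(u[k], v[k]); the shared index k is summed away.
S = np.einsum('ki,kj->ij', u, v)

# Loop equivalent, for checking.
S2 = sum(np.outer(u[k], v[k]) for k in range(len(u)))
np.testing.assert_allclose(S, S2)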

Multiplying tensors containing images in numpy

Submitted by 巧了我就是萌 on 2019-12-11 12:36:34
Question: I have the following third-order tensors. Both tensors contain matrices: the first holds 100 10x9 matrices and the second holds 100 3x10 matrices (which I have just filled with ones for this example). My aim is to multiply the matrices pairwise, in one-to-one correspondence, which would result in a tensor of shape (100, 3, 9). This can be done with a for loop that just zips up both tensors and takes the dot product of each pair, but I am looking to do this with just numpy operators. So
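
A minimal sketch matching the stated shapes: a shared batch index in einsum, or equivalently the @ operator, which broadcasts the matrix product over the leading axis:

import numpy as np

a = np.random.rand(100, 10, 9)   # 100 matrices of shape (10, 9)
b = np.ones((100, 3, 10))        # 100 matrices of shape (3, 10)

# One matrix product per index along the first axis.
c = np.einsum('bij,bjk->bik', b, a)   # shape (100, 3, 9)

c2 = b @ a                            # matmul broadcasts the same way
np.testing.assert_allclose(c, c2)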