numpy-einsum

Using numpy einsum to compute inner product of column-vectors of a matrix

断了今生、忘了曾经 submitted on 2019-12-01 23:08:16
Suppose I have a NumPy matrix like this:

    [[ 1   2    3]
     [10 100 1000]]

I would like to compute the inner product of each column with itself, so the result would be:

    [1*1 + 10*10, 2*2 + 100*100, 3*3 + 1000*1000] == [101, 10004, 1000009]

I would like to know whether this is possible using the einsum function (and to understand it better). So far, the closest I have come is:

    import numpy as np

    arr = np.array([[1, 2, 3], [10, 100, 1000]])
    res = np.einsum('ij,ik->jk', arr, arr)
    # [[   101   1002   10003]
    #  [  1002  10004  100006]
    #  [ 10003 100006 1000009]]

The diagonal contains the expected result, but I would like to avoid computing the full matrix of off-diagonal products.
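One way to get only the diagonal is to repeat the column index in both operands, a minimal sketch with the same arr as above: repeating j multiplies the two arrays element-wise, and leaving i out of the output sums over the rows.

    import numpy as np

    arr = np.array([[1, 2, 3], [10, 100, 1000]])
    # 'ij,ij->j': element-wise product at matching positions, then
    # sum over the row index i, which is absent from the output
    res = np.einsum('ij,ij->j', arr, arr)
    # array([    101,   10004, 1000009])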

How do I calculate all pairs of vector differences in numpy?

落爺英雄遲暮 submitted on 2019-12-01 18:38:11
I know I can do np.subtract.outer(x, x). If x has shape (n,), I end up with an array of shape (n, n). However, I have an x with shape (n, 3), and I want output with shape (n, n, 3). How do I do this? Maybe np.einsum?

You can use broadcasting: extend the dimensions with None / np.newaxis to form a 3D view of x, then subtract the original 2D array from it, like so:

    x[:, np.newaxis, :] - x

Sample run:

    In [6]: x
    Out[6]:
    array([[6, 5, 3],
           [4, 3, 5],
           [0, 6, 7],
           [8, 4, 1]])

    In [7]: x[:, None, :] - x
    Out[7]:
    array([[[ 0,  0,  0],
            [ 2,  2, -2],
            [ 6, -1, -4],
            [-2,  1,  2]],

           [[-2, -2,  2],
            [ 0,  0,  0],
            [ 4, -3, -2],
            [-4, -1,  4]],

           [[-6,  1,  4],
            [-4,  3,  2],
            [ 0,  0,  0],
            [-8,  2,  6]],

           [[ 2, -1, -2],
            [ 4,  1, -4],
            [ 8, -2, -6],
            [ 0,  0,  0]]])
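For completeness, a runnable sketch of the broadcasting approach (the sample data here is made up). Note that einsum itself only expresses sums of products, so it cannot compute pairwise differences directly; broadcasting is the right tool.

    import numpy as np

    x = np.random.randint(0, 9, (4, 3))
    diffs = x[:, None, :] - x                    # shape (4, 4, 3)
    # spot-check one entry against the definition
    assert np.array_equal(diffs[1, 2], x[1] - x[2])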

Why does `numpy.einsum` work faster with `float32` than `float16` or `uint16`? [duplicate]

China☆狼群 submitted on 2019-12-01 11:48:19
This question already has an answer here: Python Numpy Data Types Performance (2 answers)

In my benchmark using numpy 1.12.0, calculating dot products with float32 ndarrays is much faster than with the other data types:

    In [3]: f16 = np.random.random((500000, 128)).astype('float16')
    In [4]: f32 = np.random.random((500000, 128)).astype('float32')
    In [5]: uint = np.random.randint(1, 60000, (500000, 128)).astype('uint16')

    In [7]: %timeit np.einsum('ij,ij->i', f16, f16)
    1 loop, best of 3: 320 ms per loop

    In [8]: %timeit np.einsum('ij,ij->i', f32, f32)
    The slowest run took 4.88 times longer than the fastest. This could mean
    that an intermediate result is being cached. ...
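A hedged sketch reproducing the comparison with the timeit module (array sizes reduced here so it runs quickly). The usual explanation is that most CPUs have no native float16 arithmetic, so NumPy up-converts each element to float32 and back, while float32 hits the vectorized code paths directly; integer inputs can likewise force a slower cast.

    import timeit

    import numpy as np

    f16 = np.random.random((50000, 128)).astype('float16')
    f32 = f16.astype('float32')

    for name, a in [('float16', f16), ('float32', f32)]:
        t = timeit.timeit(lambda: np.einsum('ij,ij->i', a, a), number=10)
        print(f'{name}: {t / 10 * 1e3:.1f} ms per call')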

Element-wise matrix multiplication for multi-dimensional array

允我心安 submitted on 2019-11-29 16:05:49
I want to realize component-wise matrix multiplication in MATLAB, which can be done using numpy.einsum in Python as below:

    import numpy as np

    M = 2
    N = 4
    I = 2000
    J = 300

    A = np.random.randn(M, M, I)
    B = np.random.randn(M, M, N, J, I)
    C = np.random.randn(M, J, I)

    # using einsum
    D = np.einsum('mki,klnji,lji->mnji', A, B, C)

    # naive for-loop
    E = np.zeros((M, N, J, I))
    for i in range(I):
        for j in range(J):
            for n in range(N):
                E[:, n, j, i] = A[:, :, i] @ B[:, :, n, j, i] @ C[:, j, i]

    print(np.sum(np.abs(D - E)))  # expected to be small

So far I use for-loops over i, j, and n, but I don't want to, at least not all of them.
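Since MATLAB has no einsum, one route is explicit broadcasting plus a sum, which translates to MATLAB's permute/reshape and implicit expansion. Below is a sketch of that formulation in NumPy (the axis layout is an assumption chosen to mirror the einsum subscripts; sizes are shrunk for a quick check):

    import numpy as np

    M, N, I, J = 2, 4, 20, 30
    A = np.random.randn(M, M, I)
    B = np.random.randn(M, M, N, J, I)
    C = np.random.randn(M, J, I)

    # line up all six indices (m, k, l, n, j, i) and sum over k and l
    D2 = (A[:, :, None, None, None, :]      # m k 1 1 1 i
          * B[None, :, :, :, :, :]          # 1 k l n j i
          * C[None, None, :, None, :, :]    # 1 1 l 1 j i
          ).sum(axis=(1, 2))                # -> m n j i

    D = np.einsum('mki,klnji,lji->mnji', A, B, C)
    print(np.allclose(D, D2))               # True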

tensorflow einsum vs. matmul vs. tensordot

≡放荡痞女 submitted on 2019-11-28 13:18:37
In tensorflow, the functions tf.einsum, tf.matmul, and tf.tensordot can all be used for the same tasks. (I realize that tf.einsum and tf.tensordot have more general definitions; I also realize that tf.matmul has batch functionality.) In a situation where any of the three could be used, does one function tend to be fastest? Are there other recommended rules? For example, suppose that A is a rank-2 tensor and b is a rank-1 tensor, and you want to compute the product c_i = sum_j A_ij b_j. Of the three options:

    c = tf.einsum('ij,j->i', A, b)
    c = tf.matmul(A, tf.expand_dims(b, 1))
    c = tf.tensordot(A, b, 1)
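A quick sketch (TensorFlow 2 with eager execution assumed, random data made up) confirming the three forms agree; note the matmul route needs an extra expand/squeeze because it only multiplies rank-2 (or batched) tensors:

    import tensorflow as tf

    A = tf.random.normal((3, 4))
    b = tf.random.normal((4,))

    c1 = tf.einsum('ij,j->i', A, b)
    c2 = tf.squeeze(tf.matmul(A, tf.expand_dims(b, 1)), axis=1)
    c3 = tf.tensordot(A, b, 1)       # contract last axis of A with b

    print(tf.reduce_max(tf.abs(c1 - c2)).numpy(),
          tf.reduce_max(tf.abs(c1 - c3)).numpy())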

Ellipsis broadcasting in numpy.einsum

馋奶兔 submitted on 2019-11-28 08:39:56
Question: I'm having a problem understanding why the following doesn't work. I have an array prefactor that can be three-dimensional or six-dimensional, and an array dipoles that has four dimensions. The first three dimensions of dipoles match the last three dimensions of prefactor. Since I don't know the shape of prefactor in advance, I'm using an Ellipsis to account for the three optional leading dimensions of prefactor:

    numpy.einsum('...lmn,lmno->...o', prefactor, dipoles)

(In the example here, prefactor.shape is ...
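For reference, a sketch with made-up shapes showing the intended behavior. In recent NumPy versions this subscript string accepts both a 3-D and a 6-D prefactor, with the ellipsis matching zero or three leading axes (older releases, the setting of this question, were stricter about mixing ellipsis and non-ellipsis operands):

    import numpy as np

    l, m, n, o = 3, 4, 5, 6
    dipoles = np.random.randn(l, m, n, o)

    pre3 = np.random.randn(l, m, n)            # ellipsis matches no axes
    pre6 = np.random.randn(2, 2, 2, l, m, n)   # ellipsis matches (2, 2, 2)

    print(np.einsum('...lmn,lmno->...o', pre3, dipoles).shape)  # (6,)
    print(np.einsum('...lmn,lmno->...o', pre6, dipoles).shape)  # (2, 2, 2, 6)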

Understanding NumPy's einsum

本小妞迷上赌 submitted on 2019-11-26 01:55:38
Question: I'm struggling to understand exactly how einsum works. I've looked at the documentation and a few examples, but it's not seeming to stick. Here's an example we went over in class:

    C = np.einsum("ij,jk->ki", A, B)

for two arrays A and B. I think this would take A^T * B, but I'm not sure (it's taking the transpose of one of them, right?). Can anyone walk me through exactly what's happening here (and in general when using einsum)?

Answer 1: (Note: this answer is based on a short blog post ...
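As a worked check (arrays assumed at random), the subscripts say C[k, i] = sum_j A[i, j] * B[j, k]: an ordinary matrix product whose output axes are then written in transposed order. So this is (A @ B).T rather than A.T @ B:

    import numpy as np

    A = np.random.randn(2, 3)
    B = np.random.randn(3, 4)

    C = np.einsum('ij,jk->ki', A, B)   # C[k, i] = sum_j A[i, j] * B[j, k]
    print(C.shape)                      # (4, 2)
    print(np.allclose(C, (A @ B).T))    # True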