array-broadcasting

How can I use broadcasting with NumPy to speed up this correlation calculation?

喜欢而已 submitted on 2020-07-08 21:29:28
Question: I'm trying to take advantage of NumPy broadcasting and backend array computations to significantly speed up this function. Unfortunately, it doesn't scale well, so I'm hoping to greatly improve its performance. Right now the code isn't properly utilizing broadcasting for the computations. I'm using WGCNA's bicor function as the gold standard, as it is the fastest implementation I know of at the moment. The Python version outputs the same results as the R function.

# =================
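As a general illustration of the pattern being asked about (a plain Pearson correlation sketch added here for context, not WGCNA's biweight midcorrelation and not the poster's code): centre and normalise each column once with broadcasting, then a single matrix product yields every pairwise correlation, with no explicit pairwise loop.

import numpy as np

def pearson_corr(X):
    """Correlation matrix of the columns of an (n_samples, n_features) array."""
    Xc = X - X.mean(axis=0)              # broadcasting: subtract each column's mean
    Xc /= np.linalg.norm(Xc, axis=0)     # broadcasting: scale each column to unit norm
    return Xc.T @ Xc                     # (n_features, n_features) correlation matrix

X = np.random.rand(1000, 50)
C = pearson_corr(X)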

Using numpy isin element-wise between 2D and 1D arrays

 ̄綄美尐妖づ submitted on 2020-07-03 02:46:23
Question: I have quite a simple scenario where I'd like to test whether the elements of a larger array are (separately) members of each row of a two-dimensional array - for example:

full_array = np.array(['A','B','C','D','E','F'])
sub_arrays = np.array([['A','C','F'], ['B','C','E']])
np.isin(full_array, sub_arrays)

This gives me a one-dimensional output:

array([ True, True, True, False, True, True])

showing whether elements of full_array are present in either of the two sub-arrays. I'd like instead a two-dimensional result, with one row per sub-array.
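One hedged sketch (an addition for illustration, not the accepted answer): broadcast an equality comparison across a new axis so each row of sub_arrays is tested separately, giving a (2, 6) boolean result.

import numpy as np

full_array = np.array(['A', 'B', 'C', 'D', 'E', 'F'])
sub_arrays = np.array([['A', 'C', 'F'], ['B', 'C', 'E']])

# Compare every sub-array element against every full_array element, then
# reduce over the sub-array axis: result[i, j] is True when full_array[j]
# appears in sub_arrays[i].
result = (sub_arrays[:, :, np.newaxis] == full_array).any(axis=1)
# array([[ True, False,  True, False, False,  True],
#        [False,  True,  True, False,  True, False]])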

Access elements of a Tensor

自作多情 submitted on 2020-06-27 18:28:53
Question: I have the following TensorFlow tensors.

tensor1 = tf.constant(np.random.randint(0,255, (2,512,512,1)), dtype='int32') # all elements in range [0,255]
tensor2 = tf.constant(np.random.randint(0,255, (2,512,512,1)), dtype='int32') # all elements in range [0,255]
tensor3 = tf.keras.backend.flatten(tensor1)
tensor4 = tf.keras.backend.flatten(tensor2)
tensor5 = tf.constant(np.random.randint(0,255, (255,255)), dtype='int32') # all elements in range [0,255]

I wish to use the values stored in tensor3
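The excerpt cuts off before the goal is stated, so the following is only a guess at a plausible continuation (pairing the flattened values and using each pair as a row/column index into tensor5); the tf.gather_nd call below is an assumption, not code from the post.

import numpy as np
import tensorflow as tf

tensor1 = tf.constant(np.random.randint(0, 255, (2, 512, 512, 1)), dtype='int32')
tensor2 = tf.constant(np.random.randint(0, 255, (2, 512, 512, 1)), dtype='int32')
tensor3 = tf.keras.backend.flatten(tensor1)
tensor4 = tf.keras.backend.flatten(tensor2)
tensor5 = tf.constant(np.random.randint(0, 255, (255, 255)), dtype='int32')

# Treat each (tensor3[k], tensor4[k]) pair as a (row, column) index into tensor5.
indices = tf.stack([tensor3, tensor4], axis=-1)   # shape (524288, 2)
values = tf.gather_nd(tensor5, indices)           # shape (524288,)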

ValueError: operands could not be broadcast together with shapes (2501,201) (2501,)

家住魔仙堡 submitted on 2020-06-01 05:38:38
Question: I am new to Python so please be nice. I am trying to compare two NumPy arrays with the np.logical_or function. When I run the code below, an error appears on the Percentile = np.logical_or(data2 > Per1, data2 < Per2) line, stating ValueError: operands could not be broadcast together with shapes (2501,201) (2501,).

data = 1st array
data2 = 2nd array

Per1 = np.percentile(data, 10, axis=1)
Per2 = np.percentile(data, 90, axis=1)
Percentile = np.logical_or(data2 > Per1, data2 < Per2)
print(Percentile
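A minimal sketch of one common fix (an addition here, assuming data and data2 are both shaped (2501, 201)): keep the reduced axis when taking the percentiles so the (2501, 1) results broadcast row-wise against data2.

import numpy as np

data = np.random.rand(2501, 201)
data2 = np.random.rand(2501, 201)

# keepdims=True keeps a trailing length-1 axis, so each row of data2 is
# compared against that row's own percentiles.
Per1 = np.percentile(data, 10, axis=1, keepdims=True)  # shape (2501, 1)
Per2 = np.percentile(data, 90, axis=1, keepdims=True)  # shape (2501, 1)

Percentile = np.logical_or(data2 > Per1, data2 < Per2)  # shape (2501, 201)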

numpy broadcasting with 3d arrays

大兔子大兔子 submitted on 2020-05-18 21:51:26
Question: Is it possible to apply numpy broadcasting (with 1D arrays),

x = np.arange(3)[:,np.newaxis]
y = np.arange(3)
x + y = array([[0, 1, 2], [1, 2, 3], [2, 3, 4]])

to 3D matrices similar to the one below, such that each element in a[i] is treated as a 1D vector like in the example above?

a = np.zeros((2,2,2))
a[0] = 1
b = a
result = a + b

resulting in

result[0,0] = array([[2, 2], [2, 2]])
result[0,1] = array([[1, 1], [1, 1]])
result[1,0] = array([[1, 1], [1, 1]])
result[1,1] = array([[0, 0], [0, 0]])

Answer 1: You can do this
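The answer is truncated in this excerpt; a minimal sketch of the usual approach (a reconstruction, assuming the desired result[i, j] is a[i] + b[j]) is to insert a new axis on each operand so broadcasting forms every pairwise sum:

import numpy as np

a = np.zeros((2, 2, 2))
a[0] = 1
b = a

# a[:, None] has shape (2, 1, 2, 2) and b[None, :] has shape (1, 2, 2, 2),
# so the sum has shape (2, 2, 2, 2) with result[i, j] == a[i] + b[j].
result = a[:, np.newaxis] + b[np.newaxis, :]
print(result[0, 1])  # a[0] + b[1] -> [[1., 1.], [1., 1.]]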

Cumulative Result of Matrix Multiplications

随声附和 submitted on 2020-05-17 06:15:27
Question: Given a list of n x n matrices, I want to compute the cumulative product of these matrix multiplications - i.e. given matrices M0, M1, ..., Mm I want a result R where R[0] = M0, R[1] = M0 x M1, R[2] = M0 x M1 x M2 and so on. Obviously, you can do this via for-loops or tail recursion, but I'm coding in Python, where that runs at a snail's pace. In code:

def matrix_mul_cum_sum(M):
    # M is an m x n x n array: m square matrices of size n x n
    if len(M) == 0:
        return []
    result = [M[0]]
    for A in M[1:]:
        result.append(np.matmul(result[-1], A))
    return result
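A short alternative sketch (an addition, not the poster's code): itertools.accumulate threads np.matmul through the stack, which removes the hand-written Python loop, although the products are still computed one after another because each step depends on the previous result.

import numpy as np
from itertools import accumulate

M = np.random.rand(5, 3, 3)  # hypothetical example data: five 3x3 matrices

# R[k] = M[0] @ M[1] @ ... @ M[k]
R = list(accumulate(M, np.matmul))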

calculate difference between all combinations of entries in a vector

心已入冬 submitted on 2020-05-17 06:06:22
Question: I have a NumPy 1D array of z values, and I want to calculate the difference between all combinations of the entries, with the output as a square matrix. I know how to calculate this as a distance between all combinations of the points using cdist, but that does not give me the sign. So, for example, if my z vector is [1, 5, 8]:

import numpy as np
from scipy.spatial.distance import cdist

z = np.array([1, 5, 8])
z2 = np.column_stack((z, np.zeros(3)))
cdist(z2, z2)

gives me:

array([[0., 4., 7.],
       [4., 0., 3.],
       [7., 3., 0.]])
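A minimal broadcasting sketch (an addition, not the accepted answer): subtract a row view from a column view of z so every pairwise signed difference is produced at once.

import numpy as np

z = np.array([1, 5, 8])

# diff[i, j] == z[i] - z[j], so the sign is preserved.
diff = z[:, np.newaxis] - z[np.newaxis, :]
# array([[ 0, -4, -7],
#        [ 4,  0, -3],
#        [ 7,  3,  0]])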