numpy-ufunc

How to use numpy.frompyfunc to return an array of elements instead of an array of arrays?

雨燕双飞 submitted on 2019-12-23 21:30:23
Question: I am using the PLegendre function from the SHTOOLS package. It returns an array of Legendre polynomials for a particular argument: PLegendre(lmax, x) returns the Legendre polynomials P_0(x) through P_lmax(x). It works like this:

    In [1]: from pyshtools import PLegendre
    loading shtools documentation
    In [2]: import numpy as np
    In [3]: PLegendre(3, 0.5)
    Out[3]: array([ 1.    ,  0.5   , -0.125 , -0.4375])

I would like to pass an array as a parameter, so I use frompyfunc.

    In [4]: legendre = np.frompyfunc
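
The excerpt cuts off before the frompyfunc call; a minimal sketch of the usual workaround, assuming the goal is one row of Legendre values per input point (the use of np.stack and the example abscissas are mine, not from the question):

    import numpy as np
    from pyshtools import PLegendre   # import path as written in the question

    lmax = 3
    xs = np.array([0.5, 0.7, 0.9])    # example abscissas (made up)

    # frompyfunc returns an object array whose elements are the per-point
    # result arrays; stacking those elements yields an ordinary 2-D array.
    legendre = np.frompyfunc(lambda x: PLegendre(lmax, x), 1, 1)
    values = np.stack(legendre(xs))          # shape (len(xs), lmax + 1)

    # Equivalent without frompyfunc:
    values_alt = np.array([PLegendre(lmax, x) for x in xs])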

Numpy: Finding minimum and maximum values from associations through binning

限于喜欢 submitted on 2019-12-23 19:53:22
Question: Prerequisite: this is a question derived from this post, so some of the introduction of the problem will be similar to that post. Problem: let's say result is a 2D array and values is a 1D array. values holds some values associated with each element in result. The mapping of an element in values to result is stored in x_mapping and y_mapping. A position in result can be associated with different values. Now, I have to find the minimum and maximum of the values grouped by associations. An
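
A common approach for this kind of grouped reduction is the unbuffered ufunc methods np.minimum.at and np.maximum.at; a minimal sketch with made-up shapes and mappings (none of these numbers come from the question):

    import numpy as np

    values = np.array([5., 2., 7., 1., 9.])
    x_mapping = np.array([0, 0, 1, 2, 2])
    y_mapping = np.array([1, 1, 0, 2, 2])
    result_shape = (3, 3)

    min_out = np.full(result_shape, np.inf)
    max_out = np.full(result_shape, -np.inf)

    # The .at variants apply the ufunc unbuffered, so every value mapped to the
    # same (x, y) position is taken into account, unlike fancy-index assignment.
    np.minimum.at(min_out, (x_mapping, y_mapping), values)
    np.maximum.at(max_out, (x_mapping, y_mapping), values)
    # min_out[0, 1] == 2.0 and max_out[0, 1] == 5.0, the grouped min/max of 5 and 2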

Use numpy.frompyfunc to add broadcasting to a Python function with an argument

♀尐吖头ヾ submitted on 2019-12-19 10:23:03
Question: From an array like db (which will be approximately (1e6, 300)) and a mask = [1, 0, 1] vector, I define the target as a 1 in the first column. I want to create an out vector that consists of ones where the corresponding row in db matches the mask and target == 1, and zeros everywhere else.

    db = np.array([        # out for mask = [1, 0, 1]
        # target, vector
        [1, 1, 0, 1],      # 1
        [0, 1, 1, 1],      # 0 (fits the mask but target == 0)
        [0, 0, 1, 0],      # 0
        [1, 1, 0, 1],      # 1
        [0, 1, 1, 0],      # 0
        [1, 0, 0, 0],      # 0
    ])

I
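
The excerpt stops here; one straightforward way to get the out vector with plain broadcasting rather than frompyfunc (a sketch that assumes "matches the mask" means the vector part equals the mask exactly):

    import numpy as np

    db = np.array([
        [1, 1, 0, 1],
        [0, 1, 1, 1],
        [0, 0, 1, 0],
        [1, 1, 0, 1],
        [0, 1, 1, 0],
        [1, 0, 0, 0],
    ])
    mask = np.array([1, 0, 1])

    target = db[:, 0]          # first column
    vectors = db[:, 1:]        # remaining columns, compared against the mask

    # Row-wise comparison broadcasts mask over all rows at once.
    out = ((vectors == mask).all(axis=1) & (target == 1)).astype(int)
    # out -> array([1, 0, 0, 1, 0, 0])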

What is the default of numpy functions with where=False?

给你一囗甜甜゛ submitted on 2019-12-17 21:16:44
Question: The ufunc documentation states:

    where
    New in version 1.7.
    Accepts a boolean array which is broadcast together with the operands. Values of True indicate to calculate the ufunc at that position, values of False indicate to leave the value in the output alone.

What is the default behavior when out is not given? I observed some behavior which doesn't really make sense to me:

    import numpy as np
    a, b = np.ones((2, 2))
    np.add(a, b, where=False)   # returns 0
    np.exp(a, where=False)      # returns 1
    np.sin
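
For context, when out is omitted the output buffer is freshly allocated and the positions where the condition is False are never written, so (as the NumPy docs note) they remain uninitialized and their contents are arbitrary. Passing out explicitly makes the behavior well defined; a small sketch:

    import numpy as np

    a, b = np.ones((2, 2))          # a and b are each a row of shape (2,)

    # With an explicit, pre-filled out array the False positions are left alone,
    # so the result is fully determined by what out held beforehand.
    out = np.full_like(a, -99.0)
    np.add(a, b, out=out, where=[True, False])
    # out is now array([  2., -99.])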

Equivalent for np.add.at in TensorFlow

荒凉一梦 submitted on 2019-12-13 15:53:35
Question: How do I convert a np.add.at statement into TensorFlow?

    np.add.at(dW, self.x.ravel(), dout.reshape(-1, self.D))

Edit: self.dW.shape is (V, D), dout.shape is (N, D) and self.x.size is N.

Answer 1: For np.add.at, you probably want to look at tf.SparseTensor, which represents a tensor by a list of values and a list of indices (which is more suitable for sparse data, hence the name). So for your example:

    np.add.at(dW, self.x.ravel(), dout.reshape(-1, self.D))

that would be (assuming dW, x and dout
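
The answer is cut off before its code; for what it's worth, recent TensorFlow also has a scatter-add op that accumulates repeated indices much like np.add.at. A sketch with made-up sizes (tf.tensor_scatter_nd_add is my suggestion here, not necessarily what the answer went on to use):

    import tensorflow as tf
    import numpy as np

    V, D, N = 5, 6, 3                       # made-up sizes
    dW = np.zeros((V, D), dtype=np.float32)
    x = np.array([0, 4, 1])                 # N row indices into dW
    dout = np.random.randn(N, D).astype(np.float32)

    # tensor_scatter_nd_add sums the updates for repeated indices, mirroring
    # the unbuffered accumulation that np.add.at performs.
    indices = tf.constant(x.reshape(-1, 1))            # shape (N, 1): row indices
    new_dW = tf.tensor_scatter_nd_add(tf.constant(dW), indices, tf.constant(dout))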

How can numpy.ufunc.reduceat indices be generated from a Python slice object?

﹥>﹥吖頭↗ submitted on 2019-12-12 04:21:55
Question: Say I have a slice like x[p:-q:n] or x[::n]. I want to use this to generate the index array to be passed into numpy.ufunc.reduceat(x, [p, p + n, p + 2 * n, ...]) or numpy.ufunc.reduceat(x, [0, n, 2 * n, ...]). What is the easiest and most efficient way to get this done?

Answer 1: Building on the comments:

    In [351]: x = np.arange(100)
    In [352]: np.r_[0:100:10]
    Out[352]: array([ 0, 10, 20, 30, 40, 50, 60, 70, 80, 90])
    In [353]: np.add.reduceat(x, np.r_[0:100:10])
    Out[353]: array([ 45, 145, 245, 345, 445, 545, 645,
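
If the starting point is an actual slice object rather than np.r_ notation, slice.indices can normalize negative bounds against the array length; a small helper sketch (the function name is mine):

    import numpy as np

    def reduceat_indices(s, length):
        # Normalize a slice (defaults and negative bounds included) into the
        # explicit [start, start + step, start + 2*step, ...] list reduceat expects.
        start, stop, step = s.indices(length)
        return np.arange(start, stop, step)

    x = np.arange(100)
    idx = reduceat_indices(slice(None, None, 10), len(x))   # same as x[::10]
    np.add.reduceat(x, idx)        # block sums: array([ 45, 145, 245, ...])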

Make np.vectorize return a scalar value on scalar input

荒凉一梦 submitted on 2019-12-11 04:44:23
Question: The following code returns an array instead of the expected float value.

    def f(x):
        return x + 1

    f = np.vectorize(f, otypes=[np.float])

    >>> f(10.5)
    array(11.5)

Is there a way to force it to return a simple scalar value if the input is scalar, and not the weird array type? I find it strange that it doesn't do this by default, given that all other ufuncs like np.cos, np.sin etc. do return regular scalars.

Edit: This is the code that works:

    import numpy as np
    import functools

    def as_scalar_if_possible(func):
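
The excerpt cuts off before the body of as_scalar_if_possible; a minimal sketch of one way such a wrapper could work (my own completion under that assumption, not necessarily the original author's code; otypes=[float] is used because np.float is deprecated in recent NumPy):

    import functools
    import numpy as np

    def as_scalar_if_possible(func):
        # Unwrap 0-d ndarray results back into plain Python scalars.
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = func(*args, **kwargs)
            if isinstance(result, np.ndarray) and result.ndim == 0:
                return result.item()
            return result
        return wrapper

    def f(x):
        return x + 1

    f = as_scalar_if_possible(np.vectorize(f, otypes=[float]))

    f(10.5)          # 11.5, a plain Python float
    f([1.0, 2.0])    # array([2., 3.]) as before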

Scaling of time to broadcast an operation on 3D arrays in numpy

狂风中的少年 submitted on 2019-12-10 13:33:30
Question: I am trying to broadcast a simple ">" operation over two 3D arrays. One has dimensions (m, 1, n), the other (1, m, n). If I change the value of the third dimension (n), I would naively expect the speed of the computation to scale as n. However, when I try to measure this explicitly, I find that the computation time increases by about a factor of 10 when increasing n from 1 to 2, after which the scaling is linear. Why does the computation time increase so drastically when going
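
A minimal way to reproduce that kind of measurement (m, the values of n, and the repeat count here are arbitrary choices, not the question's values):

    import numpy as np
    import timeit

    m = 1000
    for n in (1, 2, 4, 8):
        a = np.random.rand(m, 1, n)
        b = np.random.rand(1, m, n)
        # a > b broadcasts to an (m, m, n) boolean result.
        t = timeit.timeit(lambda: a > b, number=20)
        print(f"n={n}: {t:.4f} s")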

Numpy passing input array as `out` argument to ufunc

馋奶兔 submitted on 2019-12-05 12:05:42
Question: Is it generally safe to provide the input array as the optional out argument to a ufunc in numpy, provided the type is correct? For example, I have verified that the following works:

    >>> import numpy as np
    >>> arr = np.array([1.2, 3.4, 4.5])
    >>> np.floor(arr, arr)
    array([ 1.,  3.,  4.])

The array type must be either compatible with or identical to the output type (which is float for numpy.floor()), or this happens:

    >>> arr2 = np.array([1, 3, 4], dtype=np.uint8)
    >>> np.floor(arr2, arr2)
    Traceback
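
For reference, the dtype restriction above can be worked around by making the cast explicit; a sketch (the casting='unsafe' route is my suggestion, not from the question):

    import numpy as np

    arr2 = np.array([1, 3, 4], dtype=np.uint8)

    # Default: a new float64 result array is allocated and returned.
    np.floor(arr2)

    # In place: explicitly allow the float64 -> uint8 cast back into arr2.
    np.floor(arr2, out=arr2, casting='unsafe')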

np.add.at indexing with array

╄→尐↘猪︶ㄣ submitted on 2019-12-03 07:34:13
I'm working on cs231n and I'm having a difficult time understanding how this indexing works. Given that

    x = [[0, 4, 1], [3, 2, 4]]
    dW = np.zeros((5, 6))
    dout = [[[  1.19034710e-01  -4.65005990e-01   8.93743168e-01  -9.78047129e-01
               -8.88672957e-01  -4.66605091e-01]
             [ -1.38617461e-03  -2.64569728e-01  -3.83712733e-01  -2.61360826e-01
                8.07072009e-01  -5.47607277e-01]
             [ -3.97087458e-01  -4.25187949e-02   2.57931759e-01   7.49565950e-01
                1.37707667e+00   1.77392240e+00]]
            [[ -1.20692745e+00  -8.28111550e-01   6.53041092e-01  -2.31247762e+00
               -1.72370321e+00   2.44308033e+00]
             [ -1.45191870e+00  -3.49328154e-01   6.15445782e-01  -2
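
A smaller sketch of what np.add.at does with a row-index array, using shapes consistent with the snippet above (the concrete dout values are made up):

    import numpy as np

    dW = np.zeros((5, 6))
    x = np.array([[0, 4, 1],
                  [3, 2, 4]])
    dout = np.arange(6 * 6, dtype=float).reshape(2, 3, 6)

    # x.ravel() gives one row index of dW per row of dout.reshape(-1, 6);
    # each such row of dout is added to the selected row of dW. Because the
    # addition is unbuffered, the repeated index 4 accumulates both of its rows.
    np.add.at(dW, x.ravel(), dout.reshape(-1, 6))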