numpy meshgrid operations problems

Asked by 礼貌的吻别 on 2021-01-15 14:47
Y, X = np.mgrid[-3:-3:10j, -3:3:10j]

I've noticed that when applying certain operations to meshgrids like the one above I get an error because the

1 Answer
  • 2021-01-15 15:32

    Your question (with the follow-on comment) can be taken at least two different ways:

    1. You have a function of multiple arguments, and you would like to be able to call that function in a manner that is syntactically similar to the broadcasted calls supported natively by numpy. Performance is not the issue, just the calling syntax of the function.

    2. You have a function of multiple arguments that is to be evaluated on a sequence of numpy arrays, but the function is not implemented in such a manner that it can exploit the contiguous memory layout of numpy arrays. Performance is the issue; you would be happy to loop over the numpy arrays and call your function in a boring, plain old for-loop style, except that doing so is too slow.

    For item 1. there is a convenience function provided by numpy called vectorize which takes a regular callable and returns a callable that can be called with numpy arrays as the arguments and will obey numpy's broadcasting rules.
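    As a quick illustration of that point (the function name `bigger` here is my own toy example, not from the question): a function containing a plain Python `if` would raise "the truth value of an array is ambiguous" if called on arrays directly, but after wrapping it with vectorize it follows numpy's broadcasting rules:

```python
import numpy as np

# toy scalar function -- plain-Python branching like this fails
# if you pass whole arrays in directly
def bigger(x, y):
    return x if x > y else y

vbigger = np.vectorize(bigger)

# broadcasting works: a (3, 1) column against a (4,) row gives a (3, 4) result
col = np.arange(3).reshape(3, 1)
row = np.arange(4)
print(vbigger(col, row).shape)  # (3, 4)
```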

    Consider this contrived example:

    def my_func(x, y):
        return x + 2*y
    

    Now suppose I need to evaluate this function everywhere in a 2-D grid. Here is the plain old boring way:

    import numpy as np
    
    Y, X = np.mgrid[0:10:1, 0:10:1]
    Z = np.zeros_like(Y)
    
    for i in range(Y.shape[0]):
        for j in range(Y.shape[1]):
            Z[i, j] = my_func(X[i, j], Y[i, j])
    

    If we had a few different functions like my_func, it might be nice to generalize this process into a function that "mapped" a given function over the 2-D arrays.

    import itertools
    
    def array_map(some_func, *arg_arrays):
        output = np.zeros_like(arg_arrays[0])
        # one range per axis; unpack into product to visit every index tuple
        coordinates = map(range, output.shape)
        for coord in itertools.product(*coordinates):
            args = [arg_array[coord] for arg_array in arg_arrays]
            output[coord] = some_func(*args)
        return output
    

    Now we can see that array_map(my_func, X, Y) acts just like the nested for-loop:

    In [451]: array_map(my_func, X, Y)
    Out[451]: 
    array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
           [ 2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
           [ 4,  5,  6,  7,  8,  9, 10, 11, 12, 13],
           [ 6,  7,  8,  9, 10, 11, 12, 13, 14, 15],
           [ 8,  9, 10, 11, 12, 13, 14, 15, 16, 17],
           [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
           [12, 13, 14, 15, 16, 17, 18, 19, 20, 21],
           [14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
           [16, 17, 18, 19, 20, 21, 22, 23, 24, 25],
           [18, 19, 20, 21, 22, 23, 24, 25, 26, 27]])
    

    Now, wouldn't it be nice if we could call array_map(my_func) and leave off the extra array arguments, getting back a new function that is just waiting to do the required for-loops?

    We can do this with functools.partial -- so we can write a handy little vectorizer like this:

    import functools
    def vectorizer(regular_function):
        awesome_function = functools.partial(array_map, regular_function)
        return awesome_function
    

    and testing it out:

    In [453]: my_awesome_func = vectorizer(my_func)
    
    In [454]: my_awesome_func(X, Y)
    Out[454]: 
    array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
           [ 2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
           [ 4,  5,  6,  7,  8,  9, 10, 11, 12, 13],
           [ 6,  7,  8,  9, 10, 11, 12, 13, 14, 15],
           [ 8,  9, 10, 11, 12, 13, 14, 15, 16, 17],
           [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
           [12, 13, 14, 15, 16, 17, 18, 19, 20, 21],
           [14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
           [16, 17, 18, 19, 20, 21, 22, 23, 24, 25],
           [18, 19, 20, 21, 22, 23, 24, 25, 26, 27]])
    

    Now my_awesome_func behaves as if it can be called directly on ndarrays!

    I've glossed over many details (performance, bounds checking, output dtypes, etc.) while making this toy vectorizer ... but luckily numpy already provides vectorize, which does just this!

    In [455]: my_vectorize_func = np.vectorize(my_func)
    
    In [456]: my_vectorize_func(X, Y)
    Out[456]: 
    array([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9],
           [ 2,  3,  4,  5,  6,  7,  8,  9, 10, 11],
           [ 4,  5,  6,  7,  8,  9, 10, 11, 12, 13],
           [ 6,  7,  8,  9, 10, 11, 12, 13, 14, 15],
           [ 8,  9, 10, 11, 12, 13, 14, 15, 16, 17],
           [10, 11, 12, 13, 14, 15, 16, 17, 18, 19],
           [12, 13, 14, 15, 16, 17, 18, 19, 20, 21],
           [14, 15, 16, 17, 18, 19, 20, 21, 22, 23],
           [16, 17, 18, 19, 20, 21, 22, 23, 24, 25],
           [18, 19, 20, 21, 22, 23, 24, 25, 26, 27]])
    

    Once again, as stressed in my earlier comments to the OP and in the documentation for vectorize, this is not a speed optimization. In fact, the extra function-call overhead can make it slower in some cases than just writing a for-loop directly. But when speed is not the problem, this method lets your custom functions adhere to the same calling conventions as numpy, which improves the uniformity of your library's interface and makes the code more consistent and readable.
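    To make that concrete, here is a rough, machine-dependent sanity check (the array size and repeat count are arbitrary choices of mine; only the relative ordering matters):

```python
import timeit
import numpy as np

def my_func(x, y):
    return x + 2 * y

vfunc = np.vectorize(my_func)
X = np.random.rand(200, 200)
Y = np.random.rand(200, 200)

# vectorize still calls my_func once per element in a Python-level loop...
t_vec = timeit.timeit(lambda: vfunc(X, Y), number=5)
# ...while the same expression broadcasts natively over whole arrays
t_native = timeit.timeit(lambda: my_func(X, Y), number=5)
print(f"vectorize: {t_vec:.4f}s  native: {t_native:.4f}s")
```

    On any typical machine the vectorized wrapper is orders of magnitude slower than the native broadcast, even though both return identical results.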

    A whole lot has already been written about item 2. If your problem is that you need to optimize your functions to exploit contiguous blocks of memory and bypass repeated dynamic type checking (the main features numpy arrays add over Python lists), then here are a few links you may find helpful:

    1. < http://pandas.pydata.org/pandas-docs/stable/enhancingperf.html >
    2. < http://csl.name/C-functions-from-Python/ >
    3. < https://jakevdp.github.io/blog/2014/05/09/why-python-is-slow >
    4. < https://nbviewer.ipython.org/url/jakevdp.github.io/downloads/notebooks/NumbaCython.ipynb >
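    Before reaching for Cython or Numba, though, the simplest fix for item 2 is often to rewrite the function itself in whole-array numpy operations. The usual obstacle is per-element branching, which np.where handles directly (the functions `stepped` and `stepped_arrays` below are hypothetical examples of mine, not from the question):

```python
import numpy as np

# hypothetical scalar function with a branch -- the usual obstacle
# to writing it as a single array expression
def stepped(x, y):
    return x + 2 * y if x > 0 else y - x

# whole-array rewrite: np.where evaluates both branches on the full
# arrays, then selects elementwise -- no Python-level loop remains
def stepped_arrays(x, y):
    return np.where(x > 0, x + 2 * y, y - x)

X = np.linspace(-1, 1, 5)
Y = np.ones(5)
print(stepped_arrays(X, Y))
```

    The rewritten version stays inside numpy's compiled loops, so it gets the contiguous-memory benefits without any extra tooling.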