Question
I'm using numpy v1.18.2 in some simulations, and I rely on built-in functions such as np.unique, np.diff and np.interp. I'm calling these functions on standard objects, i.e. lists or numpy arrays.

When I profiled with cProfile, I saw that these functions make a call to a built-in method, numpy.core._multiarray_umath.implement_array_function, and that this method accounts for 32.5% of my runtime! To my understanding, this is a wrapper that performs some checks to make sure that the arguments passed to the function are compatible with it.
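For context, here is a minimal sketch of the kind of profiling run that produces this output. The array sizes and the toy simulate() function are made up for illustration; my real code is more involved, but it makes many small calls to these numpy functions in the same way:

```python
import cProfile
import pstats
import numpy as np

def simulate():
    # Toy stand-in for my simulation loop: many small calls to the
    # numpy functions in question.
    rng = np.random.default_rng(0)
    for _ in range(10_000):
        a = rng.integers(0, 50, size=100)
        u = np.unique(a)           # shows up under implement_array_function
        d = np.diff(u)             # same wrapper appears here
        np.interp(2.5, u[:-1], d)  # and here

cProfile.run("simulate()", "prof.out")
pstats.Stats("prof.out").sort_stats("cumulative").print_stats(10)
```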
I have two questions:
- Is this function (implement_array_function) actually taking up so much time, or is it the operations I'm doing (np.unique, np.diff, np.interp) that are actually taking up all this time? That is, am I misinterpreting the cProfile output? I was confused by the hierarchical output of snakeviz. Please see the snakeviz output here and details for the function here. (See the timing sketch after this list for how I could check this.)
- Is there any way to disable it or bypass it? The inputs need not be checked each time, since the arguments I pass to these numpy functions are already controlled in my code. I am hoping that this will give me a performance improvement.
I already saw this question (what is numpy.core._multiarray_umath.implement_array_function and why it costs lots of time?), but I was not able to understand what exactly the function is or does. I also tried to read NEP 18, but couldn't work out how exactly to solve the issue. Please fill in any gaps in my knowledge and correct any misunderstandings. I'd also appreciate it if someone could explain this to me like I'm 5 (r/explainlikeimfive/) instead of assuming developer-level knowledge of Python.
Source: https://stackoverflow.com/questions/61983372/is-built-in-method-numpy-core-multiarray-umath-implement-array-function-a-per