I have been working with Python and set up the following benchmark:
import timeit
setting = """
import functools
def f(a,b,c):
    pass
g = functools.partial(f,c=3)
h = functools.partial(f,b=5,c=3)
i = functools.partial(f,a=4,b=5,c=3)
"""
print timeit.timeit('f(4,5,3)', setup = setting, number=100000)
print timeit.timeit('g(4,5)', setup = setting, number=100000)
print timeit.timeit('h(4)', setup = setting, number=100000)
print timeit.timeit('i()', setup = setting, number=100000)
I get the following as a result:
f: 0.181384086609
g: 0.39066195488
h: 0.425783157349
i: 0.391901016235
Why do the calls to the partial functions take longer? Is the partial function just forwarding the parameters to the original function or is it mapping the static arguments throughout? And also, is there a function in Python to return the body of a function filled in given that all the parameters are predefined, like with function i?
Why do the calls to the partial functions take longer?
The code with partial takes about twice as long because of the additional function call. Function calls are expensive:
Function call overhead in Python is relatively high, especially compared with the execution speed of a builtin function.
Is the partial function just forwarding the parameters to the original function or is it mapping the static arguments throughout?
As far as I know, yes: it just forwards the arguments to the original function.
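You can see this by inspecting the partial object itself: it keeps a reference to the original function plus the frozen arguments, and builds the full argument list on every call. A small sketch (the function f here is just for illustration):

```python
import functools

def f(a, b, c):
    return (a, b, c)

# The partial object stores the wrapped function and the frozen
# arguments; nothing is baked into f itself.
g = functools.partial(f, c=3)

print(g.func is f)   # True: the wrapped function is f itself
print(g.keywords)    # {'c': 3}
print(g(4, 5))       # (4, 5, 3) -- forwarded as f(4, 5, c=3)
```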
And also, is there a function in Python to return the body of a function filled in given that all the parameters are predefined, like with function i?
No, I am not aware of such a built-in function in Python. But I think it's possible to do what you want, since functions are objects that can be copied and modified.
Here is a prototype:
import timeit
import types
# http://stackoverflow.com/questions/6527633/how-can-i-make-a-deepcopy-of-a-function-in-python
def copy_func(f, name=None):
    return types.FunctionType(f.func_code, f.func_globals, name or f.func_name,
                              f.func_defaults, f.func_closure)

def f(a, b, c):
    return a + b + c
i = copy_func(f, 'i')
i.func_defaults = (4, 5, 3)
print timeit.timeit('f(4,5,3)', setup = 'from __main__ import f', number=100000)
print timeit.timeit('i()', setup = 'from __main__ import i', number=100000)
which gives:
0.0257439613342
0.0221881866455
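Note that the prototype above uses the Python 2 attribute names (func_code, func_defaults, and so on). On Python 3 these were renamed to dunder attributes, so the same idea would look like this (a sketch, assuming Python 3):

```python
import types

def copy_func(f, name=None):
    # Python 3 renamed func_code -> __code__, func_globals -> __globals__,
    # func_name -> __name__, func_defaults -> __defaults__,
    # func_closure -> __closure__.
    return types.FunctionType(f.__code__, f.__globals__, name or f.__name__,
                              f.__defaults__, f.__closure__)

def f(a, b, c):
    return a + b + c

i = copy_func(f, 'i')
i.__defaults__ = (4, 5, 3)   # pre-fill all parameters via defaults
print(i())   # 12
```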
Calls to a function with partially applied arguments are more expensive because you double the number of function calls. The effect of functools.partial() is similar to this example:
def apply_one_of_two(f, a):
    def g(b):
        return f(a, b)
    return g
That means apply_one_of_two() returns a function, and when that function is called it results in an additional call to the original function f.
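A short usage sketch of the helper above (the add function and names are only for illustration):

```python
def apply_one_of_two(f, a):
    def g(b):
        return f(a, b)
    return g

def add(x, y):
    return x + y

add_four = apply_one_of_two(add, 4)
# Each add_four(b) call is really two calls: g(b), which then
# calls add(4, b) -- the same doubling that partial introduces.
print(add_four(5))   # 9
```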
Since Python usually doesn't optimize this away, it translates directly into additional runtime overhead.
But this isn't the only factor to consider in your microbenchmark. You also switch from positional to keyword arguments in your partial invocations, which introduces additional overhead.
When you reverse the argument ordering in your original function you don't need keyword arguments in the partial calls and then the runtime difference somewhat decreases, e.g.:
import timeit
setting = """
import functools
def f(a,b,c):
    pass
g = functools.partial(f, 4)
h = functools.partial(f, 4, 5)
i = functools.partial(f, 4, 5, 3)
"""
print(timeit.timeit('f(4, 5, 3)', setup = setting, number=100000))
print(timeit.timeit('g(5, 3)', setup = setting, number=100000))
print(timeit.timeit('h(3)', setup = setting, number=100000))
print(timeit.timeit('i()', setup = setting, number=100000))
Output (on an Intel Skylake i7 under Fedora 27/Python 3.6):
0.010069019044749439
0.01681053702486679
0.018060395028442144
0.011366961000021547
Source: https://stackoverflow.com/questions/17388438/python-functools-partial-efficiency