I have been working with Python and I set up the following code situation:
import timeit
setting = """
import functools
def f(a,b,c):
    pass
g = functools.partial(f, c=3)
h = functools.partial(f, b=5, c=3)
i = functools.partial(f, a=4, b=5, c=3)
"""
Calls to a function with partially applied arguments are more expensive because you double the number of function calls. The effect of functools.partial() is similar to this example:
def apply_one_of_two(f, a):
    def g(b):
        return f(a, b)
    return g
That means that apply_one_of_two() returns a function, and each call of that function results in an additional call of the original function f. Since Python usually doesn't optimize this away, the extra indirection translates directly into additional runtime overhead.
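To see the two call layers concretely, here is a minimal sketch (the add function and variable names are made up for illustration) that puts apply_one_of_two() next to its functools.partial() equivalent:

import functools

def apply_one_of_two(f, a):
    def g(b):
        return f(a, b)
    return g

def add(a, b):  # illustrative stand-in for f
    return a + b

inc_closure = apply_one_of_two(add, 1)   # closure-based partial application
inc_partial = functools.partial(add, 1)  # library equivalent

# Both route through one extra call layer before add() actually runs:
print(inc_closure(2))  # 3
print(inc_partial(2))  # 3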
But this isn't the only factor to consider in your microbenchmark. You also switch from positional to keyword arguments in your partial invocations, which introduces additional overhead.
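You can isolate that second factor with a sketch along these lines (same no-op f as above; the exact numbers will vary by machine), timing the identical call once with positional and once with keyword arguments:

import timeit

setting = """
def f(a, b, c):
    pass
"""

# Same function, same arguments: only the calling convention differs.
print(timeit.timeit('f(4, 5, 3)', setup=setting, number=100000))
print(timeit.timeit('f(a=4, b=5, c=3)', setup=setting, number=100000))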
When you reverse the argument ordering in your original function, you don't need keyword arguments in the partial calls, and the runtime difference decreases somewhat, e.g.:
import timeit
setting = """
import functools
def f(a,b,c):
    pass
g = functools.partial(f, 4)
h = functools.partial(f, 4, 5)
i = functools.partial(f, 4, 5, 3)
"""
print(timeit.timeit('f(4, 5, 3)', setup = setting, number=100000))
print(timeit.timeit('g(5, 3)', setup = setting, number=100000))
print(timeit.timeit('h(3)', setup = setting, number=100000))
print(timeit.timeit('i()', setup = setting, number=100000))
Output (on an Intel Skylake i7 under Fedora 27/Python 3.6):
0.010069019044749439
0.01681053702486679
0.018060395028442144
0.011366961000021547