Consider the following list comprehension:
[ (x, f(x)) for x in iterable if f(x) ]
This filters the iterable based on a condition f, and also pairs each element with f(x). There is no where statement, but you can "emulate" it using an extra for:
a = [0]
def f(x):
    a[0] += 1
    return 2*x
print [ (x, y) for x in range(5) for y in [f(x)] if y != 2 ]
print "The function was executed %s times" % a[0]
Execution:
$ python 2.py
[(0, 0), (2, 4), (3, 6), (4, 8)]
The function was executed 5 times
As you can see, the function is executed 5 times, not 10 or 9.
This for construction:
for y in [f(x)]
imitates a where clause.
You seek to have let-statement semantics in python list comprehensions, whose scope is available to both the ___ for..in (map) and the if ___ (filter) parts of the comprehension, and whose scope depends on the ..for ___ in....
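To make the cost concrete, here is a minimal sketch (the call counter and the stand-in f are assumptions for illustration) showing that the naive comprehension evaluates f once in the filter and again in the map:

```python
calls = [0]

def f(x):
    calls[0] += 1   # count every invocation
    return 2 * x    # stand-in for an expensive function; falsy for x == 0

result = [(x, f(x)) for x in range(5) if f(x)]
print(result)    # [(1, 2), (2, 4), (3, 6), (4, 8)]
print(calls[0])  # 9: 5 filter calls plus 4 map calls for the passing elements
```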
Your solution, modified:
Your (as you admit unreadable) solution of [ (x,fx) for x,fx in ( (y,f(y)) for y in iterable ) if fx ]
is the most straightforward way to write the optimization.
Main idea: lift x into the tuple (x,f(x)).
Some would argue the most "pythonic" way to do things would be the original [(x,f(x)) for x in iterable if f(x)]
and accept the inefficiencies.
You can however factor out the ((y,fy) for y in iterable)
into a function, if you plan to do this a lot. This is bad because if you ever wish to have access to more variables than x,fx
(e.g. x,fx,ffx
), then you will need to rewrite all your list comprehensions. Therefore this isn't a great solution unless you know for sure you only need x,fx
and plan to reuse this pattern.
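As a sketch, the factored-out generator might look like this (the helper name pairs_with_f and the stand-in f are assumptions, not names from the question):

```python
def f(x):
    return 2 * x  # stand-in for the real, expensive function

def pairs_with_f(iterable):
    # hypothetical helper: lazily yields (y, f(y)), so f runs once per element
    return ((y, f(y)) for y in iterable)

result = [(x, fx) for x, fx in pairs_with_f(range(5)) if fx]
print(result)  # [(1, 2), (2, 4), (3, 6), (4, 8)]
```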
Generator expression:
Main idea: use a more complicated alternative to generator expressions: one where python will let you write multiple lines.
You could just use a generator expression, which python plays nicely with:
def xfx(iterable):
    for x in iterable:
        fx = f(x)
        if fx:
            yield (x, fx)
xfx(exampleIterable)
This is how I would personally do it.
Memoization/caching:
Main idea: You could also use (abuse?) side-effects and make f have a global memoization cache, so you don't repeat operations.
This can have a bit of overhead, and requires a policy of how large the cache should be and when it should be garbage-collected. Thus this should only be used if you'd have other uses for memoizing f, or if f is very expensive. But it would let you write...
[ (x,f(x)) for x in iterable if f(x) ]
...like you originally wanted without the performance hit of doing the expensive operations in f
twice, even if you technically call it twice. You can add a @memoized decorator to f (for example, one without a maximum cache size). This will work as long as x is hashable (e.g. a number, a tuple, a frozenset, etc.).
Dummy values:
Main idea: capture fx=f(x) in a closure and modify the behavior of the list comprehension.
filterTrue(
    (lambda fx=f(x): (x, fx) if fx else None)() for x in iterable
)
where filterTrue(iterable) is filter(None, iterable). You would have to modify this if your list type (a 2-tuple) was actually capable of being None.
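Put together, the dummy-value pattern runs like this (f and the input range are stand-ins). The lambda's default argument fx=f(x) is evaluated once, when the lambda is created, so f runs once per element:

```python
def f(x):
    return 2 * x  # stand-in for the expensive function

def filterTrue(iterable):
    return filter(None, iterable)  # drops falsy items, i.e. the None placeholders

iterable = range(5)
result = list(filterTrue(
    (lambda fx=f(x): (x, fx) if fx else None)() for x in iterable
))
print(result)  # [(1, 2), (2, 4), (3, 6), (4, 8)]
```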
Nothing says you must use comprehensions. In fact most style guides I've seen request that you limit them to simple constructs, anyway.
You could use a generator expression, instead.
def fun(iterable):
    for x in iterable:
        y = f(x)
        if y:
            yield x, y
print list(fun(iterable))
Map and Zip?
fnRes = map(f, iterable)
[(x, fx) for x, fx in zip(iterable, fnRes) if fx]
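A runnable version of the map/zip idea (f is a stand-in; note that in Python 3 map is lazy, so zip consumes it element by element and f still runs only once per item):

```python
def f(x):
    return 2 * x  # stand-in for the expensive function

iterable = range(5)
fnRes = map(f, iterable)  # lazy iterator in Python 3; a plain list in Python 2
result = [(x, fx) for x, fx in zip(iterable, fnRes) if fx]
print(result)  # [(1, 2), (2, 4), (3, 6), (4, 8)]
```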