Question
I'm doing a little deep learning, and I want to grab the values of all hidden layers. So I end up writing functions like this:
def forward_pass(x, ws, bs):
    activations = []
    u = x
    for w, b in zip(ws, bs):
        u = np.maximum(0, u.dot(w)+b)
        activations.append(u)
    return activations
If I didn't have to get the intermediate values, I'd use the much less verbose form:
out = reduce(lambda u, (w, b): np.maximum(0, u.dot(w)+b), zip(ws, bs), x)
Bam. All one line, nice and compact. But I can't keep any of the intermediate values.
So, is there any way to have my cake (a nice compact one-liner) and eat it too (keep the intermediate values)?
Answer 1:
In general, itertools.accumulate() will do what reduce() does but will give you the intermediate values as well. That said, accumulate does not support a start value, so it may not be applicable in your case.
Example:
>>> import operator, functools, itertools
>>> functools.reduce(operator.mul, range(1, 11))
3628800
>>> list(itertools.accumulate(range(1, 11), operator.mul))
[1, 2, 6, 24, 120, 720, 5040, 40320, 362880, 3628800]
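A side note: since Python 3.8, itertools.accumulate does accept an initial= keyword, which removes that limitation. A sketch applying it to the question's forward pass (assuming the x, ws and bs from the question):

import itertools
import numpy as np

def forward_pass_acc(x, ws, bs):
    # Python 3.8+: seed accumulate with x, then drop that seed from the output
    steps = itertools.accumulate(zip(ws, bs),
                                 lambda u, wb: np.maximum(0, u.dot(wb[0]) + wb[1]),
                                 initial=x)
    return list(steps)[1:]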
Answer 2:
The dot tells me you are using one or more numpy arrays. So I'll try:
In [28]: bs=np.array([1,2,3])
In [29]: x=np.arange(9).reshape(3,3)
In [30]: ws=[x,x,x]
In [31]: forward_pass(x,ws,bs)
Out[31]:
[array([[ 16, 19, 22],
[ 43, 55, 67],
[ 70, 91, 112]]),
array([[ 191, 248, 305],
[ 569, 734, 899],
[ 947, 1220, 1493]]),
array([[ 2577, 3321, 4065],
[ 7599, 9801, 12003],
[12621, 16281, 19941]])]
In py3 I have to write the reduce solution as:
In [32]: functools.reduce(lambda u, wb: np.maximum(0, u.dot(wb[0])+wb[1]),
    ...:                  zip(ws, bs), x)
Out[32]:
array([[ 2577, 3321, 4065],
[ 7599, 9801, 12003],
[12621, 16281, 19941]])
That intermediate value u that is passed from one evaluation to the next makes a list comprehension tricky.
accumulate uses the first item as the start. I can work around that with a function like
def foo(u, wb):
    if u[0] is None: u = x   # x from global
    return np.maximum(0, u.dot(wb[0])+wb[1])
Then I need to add extra start values to ws and bs:
In [56]: list(itertools.accumulate(zip([None,x,x,x], np.array([0,1,2,3])), foo))
Out[56]:
[(None, 0),
array([[ 16, 19, 22],
[ 43, 55, 67],
[ 70, 91, 112]]),
array([[ 191, 248, 305],
[ 569, 734, 899],
[ 947, 1220, 1493]]),
array([[ 2577, 3321, 4065],
[ 7599, 9801, 12003],
[12621, 16281, 19941]])]
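(To get only the activations, that leading (None, 0) sentinel would be sliced off, e.g. something like list(itertools.accumulate(zip([None,x,x,x], np.array([0,1,2,3])), foo))[1:].)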
Here's a list comprehension version, using an external u:
In [66]: u=x.copy()
In [67]: def foo1(wb):
    ...:     v = np.maximum(0, u.dot(wb[0])+wb[1])
    ...:     u[:] = v
    ...:     return v
    ...:
In [68]: [foo1(wb) for wb in zip(ws,bs)]
Out[68]:
[array([[ 16, 19, 22],
[ 43, 55, 67],
[ 70, 91, 112]]),
array([[ 191, 248, 305],
[ 569, 734, 899],
[ 947, 1220, 1493]]),
array([[ 2577, 3321, 4065],
[ 7599, 9801, 12003],
[12621, 16281, 19941]])]
No real advantage over the original loop with append.
numpy.ufuncs have an accumulate method, but that isn't easy to use with custom Python functions. So there is np.maximum.accumulate, but I'm not sure how that could be used in this case. (There is also np.cumsum, which is np.add.accumulate.)
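For reference, a minimal sketch of what a ufunc's accumulate does on a plain 1-D array (it gives running results for a single ufunc, which is why it can't express the dot-plus-bias-plus-maximum step here):

import numpy as np

a = np.array([3, -1, 4, 1, -5, 9])
np.maximum.accumulate(a)   # running maximum: array([3, 3, 4, 4, 4, 9])
np.add.accumulate(a)       # same as np.cumsum(a): array([ 3,  2,  6,  7,  2, 11])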
Answer 3:
In Python 2.x, there is no clean one-liner for this.
In Python 3, there is itertools.accumulate, but it is still not really clean because it doesn't accept an "initial" input, as reduce does.
Here is a function that, while not as nice as a built-in comprehension syntax, does the job.
def reducemap(func, sequence, initial=None, include_zeroth=False):
    """
    A version of reduce that also returns the intermediate values.
    :param func: A function of the form x_i_plus_1 = f(x_i, params_i)
        Where:
            x_i is the value passed through the reduce.
            params_i is the i'th element of sequence.
            x_i_plus_1 is the value that will be passed to the next step.
    :param sequence: A list of parameters to feed at each step of the reduce.
    :param initial: Optionally, an initial value (else the first element of the sequence will be taken as the initial).
    :param include_zeroth: Include the initial value in the returned list.
    :return: A list of length len(sequence) (or len(sequence)+1 if include_zeroth is True) containing the computed result of each iteration.
    """
    if initial is None:
        val = sequence[0]
        sequence = sequence[1:]
    else:
        val = initial
    results = [val] if include_zeroth else []
    for s in sequence:
        val = func(val, s)
        results.append(val)
    return results
Tests:
assert reducemap(lambda a, b: a+b, [1, 2, -4, 3, 6, -7], initial=0) == [1, 3, -1, 2, 8, 1]
assert reducemap(lambda a, b: a+b, [1, 2, -4, 3, 6, -7]) == [3, -1, 2, 8, 1]
assert reducemap(lambda a, b: a+b, [1, 2, -4, 3, 6, -7], include_zeroth=True) == [1, 3, -1, 2, 8, 1]
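Applied to the question's forward pass, a usage sketch (assuming np, x, ws and bs as in the question; zip is materialized into a list because reducemap may slice the sequence):

activations = reducemap(lambda u, wb: np.maximum(0, u.dot(wb[0]) + wb[1]),
                        list(zip(ws, bs)), initial=x)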
Answer 4:
You can actually do this using the somewhat weird pattern result = [y for y in [initial] for x in inputs for y in [f(x, y)]]. Note that the first and third for are not really loops but assignments - we can use for var in [value] in a comprehension to assign value to var. For example:
def forward_pass(x, ws, bs):
    activations = []
    u = x
    for w, b in zip(ws, bs):
        u = np.maximum(0, u.dot(w)+b)
        activations.append(u)
    return activations
Would be equivalent to:
def forward_pass(x, ws, bs):
    return [u for u in [x] for w, b in zip(ws, bs) for u in [np.maximum(0, u.dot(w)+b)]]
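As a quick sanity check of the pattern itself (plain numbers, no numpy): running sums of [1, 2, 3, 4], seeded with 0:

sums = [y for y in [0] for x in [1, 2, 3, 4] for y in [y + x]]
assert sums == [1, 3, 6, 10]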
Python 3.8+:
Python 3.8 introduces the "walrus" operator :=, which gives us another option:
def forward_pass(x, ws, bs):
    u = x
    return [u := np.maximum(0, u.dot(w)+b) for w, b in zip(ws, bs)]
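The same running-sum sanity check with the walrus operator (Python 3.8+):

total = 0
sums = [total := total + x for x in [1, 2, 3, 4]]
assert sums == [1, 3, 6, 10]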
Source: https://stackoverflow.com/questions/39688133/cleanest-way-to-combine-reduce-and-map-in-python