Trying to understand code that computes the gradient w.r.t. the input for LogSoftMax in Torch

Submitted by 空扰寡人 on 2019-12-10 12:02:18

Question


Code comes from: https://github.com/torch/nn/blob/master/lib/THNN/generic/LogSoftMax.c

I don't see how this code is computing the gradient w.r.t. the input for the module LogSoftMax. What I'm confused about is what the two for loops are doing.

for (t = 0; t < nframe; t++)
{
  sum = 0;
  gradInput_data  = gradInput_data0  + dim*t;
  output_data     = output_data0     + dim*t;
  gradOutput_data = gradOutput_data0 + dim*t;

  /* first loop: accumulate the sum of gradOutput over this frame */
  for (d = 0; d < dim; d++)
    sum += gradOutput_data[d];

  /* second loop: gradInput[d] = gradOutput[d] - exp(output[d]) * sum */
  for (d = 0; d < dim; d++)
    gradInput_data[d] = gradOutput_data[d] - exp(output_data[d])*sum;
}

Answer 1:


At forward time we have (with x = input vector, y = output vector, f = logsoftmax, i = i-th component):

yi = f(xi)
   = log( exp(xi) / sum_j(exp(xj)) )
   = xi - log( sum_j(exp(xj)) )
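
As a side note (not in the original answer), a minimal C sketch of this forward pass could look as follows; the helper name logsoftmax_forward is hypothetical, and the max shift is just the usual numerical-stability trick:

#include <math.h>

/* Hypothetical helper: y[i] = x[i] - log( sum_j exp(x[j]) ),
   computed with a max shift so the exponentials do not overflow. */
static void logsoftmax_forward(const double *x, double *y, int dim)
{
  double max = x[0];
  for (int d = 1; d < dim; d++)
    if (x[d] > max) max = x[d];

  double sum = 0.0;
  for (int d = 0; d < dim; d++)
    sum += exp(x[d] - max);

  double logsum = max + log(sum);   /* log( sum_j exp(x[j]) ) */
  for (int d = 0; d < dim; d++)
    y[d] = x[d] - logsum;
}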

When computing the Jacobian Jf of f, you have (i-th row):

dyi/dxi = 1 - exp(xi) / sum_j(exp(xj))

And for k different from i:

dyi/dxk = - exp(xk) / sum_j(exp(xj))

This gives for Jf:

1-E(x1)     -E(x2)     -E(x3)    ...
 -E(x1)    1-E(x2)     -E(x3)    ...
 -E(x1)     -E(x2)    1-E(x3)    ...
...

With E(xi) = exp(xi) / sum_j(exp(xj))

If we call gradInput the gradient w.r.t. the input and gradOutput the gradient w.r.t. the output, backpropagation gives (chain rule):

gradInputi = sum_j( gradOutputj . dyj/dxi )

This is equivalent to:

gradInput = transpose(Jf) . gradOutput

Which finally gives for the i-th component:

gradInputi = gradOutputi . (1 - E(xi)) - E(xi) . sum_{j != i}( gradOutputj )
           = gradOutputi - E(xi) . sum_j( gradOutputj )

So the first loop pre-computes sum_j( gradOutputj ) and the second loop computes the above term, i.e. the i-th component of the gradient w.r.t. the input - except that a 1 / sum_j(exp(xj)) factor seems to be missing from the exponential term in the Torch implementation (the above calculus should probably be double-checked, even though it sounds correct and explains the current implementation).
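
To double check that the two loops indeed compute transpose(Jf) . gradOutput, here is a naive reference sketch (not from the answer) that builds the Jacobian entries explicitly; the Torch loops give the same result in O(dim) time without materializing the matrix:

#include <math.h>

/* Naive reference for a single frame: form E(xi) = exp(xi) / sum_k exp(xk),
   build the entries dyj/dxi of Jf on the fly, and accumulate
   gradInput = transpose(Jf) . gradOutput. */
static void logsoftmax_backward_naive(const double *x, const double *gradOutput,
                                      double *gradInput, int dim)
{
  double Z = 0.0;
  for (int k = 0; k < dim; k++)
    Z += exp(x[k]);

  for (int i = 0; i < dim; i++) {
    gradInput[i] = 0.0;
    for (int j = 0; j < dim; j++) {
      /* dyj/dxi = (1 if j == i else 0) - E(xi) */
      double Jji = ((j == i) ? 1.0 : 0.0) - exp(x[i]) / Z;
      gradInput[i] += gradOutput[j] * Jji;
    }
  }
}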

UPDATE: there is no problem with the missing 1 / sum_j(exp(xj)) term. Since the Jacobian is computed on the output value, and since this previously computed output is precisely a log-softmax distribution, it follows that the sum-exp of this distribution is 1:

sum_j(exp(outputj)) = sum_j( exp( log( exp(inputj) / sum_k(exp(inputk)) ) ) )
                    = sum_j( exp(inputj) / sum_k(exp(inputk)) )
                    = 1

So there is no need to make this term explicit in the implementation, which gives (substituting the output for x):

gradInputi = gradOutputi - exp(outputi) . sum_j( gradOutputj )
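
As a quick sanity check (again not part of the answer), one can compare this closed form against a finite-difference gradient of L = sum_j( gradOutputj . yj ); a minimal sketch for a single frame, reusing the hypothetical logsoftmax_forward helper above:

#include <math.h>
#include <stdio.h>

int main(void)
{
  enum { DIM = 4 };
  double x[DIM]       = { 0.5, -1.0, 2.0, 0.0 };
  double gradOut[DIM] = { 0.1, -0.3, 0.2, 0.4 };
  double y[DIM], gradIn[DIM];

  logsoftmax_forward(x, y, DIM);

  /* closed form: gradInput[i] = gradOutput[i] - exp(output[i]) * sum_j gradOutput[j] */
  double sum = 0.0;
  for (int d = 0; d < DIM; d++)
    sum += gradOut[d];
  for (int d = 0; d < DIM; d++)
    gradIn[d] = gradOut[d] - exp(y[d]) * sum;

  /* central differences on L = sum_j gradOut[j] * y[j] */
  const double eps = 1e-6;
  for (int i = 0; i < DIM; i++) {
    double xp[DIM], xm[DIM], yp[DIM], ym[DIM], num = 0.0;
    for (int d = 0; d < DIM; d++) { xp[d] = x[d]; xm[d] = x[d]; }
    xp[i] += eps;
    xm[i] -= eps;
    logsoftmax_forward(xp, yp, DIM);
    logsoftmax_forward(xm, ym, DIM);
    for (int d = 0; d < DIM; d++)
      num += gradOut[d] * (yp[d] - ym[d]) / (2.0 * eps);
    printf("i=%d  analytic=%.6f  numeric=%.6f\n", i, gradIn[i], num);
  }
  return 0;
}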


Source: https://stackoverflow.com/questions/35304393/trying-to-understand-code-that-computes-the-gradient-wrt-to-the-input-for-logsof
