How to do operations on the decoder's hidden vector at every timestep and append the result to the input of the next LSTM unit

渐次进展 · asked 2021-01-03 06:50

To implement attention in an encoder-decoder, we have to take the hidden vector of an LSTM unit of the decoder and do several operations on it to compute the attention weights. How do I perform these operations on the decoder's hidden vector at every timestep and append the result to the input of the next LSTM unit?
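Since the question is about the concrete mechanics of one decoder timestep, here is a minimal sketch in PyTorch. The layer names, dimensions, the simplified additive (Bahdanau-style) scoring function, and the use of `nn.LSTMCell` are my own assumptions for illustration, not something given in the question:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionDecoderStep(nn.Module):
    """One decoder timestep: score the encoder outputs against the current
    decoder hidden state, build a context vector, and concatenate it with
    the next input embedding before feeding the LSTM cell.
    (Hypothetical sketch; sizes and layer names are assumptions.)"""

    def __init__(self, embed_dim, hidden_dim, enc_dim):
        super().__init__()
        # The LSTM cell's input is the word embedding concatenated with the context vector.
        self.cell = nn.LSTMCell(embed_dim + enc_dim, hidden_dim)
        self.attn_hidden = nn.Linear(hidden_dim, enc_dim)  # project the decoder hidden state
        self.attn_score = nn.Linear(enc_dim, 1)            # one score per encoder timestep

    def forward(self, prev_embed, hidden, cell, enc_outputs):
        # prev_embed:  (batch, embed_dim)        embedding of the previous target token
        # hidden/cell: (batch, hidden_dim)       decoder LSTM state from the previous timestep
        # enc_outputs: (batch, src_len, enc_dim) all encoder hidden states

        # 1. "Do several operations" on the decoder hidden vector to get attention weights.
        query = self.attn_hidden(hidden).unsqueeze(1)              # (batch, 1, enc_dim)
        scores = self.attn_score(torch.tanh(enc_outputs + query))  # (batch, src_len, 1)
        weights = F.softmax(scores, dim=1)                         # attention weights

        # 2. Weighted sum of the encoder outputs gives the context vector.
        context = (weights * enc_outputs).sum(dim=1)               # (batch, enc_dim)

        # 3. Append the context to the input of the next LSTM unit.
        lstm_input = torch.cat([prev_embed, context], dim=-1)
        hidden, cell = self.cell(lstm_input, (hidden, cell))
        return hidden, cell, weights
```

In a full decoder you would call this step once per target timestep inside a loop, feeding back the returned `hidden` and `cell` and the embedding of the previously generated token.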
