Question
I understand the overall structure of the transformer as shown in the figure below, but one thing that confuses me is the bottom of the decoder, which takes the shifted-right outputs as input.
For example, suppose we train the model on a pair of sentences in two languages, where the English input is "I love you" and the corresponding French translation is "je t'aime". How does the model train? The encoder's input is "I love you". The decoder receives two things: "je t'aime", which is fed into the Masked Multi-Head Attention layer, and the encoder's output (the K and V), which goes into the Multi-Head Attention layer. So which word do the output probabilities correspond to? Also, what does "shifted right" mean for the decoder input?
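To make the question concrete, here is a minimal sketch of how I currently understand the "shifted right" decoder input might be built during training (the `<bos>`/`<eos>` markers and the word-level tokenization are my own assumptions, not something stated in the paper):

```python
src_tokens = ["I", "love", "you"]        # encoder input
tgt_tokens = ["je", "t'", "aime"]        # reference translation

# "Shifted right": prepend a start token so that at position i the decoder
# only ever sees target tokens 0..i-1 (the mask enforces this).
decoder_input  = ["<bos>"] + tgt_tokens            # fed to Masked Multi-Head Attention
decoder_target = tgt_tokens + ["<eos>"]            # what the output probabilities should match

# At each position i the decoder, attending to the encoder output (K, V)
# and the previously seen target tokens, should assign high probability
# to decoder_target[i]:
for i, tgt in enumerate(decoder_target):
    print(f"step {i}: decoder sees {decoder_input[:i + 1]} -> should predict {tgt!r}")
```

Is this teacher-forcing picture the right way to read the diagram, or does the decoder work differently during training?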
Source: https://stackoverflow.com/questions/58564631/how-to-train-the-self-attention-model