pytorch - connection between loss.backward() and optimizer.step()

谎友^ 2020-12-23 13:04

Where is an explicit connection between the optimizer and the loss?

How does the optimizer know where to get the gradients of the loss with respect to the parameters it updates?

5 Answers
  •  醉梦人生
    2020-12-23 13:28

    Without delving too deep into the internals of pytorch, I can offer a simplistic answer:

    Recall that when initializing the optimizer you explicitly tell it which parameters (tensors) of the model it should be updating. The gradients are "stored" by the tensors themselves (they have grad and requires_grad attributes) once you call backward() on the loss. After the gradients have been computed for all tensors in the model, calling optimizer.step() makes the optimizer iterate over all the parameters (tensors) it is supposed to update and use each one's internally stored grad to update its value.
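
    A minimal sketch of that connection (the model, data, and learning rate here are made up for illustration):

        import torch
        import torch.nn as nn

        model = nn.Linear(10, 1)                                  # parameters created with requires_grad=True
        optimizer = torch.optim.SGD(model.parameters(), lr=0.1)   # optimizer is told which tensors it should update

        x, y = torch.randn(4, 10), torch.randn(4, 1)

        optimizer.zero_grad()                       # clear any previously accumulated .grad
        loss = nn.functional.mse_loss(model(x), y)
        loss.backward()                             # autograd fills .grad on every parameter in the graph
        print(model.weight.grad.shape)              # the gradients live on the tensors themselves
        optimizer.step()                            # reads each registered parameter's .grad and updates it in place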

    More info on computational graphs and the additional "grad" information stored in pytorch tensors can be found in this answer.
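
    Conceptually, for plain SGD, optimizer.step() does something roughly like the sketch below (the real implementation also handles parameter groups, momentum, weight decay, etc.; the numbers are made up):

        import torch
        import torch.nn as nn

        model = nn.Linear(10, 1)
        loss = nn.functional.mse_loss(model(torch.randn(4, 10)), torch.randn(4, 1))
        loss.backward()                     # populate .grad on the model's parameters

        lr = 0.1
        with torch.no_grad():
            for p in model.parameters():    # the same tensors the optimizer was given at construction
                if p.grad is not None:
                    p -= lr * p.grad        # update each parameter using its own stored gradient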
