autograd

How to apply gradients manually in PyTorch

綄美尐妖づ Submitted on 2019-12-01 07:13:50
Question: Starting to learn PyTorch, I was trying to do something very simple: move a randomly initialized vector of size 5 toward a target vector of value [1, 2, 3, 4, 5]. But my distance is not decreasing, and my vector x just goes crazy. No idea what I am missing.

    import torch
    import numpy as np
    from torch.autograd import Variable

    # regress a vector to the goal vector [1, 2, 3, 4, 5]
    dtype = torch.cuda.FloatTensor  # Uncomment this to run on GPU
    x = Variable(torch.rand(5).type(dtype), requires_grad=True)
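
A minimal sketch of one way to do the manual update the question asks about. The squared distance to the target as the loss, the learning rate 0.01, and the 500 steps are my own assumptions, and it runs on CPU so the cuda dtype line is dropped:

    import torch
    from torch.autograd import Variable

    target = Variable(torch.FloatTensor([1, 2, 3, 4, 5]))
    x = Variable(torch.rand(5), requires_grad=True)

    lr = 0.01
    for step in range(500):
        loss = (x - target).pow(2).sum()   # squared distance to the goal vector
        loss.backward()                    # fills in x.grad
        x.data -= lr * x.grad.data         # manual gradient-descent step on the raw data
        x.grad.data.zero_()                # reset the gradient before the next iteration

    print(x.data)  # should end up close to [1, 2, 3, 4, 5]

Zeroing x.grad on every iteration matters: backward() accumulates gradients, which is a common reason the vector "goes crazy" instead of converging.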

backward function in PyTorch

十年热恋 Submitted on 2019-12-01 06:09:41
Question: I have a question about PyTorch's backward function; I don't think I'm getting the right output.

    import numpy as np
    import torch
    from torch.autograd import Variable

    a = Variable(torch.FloatTensor([[1, 2, 3], [4, 5, 6]]), requires_grad=True)
    out = a * a
    out.backward(a)
    print(a.grad)

The output is

    tensor([[ 2.,  8., 18.],
            [32., 50., 72.]])

Maybe it's 2*a*a, but I think the output is supposed to be

    tensor([[ 2.,  4.,  6.],
            [ 8., 10., 12.]])

that is, 2*a, because d(x^2)/dx = 2x.

Answer 1: Please read carefully the documentation
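
A worked check of what backward(a) computes here (my reading of the situation, not the original answer's text): the tensor passed to backward() is used as the upstream gradient, so a.grad becomes a * d(out)/da = a * 2a = 2*a*a, which is exactly the printed tensor. Passing a gradient of ones instead recovers the expected 2*a:

    import torch
    from torch.autograd import Variable

    a = Variable(torch.FloatTensor([[1, 2, 3], [4, 5, 6]]), requires_grad=True)
    out = a * a

    # backward() on a non-scalar output needs an upstream gradient; all ones
    # gives a.grad = d(sum(out))/da = 2*a, the result the question expected
    out.backward(torch.ones(2, 3))
    print(a.grad)
    # tensor([[ 2.,  4.,  6.],
    #         [ 8., 10., 12.]])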

Why does autograd not produce gradient for intermediate variables?

末鹿安然 Submitted on 2019-11-29 18:54:18
Question: Trying to wrap my head around how gradients are represented and how autograd works:

    import torch
    from torch.autograd import Variable

    x = Variable(torch.Tensor([2]), requires_grad=True)
    y = x * x
    z = y * y
    z.backward()

    print(x.grad)
    # Variable containing:
    #  32
    # [torch.FloatTensor of size 1]

    print(y.grad)
    # None

Why does it not produce a gradient for y? If y.grad = dz/dy, then shouldn't it at least produce a variable like y.grad = 2*y?

Answer 1: By default, gradients are only retained for leaf variables.
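
A minimal sketch of how to keep the intermediate gradient, assuming a PyTorch version that provides retain_grad() (on older versions, register_hook can be used to capture the same value):

    import torch
    from torch.autograd import Variable

    x = Variable(torch.Tensor([2]), requires_grad=True)
    y = x * x
    y.retain_grad()          # ask autograd to keep the gradient on this non-leaf node
    z = y * y
    z.backward()

    print(x.grad)  # 32, i.e. dz/dx = 4*x**3
    print(y.grad)  # 8,  i.e. dz/dy = 2*y = 2*x**2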