Can't find the in-place operation: one of the variables needed for gradient computation has been modified by an inplace operation

Backend · Unresolved · 3 answers · 1997 views
孤独总比滥情好 · 2021-01-15 11:24

I am trying to compute a loss on the Jacobian of the network (i.e. to perform double backprop), and I get the following error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation.
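
For reference, the failure itself is easy to reproduce on a toy graph; this is only an illustration of what the error message means, not my actual code (exp is used because it saves its output for the backward pass):

    import torch

    x = torch.randn(3, requires_grad=True)
    y = x.exp()          # exp saves its output y for use in backward
    y.zero_()            # in-place op bumps y's version counter
    y.sum().backward()   # RuntimeError: one of the variables needed for gradient
                         # computation has been modified by an inplace operation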

3 Answers
  • 2021-01-15 11:49

    grad_output.zero_() is in-place, and so is grad_output[:, i-1] = 0. In-place means "modify the tensor itself instead of returning a new one with the modifications applied". A solution that is not in-place is torch.where. An example use, zeroing out column 1 (the second column):

    import torch
    t = torch.randn(3, 3)
    ixs = torch.arange(3, dtype=torch.int64)
    zeroed = torch.where(ixs[None, :] == 1, torch.tensor(0.), t)
    
    zeroed
    tensor([[-0.6616,  0.0000,  0.7329],
            [ 0.8961,  0.0000, -0.1978],
            [ 0.0798,  0.0000, -1.2041]])
    
    t
    tensor([[-0.6616, -1.6422,  0.7329],
            [ 0.8961, -0.9623, -0.1978],
            [ 0.0798, -0.7733, -1.2041]])
    

    Notice how t retains the values it had before and zeroed has the values you want.
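
    Applied to your loop, the same idea lets you build each one-hot grad_outputs tensor out of place instead of writing into the shared grad_output. A rough sketch (the output shape and the index i below are placeholders, not your real values):

    import torch

    output = torch.randn(4, 10)              # stand-in for the network output (batch, classes)
    grad_output = torch.zeros_like(output)
    cols = torch.arange(output.size(1))

    i = 3                                    # output dimension selected in this iteration
    grad_output_i = torch.where(cols[None, :] == i,
                                torch.tensor(1.),
                                grad_output)  # column i is 1, everything else stays 0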

  • 2021-01-15 11:52

    You can use the set_detect_anomaly function from the autograd package to find exactly which line is responsible for the error.

    Here is the link which describes the same problem and a solution using the above-mentioned function.
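
    For example, a minimal sketch that reuses a toy in-place failure (exp followed by an in-place zero_) instead of your model:

    import torch

    torch.autograd.set_detect_anomaly(True)   # global switch; torch.autograd.detect_anomaly()
                                              # is the equivalent context manager

    x = torch.randn(3, requires_grad=True)
    y = x.exp()
    y.zero_()                                 # the offending in-place op
    # With anomaly detection on, the RuntimeError is preceded by a
    # "Traceback of forward call that caused the error" pointing at y = x.exp().
    y.sum().backward()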

  • 2021-01-15 11:56

    Thanks! I replaced the problematic in-place operations on grad_output with the following:

    import torch
    from torch.autograd import Variable
    from torch.autograd.gradcheck import zero_gradients  # removed in later PyTorch releases

    inputs_reg = Variable(data, requires_grad=True)
    output_reg = self.model.forward(inputs_reg)
    num_classes = output_reg.size()[1]

    jacobian_list = []
    grad_output = torch.zeros(*output_reg.size())

    if inputs_reg.is_cuda:
        grad_output = grad_output.cuda()

    # One backward pass per output dimension; range(num_classes) would cover all of them.
    for i in range(5):
        zero_gradients(inputs_reg)

        # Clone before writing, so the tensor saved by the previous iteration's
        # graph (create_graph=True) is never modified in place.
        grad_output_curr = grad_output.clone()
        grad_output_curr[:, i] = 1
        jacobian_list.append(torch.autograd.grad(outputs=output_reg,
                                                 inputs=inputs_reg,
                                                 grad_outputs=grad_output_curr,
                                                 only_inputs=True,
                                                 retain_graph=True,
                                                 create_graph=True)[0])

    jacobian = torch.stack(jacobian_list, dim=0)
    loss3 = jacobian.norm()
    loss3.backward()
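
    As an aside that is not part of the original answer: newer PyTorch releases also provide torch.autograd.functional.jacobian, which computes the full Jacobian without the manual loop. A minimal sketch with a hypothetical stand-in for self.model:

    import torch

    model = torch.nn.Linear(4, 3)     # hypothetical stand-in for self.model
    data = torch.randn(2, 4)

    # create_graph=True keeps the Jacobian differentiable, so a loss on it
    # can be backpropagated into the model parameters (double backprop).
    jac = torch.autograd.functional.jacobian(model, data, create_graph=True)
    loss3 = jac.norm()
    loss3.backward()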
    