Can't find the in-place operation: one of the variables needed for gradient computation has been modified by an inplace operation

孤独总比滥情好 2021-01-15 11:24

I am trying to compute a loss on the Jacobian of the network (i.e. to perform double backprop), and I get the following error: RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation.
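The question body is truncated here, but based on the in-place calls quoted in the answer below (grad_output.zero_() and grad_output[:, i-1] = 0), a minimal sketch that reproduces the error might look like this; the toy network y = x ** 2, the shapes, and the loop are assumptions, not the asker's actual code:

    import torch

    x = torch.randn(2, 3, requires_grad=True)
    y = x ** 2  # stand-in for the network output

    grad_output = torch.zeros(2, 3)
    jac_rows = []
    for i in range(3):
        grad_output.zero_()      # in-place: bumps the tensor's version counter
        grad_output[:, i] = 1.0  # in-place indexed assignment
        # create_graph=True records the backward pass for double backprop; that
        # recorded graph saves grad_output, which the next iteration then mutates
        row, = torch.autograd.grad(y, x, grad_output, create_graph=True)
        jac_rows.append(row)

    loss = torch.stack(jac_rows).pow(2).sum()
    loss.backward()  # RuntimeError: ... modified by an inplace operation

The first backward runs fine; the error only surfaces at loss.backward(), when autograd tries to use the saved grad_output and notices its version counter has changed.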

3 answers
  •  迷失自我 2021-01-15 11:49

    grad_output.zero_() is in-place, and so is grad_output[:, i-1] = 0. In-place means the operation modifies a tensor directly instead of returning a new tensor that has the modifications applied. An out-of-place alternative is torch.where. Here is an example that zeroes out column 1:

    import torch
    t = torch.randn(3, 3)
    ixs = torch.arange(3, dtype=torch.int64)  # column indices 0, 1, 2
    # Out-of-place: returns a new tensor with column 1 zeroed; t is untouched
    zeroed = torch.where(ixs[None, :] == 1, torch.tensor(0.), t)
    
    >>> zeroed
    tensor([[-0.6616,  0.0000,  0.7329],
            [ 0.8961,  0.0000, -0.1978],
            [ 0.0798,  0.0000, -1.2041]])
    >>> t
    tensor([[-0.6616, -1.6422,  0.7329],
            [ 0.8961, -0.9623, -0.1978],
            [ 0.0798, -0.7733, -1.2041]])
    

    Notice how t retains the values it had before and zeroed has the values you want.
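    Applied to the double-backprop loop from the question, the same idea means building a fresh one-hot grad_output each iteration instead of mutating a shared tensor. A sketch under the same assumptions as above (y = x ** 2 and the shapes are placeholders, not the asker's code):

    import torch

    x = torch.randn(2, 3, requires_grad=True)
    y = x ** 2  # stand-in for the network output

    ixs = torch.arange(3, dtype=torch.int64)
    jac_rows = []
    for i in range(3):
        # Out-of-place one-hot row: a new tensor every iteration, so nothing
        # autograd saved for double backward is ever modified
        grad_output = torch.where(ixs[None, :] == i,
                                  torch.tensor(1.), torch.tensor(0.)).expand(2, 3)
        row, = torch.autograd.grad(y, x, grad_output, create_graph=True)
        jac_rows.append(row)

    loss = torch.stack(jac_rows).pow(2).sum()
    loss.backward()  # succeeds: no in-place modification of saved tensors

    Any other mutation-free way of building the one-hot rows works just as well, e.g. indexing rows of torch.eye(3).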
