Why can the loss function be applied to tensors of different sizes?

别跟我提以往 · 2021-01-16 01:19

For example, I have a net that takes a tensor of shape [N, 7] (N is the number of samples) as input and produces a tensor of shape [N, 4] as output, where the “4” represents the probabilities of the different classes.

1 Answer

  • 2021-01-16 01:52

    In PyTorch, the cross-entropy loss for a single sample x with true class index class is computed as:

        loss(x, class) = -log( exp(x[class]) / sum_j exp(x[j]) )
                       = -x[class] + log( sum_j exp(x[j]) )

    Link to the documentation: https://pytorch.org/docs/stable/nn.html#torch.nn.CrossEntropyLoss
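
    The same documentation notes that cross_entropy combines log_softmax and nll_loss in a single function. A minimal sketch (with made-up input values) to check that equivalence:

    import torch
    import torch.nn.functional as F

    x = torch.randn(1, 4)   # one sample with 4 class scores (made-up values)
    t = torch.tensor([1])   # true class index

    # cross_entropy == nll_loss applied to the log-softmax of the scores
    print(F.cross_entropy(x, t))
    print(F.nll_loss(F.log_softmax(x, dim=1), t))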


    So implementing the formula above in PyTorch, you get:

    import torch
    import torch.nn.functional as F

    output = torch.tensor([ 0.1998, -0.2261, -0.0388,  0.1457])  # raw scores (logits) for 4 classes
    target = torch.LongTensor([1])                               # index of the true class

    # implementing the formula above: -x[class] + log( sum_j exp(x[j]) )
    print('manual  cross-entropy:', (-output[target] + torch.log(torch.sum(torch.exp(output))))[0])

    # calling the built-in cross-entropy function to check the result
    # (unsqueeze adds the batch dimension: cross_entropy expects an [N, C] input and an [N] target)
    print('pytorch cross-entropy:', F.cross_entropy(output.unsqueeze(0), target))
    

    Output:

    manual  cross-entropy: tensor(1.6462)
    pytorch cross-entropy: tensor(1.6462)
    
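    This shape convention is exactly what the question is about: the [N, 4] output is compared against an [N] target holding one class index per sample, which is why the two tensors can have different sizes. A minimal batched sketch with C = 4 as in the question (N and the target values are made up for illustration):

    import torch
    import torch.nn.functional as F

    N = 3                                    # number of samples (made-up value)
    logits = torch.randn(N, 4)               # network output: [N, 4] class scores
    targets = torch.tensor([0, 3, 1])        # target: [N] class indices, no one-hot encoding needed

    # per-sample loss, using the formula above: -x[class] + log( sum_j exp(x[j]) )
    per_sample = -logits[torch.arange(N), targets] + torch.logsumexp(logits, dim=1)

    print(per_sample.mean())                 # manual batched cross-entropy (mean reduction)
    print(F.cross_entropy(logits, targets))  # built-in version; should match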

    I hope this helps!
