TensorFlow Sigmoid Cross Entropy with Logits for 1D data

Asked by 臣服心动, 2021-01-23 20:47

Context

Suppose we have some 1D data (e.g. time series), where all series have fixed length l:

        # [ 0,  1,  2,  3,  4,  5,  6,  7,  8,          


        
1 Answer
  • Answered 2021-01-23 21:51

    Both tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(...)) and tf.losses.sigmoid_cross_entropy(...) (with default arguments) compute the same thing. The problem is in your tests, where you use == to compare floating-point numbers. Instead, use np.isclose to check whether two floating-point numbers are (approximately) equal:

    # loss _should_(?) be the same for 'channels_first' and 'channels_last' data_format
    # test example 1
    e1 = np.isclose(l1, t_l1.T).all()
    # test example 2
    e2 = np.isclose(l2, t_l2.T).all()
    
    # loss calculated for each example and then batched together should be the same 
    # as the loss calculated on the batched examples
    ea = np.isclose(np.array([l1, l2]), bl).all()
    t_ea = np.isclose(np.array([t_l1, t_l2]), t_bl).all()
    
    # loss calculated on the batched examples for 'channels_first' should be the same
    # as loss calculated on the batched examples for 'channels_last'
    eb = np.isclose(bl, np.transpose(t_bl, (0, 2, 1))).all()
    
    
    e1, e2, ea, t_ea, eb
    # (True, True, True, True, True)
    

    And:

    l_e1 = np.isclose(tf_l1, rm_l1)
    l_e2 = np.isclose(tf_l2, rm_l2)
    l_eb = np.isclose(tf_bl, rm_bl)
    
    l_t_e1 = np.isclose(tf_t_l1, rm_t_l1)
    l_t_e2 = np.isclose(tf_t_l2, rm_t_l2)
    l_t_eb = np.isclose(tf_t_bl, rm_t_bl)
    
    l_e1, l_e2, l_eb, l_t_e1, l_t_e2, l_t_eb
    # (True, True, True, True, True, True)
    
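    For reference, here is a minimal, self-contained sketch of the same point. It is not the original poster's code: the shapes, variable names, and random toy data are illustrative, and it assumes the TF 1.x-style APIs (available as tf.compat.v1 in TF 2.x). It shows that the two losses agree once they are compared with np.isclose rather than ==:

    import numpy as np
    import tensorflow.compat.v1 as tf  # assumption: TF 1.x-style graph APIs
    tf.disable_eager_execution()

    # toy 1D "time series" batch: 2 examples, 1 channel, length 8 (shapes are illustrative)
    logits = np.random.randn(2, 1, 8).astype(np.float32)
    labels = (np.random.rand(2, 1, 8) > 0.5).astype(np.float32)

    # element-wise sigmoid cross entropy, then averaged over all elements
    loss_nn = tf.reduce_mean(
        tf.nn.sigmoid_cross_entropy_with_logits(labels=labels, logits=logits))

    # with default weights/reduction this also averages over all elements
    loss_losses = tf.losses.sigmoid_cross_entropy(
        multi_class_labels=labels, logits=logits)

    with tf.Session() as sess:
        v_nn, v_losses = sess.run([loss_nn, loss_losses])

    # '==' can report False purely because of floating-point round-off
    print(v_nn == v_losses)            # may be False even though the values agree
    print(np.isclose(v_nn, v_losses))  # True
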