Keras Loss Function with Additional Dynamic Parameter

终归单人心 · 2020-12-01 05:16

I'm working on implementing prioritized experience replay for a deep Q-network, and part of the specification is to multiply gradients by what's known as importance sampling (IS) weights.

1 Answer
  • 2020-12-01 05:35

    OK. Here is an example.

    from keras.layers import Input, Dense, Conv2D, MaxPool2D, Flatten
    from keras.models import Model
    from keras.losses import categorical_crossentropy
    
    def sample_loss(y_true, y_pred, is_weight):
        # Per-sample cross-entropy scaled by the importance sampling weight.
        return is_weight * categorical_crossentropy(y_true, y_pred)
    
    # Both the targets and the IS weights enter the graph as extra inputs.
    x = Input(shape=(32, 32, 3), name='image_in')
    y_true = Input(shape=(10,), name='y_true')
    is_weight = Input(shape=(1,), name='is_weight')
    f = Conv2D(16, (3, 3), padding='same')(x)
    f = MaxPool2D((2, 2), padding='same')(f)
    f = Conv2D(32, (3, 3), padding='same')(f)
    f = MaxPool2D((2, 2), padding='same')(f)
    f = Conv2D(64, (3, 3), padding='same')(f)
    f = MaxPool2D((2, 2), padding='same')(f)
    f = Flatten()(f)
    y_pred = Dense(10, activation='softmax', name='y_pred')(f)
    model = Model(inputs=[x, y_true, is_weight], outputs=y_pred, name='train_only')
    # Attach the weighted loss directly to the graph instead of via compile().
    model.add_loss(sample_loss(y_true, y_pred, is_weight))
    model.compile(loss=None, optimizer='sgd')
    model.summary()
    

    Note: since you've added the loss through add_loss(), you don't have to supply one through compile(loss=...).
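
    For comparison, the same extra-input trick also works with a regular compiled loss if you wrap it in a closure that captures the is_weight tensor. Here is a minimal sketch reusing the tensors defined above; the names weighted_loss and model2 are illustrative, and it relies on graph-mode Keras behavior:

    def weighted_loss(is_weight):
        # The returned loss function closes over the is_weight input tensor.
        def loss(y_true, y_pred):
            return is_weight * categorical_crossentropy(y_true, y_pred)
        return loss
    
    model2 = Model(inputs=[x, is_weight], outputs=y_pred)
    model2.compile(loss=weighted_loss(is_weight), optimizer='sgd')
    # y_true is then passed as an ordinary target:
    # model2.fit([a, a_is_weight], a_true)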

    As for training the model, nothing special is needed except that y_true moves to the input side. See below:

    import numpy as np
    a = np.random.randn(8, 32, 32, 3)                      # dummy images
    a_true = np.eye(10)[np.random.randint(0, 10, size=8)]  # one-hot targets
    a_is_weight = np.random.rand(8, 1)                     # dummy IS weights in (0, 1)
    # No separate y argument: the targets are already part of the inputs.
    model.fit([a, a_true, a_is_weight])
    
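    In actual prioritized experience replay, a_is_weight wouldn't be random: it comes from the sampling probabilities of the replay buffer, as w_i = (N * P(i))^(-beta), normalized by the max. A minimal numpy sketch, where priorities, alpha, and beta are the standard PER quantities and the function name is illustrative:

    def per_weights(priorities, alpha=0.6, beta=0.4):
        # P(i) is proportional to p_i^alpha: the sampling distribution over the buffer.
        probs = priorities ** alpha
        probs = probs / probs.sum()
        n = len(priorities)
        # w_i = (N * P(i))^(-beta), scaled so the max weight is 1 for stability.
        w = (n * probs) ** (-beta)
        return (w / w.max()).reshape(-1, 1)
    
    # e.g. a_is_weight = per_weights(priorities_of_sampled_batch)  # hypothetical usage
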

    Finally, you can make a testing model (which shares all weights with model) for easier use, i.e.

    test_model = Model(inputs=x, outputs=y_pred, name='test_only')
    a_pred = test_model.predict(a)
    
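    Because test_model is built from the same layer objects as model, it always sees the latest trained weights; no copying or saving/loading is needed between training and inference.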