Custom loss function in Keras, how to deal with placeholders


Question


I am trying to write a custom loss function in TF/Keras. The loss function works when it is run in a session and passed constants, but it stops working when compiled into a Keras model.

The cost function (thanks to Lior for converting it to TF):

import tensorflow as tf
from keras import backend as K

def ginicTF(actual, pred):

    # Static length of the last axis; this fails when the shape is not fully defined
    n = int(actual.get_shape()[-1])

    # Indices that sort the predictions in ascending order
    inds = K.reverse(tf.nn.top_k(pred, n)[1], axes=[0])
    a_s = K.gather(actual, inds)   # actuals ordered by prediction
    a_c = K.cumsum(a_s)            # cumulative sum of ordered actuals
    giniSum = K.sum(a_c) / K.sum(a_s) - (n + 1) / 2.0

    return giniSum / n

def gini_normalizedTF(a, p):
    # Normalize by the Gini of a perfect ranking; negated so Keras can minimize it
    return -ginicTF(a, p) / ginicTF(a, a)
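
For context, this mirrors the usual NumPy implementation of the normalized Gini coefficient. The reconstruction below is mine (the standard Kaggle-style version, not taken from the original conversion), shown only as a reference point:

import numpy as np

def ginic(actual, pred):
    # Order the actuals by ascending prediction, then compare the cumulative
    # gain curve against the expectation under a random ordering
    n = len(actual)
    a_s = np.asarray(actual, dtype=float)[np.argsort(pred)]
    a_c = a_s.cumsum()
    giniSum = a_c.sum() / a_s.sum() - (n + 1) / 2.0
    return giniSum / n

def gini_normalized(a, p):
    return ginic(a, p) / ginic(a, a)

On the test vectors below this evaluates to 0.62962962963, i.e. the TF result up to the sign.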

#Test the cost function

sess = tf.InteractiveSession()

p = [0.9, 0.3, 0.8, 0.75, 0.65, 0.6, 0.78, 0.7, 0.05, 0.4, 0.4, 0.05, 0.5, 0.1, 0.1]
a = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]

# Placeholders with a fully defined shape, so get_shape() works here
ac = tf.placeholder(shape=(len(a),), dtype=K.floatx())
pr = tf.placeholder(shape=(len(p),), dtype=K.floatx())

print(gini_normalizedTF(ac, pr).eval(feed_dict={ac: a, pr: p}))

This prints -0.62962962963, which is the correct value.

Now let's put this into a Keras MLP:

from keras.models import Sequential
from keras import layers

def makeModel(n_feat):

    model = Sequential()

    # hidden layer #1
    model.add(layers.Dense(12, input_shape=(n_feat,)))
    model.add(layers.Activation('selu'))
    model.add(layers.Dropout(0.2))

    # output layer
    model.add(layers.Dense(1))
    model.add(layers.Activation('softmax'))

    model.compile(loss=gini_normalizedTF, optimizer='sgd', metrics=['binary_accuracy'])

    return model

model = makeModel(n_feats)
model.fit(x=Mout, y=targets, epochs=n_epochs, validation_split=0.2, batch_size=batch_size)

This generates the error:

<ipython-input-62-6ade7307336f> in ginicTF(actual, pred)
      9 def ginicTF(actual,pred):
     10 
---> 11     n = int(actual.get_shape()[-1])
     12 
     13     inds =  K.reverse(tf.nn.top_k(pred,n)[1],axes=[0])

TypeError: __int__ returned non-int (type NoneType)

I tried to work around it by giving n a default value, among other things, but that didn't lead anywhere.
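
As far as I can tell, Keras compiles the loss against symbolic tensors whose shape is not fully defined, so actual.get_shape()[-1] is None there. A minimal snippet showing the difference between the static and the dynamic shape (TF 1.x, same imports as above):

# Shape fully defined at graph-construction time: the static shape works
x = tf.placeholder(shape=(15,), dtype=K.floatx())
print(int(x.get_shape()[-1]))   # 15

# Placeholder with undefined dimensions, as Keras builds for the targets
y = tf.placeholder(shape=(None, None), dtype=K.floatx())
print(y.get_shape()[-1])        # ? -- int(...) raises the TypeError above
print(tf.shape(y)[-1])          # scalar int32 Tensor, resolved only at run time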

Can someone explain the nature of this problem and how I can remedy it?

Thank you!

Edit:

I updated the function to keep n as a tensor and cast it where needed:

def ginicTF(actual, pred):

    nT = K.shape(actual)[-1]           # dynamic length, a scalar int32 tensor
    n = K.cast(nT, dtype='int32')      # K.shape already yields int32; cast kept for clarity
    inds = K.reverse(tf.nn.top_k(pred, n)[1], axes=[0])
    a_s = K.gather(actual, inds)
    a_c = K.cumsum(a_s)
    n = K.cast(nT, dtype=K.floatx())   # reuse the length as a float for the arithmetic
    giniSum = K.cast(K.sum(a_c) / K.sum(a_s), dtype=K.floatx()) - (n + 1) / 2.0

    return giniSum / n

def gini_normalizedTF(a,p):
    return ginicTF(a, p) / ginicTF(a, a)

This still has the issue of producing None when used as a cost function.
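
For anyone experimenting along the same lines, here is the direction I am trying next (a sketch only, not verified): flatten the (batch, 1) tensors that Keras passes into the loss to 1-D and keep the length dynamic throughout, since tf.nn.top_k also accepts a scalar tensor for k. The name ginicTF_dynamic is mine:

def ginicTF_dynamic(actual, pred):
    # Keras passes (batch_size, 1) tensors; flatten them to 1-D first
    actual = K.reshape(actual, (-1,))
    pred = K.reshape(pred, (-1,))

    n = K.shape(actual)[-1]                               # dynamic length (int32 scalar tensor)
    inds = K.reverse(tf.nn.top_k(pred, n)[1], axes=[0])   # ascending order of predictions
    a_s = K.gather(actual, inds)
    a_c = K.cumsum(a_s)
    n_f = K.cast(n, K.floatx())
    giniSum = K.sum(a_c) / K.sum(a_s) - (n_f + 1.0) / 2.0
    return giniSum / n_f

Even with the shapes sorted out, I suspect a second problem: pred only enters the graph through the top_k indices, and indices carry no gradient, so the optimizer may still see a None gradient for this loss.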

Source: https://stackoverflow.com/questions/46674293/custom-loss-function-in-keras-how-to-deal-with-placeholders
