TensorFlow LSTM Regularization


I was wondering how one can implement L1 or L2 regularization within an LSTM in TensorFlow? TF doesn't give you access to the internal weights of the LSTM, so I'm not certain how to do this.

2 Answers
  • 2021-01-25 17:26

    The answers in the link you mentioned are the correct way to do it: iterate through tf.trainable_variables() and pick out the variables associated with your LSTM.

    An alternative, more complicated and possibly more brittle approach is to re-enter the LSTM's variable_scope with reuse=True and call get_variable() for each weight. But really, the original solution is faster and less brittle. A sketch of that approach follows.
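
    For concreteness, here is a minimal sketch of that approach, assuming TensorFlow 1.x; the toy graph, the 'rnn' name filter, and the 0.001 coefficient are illustrative choices, not fixed API requirements:

    import tensorflow as tf  # TF 1.x

    inputs = tf.placeholder(tf.float32, [None, 20, 8])               # (batch, time, features)
    cell = tf.nn.rnn_cell.LSTMCell(64)
    outputs, _ = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)   # weights live under the 'rnn/...' scope
    base_loss = tf.reduce_mean(tf.square(outputs))                   # stand-in for your real objective

    # Keep only the LSTM's variables by filtering on the scope name;
    # adjust the filter string to match how your graph names them.
    lstm_vars = [v for v in tf.trainable_variables() if 'rnn' in v.name]

    # L2 penalty; for L1, use tf.reduce_sum(tf.abs(v)) instead of tf.nn.l2_loss(v).
    penalty = 0.001 * tf.add_n([tf.nn.l2_loss(v) for v in lstm_vars])

    train_op = tf.train.AdamOptimizer(1e-3).minimize(base_loss + penalty)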

  • 2021-01-25 17:27

    TL;DR: save all the parameters in a list, and add their L^n norm (L1 or L2) to the objective function before computing the gradients for optimisation.

    1) In the function where you define the inference, return the parameter list alongside your usual outputs (logits below stands in for whatever you normally return):

    net = tf.trainable_variables()  # gather every trainable parameter in the graph
    return logits, net              # 'logits' stands in for your inference output(s)
    

    2) Add the L^n norm to the cost and compute the gradients from the cost:

    weight_reg = tf.add_n([0.001 * tf.nn.l2_loss(var) for var in net])  # L2; for L1 use tf.reduce_sum(tf.abs(var))
    
    cost = base_cost + weight_reg  # base_cost: your original objective without the regulariser
    
    param_gradients = tf.gradients(cost, net)
    
    optimiser = tf.train.AdamOptimizer(0.001).apply_gradients(list(zip(param_gradients, net)))
    

    3) Run the optimiser whenever you want via:

    _ = sess.run(optimiser, feed_dict={input_var: data})  # input_var: your input placeholder; data: a matching NumPy batch
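
    As a usage note, a minimal training loop tying the three steps together might look like the sketch below; num_steps is an assumed iteration count, and optimiser, input_var, and data come from the snippets above:

    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(num_steps):  # num_steps: however many updates you want (assumed name)
            _ = sess.run(optimiser, feed_dict={input_var: data})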
    