Keras: how to output learning rate onto tensorboard

無奈伤痛 · asked 2021-02-05 10:29

I add a callback to decay learning rate:

    keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=100,
                                      verbose=1)

How can I output the current learning rate to TensorBoard so that I can monitor it?
4 Answers
  • 2021-02-05 11:03

    According to the author of Keras, the proper way is to subclass the TensorBoard callback:

    from keras import backend as K
    from keras.callbacks import TensorBoard
    
    class LRTensorBoard(TensorBoard):
        def __init__(self, log_dir, **kwargs):  # add other arguments to __init__ if you need
            super().__init__(log_dir=log_dir, **kwargs)
    
        def on_epoch_end(self, epoch, logs=None):
            logs = logs or {}
            logs.update({'lr': K.eval(self.model.optimizer.lr)})
            super().on_epoch_end(epoch, logs)
    

    Then pass it as part of the callbacks argument to model.fit (credit Finncent Price):

    model.fit(x=..., y=..., callbacks=[LRTensorBoard(log_dir="/tmp/tb_log")])
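
    In TF 2.x the same idea works with tf.keras imports, with one caveat I'll add (not from the original answer): if the optimizer's lr is a LearningRateSchedule object, K.eval on it fails, so evaluate the schedule at the current step first. A minimal sketch:

    from tensorflow import keras

    class LRTensorBoard(keras.callbacks.TensorBoard):
        def __init__(self, log_dir, **kwargs):
            super().__init__(log_dir=log_dir, **kwargs)

        def on_epoch_end(self, epoch, logs=None):
            logs = logs or {}
            lr = self.model.optimizer.lr
            # a LearningRateSchedule must be evaluated at the current step
            if isinstance(lr, keras.optimizers.schedules.LearningRateSchedule):
                lr = lr(self.model.optimizer.iterations)
            logs.update({'lr': keras.backend.get_value(lr)})
            super().on_epoch_end(epoch, logs)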
    
  • 2021-02-05 11:13
    from keras import backend as K
    from keras.callbacks import TensorBoard

    class XTensorBoard(TensorBoard):
        def on_epoch_begin(self, epoch, logs=None):
            # get current values
            lr = float(K.get_value(self.model.optimizer.lr))
            decay = float(K.get_value(self.model.optimizer.decay))
            # compute time-based decay and write the new lr back to the optimizer
            lr = lr * (1. / (1 + decay * epoch))
            K.set_value(self.model.optimizer.lr, lr)

        def on_epoch_end(self, epoch, logs=None):
            # log the decayed lr so TensorBoard records it as a scalar
            logs = logs or {}
            logs['lr'] = K.get_value(self.model.optimizer.lr)
            super().on_epoch_end(epoch, logs)

    callbacks_list = [XTensorBoard('./logs')]
    model.fit(X_train, y_train, validation_data=(X_test, y_test),
              epochs=20, batch_size=32, verbose=2, callbacks=callbacks_list)
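
    Note that the callback reads self.model.optimizer.decay, so the optimizer must actually define a decay attribute; the classic Keras SGD does. A sketch of a matching compile call (the lr/decay values here are illustrative, not from the original answer):

    from keras.optimizers import SGD

    # the callback above reads optimizer.decay, so it must be set here
    model.compile(optimizer=SGD(lr=0.01, decay=1e-4), loss='categorical_crossentropy')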
    

    [Image: lr curve in TensorBoard]

  • 2021-02-05 11:14

    You pasted the optimizer's code twice instead of the TensorBoard callback. In any case, I didn't find a way to display the learning rate on TensorBoard directly. I plot it after training finishes, taking the data from the History object (ReduceLROnPlateau writes an 'lr' entry into the logs, which is how it ends up in history1.history):

    import matplotlib.pyplot as plt

    plt.style.use(['seaborn-ticks'])  # apply the style before plotting

    nb_epoch = len(history1.history['loss'])
    learning_rate = history1.history['lr']
    xc = range(nb_epoch)

    plt.figure(3, figsize=(7, 5))
    plt.plot(xc, learning_rate)
    plt.xlabel('num of epochs')
    plt.ylabel('learning rate')
    plt.title('Learning rate')
    plt.grid(True)
    

    The chart looks like this: [Image: LR plot]

    Sorry, that is not exactly what you are asking about, but perhaps it could help.
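
    If you do still want the values in TensorBoard, one option (my sketch, assuming TF 2.x; the log directory name is made up) is to replay the recorded rates through a summary writer after training:

    import tensorflow as tf

    writer = tf.summary.create_file_writer('logs/lr')  # hypothetical log dir
    with writer.as_default():
        for epoch, lr in enumerate(history1.history['lr']):
            tf.summary.scalar('learning rate', lr, step=epoch)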

  • 2021-02-05 11:19

    Note that with the current nightly version of tf (2.5, and probably earlier versions too), learning rates set via a LearningRateSchedule are automatically added to TensorBoard's logs. The following solution is only necessary if you're adapting the learning rate some other way, e.g. via the ReduceLROnPlateau or LearningRateScheduler (not to be confused with LearningRateSchedule) callbacks.

    While extending tf.keras.callbacks.TensorBoard is a viable option, I prefer composition over subclassing.

    import tensorflow as tf

    class LearningRateLogger(tf.keras.callbacks.Callback):
        def __init__(self):
            super().__init__()
            # receive the raw tensor logs rather than numpy-converted copies
            self._supports_tf_logs = True

        def on_epoch_end(self, epoch, logs=None):
            # only add the lr if nothing else has logged it already
            if logs is None or "learning_rate" in logs:
                return
            logs["learning_rate"] = self.model.optimizer.lr
    

    This allows us to compose multiple similar callbacks, and use the logged learning rate in multiple other callbacks (e.g. if you add a CSVLogger it should also write the learning rate values to file).

    Then in model.fit:

    model.fit(
        callbacks=[
            LearningRateLogger(),
            # other callbacks that update `logs`
            tf.keras.callbacks.TensorBoard(path),
            # other callbacks that use updated logs, e.g. CSVLogger
        ],
        **kwargs
    )
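
    As a concrete version of that composition (the data and file names are placeholders of mine): callbacks run in list order, so LearningRateLogger has to come before the consumers of the logged value.

    model.fit(
        x_train, y_train,  # placeholder data
        epochs=10,
        callbacks=[
            LearningRateLogger(),                           # adds "learning_rate" to logs
            tf.keras.callbacks.CSVLogger("train_log.csv"),  # writes each epoch's logs, incl. learning_rate, to CSV
            tf.keras.callbacks.TensorBoard("logs"),         # writes logs as TensorBoard scalars
        ],
    )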
    