How to return history of validation loss in Keras

Asked by 忘掉有多难, 2020-12-02 16:37

Using Anaconda Python 2.7 Windows 10.

I am training a language model using the Keras example:

print('Build model...')
model = Sequential()
model.add(...)  # the rest of the snippet is truncated in the original post

10 Answers
  • 2020-12-02 17:11

    For plotting the loss directly the following works:

    model_ = model.fit(X, Y, epochs= ..., verbose=1 )
    plt.plot(list(model_.history.values())[0],'k-o')
    
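A keyed lookup may be more robust than `list(model_.history.values())[0]`, since the ordering of the dict values depends on which metrics were compiled. A minimal sketch, using a hypothetical dict in place of a real `History` object's `.history` attribute:

```python
# Hypothetical dict standing in for model_.history: a History object's
# .history attribute maps each metric name to a per-epoch list of values.
history = {
    "loss":     [0.60, 0.44, 0.35],
    "val_loss": [0.65, 0.50, 0.42],
}

# Keyed access: unambiguous regardless of metric ordering.
val_loss = history["val_loss"]
print(val_loss)
```

With a real model, `plt.plot(model_.history['val_loss'])` plots the validation loss directly.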
  • 2020-12-02 17:13

    For those who, like me, still got an error:

    Convert model.fit_generator() to model.fit()

  • 2020-12-02 17:14

    I have also found that you can use verbose=2 to make Keras print out the losses:

    history = model.fit(X, Y, validation_split=0.33, nb_epoch=150, batch_size=10, verbose=2)
    

    And that would print nice lines like this:

    Epoch 1/1
     - 5s - loss: 0.6046 - acc: 0.9999 - val_loss: 0.4403 - val_acc: 0.9999
    

    According to their documentation:

    verbose: 0, 1, or 2. Verbosity mode. 0 = silent, 1 = progress bar, 2 = one line per epoch.
    
  • 2020-12-02 17:16

    Actually, you can also do it with the iteration method, since sometimes you may need to iterate manually instead of using the built-in epochs option, in order to visualize the training results after each iteration.

    history = []  # create an empty list to hold the loss values
    for iteration in range(1, 3):
        print()
        print('-' * 50)
        print('Iteration', iteration)
        result = model.fit(X, y, batch_size=128, nb_epoch=1)  # train for one epoch
        history.append(result.history['loss'])  # append this run's loss to the list
        start_index = random.randint(0, len(text) - maxlen - 1)
    print(history)
    

    This approach lets you record the loss you want while keeping your iteration method.

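Note that each `result.history['loss']` is itself a per-epoch list, so with `nb_epoch=1` the collected `history` becomes a list of one-element lists. A small sketch (with hypothetical loss values) of flattening it before plotting:

```python
# Hypothetical values as collected by history.append(result.history['loss'])
# across three one-epoch runs.
history = [[1.43], [1.39], [1.38]]

# Flatten the list of per-run lists into a single sequence of losses.
flat = [loss for run in history for loss in run]
print(flat)
```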
  • 2020-12-02 17:21

    It's been solved.

    The losses are only saved to the History across epochs. I was running my own iterations instead of using the Keras built-in epochs option.

    So instead of doing 4 iterations, I now have:

    model.fit(......, nb_epoch = 4)
    

    Now it returns the loss for each epoch run:

    print(hist.history)
    {'loss': [1.4358016599558268, 1.399221191623641, 1.381293383180471, 1.3758836857303727]}
    
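As a reminder, `val_loss` only shows up in the history when validation data is provided (via `validation_split` or `validation_data`); otherwise only the training metrics are recorded. A sketch with hypothetical values:

```python
# Hypothetical hist.history after fitting with validation_split set.
hist_history = {
    "loss":     [1.44, 1.40, 1.38, 1.38],
    "val_loss": [1.50, 1.47, 1.46, 1.45],
}

# Without validation data, only "loss" (and any training metrics) would exist.
assert "val_loss" in hist_history
print(hist_history["val_loss"][-1])
```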
  • 2020-12-02 17:22

    The following simple code works great for me:

        seqModel = model.fit(x_train, y_train,
              batch_size      = batch_size,
              epochs          = num_epochs,
              validation_data = (x_test, y_test),
              shuffle         = True,
              verbose=0, callbacks=[TQDMNotebookCallback()]) #for visualization
    

    Make sure you assign the return value of the fit function to a variable. Then you can access that variable very easily:

    # visualizing losses and accuracy
    train_loss = seqModel.history['loss']
    val_loss   = seqModel.history['val_loss']
    train_acc  = seqModel.history['acc']
    val_acc    = seqModel.history['val_acc']
    xc         = range(num_epochs)
    
    plt.figure()
    plt.plot(xc, train_loss)
    plt.plot(xc, val_loss)
    

    Hope this helps. Source: https://keras.io/getting-started/faq/#how-can-i-record-the-training-validation-loss-accuracy-at-each-epoch
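Once the per-epoch values are captured this way, the best epoch can also be picked programmatically; a minimal sketch with hypothetical `val_loss` values standing in for `seqModel.history['val_loss']`:

```python
# Hypothetical validation losses, one per epoch.
val_loss = [0.65, 0.50, 0.42, 0.45]

# 0-based index of the epoch with the lowest validation loss.
best_epoch = min(range(len(val_loss)), key=lambda i: val_loss[i])
print(best_epoch, val_loss[best_epoch])
```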
