Is there a way to plot both the training losses and validation losses on the same graph?
It's easy to have two separate scalar summaries for each of them individually, but this puts them on separate graphs.
The workaround I have been using is two SummaryWriters with different log directories, one for the training set and one for the cross-validation set. TensorBoard will then overlay both curves on the same graph, one per run.
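A minimal sketch of this two-writer workaround, assuming TensorFlow 2's `tf.summary` API (in TF1 the equivalent class was `tf.summary.FileWriter`); the log-directory names and the placeholder loss values are illustrative. Writing the same scalar tag from two writers pointed at different directories is what makes TensorBoard overlay the curves:

```python
import tensorflow as tf

# One writer per run; TensorBoard treats each log dir as a separate run
# and overlays scalars that share the same tag ("loss" here).
train_writer = tf.summary.create_file_writer("logs/train")
val_writer = tf.summary.create_file_writer("logs/validation")

for step in range(100):
    # Placeholder values -- substitute the losses from your training loop.
    train_loss = 1.0 / (step + 1)
    val_loss = 1.2 / (step + 1)

    with train_writer.as_default():
        tf.summary.scalar("loss", train_loss, step=step)
    with val_writer.as_default():
        tf.summary.scalar("loss", val_loss, step=step)

train_writer.flush()
val_writer.flush()
```

Launching `tensorboard --logdir logs` then shows both runs on one "loss" chart, selectable by run name in the sidebar.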
TensorBoard is a really nice tool, but its declarative nature can make it difficult to get it to do exactly what you want.
I recommend you checkout Losswise (https://losswise.com) for plotting and keeping track of loss functions as an alternative to Tensorboard. With Losswise you specify exactly what should be graphed together:
import losswise

losswise.set_api_key("project api key")
session = losswise.Session(tag='my_special_lstm', max_iter=10)
loss_graph = session.graph('loss', kind='min')

for x in range(10):
    # train an iteration of your model, producing train_loss and validation_loss...
    loss_graph.append(x, {'train_loss': train_loss, 'validation_loss': validation_loss})

session.done()
Notice how data is fed to a particular graph explicitly via the loss_graph.append call; both series then appear on a single chart in your project's dashboard.
In addition, for the above example Losswise would automatically generate a table with columns for min(training_loss) and min(validation_loss), so you can easily compare summary statistics across a large number of experiments.