Is there any way to use TensorBoard when training a TensorFlow model on Google Colab?
You can connect to TensorBoard directly in Google Colab thanks to a recent Colab upgrade.
https://medium.com/@today.rafi/tensorboard-in-google-colab-bd49fa554f9b
Using a summary writer to write logs to a folder at every epoch, then running the following magics, worked for me.
%load_ext tensorboard
%tensorboard --logdir=./logs
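For reference, a minimal sketch of such a summary writer; the metric values and the `logs/demo` subfolder here are placeholders of my own, not from the original answer:

```python
import tensorflow as tf

# Write one scalar per "epoch" under ./logs so the %tensorboard magic above
# has data to display.
writer = tf.summary.create_file_writer("./logs/demo")
for epoch in range(5):
    loss = 1.0 / (epoch + 1)  # placeholder metric
    with writer.as_default():
        tf.summary.scalar("loss", loss, step=epoch)
writer.flush()
```

TensorBoard picks up the event files as they are written, so you can start the magic before or after training.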
Try this; it's working for me:
%load_ext tensorboard
import datetime
import os
import tensorflow as tf

logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(logdir, histogram_freq=1)
model.fit(x=x_train,
          y=y_train,
          epochs=5,
          validation_data=(x_test, y_test),
          callbacks=[tensorboard_callback])
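The timestamped logdir keeps each run in its own folder, so TensorBoard can show runs side by side instead of overwriting them. A quick sketch of the path that `strftime` pattern produces:

```python
import datetime
import os
import re

# Same pattern as above: logs/YYYYMMDD-HHMMSS, unique per run.
logdir = os.path.join("logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
print(logdir)  # e.g. logs/20190422-093045
```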
2.0 Compatible Answer: Yes, you can use TensorBoard in Google Colab. The code below shows a complete example.
!pip install tensorflow==2.0
import tensorflow as tf
# The function to be traced.
@tf.function
def my_func(x, y):
    # A simple hand-rolled layer.
    return tf.nn.relu(tf.matmul(x, y))
# Set up logging.
logdir = './logs/func'
writer = tf.summary.create_file_writer(logdir)
# Sample data for your function.
x = tf.random.uniform((3, 3))
y = tf.random.uniform((3, 3))
# Bracket the function call with
# tf.summary.trace_on() and tf.summary.trace_export().
tf.summary.trace_on(graph=True, profiler=True)
# Call only one tf.function when tracing.
z = my_func(x, y)
with writer.as_default():
    tf.summary.trace_export(
        name="my_func_trace",
        step=0,
        profiler_outdir=logdir)
%load_ext tensorboard
%tensorboard --logdir ./logs/func
For a working Google Colab copy, please refer to this link. For more information, please go through this link.
I tried to show TensorBoard on Google Colab today.
# in case of CPU, you can use this line
# !pip install -q tf-nightly-2.0-preview
# in case of GPU, you can use this line
!pip install -q tf-nightly-gpu-2.0-preview
# %load_ext tensorboard.notebook # not working on 22 Apr
%load_ext tensorboard # you need to use this line instead
import tensorflow as tf
# ################
# do training
# ################
# show tensorboard
%tensorboard --logdir logs/fit
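The "do training" step above can be sketched with a minimal hypothetical model; the random data and layer sizes are my own placeholders, chosen only so that something gets written under `logs/fit` for the magic to display:

```python
import datetime
import numpy as np
import tensorflow as tf

# Hypothetical toy run: random data, tiny model, logs written under logs/fit.
logdir = "logs/fit/" + datetime.datetime.now().strftime("%Y%m%d-%H%M%S")

x = np.random.rand(64, 4).astype("float32")
y = np.random.randint(0, 2, size=(64,)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")

model.fit(x, y, epochs=2, verbose=0,
          callbacks=[tf.keras.callbacks.TensorBoard(logdir)])
```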
Here is an actual example made by Google: https://colab.research.google.com/github/tensorflow/tensorboard/blob/master/docs/r2/get_started.ipynb
EDIT: You probably want to give the official %tensorboard magic a go, available from TensorFlow 1.13 onward.
Prior to the existence of the %tensorboard magic, the standard way to achieve this was to proxy network traffic to the Colab VM using ngrok. A Colab example can be found here.
These are the steps (the code snippets represent cells of type "code" in colab):
Get TensorBoard running in the background.
Inspired by this answer.
LOG_DIR = '/tmp/log'
get_ipython().system_raw(
    'tensorboard --logdir {} --host 0.0.0.0 --port 6006 &'
    .format(LOG_DIR)
)
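Since the process is backgrounded, failures are silent. A small helper of my own (not part of the original answer) can sanity-check that something is actually listening on port 6006 before you set up the tunnel:

```python
import socket

def port_open(port, host="127.0.0.1", timeout=1.0):
    """Return True if something is accepting connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

print(port_open(6006))  # True once TensorBoard has started
```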
Download and unzip ngrok.
Replace the link passed to wget
with the correct download link for your OS.
! wget https://bin.equinox.io/c/4VmDzA7iaHb/ngrok-stable-linux-amd64.zip
! unzip ngrok-stable-linux-amd64.zip
Launch ngrok background process...
get_ipython().system_raw('./ngrok http 6006 &')
...and retrieve the public URL. Source
! curl -s http://localhost:4040/api/tunnels | python3 -c \
"import sys, json; print(json.load(sys.stdin)['tunnels'][0]['public_url'])"
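The curl one-liner just parses ngrok's local API response. With a simplified sample payload (the real response has more fields; the structure here is assumed from the one-liner above), the extraction looks like:

```python
import json

# Simplified sample of what http://localhost:4040/api/tunnels returns.
sample = '{"tunnels": [{"public_url": "https://abcd1234.ngrok.io"}]}'

public_url = json.loads(sample)["tunnels"][0]["public_url"]
print(public_url)  # https://abcd1234.ngrok.io
```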