tensorboard

AttributeError: module 'tensorflow.python.summary.summary' has no attribute 'FileWriter'

寵の児 submitted on 2020-01-24 07:24:48

Question: I'm getting this error, although everywhere I've looked file_writer = tf.summary.FileWriter('/path/to/logs', sess.graph) is given as the correct implementation of this and this. Here is the error: Traceback (most recent call last): File "tfvgg.py", line 304, in writer = tf.summary.FileWriter("/tmp/tfvgg", sess.graph) AttributeError: module 'tensorflow.python.summary.summary' has no attribute 'FileWriter' Here is the code I'm using: # init sess = tf.Session() writer = tf.summary.FileWriter
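A likely cause is an API mismatch rather than a bug in the snippet itself: tf.summary.FileWriter only exists in the TF 1.x API. The following is a minimal sketch under the assumption that the script is actually running on TensorFlow 2.x, where the writer either lives under tf.compat.v1 or is replaced by tf.summary.create_file_writer; the 'loss' tag and its value are invented here purely for illustration.

import tensorflow as tf

# TF 2.x replacement for the missing FileWriter (assumption: TF 2.x is installed).
writer = tf.summary.create_file_writer('/tmp/tfvgg')
with writer.as_default():
    tf.summary.scalar('loss', 0.5, step=0)  # tag 'loss' and value 0.5 are illustrative only
writer.close()

# The old graph-mode API is still reachable through the compat module:
#   sess = tf.compat.v1.Session()
#   writer = tf.compat.v1.summary.FileWriter('/tmp/tfvgg', sess.graph)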

Add extra dimension to an axis

一个人想着一个人 submitted on 2020-01-16 09:27:29

Question: I have a batch of segmentation masks of shape [5,1,100,100] (batch_size x dims x ht x wd) which I have to display in tensorboardX alongside an RGB image batch of shape [5,3,100,100]. I want to add two dummy dimensions to the second axis of the segmentation mask to make it [5,3,100,100] so there will not be a dimension-mismatch error when I pass it to torch.utils.make_grid. I have tried unsqueeze, expand and view but I am not able to do it. Any suggestions? Answer 1: You can use expand, repeat, or
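The answer above is cut off after naming expand and repeat; below is a minimal sketch of how either call broadcasts the channel axis from 1 to 3. The shapes come from the question; the variable names are made up for illustration.

import torch

masks = torch.rand(5, 1, 100, 100)           # stand-in for the segmentation mask batch

rgb_like_view = masks.expand(-1, 3, -1, -1)  # broadcast view, no data copy
rgb_like_copy = masks.repeat(1, 3, 1, 1)     # explicit copy along the channel axis

print(rgb_like_view.shape)  # torch.Size([5, 3, 100, 100])
print(rgb_like_copy.shape)  # torch.Size([5, 3, 100, 100])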

Tensorboard v1.0 - Histogram tab interpretation

℡╲_俬逩灬. submitted on 2020-01-15 09:48:51

Question: I am learning to visualize tensors via TensorBoard; however, I don't know how to interpret the chart in the Histogram tab. I used the code below to visualize: sess = tf.Session() tf.summary.histogram('test', tf.constant([1, 1, 2, 2, 3, 4, 4, 4, 4])) summary = tf.summary.merge_all() train_writer = tf.summary.FileWriter('../tmp/train', sess.graph) for i in range(10): sum = sess.run(summary) train_writer.add_summary(sum, i) I got this chart from TensorBoard: Histogram mode: offset Histogram mode: overlay I
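Because the snippet writes the same constant tensor at every step, every slice in both the offset and overlay views looks identical, which makes the tab hard to read. A minimal sketch (assuming the TF 1.x session workflow, reached here through tf.compat.v1) that logs a distribution whose mean shifts each step makes the difference between the two modes visible:

import numpy as np
import tensorflow as tf

tf.compat.v1.disable_eager_execution()  # keep the TF 1.x session-style workflow

values = tf.compat.v1.placeholder(tf.float32, shape=[1000])
tf.compat.v1.summary.histogram('test', values)
summary_op = tf.compat.v1.summary.merge_all()

with tf.compat.v1.Session() as sess:
    writer = tf.compat.v1.summary.FileWriter('../tmp/train', sess.graph)
    for step in range(10):
        data = np.random.normal(loc=step, scale=1.0, size=1000)  # mean shifts per step
        writer.add_summary(sess.run(summary_op, {values: data}), step)
    writer.close()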

Tensorboard error after upgrading to 1.4: trying to access flag before flags were parsed

余生颓废 submitted on 2020-01-15 08:05:18

Question: Since upgrading to TF 1.4 I am getting this error when I try to run tensorboard: Traceback (most recent call last): File "/opt/python/3.6.3/bin/tensorboard", line 11, in <module> sys.exit(main()) File "/opt/python/3.6.3/lib/python3.6/site-packages/tensorboard/main.py", line 39, in main return program.main(default.get_plugins(), File "/opt/python/3.6.3/lib/python3.6/site-packages/tensorboard/default.py", line 71, in get_plugins debugger = debugger_plugin_loader.get_debugger_plugin() File "
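The traceback is cut off, but crashes like this right after an upgrade often come from a version mismatch between the installed tensorflow and tensorboard packages (an assumption here, not stated in the question). A quick check before reinstalling:

import tensorflow as tf
from tensorboard import version as tb_version

# If these diverge badly (e.g. tensorflow 1.4 alongside a much older or newer
# tensorboard), installing a matching tensorboard release is the usual first step.
print('tensorflow :', tf.__version__)
print('tensorboard:', tb_version.VERSION)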

How to add learning rate to summaries?

依然范特西╮ submitted on 2020-01-15 05:17:22

Question: How do I monitor the learning rate of AdamOptimizer? The guide TensorBoard: Visualizing Learning says that I need to "collect these by attaching scalar_summary ops to the nodes that output the learning rate and loss respectively". How can I do this? Answer 1: I think something like the following inside the graph would work fine: with tf.name_scope("learning_rate"): global_step = tf.Variable(0) decay_steps = 1000 # set up your decay step decay_rate = .95 # set up your decay rate learning_rate = tf.train.exponential
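The answer is truncated at tf.train.exponential. A minimal sketch of the same idea, written in TF 1.x style via tf.compat.v1 (the decay values come from the answer above; the stand-in parameter and loss are invented here for illustration):

import tensorflow as tf

tf.compat.v1.disable_eager_execution()

with tf.compat.v1.name_scope("learning_rate"):
    global_step = tf.compat.v1.train.get_or_create_global_step()
    learning_rate = tf.compat.v1.train.exponential_decay(
        0.001, global_step, decay_steps=1000, decay_rate=0.95, staircase=True)
    tf.compat.v1.summary.scalar('learning_rate', learning_rate)

w = tf.compat.v1.get_variable('w', initializer=1.0)  # stand-in parameter
loss = tf.square(w)                                  # stand-in loss
tf.compat.v1.summary.scalar('loss', loss)

# Feed the decaying rate to the optimizer so the plotted value is the one in use.
train_op = tf.compat.v1.train.AdamOptimizer(learning_rate).minimize(
    loss, global_step=global_step)
merged = tf.compat.v1.summary.merge_all()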

What is the meaning of the TensorBoard plots when using Queues?

老子叫甜甜 submitted on 2020-01-13 20:28:30

Question: I use TensorBoard to monitor my training process and the plots look good, but some of them confuse me. First, Using_Queues_Lib.py (it uses queues and multiple threads to read binary data; see the CIFAR-10 example): from __future__ import absolute_import from __future__ import division from __future__ import print_function import os from six.moves import xrange # pylint: disable=redefined-builtin import tensorflow as tf NUM_EXAMPLES_PER_EPOCH_FOR_TRAIN = 50000 REAL32_BYTES=4 def
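The file is cut off after its constants, but CIFAR-10-style input pipelines typically end in tf.train.shuffle_batch, and that is usually where the extra, confusing plots come from: the batching functions record queue-fullness scalar summaries ("fraction_..._full") automatically. A minimal sketch (TF 1.x API via tf.compat.v1; the function name and argument values are illustrative, not taken from the question):

import tensorflow as tf

def batch_inputs(image, label, batch_size=128, min_queue_examples=20000):
    # image/label are single decoded examples produced by a file reader.
    # shuffle_batch builds an internal queue and logs a queue-fullness
    # scalar summary, which appears as an extra plot in TensorBoard.
    images, labels = tf.compat.v1.train.shuffle_batch(
        [image, label],
        batch_size=batch_size,
        num_threads=16,
        capacity=min_queue_examples + 3 * batch_size,
        min_after_dequeue=min_queue_examples)
    return images, labels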

Get Gradients with Keras Tensorflow 2.0

不打扰是莪最后的温柔 submitted on 2020-01-13 11:40:12

Question: I would like to keep track of the gradients in TensorBoard. However, since session run statements are not a thing anymore and the write_grads argument of tf.keras.callbacks.TensorBoard is deprecated, I would like to know how to keep track of gradients during training with Keras or TensorFlow 2.0. My current approach is to create a new callback class for this purpose, but without success. Maybe someone else knows how to accomplish this kind of advanced stuff. The code created for testing
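One possible shape for such a callback is sketched below. This is only a sketch under the assumption of TF 2.x eager execution: the class name, the reference batch, and the loss_fn argument are all invented here, and the gradients are recomputed on that fixed batch at epoch end rather than captured from the actual training steps.

import tensorflow as tf

class GradientTensorBoard(tf.keras.callbacks.Callback):
    def __init__(self, log_dir, x, y, loss_fn):
        super().__init__()
        self.writer = tf.summary.create_file_writer(log_dir)
        self.x, self.y = x, y      # small reference batch to evaluate gradients on
        self.loss_fn = loss_fn     # e.g. tf.keras.losses.MeanSquaredError()

    def on_epoch_end(self, epoch, logs=None):
        with tf.GradientTape() as tape:
            preds = self.model(self.x, training=True)
            loss = self.loss_fn(self.y, preds)
        grads = tape.gradient(loss, self.model.trainable_weights)
        with self.writer.as_default():
            for weight, grad in zip(self.model.trainable_weights, grads):
                if grad is not None:
                    name = weight.name.replace(':', '_') + '/gradients'
                    tf.summary.histogram(name, grad, step=epoch)
        self.writer.flush()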

TensorFlow: Opening log data written by SummaryWriter

那年仲夏 submitted on 2020-01-12 07:04:22

Question: After following this tutorial on summaries and TensorBoard, I've been able to successfully save and look at data with TensorBoard. Is it possible to open this data with something other than TensorBoard? By the way, my application is to do off-policy learning. I'm currently saving each state-action-reward tuple using SummaryWriter. I know I could manually store/train on this data, but I thought it'd be nice to use TensorFlow's built-in logging features to store/load this data. Answer 1: As of March
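The answer is cut off, but one way to read event files back in plain Python without TensorBoard is the summary-iterator API. A sketch, assuming a TF 2.x install (the event-file path below is a placeholder, not from the question):

import tensorflow as tf

# Placeholder path: point it at a real events.out.tfevents.* file in the log directory.
event_file = '/path/to/logs/events.out.tfevents.REPLACE_ME'

for event in tf.compat.v1.train.summary_iterator(event_file):
    for value in event.summary.value:
        if value.HasField('simple_value'):   # scalar summaries
            print(event.step, value.tag, value.simple_value)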

Show more images in Tensorboard - Tensorflow object detection

有些话、适合烂在心里 submitted on 2020-01-12 05:24:41

Question: I am using Tensorflow's object detection framework. Training and evaluation jobs are going well, but in TensorBoard I am only able to see 10 images for the evaluation job. Is there a way to increase this number to look at more images? I tried changing the config file: eval_config: { num_examples: 1000 max_evals: 50 } eval_input_reader: { tf_record_input_reader { input_path: "xxx/eval.record" } label_map_path: "xxx/label_map.pbtxt" shuffle: false num_readers: 1 } I thought the max_eval
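The number of eval images shown in TensorBoard is not controlled by num_examples or max_evals; in most versions of the Object Detection API it is the num_visualizations field of eval_config, which defaults to 10 (an assumption here, since the exact pipeline version is not stated). A sketch of the adjusted block:

eval_config: {
  num_examples: 1000
  max_evals: 50
  num_visualizations: 50   # how many evaluation images TensorBoard displays
}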