tensorboard

Convolutional Neural Network - Dropout kills performance

生来就可爱ヽ(ⅴ<●) Submitted on 2020-01-07 03:58:12
Question: I'm building a convolutional neural network with TensorFlow (I'm new to both) in order to recognize letters. I'm seeing very strange behaviour with the dropout layer: if I leave it out (i.e. keep_prob at 1), the network performs quite well and learns (see the TensorBoard screenshots of accuracy and loss below, with training in blue and testing in orange). However, when I enable the dropout layer during the training phase (I tried 0.8 and 0.5), the network learns nothing: the loss falls quickly to around 3 or
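A side note not taken from the question: with "inverted" dropout, units kept at train time are scaled by 1/keep_prob so the expected activation is unchanged, and dropout must be disabled (keep_prob = 1) at evaluation time; forgetting the 1.0 feed at test time is a classic cause of this symptom. A minimal pure-Python sketch of that contract (the function name and values are illustrative, not the asker's code):

```python
import random

def dropout(values, keep_prob, training, seed=None):
    """Inverted dropout: at train time, keep each unit with probability
    keep_prob and scale survivors by 1/keep_prob so the expected
    activation is unchanged; at eval time, pass inputs through unchanged."""
    if not training or keep_prob >= 1.0:
        return list(values)
    rng = random.Random(seed)
    return [v / keep_prob if rng.random() < keep_prob else 0.0
            for v in values]

acts = [1.0, 2.0, 3.0, 4.0]
train_out = dropout(acts, keep_prob=0.5, training=True, seed=0)
eval_out = dropout(acts, keep_prob=0.5, training=False)
assert eval_out == acts  # dropout must be a no-op at evaluation time
```

For what it's worth, a letter-classification loss that flattens near 3 is close to ln(26) ≈ 3.26, the cross-entropy of a uniform guess over 26 classes, which is consistent with the network not learning at all rather than learning slowly.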

Tensorboard doesn't show scalars anymore

久未见 Submitted on 2020-01-05 08:08:33
Question: I decided to update TensorBoard because it wasn't showing the graph; on the graph panel all I could see was a blank page with no error message. Now that I have updated, the graph is the only thing my TensorBoard shows. I can no longer see scalars or histograms. I get the "No scalar data was found." message, and the same for histograms etc. These are the relevant parts of my code:

def train_model(self):
    with tf.Session(graph=self.graph) as session:
        session.run(tf.global_variables_initializer())  # Now

How can I code feed_dict

爷,独闯天下 Submitted on 2020-01-04 06:34:51
Question: Code which produces the AE (autoencoder):

x = tf.placeholder(tf.float32, [None, 784])
keep_prob = tf.placeholder("float")
for step in range(2000):
    batch_xs, batch_ys = mnist.train.next_batch(BATCH_SIZE)
    sess.run(train_step, feed_dict={x: batch_xs, keep_prob: (1 - DROP_OUT_RATE)})  # feed_dict
    if step % 10 == 0:
        summary_op = tf.merge_all_summaries()
        summary_str = sess.run(summary_op, feed_dict={x: batch_xs, keep_prob: 1.0})
        summary_writer.add_summary(summary_str, step)
    if step % 100 == 0:
        print(loss, eval
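A toy, TensorFlow-free analogue (every name below is invented for illustration) of what feed_dict does: it maps placeholder handles to concrete values for one sess.run call, which is why the same graph can be fed keep_prob = 1 - DROP_OUT_RATE on training steps and keep_prob = 1.0 when evaluating summaries:

```python
def run(fetch, feed_dict):
    """Toy stand-in for sess.run: evaluate `fetch`, a function of a dict
    keyed by placeholder handles, with the values supplied in feed_dict."""
    return fetch(feed_dict)

# "Placeholders" here are just handles (dict keys); from feed_dict's point
# of view, TF placeholders behave the same way.
x = "x"
keep_prob = "keep_prob"

# A "graph" that scales the sum of its inputs by keep_prob.
scaled_sum = lambda feed: sum(feed[x]) * feed[keep_prob]

train_value = run(scaled_sum, {x: [1.0, 2.0, 3.0], keep_prob: 0.5})  # 3.0
eval_value = run(scaled_sum, {x: [1.0, 2.0, 3.0], keep_prob: 1.0})   # 6.0
```

The point of the pattern: the graph is defined once, and each call chooses its own bindings, so nothing about dropout is "baked in" at graph-construction time.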

ValueError: Duplicate plugins for name projector

蹲街弑〆低调 Submitted on 2020-01-03 08:39:31
Question: Running tensorboard --logdir log_dir I get an error:

Traceback (most recent call last):
  File "/home/user/.local/bin/tensorboard", line 11, in <module>
    sys.exit(run_main())
  File "/home/user/.local/lib/python3.6/site-packages/tensorboard/main.py", line 64, in run_main
    app.run(tensorboard.main, flags_parser=tensorboard.configure)
  File "/home/user/.local/lib/python3.6/site-packages/absl/app.py", line 300, in run
    _run_main(main, args)
  File "/home/user/.local/lib/python3.6/site-packages/absl/app.py
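This error is commonly reported when two TensorBoard distributions are installed at once (e.g. tensorboard alongside tb-nightly, or a pip copy alongside a user-site copy), so the projector plugin gets registered twice. A stdlib-only sketch for spotting such duplicates; the package names checked are illustrative, and the commonly suggested fix is to uninstall every copy and reinstall exactly one:

```python
from importlib.metadata import distributions

def count_installed(names):
    """Count how many installed distributions match each name in `names`
    (case-insensitive). More than one hit for a name suggests the
    duplicate-plugin problem."""
    counts = {n: 0 for n in names}
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name in counts:
            counts[name] += 1
    return counts

counts = count_installed({"tensorboard", "tb-nightly"})
duplicated = sorted(n for n, c in counts.items() if c > 1)
if duplicated:
    print("Multiple copies installed:", ", ".join(duplicated))
```

importlib.metadata requires Python 3.8+; on the 3.6 interpreter in the traceback, the same check can be done by inspecting pip list output instead.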

Unable to use summary.merge in tensorboard for separate training and evaluation summaries

不羁的心 Submitted on 2020-01-01 08:37:27
Question: I am trying to use TensorBoard to watch the learning of a convolutional neural net. The tf.summary.merge_all function works fine for creating a single merged summary. However, I would like to track accuracy and loss for both the training and the test data. This post is useful: Logging training and validation loss in tensorboard. To make things easier to handle, I would like to merge my summaries into two merged summaries, one for training and one for validation. (I will add more stuff

TensorBoard doesn't show all data points

旧街凉风 Submitted on 2019-12-31 22:25:35
Question: I was running a very long training job (reinforcement learning with 20M steps) and writing a summary every 10k steps. Between steps 4M and 6M, I saw two peaks in my TensorBoard scalar chart for the game score; then I let it run and went to sleep. In the morning, it was at about step 12M, but the peaks between steps 4M and 6M that I had seen earlier had disappeared from the chart. I tried to zoom in and found that TensorBoard had (randomly?) skipped some of the data points. I also tried to export the data
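Not stated in the excerpt, but this is expected behaviour: TensorBoard downsamples each scalar series with reservoir sampling to bound memory, so as a run grows, previously visible points (including peaks) can be evicted from the kept sample; the --samples_per_plugin flag (e.g. scalars=10000) raises the cap. A sketch of the underlying technique, reservoir sampling (Algorithm R):

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Keep a uniform random sample of at most k items from a stream of
    unknown length, using O(k) memory (Vitter's Algorithm R)."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)
        else:
            j = rng.randrange(i + 1)
            if j < k:
                sample[j] = item  # an earlier point is evicted here
    return sample

# 20M steps with a summary every 10k steps -> 2000 points; a cap of 1000
# means roughly half of them, possibly including early peaks, get dropped.
kept = reservoir_sample(range(2000), k=1000)
```

Because eviction is random, a point that was visible at step 6M can legitimately be gone by step 12M, which matches the disappearing peaks described above. The full data remains in the event files; only the UI's sample is thinned.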

Tensorflow scalar Summary to human-readable text

谁说胖子不能爱 Submitted on 2019-12-31 03:59:16
Question: I want to inspect all values for a scalar in my event file. I don't want the aggregate statistics returned by

tensorboard --inspect --event_file <summary_file> --tag <scalar_tag>

I want enough information to reconstruct the scalar graph (i.e. the unsummarized, ordered (x, y) pairs). How can I do this with either TensorBoard or the TF Python API?

Answer 1: You could use a tf.train.summary_iterator, e.g.

my_pairs = []
for e in tf.train.summary_iterator(my_event_file_path):
    for v in e
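The answer's inner loop is cut off above. Assuming the standard pattern, it filters each event's summary values by tag and collects (step, simple_value) pairs. Since tf.train.summary_iterator needs a real event file, the loop shape is sketched here over stand-in objects mimicking the fields of TF's Event protos:

```python
from types import SimpleNamespace

def extract_scalar_pairs(events, tag):
    """Collect ordered (step, value) pairs for one scalar tag; `events`
    would be tf.train.summary_iterator(my_event_file_path) in practice."""
    pairs = []
    for e in events:
        for v in e.summary.value:
            if v.tag == tag:
                pairs.append((e.step, v.simple_value))
    return pairs

# Stand-in events with only the fields the loop touches:
fake_events = [
    SimpleNamespace(step=s, summary=SimpleNamespace(value=[
        SimpleNamespace(tag="loss", simple_value=float(10 - s))]))
    for s in range(3)
]
pairs = extract_scalar_pairs(fake_events, "loss")  # [(0, 10.0), (1, 9.0), (2, 8.0)]
```

Events written by the same run arrive in write order, so the collected pairs are already ordered by step and can be plotted directly.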

Save Tensorflow graph for viewing in Tensorboard without summary operations

主宰稳场 Submitted on 2019-12-30 03:49:08
Question: I have a rather complicated TensorFlow graph that I'd like to visualize for optimization purposes. Is there a function I can call that will simply save the graph for viewing in TensorBoard, without needing to annotate variables? I tried this:

merged = tf.merge_all_summaries()
writer = tf.train.SummaryWriter("/Users/Name/Desktop/tf_logs", session.graph_def)

But no output was produced. This is using the 0.6 wheel. This appears to be related: Graph visualisation is not showing in tensorboard

Unable to open Tensorboard in browser

只愿长相守 Submitted on 2019-12-29 22:45:30
Question: I am following the Google Cloud machine learning tutorial and I am unable to launch TensorBoard. I followed the steps in the tutorial (and also set up my environment using a Docker container) until typing the command below in the terminal:

tensorboard --logdir=data/ --port=8080

The terminal then outputs:

Starting TensorBoard 29 on port 8080 (You can navigate to http://172.17.0.2:8080)

When I visit http://172.17.0.2:8080 in my browser I see nothing (the server where this page is