TensorFlow summary merge error: Shape [-1,784] has negative dimensions

终归单人心 2021-01-12 04:24

I am trying to get a summary of the training process of the neural net below.

import tensorflow as tf
import numpy as np


        
3 Answers
  • 2021-01-12 05:12

    From a comment on a deleted answer, by the original poster:

    I actually built the neural net inside a with tf.Graph() as g block. I removed the interactive session and started the session as with tf.Session(graph=g) as sess instead. That fixed the problem.

    The graph g was not marked as the default graph that way, so the session (a tf.InteractiveSession in the original code) operated on a different graph instead.
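
    For reference, a minimal sketch of a setup that keeps graph and session in agreement (the variable names are illustrative):

    import tensorflow as tf

    g = tf.Graph()
    with g.as_default():
        # Ops created in this block are added to g, not to the global default graph.
        x = tf.placeholder(tf.float32, [None, 784])
        w = tf.Variable(tf.zeros([784, 10]))
        y = tf.matmul(x, w)
        init = tf.global_variables_initializer()  # the init op must live in g as well

    with tf.Session(graph=g) as sess:  # session explicitly bound to g
        sess.run(init)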

    Note that I stumbled upon this question because of the same error message. In my case, I accidentally had something like this:

    input_data = tf.placeholder(tf.float32, shape=(None, 50))
    input_data = tf.tanh(input_data)  # rebinds the name; the placeholder is no longer reachable
    session.run(..., feed_dict={input_data: ...})  # feeds the tanh op, not the placeholder
    

    I.e. I never fed the actual placeholder: the second assignment rebinds the name input_data to the tanh op, so the feed_dict key is no longer the placeholder. Some other tensor operations can then surface this confusing error, because internally an undefined (None) dimension is represented as -1.
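
    Keeping a separate name for the placeholder avoids the problem; a minimal, self-contained sketch (shapes and names are illustrative):

    import tensorflow as tf
    import numpy as np

    input_ph = tf.placeholder(tf.float32, shape=(None, 50))  # keep a handle to the placeholder
    hidden = tf.tanh(input_ph)                               # the derived op gets its own name

    with tf.Session() as session:
        out = session.run(hidden, feed_dict={input_ph: np.zeros((4, 50), np.float32)})
        print(out.shape)  # (4, 50)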

  • 2021-01-12 05:15

    This may have to do with how the InteractiveSession is initialized.

    I created it at the very beginning and then initialized the global variables inside the session; after that it worked.

    I am unable to reproduce the error with the old code, which makes it look unpredictable, or like some setting was cached somewhere.

    import tensorflow as tf

    # Create the InteractiveSession first; it installs itself as the default session.
    sess = tf.InteractiveSession()

    from tensorflow.examples.tutorials.mnist import input_data
    mnist = input_data.read_data_sets("MNIST_data/", one_hot=True)

    # Softmax regression on flattened 28x28 MNIST images.
    x = tf.placeholder(tf.float32, [None, 784])
    W = tf.Variable(tf.zeros([784, 10]))
    b = tf.Variable(tf.zeros([10]))
    y = tf.nn.softmax(tf.matmul(x, W) + b)
    y_ = tf.placeholder(tf.float32, [None, 10])

    cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))
    train_step = tf.train.GradientDescentOptimizer(0.05).minimize(cross_entropy)

    # Initialize the variables inside the (now default) session.
    sess.run(tf.global_variables_initializer())

    for _ in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        # print(batch_xs.shape, batch_ys.shape)
        sess.run(train_step, feed_dict={x: batch_xs, y_: batch_ys})
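
    Since the question is about merging summaries, one might extend the loop above like this (a minimal sketch using the TF1 tf.summary API; the log directory is illustrative):

    # The merged summary op depends on the placeholders,
    # so it must be run with the same feed_dict as the training step.
    tf.summary.scalar("cross_entropy", cross_entropy)
    merged = tf.summary.merge_all()
    writer = tf.summary.FileWriter("/tmp/mnist_logs", sess.graph)

    for step in range(1000):
        batch_xs, batch_ys = mnist.train.next_batch(100)
        summary, _ = sess.run([merged, train_step],
                              feed_dict={x: batch_xs, y_: batch_ys})
        writer.add_summary(summary, step)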
    
  • 2021-01-12 05:25

    I was also having this problem. Searching around, the basic consensus is to check for problems somewhere else in your code.

    What fixed it for me: I was calling sess.run(summary_op) without feeding in data for my placeholders.

    TensorFlow is a bit particular about placeholders: often it won't mind you not feeding them, as long as the part of the graph you evaluate is independent of them. A merged summary op, however, typically depends on all of them, so here it did mind.
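
    A minimal illustration of that behavior (names are hypothetical; older TF versions reported the unfed placeholder as a "negative dimensions" shape error):

    import tensorflow as tf
    import numpy as np

    x = tf.placeholder(tf.float32, [None, 784])
    tf.summary.scalar("mean", tf.reduce_mean(x))
    summary_op = tf.summary.merge_all()
    const = tf.constant(42.0)  # independent of the placeholder

    with tf.Session() as sess:
        sess.run(const)  # fine: this part of the graph does not need x
        # sess.run(summary_op)  # fails: the summary depends on the unfed placeholder x
        sess.run(summary_op, feed_dict={x: np.zeros((1, 784), np.float32)})  # fine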
