tensorflow, image segmentation convnet InvalidArgumentError: Input to reshape is a tensor with 28800000 values, but the requested shape has 57600

Submitted by 笑着哭i on 2020-01-06 06:46:07

Question


I am trying to segment images from the BRATS challenge. I am using U-net in a combination of these two repositories:

https://github.com/zsdonghao/u-net-brain-tumor

https://github.com/jakeret/tf_unet

When I try to output the prediction statistics, a shape mismatch error comes up:

InvalidArgumentError: Input to reshape is a tensor with 28800000 values, but the requested shape has 57600 [[Node: Reshape_2 = Reshape[T=DT_FLOAT, Tshape=DT_INT32, _device="/job:localhost/replica:0/task:0/device:CPU:0"](_arg_Cast_0_0, Reshape_2/shape)]]

I am using 240x240 image slices, with batch_verification_size = 500

Then the printed shapes are:

  • this is shape test_x: (500, 240, 240, 1)
  • this is shape test_y: (500, 240, 240, 1)
  • this is shape test x: (500, 240, 240, 1)
  • this is shape test y: (500, 240, 240, 1)
  • this is shape batch x: (500, 240, 240, 1)
  • this is shape batch y: (500, 240, 240, 1)
  • this is shape prediction: (500, 240, 240, 1)
  • this is cost : Tensor("add_88:0", shape=(), dtype=float32)
  • this is cost : Tensor("Mean_2:0",shape=(), dtype=float32)
  • this is shape prediction: (?, ?, ?, 1)
  • this is shape batch x: (500, 240, 240, 1)
  • this is shape batch y: (500, 240, 240, 1)

240 x 240 x 500 = 28800000, so I don't know why it is requesting 57600.
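Checking the arithmetic (plain Python, just the numbers from the error; variable names are only for illustration), the two sizes differ exactly by a factor of 500, which is my verification batch size:

fed_values = 500 * 240 * 240   # what I am feeding in: 28800000
requested = 240 * 240          # what the graph asks for: 57600
print(fed_values // requested) # 500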

It looks like the error is coming from the output_minibatch_stats function:

summary_str, loss, acc, predictions = sess.run([self.summary_op,
                                                self.net.cost,
                                                self.net.accuracy,
                                                self.net.predicter],
                                               feed_dict={self.net.x: batch_x,
                                                          self.net.y: batch_y,
                                                          self.net.keep_prob: 1.})

So something seems to go wrong inside the sess.run call. Below is the code where the error comes up (a shape-inspection sketch follows it). Does anybody have any idea what is happening?

    def store_prediction(self, sess, batch_x, batch_y, name):
        print('track 1')
        prediction = sess.run(self.net.predicter, feed_dict={self.net.x: batch_x,
                                                             self.net.y: batch_y,
                                                             self.net.keep_prob: 1.})
        print('track 2')
        pred_shape = prediction.shape

        loss = sess.run(self.net.cost, feed_dict={self.net.x: batch_x,
                                                  self.net.y: batch_y,
                                                  self.net.keep_prob: 1.})
        print('track 3')
        logging.info("Verification error= {:.1f}%, loss= {:.4f}".format(error_rate(prediction,
                                                                                   util.crop_to_shape(batch_y,
                                                                                                      prediction.shape)),
                                                                        loss))
        print('track 4')
        print('this is shape batch x: ' + str(batch_x.shape))
        print('this is shape batch y: ' + str(batch_y.shape))
        print('this is shape prediction: ' + str(prediction.shape))
        #img = util.combine_img_prediction(batch_x, batch_y, prediction)
        print('track 5')
        #util.save_image(img, "%s/%s.jpg"%(self.prediction_path, name))

        return pred_shape

    def output_epoch_stats(self, epoch, total_loss, training_iters, lr):
        logging.info("Epoch {:}, Average loss: {:.4f}, learning rate: {:.4f}".format(epoch, (total_loss / training_iters), lr))

    def output_minibatch_stats(self, sess, summary_writer, step, batch_x, batch_y):
        print('this is shape cost : ' + str(self.net.cost.shape))
        print('this is cost : ' + str(self.net.cost))
        print('this is  acc : ' + str(self.net.accuracy.shape))
        print('this is cost : ' + str(self.net.accuracy))
        print('this is shape prediction: ' + str(self.net.predicter.shape))
        print('this is shape batch x: ' + str(batch_x.shape))
        print('this is shape batch y: ' + str(batch_y.shape))


        # Calculate batch loss and accuracy
        summary_str, loss, acc, predictions = sess.run([self.summary_op, 
                                                            self.net.cost, 
                                                            self.net.accuracy, 
                                                            self.net.predicter], 
                                                           feed_dict={self.net.x: batch_x,
                                                                      self.net.y: batch_y,
                                                                      self.net.keep_prob: 1.})
        print('track 6')
        summary_writer.add_summary(summary_str, step)
        print('track 7')
        summary_writer.flush()
        logging.info("Iter {:}, Minibatch Loss= {:.4f}, Training Accuracy= {:.4f}, Minibatch error= {:.1f}%".format(step,
                                                                                                            loss,
                                                                                                            acc,
                                                                                                            error_rate(predictions, batch_y)))
        print('track 8')
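To see which shape the graph is actually asking for, one idea (not part of my code above, just a sketch for TF 1.x) is to evaluate the target-shape tensor of the Reshape op named in the error. The name "Reshape_2" is taken directly from the error message, the ":0" output index is an assumption, and sess, self.net, batch_x, batch_y are the same objects as in output_minibatch_stats:

import tensorflow as tf

# Look up the shape tensor feeding the failing Reshape op and print it.
graph = tf.get_default_graph()
reshape_target = graph.get_tensor_by_name("Reshape_2/shape:0")
print("requested reshape target:", sess.run(reshape_target,
                                            feed_dict={self.net.x: batch_x,
                                                       self.net.y: batch_y,
                                                       self.net.keep_prob: 1.}))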

Answer 1:


You set your batch size to 1 in your TensorFlow pipeline during training, but you are feeding a batch size of 500 in your test data. That is why the network requests a tensor with only 57600 values (1 x 240 x 240). You can either set your training batch size to 500 or your test batch size to 1.
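For illustration, here is a minimal sketch of how a batch-size-agnostic graph usually looks in TF 1.x. This is not the exact code from tf_unet or u-net-brain-tumor; the placeholder names, slice geometry, and class count are assumptions based on the question. The point is that the leading dimension is left as None and any reshape uses -1, instead of baking batch_size = 1 (1 * 240 * 240 = 57600) into the graph:

import tensorflow as tf  # TF 1.x style, matching the code in the question

nx, ny, channels, n_class = 240, 240, 1, 2   # assumed slice geometry / class count

# Batch-size-agnostic placeholders: the leading dimension is None, so both
# a training batch of 1 and a verification batch of 500 are accepted.
x = tf.placeholder(tf.float32, shape=[None, nx, ny, channels], name="x")
y = tf.placeholder(tf.float32, shape=[None, nx, ny, n_class], name="y")

# Let TensorFlow infer the leading dimension with -1 rather than hard-coding
# batch_size * nx * ny, which is what produces the fixed 57600 in the error.
flat_y = tf.reshape(y, [-1, n_class])

Alternatively, keep the graph unchanged and simply make the test batch size match the training batch size, as described above.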



Source: https://stackoverflow.com/questions/50552806/tensorflow-image-segmentation-convnet-invalidargumenterror-input-to-reshape-is
