I am trying a simple TensorFlow demo from a GitHub link. I'm currently using Python version 3.5.2. The demo is extracted to:
Z:\downloads\tensorflow_demo-master\
You can modify the function:

def _read32(bytestream):
    dt = numpy.dtype(numpy.uint32).newbyteorder('>')
    return numpy.frombuffer(bytestream.read(4), dtype=dt)

New version:

def _read32(bytestream):
    dt = numpy.dtype(numpy.uint32).newbyteorder('>')
    return numpy.frombuffer(bytestream.read(4), dtype=dt)[0]

That is, add [0] at the end.
This appears to be an issue with the latest version of NumPy. A recent change made it an error to treat a single-element array as a scalar for indexing purposes.
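To make that concrete, here is a minimal sketch (not taken from the demo; the 4-byte stream is fabricated, and the helper is duplicated as _read32_old / _read32_fixed purely for comparison). Without the [0], _read32 hands back a one-element array, and using that array where an integer index or byte count is required triggers the same TypeError reported in the question:

import io
import operator

import numpy

def _read32_old(bytestream):
    dt = numpy.dtype(numpy.uint32).newbyteorder('>')
    return numpy.frombuffer(bytestream.read(4), dtype=dt)      # 1-element array

def _read32_fixed(bytestream):
    dt = numpy.dtype(numpy.uint32).newbyteorder('>')
    return numpy.frombuffer(bytestream.read(4), dtype=dt)[0]   # NumPy scalar

# A fabricated 4-byte big-endian field, standing in for one MNIST header value.
header = (28).to_bytes(4, byteorder='big')

old_value = _read32_old(io.BytesIO(header))     # array([28], dtype=uint32)
new_value = _read32_fixed(io.BytesIO(header))   # 28

operator.index(new_value)        # fine: a scalar converts to a Python index
try:
    operator.index(old_value)    # the 1-element array cannot be treated as a scalar index
except TypeError as err:
    # On the NumPy version from the question this prints:
    # only integer scalar arrays can be converted to a scalar index
    print(err)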
The code you have linked uses a separate file named input_data.py
to download the MNIST data, via the following two lines in board.py:
import input_data
mnist = input_data.read_data_sets("/tmp/data/",one_hot=True)
Since the MNIST data is so frequently used for demonstration purposes, TensorFlow provides a way to download it automatically.
Replace the above two lines in board.py
with the following two lines and the error will disappear.
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)
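For reference, here is a minimal sketch of how the automatically downloaded data can then be used (this assumes TensorFlow 1.x, where tensorflow.examples.tutorials.mnist is available; the batch size of 100 and the print statements are just for illustration):

from tensorflow.examples.tutorials.mnist import input_data

# Downloads the four MNIST .gz files into ./MNIST_data on the first run,
# then reuses the local copies on subsequent runs.
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

print(mnist.train.images.shape)    # (55000, 784): flattened 28x28 images
print(mnist.train.labels.shape)    # (55000, 10): one-hot labels
batch_xs, batch_ys = mnist.train.next_batch(100)   # mini-batch for training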
This file is likely corrupt:
Z:/downloads/MNIST dataset\train-images-idx3-ubyte.gz
Let's analyze the error you posted.
This indicates that the code is currently working with the file in question:
Extracting Z:/downloads/MNIST dataset\train-images-idx3-ubyte.gz
Traceback indicates that a stack trace follows:
Traceback (most recent call last):
This indicates that you read your data sets from 'Z:/downloads/MNIST dataset':
File "board.py", line 3, in <module>
mnist = input_data.read_data_sets(r'Z:/downloads/MNIST dataset', one_hot=True)
This indicates that the code is extracting images:
File "Z:\downloads\tensorflow_demo-master\tensorflow_demo-master\input_data.py", line 150, in read_data_sets
train_images = extract_images(local_file)
This indicates that the code is expected to read rows * cols * num_images bytes:
File "Z:\downloads\tensorflow_demo-master\tensorflow_demo-master\input_data.py", line 40, in extract_images
buf = bytestream.read(rows * cols * num_images)
This is the line that errors:
File "C:\Users\surak\AppData\Local\Programs\Python\Python35\lib\gzip.py", line 274, in read
return self._buffer.read(size)
TypeError: only integer scalar arrays can be converted to a scalar index
I expect size is the problematic value; it was calculated on the previous line of the stack trace.
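You can check that hypothesis without touching the demo. In this quick sketch the header values are fabricated, but they have the shape they would have when _read32 returns numpy.frombuffer(...) without the trailing [0]: the product of three one-element arrays is itself a one-element array, not the plain integer that read() expects.

import numpy

# Fabricated MNIST header values, each as a 1-element big-endian uint32 array.
num_images = numpy.array([60000], dtype='>u4')
rows = numpy.array([28], dtype='>u4')
cols = numpy.array([28], dtype='>u4')

size = rows * cols * num_images
print(type(size), size.shape)   # <class 'numpy.ndarray'> (1,): not a plain int
print(size[0])                  # 47040000, the byte count read() actually needs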
I can see at least two ways to proceed.
Delete the offending file and see if the problem goes away. This would allow you to verify that the file is somehow corrupt; the header check sketched after this list is a non-destructive way to do the same.
Use a debugger to step into the code and then inspect the values used to calculate the offending variable. Use the knowledge gained to proceed from there.
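Here is the header check mentioned in option 1, sketched as a short script (the path is taken from your traceback; adjust it to your machine). It reads only the 16-byte MNIST header: an intact train-images-idx3-ubyte.gz reports magic number 2051, 60000 images, and 28x28 pixels. The printed values are also exactly the ones option 2 would inspect in a debugger.

import gzip

import numpy

def _read32(bytestream):
    # Same helper as in input_data.py, with the [0] fix applied.
    dt = numpy.dtype(numpy.uint32).newbyteorder('>')
    return numpy.frombuffer(bytestream.read(4), dtype=dt)[0]

# Path taken from the traceback; adjust to wherever the file lives for you.
path = r'Z:/downloads/MNIST dataset/train-images-idx3-ubyte.gz'

with gzip.open(path, 'rb') as bytestream:
    magic = _read32(bytestream)
    num_images = _read32(bytestream)
    rows = _read32(bytestream)
    cols = _read32(bytestream)

# An intact file prints: 2051 60000 28 28
print(magic, num_images, rows, cols)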