I have been trying to feed 1750x1750 images into TensorFlow, but I do not know how to label and feed the data after converting the images into a Tensor using the tf.image.decode_jpeg op.
Depending on what you are trying to do, there are several directions to consider.
If you just wish to run inference on an arbitrary JPEG file (i.e. labels are not required), then you can follow the example of classify_image.py, which feeds a JPEG image into a pre-trained Inception network:
github.com/tensorflow/models/blob/master/tutorials/image/imagenet/classify_image.py
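For reference, here is a minimal sketch of that single-image path. The file name and the 299x299 Inception input size are assumptions on my part; classify_image.py itself feeds the raw JPEG bytes directly into the pre-trained graph:

    import tensorflow as tf

    # Sketch: decode one JPEG and shape it for a classifier.
    # 'my_image.jpg' is a placeholder path.
    image_data = tf.gfile.FastGFile('my_image.jpg', 'rb').read()

    image = tf.image.decode_jpeg(image_data, channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)
    image = tf.image.resize_images(image, [299, 299])  # 1750x1750 -> 299x299
    batch = tf.expand_dims(image, 0)                   # add a batch dimension

    with tf.Session() as sess:
        print(sess.run(batch).shape)                   # (1, 299, 299, 3)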
If you do wish to train (or fine-tune) a model on a small custom data set of JPEG images, then take a look at this example, which retrains the final layer of a pre-trained Inception model on a new set of labeled JPEG images:
github.com/tensorflow/tensorflow/blob/master/tensorflow/examples/image_retraining/retrain.py
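Note that retrain.py derives the labels from the directory layout: one sub-directory per class. If you just need labels for your own feeding code, the same convention works; a minimal sketch, assuming a hypothetical images/ directory laid out that way:

    import os

    # Assumed layout: images/<class_name>/*.jpg, one sub-directory
    # per class -- the same layout retrain.py expects.
    image_dir = 'images'
    class_names = sorted(os.listdir(image_dir))

    filenames, labels = [], []
    for index, name in enumerate(class_names):
        class_dir = os.path.join(image_dir, name)
        for filename in os.listdir(class_dir):
            filenames.append(os.path.join(class_dir, filename))
            labels.append(index)  # integer label = class index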
If you do wish to train (or fine-tune) a model on a large custom data set of JPEG images, then reading many individual JPEG files will be inefficient and slow down training tremendously.
I would suggest following the procedure described in the inception/ model library, which converts a directory of JPEG images into sharded TFRecord files containing the serialized JPEG images and their labels:
github.com/tensorflow/models/blob/master/research/inception/inception/data/build_image_data.py
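In essence, the script stores each encoded JPEG together with its integer label in an Example proto. Here is a stripped-down sketch of that write path (the shard name, image path, and label value are placeholders; the feature keys match the ones build_image_data.py uses):

    import tensorflow as tf

    def _bytes_feature(value):
        return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

    def _int64_feature(value):
        return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

    # Placeholders: a single shard, one image, label 3.
    writer = tf.python_io.TFRecordWriter('train-00000-of-00001')
    with open('my_image.jpg', 'rb') as f:
        encoded_jpeg = f.read()

    example = tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': _bytes_feature(encoded_jpeg),
        'image/class/label': _int64_feature(3),
    }))
    writer.write(example.SerializeToString())
    writer.close()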
Instructions for running the conversion script are available here:
github.com/tensorflow/models/blob/master/research/inception/README.md#how-to-construct-a-new-dataset-for-retraining
After running the conversion, you can then use (or copy) the image preprocessing pipeline from the inception/ model:
github.com/tensorflow/models/blob/master/research/inception/inception/image_processing.py
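The read side parses those Example protos back into decoded, batched image tensors. A bare-bones sketch of the idea (image_processing.py adds image distortion, multi-threaded batching, and more; the shard name and 299x299 size are again assumptions):

    import tensorflow as tf

    # Queue of TFRecord shards to read (placeholder file name).
    filename_queue = tf.train.string_input_producer(['train-00000-of-00001'])
    reader = tf.TFRecordReader()
    _, serialized = reader.read(filename_queue)

    features = tf.parse_single_example(serialized, {
        'image/encoded': tf.FixedLenFeature([], tf.string),
        'image/class/label': tf.FixedLenFeature([], tf.int64),
    })
    image = tf.image.decode_jpeg(features['image/encoded'], channels=3)
    image = tf.image.convert_image_dtype(image, tf.float32)
    image = tf.image.resize_images(image, [299, 299])
    label = tf.cast(features['image/class/label'], tf.int32)

    # Shuffle into mini-batches (requires tf.train.start_queue_runners
    # inside a session before evaluating these tensors).
    images, labels = tf.train.shuffle_batch(
        [image, label], batch_size=32, capacity=1000, min_after_dequeue=500)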