Question
I tried the following, expecting to see the grayscale version of the source image:
from PIL import Image
import numpy as np
img = Image.open("img.png").convert('L')
arr = np.array(img.getdata())
field = np.resize(arr, (img.size[1], img.size[0]))
out = field
img = Image.fromarray(out, mode='L')
img.show()
But for some reason, the whole image comes out as mostly scattered dots with black in between. Why does this happen?
Answer 1:
When you create the NumPy array from the pixel data of your Pillow object, be advised that the array's default data type is a platform-dependent integer (int64 on most systems, int32 on Windows), not uint8. I'm assuming that your data is actually uint8, as most images seen in practice are. Therefore, you must explicitly ensure that the array has the same type as the pixels in your image. Simply put, make the array uint8 when you get the image data, which is the fourth line in your code¹:
arr = np.array(img.getdata(), dtype=np.uint8) # Note the dtype input
¹ Take note that I've added two lines at the beginning of your code to import the necessary packages, so that this code works on its own (albeit with an image stored locally).
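Putting the fix back into the original script, a minimal corrected sketch might look like the following. It assumes the same img.png from the question, and uses reshape in place of np.resize, since the flat pixel sequence from getdata() already holds exactly width * height elements:

from PIL import Image
import numpy as np

# Open the image and convert it to 8-bit grayscale ('L' mode).
img = Image.open("img.png").convert('L')

# Without dtype=np.uint8 this array would default to a platform integer
# (int64/int32), which Image.fromarray(..., mode='L') reads byte by byte,
# producing the scattered dots from the question.
arr = np.array(img.getdata(), dtype=np.uint8)

# getdata() returns pixels row by row, so reshape to (height, width);
# note that img.size is (width, height).
field = arr.reshape((img.size[1], img.size[0]))

out = Image.fromarray(field, mode='L')
out.show()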
Source: https://stackoverflow.com/questions/39102051/read-the-picture-as-a-grayscale-numpy-array-and-save-it-back