Something weird is going on with cv2.imshow. I was writing a piece of code and wondering why one of my operations wasn't working (as diagnosed by observing cv2.imshow).
When you use cv2.imshow, you should know:
imshow(winname, mat) -> None

The function may scale the image, depending on its depth:

- If the image is 8-bit unsigned, it is displayed as is.
- If the image is 16-bit unsigned or 32-bit integer, the pixels are divided by 256. That is, the value range [0,255*256] is mapped to [0,255].
- If the image is 32-bit or 64-bit floating-point, the pixel values are multiplied by 255. That is, the value range [0,1] is mapped to [0,255].
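As a quick illustration of the floating-point rule, here is a minimal sketch (not part of the original code; the window names and ramp values are arbitrary) that displays a float32 ramp in [0,1] next to the same ramp scaled to [0,40]. The first appears as a smooth gradient, while the second is clipped to white almost everywhere once imshow multiplies it by 255.

import numpy as np
import cv2

# float32 ramp from 0.0 to 1.0, repeated over 100 rows
ramp01 = np.tile(np.linspace(0.0, 1.0, 256, dtype=np.float32), (100, 1))
# same ramp stretched to [0, 40]: everything above 1.0 saturates after the *255 scaling
ramp40 = ramp01 * 40.0

cv2.imshow("ramp in [0,1] - smooth gradient", ramp01)
cv2.imshow("ramp in [0,40] - mostly white", ramp40)
cv2.waitKey()
cv2.destroyAllWindows()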
The function distanceTransform returns a float image. So when you display dist directly, imshow first multiplies it by 255 and then clips to [0,255], so the result looks like a binary image (0*255 => 0, 1*255 => 255, and anything larger also saturates to 255).
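To see this, here is a minimal sketch (using a synthetic white square instead of the panda image) that prints the dtype and value range of the distance transform; any pixel whose distance is 1 or more already saturates to 255 once imshow multiplies by 255.

import numpy as np
import cv2

# synthetic binary image: white square on a black background
img = np.zeros((100, 100), np.uint8)
img[20:80, 20:80] = 255

dist = cv2.distanceTransform(img, distanceType=cv2.DIST_L2, maskSize=5)
print(dist.dtype, dist.min(), dist.max())   # e.g. float32 0.0 ~30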
To display it correctly:

(1) You can clip the float dist to [0,255] and change the datatype to np.uint8 with cv2.convertScaleAbs:

dist1 = cv2.convertScaleAbs(dist)
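A minimal sketch with toy values (not the panda data) of what cv2.convertScaleAbs does: it computes |alpha*x + beta| (alpha=1, beta=0 by default), rounds, and saturates to the uint8 range, so anything above 255 is clipped rather than rescaled.

import numpy as np
import cv2

vals = np.float32([[0.4, 1.7, 100.2, 300.0, -20.0]])
print(cv2.convertScaleAbs(vals))
# roughly [[  0   2 100 255  20]]: values above 255 clip, negatives take the absolute value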
(2) You can also normalize the float dist to [0,255] and change the datatype with cv2.normalize:

dist2 = cv2.normalize(dist, None, 255, 0, cv2.NORM_MINMAX, cv2.CV_8UC1)
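For comparison, a minimal sketch with toy values showing the cv2.normalize call: with cv2.NORM_MINMAX, the input's own minimum and maximum are stretched to the requested bounds, so the full float range survives as a visible gradient instead of being clipped.

import numpy as np
import cv2

vals = np.float32([[0.0, 7.5, 15.0, 30.0]])
out = cv2.normalize(vals, None, 255, 0, cv2.NORM_MINMAX, cv2.CV_8UC1)
print(out)   # roughly [[  0  64 128 255]]: min maps to 0, max maps to 255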
Here is an example with a panda image. Full code:
#!/usr/bin/python3
# 2018.01.19 10:24:58 CST
import cv2

# Read the panda image as grayscale
img = cv2.imread("panda.png", 0)
# distanceTransform returns a float32 image
dist = cv2.distanceTransform(src=img, distanceType=cv2.DIST_L2, maskSize=5)

# (1) clip to [0,255] and convert to uint8
dist1 = cv2.convertScaleAbs(dist)
# (2) normalize to [0,255] and convert to uint8
dist2 = cv2.normalize(dist, None, 255, 0, cv2.NORM_MINMAX, cv2.CV_8UC1)

cv2.imshow("dist", dist)    # displayed almost as a binary image
cv2.imshow("dist1", dist1)
cv2.imshow("dist2", dist2)
cv2.waitKey()