Question
I am doing image processing in a scientific context. Whenever I need to save an image to the hard drive, I want to be able to reopen it at a later time and get exactly the data I had before saving it. I exclusively use the PNG format, having always been under the impression that it is a lossless format. Is this always correct, provided I am not using the wrong bit depth? Should the encoder and decoder play no role at all? Specifically, the images I save
- are present as 2D numpy arrays
- have integer values from 0 to 255
- are encoded with the OpenCV imwrite() function, e.g. cv2.imwrite("image.png", array)
Answer 1:
PNG is a lossless format by design:
Since PNG's compression is fully lossless--and since it supports up to 48-bit truecolor or 16-bit grayscale--saving, restoring and re-saving an image will not degrade its quality, unlike standard JPEG (even at its highest quality settings).
The encoder and decoder should not matter with regard to reading the images correctly (assuming, of course, they are not buggy).
And unlike TIFF, the PNG specification leaves no room for implementors to pick and choose what features they'll support; the result is that a PNG image saved in one app is readable in any other PNG-supporting application.
Answer 2:
While PNG is lossless, that does not mean it is uncompressed by default.
You can specify the compression level with the IMWRITE_PNG_COMPRESSION flag. It varies between 0 (no compression) and 9 (maximum compression). So if you want an uncompressed PNG:
cv2.imwrite(filename, data, [cv2.IMWRITE_PNG_COMPRESSION, 0])
The more you compress, the longer it takes to save.
Link to docs
Source: https://stackoverflow.com/questions/47884976/am-i-creating-lossless-png-images