Is there any way to save a numpy array as a 16 bit image (tif, png) using any of the commonly available python packages? This is the only way that I could get to work in the
As mentioned, PyPNG is very useful. For conda users it can be installed, e.g., from the eaton-lab channel:
conda install -c eaton-lab pypng
I'd use the from_array method off the shelf:
import png
import numpy as np
bit_depth = 16
my_array = np.ones((800, 800, 3))
png.from_array((my_array * (2**bit_depth - 1)).astype(np.uint16), 'RGB;%s' % bit_depth).save('foo.png')
The mode uses a PIL-style format string, e.g. 'L', 'LA', 'RGB' or 'RGBA', followed by ';16' or ';8' to set the bit depth. If the bit depth is omitted, the dtype of the array is used.
Read more in the pypng documentation.
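For example, a 16-bit grayscale image could be written like this (a minimal sketch; the gradient data and file name are just for illustration):
import numpy as np
import png

# A 16-bit grayscale ramp, already scaled to the full 0..65535 range.
gradient = np.tile(np.linspace(0, 65535, 512, dtype=np.uint16), (64, 1))

# 'L;16' means greyscale with 16 bits per sample.
png.from_array(gradient, 'L;16').save('gradient_gray16.png')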
This explanation of png and numpngw is very helpful! But there is one small "mistake" I thought I should mention. In the conversion to 16 bit unsigned integers, the y.max() should have been y.min(). For the picture of random colors it didn't really matter, but for a real picture we need to do it right. Here's the corrected line of code:
z = (65535*((y - y.min())/y.ptp())).astype(np.uint16)
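In other words, the scaling should map the minimum of y to 0 and the maximum to 65535. Wrapped up as a small helper (the function name is just for illustration):
import numpy as np

def to_uint16(y):
    # Linearly rescale so that y.min() maps to 0 and y.max() maps to 65535.
    y = np.asarray(y, dtype=np.float64)
    return (65535 * (y - y.min()) / np.ptp(y)).astype(np.uint16)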
You can convert your 16-bit array to a two-channel image (or even a 24-bit array to a 3-channel image). Something like this works fine, and only numpy and OpenCV are required:
import numpy as np
import cv2

arr = np.random.randint(0, 2 ** 16, (128, 128), dtype=np.uint16)  # 16-bit array
print(arr.min(), arr.max(), arr.dtype)

# Split each 16-bit value into a high byte and a low byte and store them in two 8-bit channels.
img_bgr = np.zeros((*arr.shape, 3), np.uint8)
img_bgr[:, :, 0] = arr // 256  # high byte
img_bgr[:, :, 1] = arr % 256   # low byte
cv2.imwrite('arr.png', img_bgr)
# Read image and check if our array is restored without losing precision
img_bgr_read = cv2.imread('arr.png')
B, G, R = np.split(img_bgr_read, [1, 2], 2)
arr_read = (B.astype(np.uint16) * 256 + G).squeeze()
print(np.allclose(arr, arr_read), np.max(np.abs(arr_read - arr)))
Result:
0 65523 uint16
True 0
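As far as I know, OpenCV can also write a single-channel uint16 array straight to a 16-bit PNG, which avoids the byte packing altogether; a minimal sketch:
import numpy as np
import cv2

arr = np.random.randint(0, 2 ** 16, (128, 128), dtype=np.uint16)
cv2.imwrite('arr16.png', arr)  # PNG supports 16-bit samples, so the dtype is kept

# IMREAD_UNCHANGED preserves the 16-bit depth when reading back.
arr_read = cv2.imread('arr16.png', cv2.IMREAD_UNCHANGED)
print(arr_read.dtype, np.array_equal(arr, arr_read))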
I created a custom script to do this using just numpy and OpenCV (it still feels like huge overkill, though):
import numpy as np
import cv2

def save_gray_deep_bits(filepath, float_array, bitdepth=16):
    # Encode a 2D float image (values in 0..255) into the first bitdepth/8
    # channels of an 8-bit BGR PNG, one byte plane per channel.
    assert bitdepth in [8, 16, 24]
    arr = np.squeeze(float_array)
    assert len(arr.shape) == 2
    assert '.png' in filepath
    bit_iterations = bitdepth // 8
    img_bgr = np.zeros((*arr.shape, 3), np.uint8)
    encoded = np.zeros(arr.shape, np.float64)
    for i in range(bit_iterations):
        residual = arr - encoded
        plane_i = (residual * (256 ** i)).astype(np.uint8)
        img_bgr[:, :, i] = plane_i
        encoded += plane_i.astype(np.float64) / (256 ** i)
    cv2.imwrite(filepath, img_bgr)
    return img_bgr

def bgr_to_gray_deep_bits(bgr_array, bitdepth=16):
    # Reassemble the float image from the byte planes stored in the channels.
    gray = np.zeros((bgr_array.shape[0], bgr_array.shape[1]), dtype=np.float64)
    for i in range(bitdepth // 8):
        gray += bgr_array[:, :, i] / float(256 ** i)
    return gray

def load_gray_deep_bits(filepath, bitdepth=16):
    bgr_image = cv2.imread(filepath).astype(np.float64)
    return bgr_to_gray_deep_bits(bgr_image, bitdepth=bitdepth)

bd = 24
gray_image_full_precision = np.random.rand(1024, 1024) * 255.
save_gray_deep_bits('test.png', gray_image_full_precision, bitdepth=bd)

# Read the image back and check how much precision was lost.
bgr_image = cv2.imread('test.png').astype(np.float64)
gray_reconstructed = bgr_to_gray_deep_bits(bgr_image, bitdepth=bd)
avg_residual = np.mean(np.abs(gray_reconstructed - gray_image_full_precision))
print("avg pixel residual: %.3f" % avg_residual)
One alternative is to use pypng. You'll still have to install another package, but it is pure Python, so that should be easy. (There is actually a Cython file in the pypng source, but its use is optional.)
Here's an example of using pypng to write numpy arrays to PNG:
import png
import numpy as np
# The following import is just for creating an interesting array
# of data. It is not necessary for writing a PNG file with PyPNG.
from scipy.ndimage import gaussian_filter
# Make an image in a numpy array for this demonstration.
nrows = 240
ncols = 320
np.random.seed(12345)
x = np.random.randn(nrows, ncols, 3)
# y is our floating point demonstration data.
y = gaussian_filter(x, (16, 16, 0))
# Convert y to 16 bit unsigned integers.
z = (65535*((y - y.min())/y.ptp())).astype(np.uint16)
# Use pypng to write z as a color PNG.
with open('foo_color.png', 'wb') as f:
    writer = png.Writer(width=z.shape[1], height=z.shape[0], bitdepth=16)
    # Convert z to the Python list of lists expected by
    # the png writer.
    z2list = z.reshape(-1, z.shape[1]*z.shape[2]).tolist()
    writer.write(f, z2list)
# Here's a grayscale example.
zgray = z[:, :, 0]
# Use pypng to write zgray as a grayscale PNG.
with open('foo_gray.png', 'wb') as f:
    writer = png.Writer(width=z.shape[1], height=z.shape[0], bitdepth=16, greyscale=True)
    zgray2list = zgray.tolist()
    writer.write(f, zgray2list)
Here's the color output:
and here's the grayscale output:
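If you want to verify the files, they can be read back with pypng's Reader; a small sketch (the reshaping below is just one way to do it):
import numpy as np
import png

reader = png.Reader(filename='foo_gray.png')
width, height, rows, info = reader.read()
print(info['bitdepth'], info['greyscale'])  # expect 16 and True

# Rebuild the numpy array from the iterator of rows.
zgray_read = np.vstack([np.asarray(row, dtype=np.uint16) for row in rows])
print(zgray_read.shape, zgray_read.dtype)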
Update: I recently created a github repository for a module called numpngw that provides a function for writing a numpy array to a PNG file. The repository has a setup.py file for installing it as a package, but the essential code is in a single file, numpngw.py, that could be copied to any convenient location. The only dependency of numpngw is numpy.
Here's a script that generates the same 16 bit images as those shown above:
import numpy as np
import numpngw
# The following import is just for creating an interesting array
# of data. It is not necessary for writing a PNG file with numpngw.
from scipy.ndimage import gaussian_filter
# Make an image in a numpy array for this demonstration.
nrows = 240
ncols = 320
np.random.seed(12345)
x = np.random.randn(nrows, ncols, 3)
# y is our floating point demonstration data.
y = gaussian_filter(x, (16, 16, 0))
# Convert y to 16 bit unsigned integers.
z = (65535*((y - y.min())/y.ptp())).astype(np.uint16)
# Use numpngw to write z as a color PNG.
numpngw.write_png('foo_color.png', z)
# Here's a grayscale example.
zgray = z[:, :, 0]
# Use numpngw to write zgray as a grayscale PNG.
numpngw.write_png('foo_gray.png', zgray)
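Note that, as I understand it, write_png picks the PNG bit depth from the array's dtype, so passing uint8 data would produce an ordinary 8-bit file instead; for example, continuing from the script above:
# Scale the 16-bit data down to 8 bits; the uint8 dtype should make
# numpngw emit an 8-bit PNG.
z8 = (z // 256).astype(np.uint8)
numpngw.write_png('foo_color_8bit.png', z8)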