Question
I am working on a sensor-based Python application built on a PyQt4 GUI. The sensor generates 16-bit measurements: 256 16-bit "pixels" per "line". A square "image" is acquired by obtaining 256 lines, resulting in a (256, 256) NumPy array of 16-bit numbers. I simply want to display this as a grayscale image.

The sensor loop runs in a QThread and emits a QImage signal. The signal connects to a slot that renders the data in the main GUI by packing it into a 32-bit RGB image. Of course, to pack the 16-bit grayscale pixels into a 32-bit RGB image, I am forced to scale the 16-bit pixels down to 8-bit, and a substantial amount of dynamic range is lost.

A MWE is provided below that shows my current strategy (this is obviously not my larger threaded sensor-based application; it simply extracts the salient portions). Please note that I am a Python beginner and I'm doing my best to keep up...
#!/usr/bin/python
# -*- coding: utf-8 -*-
"""
Grayscale to RGB32 QPixmap tests
"""
import sys
import numpy as np
from PyQt4 import QtGui, QtCore


class PixmapTest(QtGui.QWidget):
    def __init__(self):
        super(PixmapTest, self).__init__()
        self.initUI()

    def initUI(self):
        imglayout = QtGui.QHBoxLayout(self)

        # Simulated sensor frame: 16-bit values, declared uint32 up front
        img_16bit = np.random.randint(0, 65535, size=(256, 256)).astype(np.uint32)
        # Scale 16-bit -> 8-bit (this is where the dynamic range is lost)
        img_16bit_to_8bit = (img_16bit / 65535.0 * 255).astype(np.uint32)
        # Pack the same 8-bit value into R, G, and B (0xffRRGGBB per pixel)
        packed_img_array = (255 << 24 | img_16bit_to_8bit << 16 |
                            img_16bit_to_8bit << 8 | img_16bit_to_8bit).flatten()
        img = QtGui.QImage(packed_img_array, 256, 256, QtGui.QImage.Format_RGB32)
        pixmap = QtGui.QPixmap(img.scaledToWidth(img.width() * 2))

        imglabel = QtGui.QLabel(self)
        imglabel.setPixmap(pixmap)
        imglayout.addWidget(imglabel)

        self.setLayout(imglayout)
        self.move(300, 200)
        self.setWindowTitle('QPixmap Test')
        self.show()


def main():
    app = QtGui.QApplication(sys.argv)
    form = PixmapTest()
    sys.exit(app.exec_())


if __name__ == '__main__':
    main()
Specifically, my questions are:
1. Is there a better way? The solution has to remain "lightweight" (i.e., PyQt4 QImage/QPixmap). I can't use Matplotlib or anything heavyweight, as it is too slow. The closer to native Python/NumPy the better. I realize this is ultimately a limitation of the QImage class, but I was hoping there was a clever solution I'm just not seeing that lets me keep my current signal/slot "wiring".
2. Through experimentation, I've found that I have to declare all arrays that ultimately end up in the QImage as np.uint32 (though np.int32 seems to work as well). It doesn't work if I only declare the penultimate array as uint32/int32, and I don't understand why (see the dtype sketch after this list).
3. I've played around with altering luminosity using

Y' = 0.2126 * R + 0.7152 * G + 0.0722 * B

and other similar conversions. I'm probably "polishing a turd" here, but I thought I'd include this because other answers on SX seem to indicate it is important. Notwithstanding the loss of dynamic range, simply assigning the same value to R, G, and B, as in my MWE, seems to work (see the luma sketch after this list).
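Regarding question 2, a plausible explanation (my reading, not something confirmed in this thread): QImage treats the buffer it is handed as raw bytes, 4 per pixel for Format_RGB32, while NumPy's default integer dtype is the platform int, typically int64 on 64-bit Linux/macOS builds. If any intermediate array falls back to that default, the packed buffer ends up 8 bytes per element and QImage walks it incorrectly. A minimal sketch of the byte widths involved:

import numpy as np

v = np.random.randint(0, 256, size=(256, 256))  # default dtype: the platform int
packed = 255 << 24 | v << 16 | v << 8 | v       # arithmetic stays in that default dtype

print(packed.itemsize)                     # 8 on most 64-bit builds: 8 bytes per element
print(packed.astype(np.uint32).itemsize)   # 4: the 4 bytes per pixel Format_RGB32 reads

Declaring the arrays np.uint32 from the start keeps every intermediate at 4 bytes per element, which would explain why the cast has to happen early rather than on the penultimate array.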
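As for question 3, the luma formula can't add anything here, since its coefficients sum to 1.0: for a gray pixel with R = G = B = v it returns exactly v. A quick sketch (the rgb array is made-up data, purely for illustration):

import numpy as np

rgb = np.random.rand(256, 256, 3)  # made-up (H, W, 3) float RGB image

# Rec. 709 luma, as in the question -- only matters when channels differ:
luma = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]

# For gray pixels (R == G == B == v) the weights sum to 1, so luma == v:
v = rgb[..., 0]
gray_luma = 0.2126 * v + 0.7152 * v + 0.0722 * v
assert np.allclose(gray_luma, v)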
As requested in a comment below, here is a histogram of some sample data from the sensor to illustrate the dynamic range: [histogram image]
Answer 1:
Here I use some function-generated data for the demo:
import numpy as np
from PyQt4 import QtGui

y, x = np.mgrid[-10:10:256j, -10:10:256j]
data = ((np.sin(y**2 + x**2) + 2) * 1000).astype(np.uint16)

# Two alternative scalings (the second overwrites the first here):
img_8bit = (data / 256.0).astype(np.uint8)                                # use the high 8 bits
img_8bit = ((data - data.min()) / (data.ptp() / 255.0)).astype(np.uint8)  # map the data range to 0 - 255

img = QtGui.QImage(img_8bit.repeat(4), 256, 256, QtGui.QImage.Format_RGB32)
When the high 8 bits are used, it looks like this: [screenshot]

When the min and max values are mapped to (0, 255), it looks like this: [screenshot]
To convert the 8-bit image to 32-bit, you can just call img_8bit.repeat(4); this repeats every byte 4 times, so the memory can be viewed as a uint32 buffer. Since you create the QImage with Format_RGB32 rather than Format_ARGB32, the most significant byte is not used.
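To tie this back to the question's MWE, here is a minimal end-to-end sketch of the min/max-stretch variant in the same widget structure (PyQt4 assumed; GrayTest is a made-up name). One caution worth hedging: in my experience the QImage constructor that takes a buffer does not copy it, so the repeated array is kept alive on self here.

import sys
import numpy as np
from PyQt4 import QtGui


class GrayTest(QtGui.QWidget):
    def __init__(self):
        super(GrayTest, self).__init__()
        y, x = np.mgrid[-10:10:256j, -10:10:256j]
        data = ((np.sin(y ** 2 + x ** 2) + 2) * 1000).astype(np.uint16)

        # Stretch the 16-bit data to the full 0-255 range, then repeat each
        # byte 4 times so the buffer reads as valid RGB32 pixels.
        img_8bit = ((data - data.min()) / (data.ptp() / 255.0)).astype(np.uint8)
        self.buf = img_8bit.repeat(4)  # keep a reference: QImage does not copy the buffer
        img = QtGui.QImage(self.buf, 256, 256, QtGui.QImage.Format_RGB32)

        label = QtGui.QLabel(self)
        label.setPixmap(QtGui.QPixmap.fromImage(img))
        layout = QtGui.QHBoxLayout(self)
        layout.addWidget(label)
        self.show()


def main():
    app = QtGui.QApplication(sys.argv)
    form = GrayTest()
    sys.exit(app.exec_())


if __name__ == '__main__':
    main()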
Source: https://stackoverflow.com/questions/15672743/convert-16-bit-grayscale-to-qimage