pixel

What is pixel and points in iPhone?

Submitted by 别等时光非礼了梦想 on 2019-11-27 03:21:14
From the UIImage class reference:

size: The dimensions of the image, taking orientation into account. (read-only)

    @property(nonatomic, readonly) CGSize size

Discussion: In iOS 4.0 and later, this value reflects the logical size of the image and is measured in points. In iOS 3.x and earlier, this value always reflects the dimensions of the image measured in pixels.

So what is the difference between pixels and points?

CodaFi: A pixel on iOS is measured at the full resolution of the device, which means that if I have an image that is 100x100 pixels, the phone will render it 100x100 pixels on a standard non-retina…
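The relationship between the two units can be summarized as pixels = points x scale, where the scale factor depends on the device. A minimal sketch (the scale values are the standard UIScreen scale factors: 1.0 non-retina, 2.0 retina, 3.0 on Plus/Pro-class screens):

```python
def points_to_pixels(points, scale):
    """Convert iOS points (logical units) to physical pixels.

    scale is UIScreen's scale factor: 1.0 non-retina, 2.0 retina, 3.0 @3x.
    """
    return points * scale

# A 100-point-wide image covers 100 pixels on non-retina, 200 on retina.
print(points_to_pixels(100, 1.0))
print(points_to_pixels(100, 2.0))
```

This is why UIImage.size reports points on iOS 4.0+: the same logical layout works across devices with different pixel densities.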

Rotating a 2D pixel array by 90 degrees

Submitted by 主宰稳场 on 2019-11-27 02:19:09
Question: I have an array of pixel data for an image. The image I am getting is already rotated 270 degrees, so I am trying to rotate it by another 90 degrees to get the correct image. I have tried a transpose algorithm, changing data[x][y] to data[y][x], but I don't think that is the correct way. Can anyone guide me on how to rotate it?

Answer 1: This can be done without using any extra space, so-called in-place matrix transposition (not exactly the same operation). Remember to do some mirroring after…
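As the answer notes, the transpose alone is not a rotation: a clockwise 90-degree rotation is a transpose followed by mirroring each row. A pure-Python sketch of the out-of-place version (the in-place variant mentioned in the answer additionally requires a square matrix or cycle-following):

```python
def rotate90_cw(data):
    """Rotate a 2D list 90 degrees clockwise.

    Column c of the result is the reversed column c of the input,
    which equals transposing and then reversing each row.
    """
    n_rows, n_cols = len(data), len(data[0])
    return [[data[n_rows - 1 - r][c] for r in range(n_rows)]
            for c in range(n_cols)]

grid = [[1, 2],
        [3, 4]]
print(rotate90_cw(grid))   # [[3, 1], [4, 2]]
```

Applying it once to the 270-degree-rotated pixel data brings the image back upright.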

In Python, how can I draw to a pixel on the screen directly?

Submitted by ぐ巨炮叔叔 on 2019-11-27 02:19:09
Question: I want to do something like the following:

    ...
    pixel[0,0] = [254, 0, 0]   # draw R at pixel x0y0
    pixel[2,1] = [0, 254, 0]   # draw G at pixel x2y1
    pixel[4,2] = [0, 0, 254]   # draw B at pixel x4y2
    ...

I want to display many different configurations of pixels and colours in a short space of time; writing to an intermediary file would be too expensive. How should I best go about achieving this goal in Python?

Answer 1: Direct answer: this can only be done with OS-specific APIs. Some OSes do…
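A common library-based pattern (an assumption, since the answer only says OS-specific APIs are required for truly direct screen access) is to keep the pixels in a numpy array shaped (height, width, 3) and hand that buffer to a display library such as pygame or tkinter each frame. The buffer part can be sketched as follows; the 8x4 size is arbitrary, and note that numpy indexes [y, x] rather than the [x, y] used in the question:

```python
import numpy as np

W, H = 8, 4                                     # arbitrary framebuffer size
pixel = np.zeros((H, W, 3), dtype=np.uint8)     # rows = y, columns = x, RGB

pixel[0, 0] = [254, 0, 0]    # R at x0,y0
pixel[1, 2] = [0, 254, 0]    # G at x2,y1  (numpy order is [y, x])
pixel[2, 4] = [0, 0, 254]    # B at x4,y2

# Displaying this buffer is library-specific, e.g. pygame.surfarray or
# converting to a tkinter PhotoImage; the choice is left open here.
print(pixel.shape)
```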

Pixel to Centimeter?

Submitted by 非 Y 不嫁゛ on 2019-11-27 00:53:28
I just want to know whether the pixel is a unit that never changes, and whether we can convert from pixels to, say, centimeters.

Mark Ransom: This is similar to a question that asks about points instead of centimeters. There are 72 points per inch and 2.54 centimeters per inch, so just substitute 2.54 for 72 in the answer to that question. I'll quote and correct my answer here: there are 2.54 centimeters per inch; if it is sufficient to assume 96 pixels per inch, the formula is rather simple:

    centimeters = pixels * 2.54 / 96

There is a way to get the configured pixels per inch of your…
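The quoted formula as a small helper, where the 96 ppi default is only an assumption unless the real display density is queried from the OS:

```python
def pixels_to_cm(pixels, ppi=96):
    """Convert a pixel length to centimeters.

    2.54 cm per inch; ppi (pixels per inch) defaults to the common
    assumption of 96 and should ideally come from the display settings.
    """
    return pixels * 2.54 / ppi

print(pixels_to_cm(96))    # 96 px at 96 ppi is one inch, i.e. 2.54 cm
print(pixels_to_cm(96, ppi=300))   # far smaller on a 300 ppi print
```

The helper also makes the original point concrete: the same pixel count corresponds to different physical sizes on different displays, so pixels are not a fixed physical unit.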

Cycle through pixels with opencv

Submitted by 谁都会走 on 2019-11-27 00:14:29
Question: How can I cycle through an image with OpenCV as if it were a 2D array, to get the RGB values of each pixel? Also, is a Mat preferable to an IplImage for this operation?

Answer 1: If you use C++, use the C++ interface of OpenCV; you can then access the elements as shown in http://docs.opencv.org/2.4/doc/tutorials/core/how_to_scan_images/how_to_scan_images.html#the-efficient-way or by using cv::Mat::at(), for example.

Answer 2: cv::Mat is preferred over IplImage because it simplifies your…
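In OpenCV's Python binding an image is just a numpy array in BGR channel order, so the scan loop can be sketched without touching the C API; the same row/column indexing corresponds to cv::Mat::at<cv::Vec3b>(r, c) in the C++ interface. A tiny hand-built 2x2 image stands in for a loaded file:

```python
import numpy as np

# Stand-in for cv2.imread(...): OpenCV stores channels as B, G, R.
img = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)

for row in range(img.shape[0]):
    for col in range(img.shape[1]):
        b, g, r = img[row, col]          # unpack the BGR triple
        print(f"({row},{col}): R={r} G={g} B={b}")
```

Note that for large images this per-pixel Python loop is slow; vectorized numpy operations (or the pointer-based scan from the linked tutorial, in C++) are the efficient route.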

Principles and Implementation of Image Binarization

Submitted by 与世无争的帅哥 on 2019-11-27 00:11:16
1. The basic principle of image binarization

Binarizing an image means setting the gray level of every pixel to either 0 or 255, so that the whole image takes on an obvious black-and-white appearance: a grayscale image with 256 brightness levels is converted, by choosing a suitable threshold, into a binary image that still reflects the overall and local features of the original. Binary images occupy a very important place in digital image processing, especially in practical applications, where many working systems are built on binary image processing. To process and analyze an image this way, the grayscale image must first be binarized. This simplifies further processing, because the geometric properties of the image then depend only on the positions of pixels with value 0 or 255, not on multi-level pixel values, and the volume of data to process and compress is small. To obtain an ideal binary image, non-overlapping regions are generally defined by closed, connected boundaries.

All pixels whose gray level is greater than or equal to the threshold are judged to belong to the object and are given the value 255; the remaining pixels are excluded from the object region and given the value 0, representing the background or other objects. If an object has a uniform gray level internally and sits on a uniform background of a different gray level, thresholding gives a good segmentation. If the difference between object and background is not one of gray level (for example, a difference in texture), that difference can first be converted into a gray-level difference and then the image can be segmented with threshold selection. Dynamically adjusting the threshold lets you observe the segmentation result interactively.

2. Implementing binarization

The following program is implemented with Qt: bool convertGray:
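The thresholding rule above (gray level >= threshold becomes 255, everything else 0) can be sketched in a few lines of numpy. The Qt implementation in the article is truncated, so this is an illustrative stand-in rather than the author's code:

```python
import numpy as np

def binarize(gray, threshold=128):
    """Binarize a grayscale image.

    Pixels >= threshold become 255 (object); the rest become 0
    (background). The default threshold of 128 is an arbitrary choice.
    """
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

g = np.array([[ 10, 200],
              [128,  90]], dtype=np.uint8)
print(binarize(g))
```

Choosing the threshold is the hard part in practice; the dynamic adjustment the article mentions amounts to re-running this operation with different threshold values and inspecting the result.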

Pixel access in OpenCV 2.2

Submitted by 风流意气都作罢 on 2019-11-26 22:58:05
Hi, I want to use OpenCV to tell me the pixel values of a black-and-white image, so the output would look like this:

    10001
    00040
    11110
    00100

Here is my current code, but I'm not sure how to access the results of the CV_GET_CURRENT call. Any help?

    IplImage readpix(IplImage* m_image) {
        cout << "Image width : " << m_image->width << "\n";
        cout << "Image height : " << m_image->height << "\n";
        cout << "-----------------------------------------\n";
        CvPixelPosition8u position;
        CV_INIT_PIXEL_POS(position, (unsigned char*)(m_image->imageData),
                          m_image->widthStep,
                          cvSize(m_image->width, m_image->height), 0…
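With the modern interfaces the deprecated CvPixelPosition machinery is unnecessary: a single-channel image is just a 2D array, and the map can be printed row by row. A numpy sketch of the desired dump (treating any nonzero value as 1 is an assumption; the question's sample output also contains a 4, so real code would print the raw values instead):

```python
import numpy as np

def readpix(img):
    """Return one '0'/'1' string per row of a single-channel image.

    Any nonzero pixel is reported as 1; adapt to print raw values
    if the image is not strictly black and white.
    """
    return ["".join("1" if v > 0 else "0" for v in row) for row in img]

img = np.array([[255,   0, 255],
                [  0, 255,   0]], dtype=np.uint8)
for line in readpix(img):
    print(line)
```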

How to access pixel values of CV_32F/CV_64F Mat?

Submitted by 梦想的初衷 on 2019-11-26 21:36:34
Question: I was working on homography, and whenever I try to check the values of the H matrix (type CV_64F) using H.at<float>(i, j) I get random numbers (sometimes garbage values). I want to access the element values of a float matrix. Is there any way to do it?

    Mat A = Mat::eye(3, 3, CV_64F);
    float B;
    for (int i = 0; i < A.rows; i++) {
        for (int j = 0; j < A.cols; j++) {
            printf("%f\n", A.at<float>(i, j));
        }
    }
    imshow("identity", A);
    waitKey(0);

This shows the correct image of an identity matrix, but while trying to access pixel…
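The garbage values come from reading CV_64F (double) storage through at<float>: cv::Mat::at does no conversion, it reinterprets the underlying bytes, so the access type must match the Mat type, i.e. H.at<double>(i, j) for CV_64F. The same type-punning effect can be demonstrated with numpy (an analogy only, not OpenCV code):

```python
import numpy as np

A = np.eye(3, dtype=np.float64)     # the CV_64F matrix

wrong = A.view(np.float32)          # reinterpret raw bytes, like at<float> on CV_64F
right = A                           # matching type, like at<double>

print(wrong.shape)    # (3, 6): each double is misread as two garbage floats
print(right[0, 0])    # 1.0, the correct element
```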

OpenGL Scale Single Pixel Line

Submitted by 筅森魡賤 on 2019-11-26 21:10:23
I would like to make a game that is internally 320x240 but renders to the screen at whole-number multiples of this (640x480, 960x720, etc.), going for retro 2D pixel graphics. I have achieved this by setting the internal resolution via glOrtho():

    glOrtho(0, 320, 240, 0, 0, 1);

and then scaling the output resolution up by a factor of 3, like this:

    glViewport(0, 0, 960, 720);
    window = SDL_CreateWindow("Title", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                              960, 720, SDL_WINDOW_OPENGL);

I draw rectangles like this:

    glBegin(GL_LINE_LOOP);
    glVertex2f(rect_x, rect_y);
    glVertex2f(rect_x + rect_w,…
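With this setup, picking the multiple at startup is a matter of taking the largest integer scale whose scaled resolution still fits the display; a small helper (the function name is made up for illustration), with the 320x240 base resolution from the question as defaults:

```python
def best_integer_scale(screen_w, screen_h, base_w=320, base_h=240):
    """Largest whole-number multiple of the internal resolution
    that fits on the screen (never less than 1)."""
    return max(1, min(screen_w // base_w, screen_h // base_h))

scale = best_integer_scale(1920, 1080)
print(scale)                       # 4: 1280x960 fits a 1080p screen, 1600x1200 does not
print(320 * scale, 240 * scale)    # the window/viewport size to pass to SDL/glViewport
```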

Notes on calibration (miscellaneous)

Submitted by 倾然丶 夕夏残阳落幕 on 2019-11-26 20:49:12
The principle of structured-light depth sensing is essentially binocular (stereo) vision: a projector replaces one of the cameras, and the projected pattern serves as one of the two images that stereo vision needs. Both the camera and the projector must therefore be calibrated. Taking the camera as an example, here is my understanding of calibration.

First, treat the physical camera as a pinhole camera model, with the image plane at distance d from the pinhole. Two coordinate systems must be defined. The first is the camera coordinate system (written {C} below), or world coordinate system: its origin is the optical center of the calibrated camera, i.e. the pinhole, with the positive z-axis perpendicular to the image plane and pointing outward. The second is the image coordinate system (written {I} below), whose origin is at one corner of the image plane. In principle {I} is a 2D coordinate system, though it can also be treated as 3D.

As shown in the figure above, looking from left to right, an object at the upper left of the scene projects through the optical center O onto the lower right of the image plane. If the image plane is reconstructed by reflecting it about the origin, the object's image lands at the upper left, which is the image we normally see when it is saved; this is why the coordinate origin in image processing is usually the top-left corner. Note that this reconstructed image plane is not the camera's actual image plane, and it has some special properties: the axes of {C} and {I} are respectively parallel…
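The pinhole model described above maps a point in {C} to {I} by perspective division plus a principal-point offset. A minimal sketch of that forward projection; the intrinsic values fx, fy, cx, cy are illustrative assumptions and would normally be recovered by the calibration procedure itself:

```python
def project(X, Y, Z, fx=500.0, fy=500.0, cx=160.0, cy=120.0):
    """Project a point (X, Y, Z) in camera coordinates {C} to pixel
    coordinates (u, v) in {I} using the pinhole model.

    fx, fy: focal lengths in pixels; (cx, cy): principal point,
    i.e. where the optical axis meets the image plane.
    """
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

print(project(0.1, 0.2, 1.0))   # (210.0, 220.0)
print(project(0.1, 0.2, 2.0))   # same ray direction, farther away: closer to (cx, cy)
```

Calibration estimates fx, fy, cx, cy (plus lens distortion, which this sketch omits) by observing a known pattern and solving for the parameters that best explain the observed projections.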