yuv

OpenGL Convert NV12 to RGB24 using shader [closed]

Submitted by 对着背影说爱祢 on 2020-02-25 01:23:08
Question: I tried to write an application to display a YUV image in OpenGL. I successfully converted YUV to RGB in C++ using this snippet (source):

static long int crv_tab[256];
static long int cbu_tab[256];
static long int cgu_tab[256];
static long int cgv_tab[256];
static long int tab_76309[256];
static unsigned char
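The excerpt is cut off before the shader itself. As a rough sketch of the shader-based approach the question asks about, the NV12 luma plane can be uploaded as a single-channel texture and the interleaved chroma plane as a two-channel texture at half resolution, then combined per fragment. The sampler names (texY, texUV), the GLSL version, and the BT.601 video-range coefficients below are assumptions, not taken from the post:

// Minimal sketch of an NV12 -> RGB fragment shader, embedded as a C++ string.
static const char* kNV12FragmentShader = R"(
#version 300 es
precision mediump float;
in vec2 vTexCoord;
out vec4 fragColor;
uniform sampler2D texY;   // full-resolution Y plane (R channel)
uniform sampler2D texUV;  // half-resolution interleaved UV plane (RG channels)
void main() {
    float y = texture(texY, vTexCoord).r;
    vec2 uv = texture(texUV, vTexCoord).rg;
    // Expand video-range YCbCr (BT.601) to RGB.
    float c = (y - 16.0 / 255.0) * 1.164;
    float u = uv.x - 0.5;
    float v = uv.y - 0.5;
    vec3 rgb = vec3(c + 1.596 * v,
                    c - 0.391 * u - 0.813 * v,
                    c + 2.018 * u);
    fragColor = vec4(clamp(rgb, 0.0, 1.0), 1.0);
}
)";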

Rendering YUV Video Data with D3D

Submitted by 社会主义新天地 on 2020-02-13 09:12:31
Source code download

On the PC, YUV-format video such as YV12 or YUY2 is usually displayed with DirectDraw, using the graphics card's OVERLAY surface. OVERLAY is a hardware technique that was added to graphics cards mainly to make VCD playback possible on the PC, and it solved that problem well. Early PCs had limited processing power: playing a VCD required not only video decoding but also YUV-to-RGB color space conversion, which was very expensive in software. The YUV OVERLAY surface appeared so that the color space conversion could be moved onto the graphics card, which is naturally well suited to this kind of work.

As graphics hardware developed, the limitations of OVERLAY became more and more apparent. A card generally supports only one OVERLAY surface, so multi-picture display is difficult, overlaying video with text is difficult, and special effects are harder still. More importantly, OVERLAY belongs to the card's 2D block, while, driven by high-quality 3D games, the capability and performance of modern cards, and the bulk of vendors' investment, are concentrated in the GPU's 3D block, which OVERLAY cannot exploit. Microsoft stopped supporting DirectDraw long ago and encourages developers to move to Direct3D, so OVERLAY also cannot benefit from the newer APIs.

Early 3D rendering was done mostly by the CPU, with the graphics card doing relatively little. Later, as the GPU's processing power grew, the card took on more and more of the 3D rendering work.
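The conversion the article refers to, YUV to RGB for every pixel, is simple math but was costly on early CPUs. Purely as an illustration (not the article's downloadable code), a fixed-point per-pixel conversion for BT.601 video-range data looks roughly like this:

// Per-pixel YCbCr (BT.601 video range) -> RGB: the kind of work that was
// expensive on early CPUs and that OVERLAY moved onto the graphics card.
#include <algorithm>
#include <cstdint>

static uint8_t clamp8(int v) { return static_cast<uint8_t>(std::min(255, std::max(0, v))); }

void YuvToRgb(uint8_t y, uint8_t u, uint8_t v, uint8_t& r, uint8_t& g, uint8_t& b) {
    int c = y - 16, d = u - 128, e = v - 128;
    // Fixed-point BT.601 coefficients (scaled by 1024).
    r = clamp8((1192 * c + 1634 * e) >> 10);
    g = clamp8((1192 * c - 400 * d - 832 * e) >> 10);
    b = clamp8((1192 * c + 2066 * d) >> 10);
}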

Image Color Space Conversion with OpenCV on Python 3.6

Submitted by 試著忘記壹切 on 2020-02-02 02:57:13
Different color spaces represent an image's colors very differently.
# Color space conversions: the most common are HSV <-> RGB and YUV <-> RGB
# Common color spaces:
# RGB: the most widely used
# HSV: sensitive to particular colors; used to find and express a specific color
# HSI:
# YCrCb: widely used in human skin detection
# YUV: widely used in Android development
The following demonstrates converting an image into each of these color spaces:

import cv2 as cv  # import the OpenCV package

def color_space_demo(image):
    gray = cv.cvtColor(image, cv.COLOR_BGR2GRAY)
    cv.imshow("gray", gray)
    hsv = cv.cvtColor(image, cv.COLOR_BGR2HSV)
    cv.imshow("hsv", hsv)
    yuv = cv.cvtColor(image, cv.COLOR_BGR2YUV)
    cv.imshow("yuv", yuv)
    Ycrcb = cv.cvtColor(image, cv.COLOR_BGR2YCrCb)
    cv.imshow("Ycrcb", Ycrcb)
    HIS = cv.cvtColor(image, cv.COLOR_BGR2HLS)
    cv.imshow("HIS", HIS)
    print("-

Color conversion from DXGI_FORMAT_B8G8R8A8_UNORM to NV12 in GPU using DirectX11 pixel shaders

Submitted by 陌路散爱 on 2020-01-29 04:51:06
Question: I'm working on code to capture the desktop using Desktop Duplication and encode it to H.264 using the Intel hardware MFT. The encoder only accepts the NV12 format as input. I have a DXGI_FORMAT_B8G8R8A8_UNORM to NV12 converter (https://github.com/NVIDIA/video-sdk-samples/blob/master/nvEncDXGIOutputDuplicationSample/Preproc.cpp) that works fine and is based on the DirectX VideoProcessor. The problem is that the VideoProcessor on certain Intel graphics hardware supports conversions only from DXGI
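The excerpt stops before the shader details, but the per-pixel math any BGRA-to-NV12 pixel shader has to implement is the BT.601 limited-range RGB-to-YCbCr transform plus 4:2:0 chroma subsampling. Below is a CPU reference sketch of that math; the function name, the top-left chroma sampling, and the coefficients are my assumptions, not code from the linked sample:

// CPU reference for the per-pixel math a BGRA -> NV12 pixel shader would implement.
#include <algorithm>
#include <cstdint>
#include <vector>

static uint8_t clamp8(float v) { return static_cast<uint8_t>(std::min(255.0f, std::max(0.0f, v))); }

// bgra: width*height*4 bytes; returns a tightly packed NV12 buffer (Y plane + interleaved UV).
std::vector<uint8_t> BgraToNV12(const uint8_t* bgra, int width, int height) {
    std::vector<uint8_t> nv12(width * height * 3 / 2);
    uint8_t* yPlane  = nv12.data();
    uint8_t* uvPlane = nv12.data() + width * height;
    for (int yPos = 0; yPos < height; ++yPos) {
        for (int x = 0; x < width; ++x) {
            const uint8_t* p = bgra + (yPos * width + x) * 4;  // B, G, R, A
            float b = p[0], g = p[1], r = p[2];
            yPlane[yPos * width + x] = clamp8(0.257f * r + 0.504f * g + 0.098f * b + 16.0f);
            // One chroma pair per 2x2 block (top-left pixel used here for simplicity;
            // a real converter would average the four pixels).
            if ((yPos % 2 == 0) && (x % 2 == 0)) {
                uint8_t* uv = uvPlane + (yPos / 2) * width + x;
                uv[0] = clamp8(-0.148f * r - 0.291f * g + 0.439f * b + 128.0f);  // U (Cb)
                uv[1] = clamp8( 0.439f * r - 0.368f * g - 0.071f * b + 128.0f);  // V (Cr)
            }
        }
    }
    return nv12;
}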

how to adjust image saturation in YUV color space

Submitted by 谁说我不能喝 on 2020-01-23 09:56:13
Question: I want to know how to adjust image saturation in the YUV color space, specifically the U component and the V component. Answer 1: You probably want to scale the U and V components (using a center point of 128), for example: U = ((U - 128) * Scale_factor) + 128; V = ((V - 128) * Scale_factor) + 128; (and remember to clamp the values back to a valid range). Source: https://stackoverflow.com/questions/8427786/how-to-adjust-image-saturation-in-yuv-color-space
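Applied to raw 8-bit chroma bytes, the answer's formula with clamping might look like this in C++ (the helper name and the float scale factor are illustrative):

#include <algorithm>
#include <cstdint>

// Scale one chroma byte around the 128 center point and clamp back to a valid range.
inline uint8_t adjustChroma(uint8_t c, float scale) {
    int v = static_cast<int>((c - 128) * scale + 128.0f);
    return static_cast<uint8_t>(std::clamp(v, 0, 255));
}
// Usage: u = adjustChroma(u, 1.3f); v = adjustChroma(v, 1.3f);  // >1 boosts, <1 reduces saturation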

camera2 captured picture - conversion from YUV_420_888 to NV21

Submitted by 。_饼干妹妹 on 2020-01-19 03:57:27
Question: Via the camera2 API we receive an Image object in the YUV_420_888 format. We then use the following function to convert it to NV21:

private static byte[] YUV_420_888toNV21(Image image) {
    byte[] nv21;
    ByteBuffer yBuffer = image.getPlanes()[0].getBuffer();
    ByteBuffer uBuffer = image.getPlanes()[1].getBuffer();
    ByteBuffer vBuffer = image.getPlanes()[2].getBuffer();
    int ySize = yBuffer.remaining();
    int uSize = uBuffer.remaining();
    int vSize = vBuffer.remaining();
    nv21 = new byte
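The excerpt ends mid-allocation. For reference, NV21 is the full-resolution Y plane followed by interleaved V/U samples at quarter resolution. A C++ sketch of that packing, assuming tightly packed planes with no row or pixel stride (which real camera2 planes are not guaranteed to have), could look like:

#include <cstdint>
#include <cstring>
#include <vector>

// Pack tightly packed Y, U, V planes into NV21 (Y plane, then interleaved V/U).
std::vector<uint8_t> PackNV21(const uint8_t* y, const uint8_t* u, const uint8_t* v,
                              int width, int height) {
    std::vector<uint8_t> nv21(width * height * 3 / 2);
    std::memcpy(nv21.data(), y, width * height);  // luma, copied as-is
    uint8_t* vu = nv21.data() + width * height;
    const int chromaCount = (width / 2) * (height / 2);
    for (int i = 0; i < chromaCount; ++i) {
        vu[2 * i]     = v[i];  // V comes first in NV21
        vu[2 * i + 1] = u[i];
    }
    return nv21;
}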

Overlaying/merging two (and more) YUV images in OpenCV

Submitted by 这一生的挚爱 on 2020-01-16 18:47:11
Question: I investigated and stripped down my previous question (Is there a way to avoid conversion from YUV to BGR?). I want to overlay a few images (in YUV format) onto a larger resulting image (think of it as a canvas) and send it onward via a network library (OPAL) without converting it to BGR. Here is the code:

Mat tYUV;
Mat tClonedYUV;
Mat tBGR;
Mat tMergedFrame;
int tMergedFrameWidth = 1000;
int tMergedFrameHeight = 800;
int tMergedFrameHalfWidth = tMergedFrameWidth / 2;
tYUV = Mat
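The excerpt cuts off before the compositing itself. One way to place an I420 frame onto a larger I420 canvas without ever converting to BGR is to copy the Y, U, and V planes separately, halving the offsets for the chroma planes. A sketch over raw contiguous buffers (the layout, even offsets, and names are my assumptions, not the poster's code); the same indexing works on a cv::Mat's data pointer:

#include <cstdint>
#include <cstring>

// Copy an I420 frame (src, srcW x srcH) into an I420 canvas (dstW x dstH) at (x, y).
// Both buffers are contiguous: Y plane, then U plane, then V plane. x, y and the
// dimensions are assumed to be even.
void OverlayI420(uint8_t* dst, int dstW, int dstH,
                 const uint8_t* src, int srcW, int srcH, int x, int y) {
    // Y plane: full resolution.
    for (int r = 0; r < srcH; ++r)
        std::memcpy(dst + (y + r) * dstW + x, src + r * srcW, srcW);
    // U plane: half resolution in both directions.
    const uint8_t* srcU = src + srcW * srcH;
    uint8_t* dstU = dst + dstW * dstH;
    for (int r = 0; r < srcH / 2; ++r)
        std::memcpy(dstU + (y / 2 + r) * (dstW / 2) + x / 2, srcU + r * (srcW / 2), srcW / 2);
    // V plane: same shape as the U plane, stored right after it.
    const uint8_t* srcV = srcU + (srcW / 2) * (srcH / 2);
    uint8_t* dstV = dstU + (dstW / 2) * (dstH / 2);
    for (int r = 0; r < srcH / 2; ++r)
        std::memcpy(dstV + (y / 2 + r) * (dstW / 2) + x / 2, srcV + r * (srcW / 2), srcW / 2);
}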

CVOpenGLESTextureCacheCreateTextureFromImage failed

Submitted by 谁都会走 on 2020-01-15 06:22:32
Question: I extracted the Y, U, and V data from a video frame separately and saved them in data[0], data[1], data[2]; the frame size is 640*480. Now I create the pixelBuffer as below:

void *pYUV[3] = {data[0], data[1], data[2]};
size_t planeWidth[3] = {640, 320, 320};
size_t planeHeight[3] = {480, 240, 240};
size_t planeBytesPerRow[3] = {640, 320, 320};
CVReturn ret = CVPixelBufferCreateWithPlanarBytes(kCFAllocatorDefault, 640, 480, kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange, nil, nil, 3, pYUV, planeWidth,

rawRGB, RGB and YUV Data Formats and Conversion Explained

Submitted by 二次信任 on 2020-01-14 23:00:11
rawRGB

The image capture pipeline is: light is reflected off the subject -> focused by the lens -> photoelectric conversion on the sensor -> ADC conversion to rawRGB. Because each pixel on the sensor captures the intensity of light of only one particular color, each sensor pixel can be only R, G, or B, and the resulting data is called rawRGB data.

rawRGB data is what the sensor outputs directly after photoelectric conversion and ADC sampling; it is unprocessed data that represents the intensity of the light the sensor received.

Different sensors arrange their rawRGB data differently. There are four rawRGB arrangement patterns, described in terms of a 2*2 pixel matrix (the Bayer orderings RGGB, GRBG, GBRG, and BGGR), shown in the table below.

Suppose a sensor has 8*8 pixels (a resolution of 8*8); the sensor then has 8*8 photosites, each of which is a transistor. The rawRGB data for the four arrangement patterns in the table above is shown in the figure below.

As the figure shows, in every pattern the G component appears twice as often as the B and R components. This is because the human eye is more sensitive to green, so green is given more weight among the photosites and sampled more heavily.

The rawRGB data output by the sensor has to be sent to an ISP (image signal processor) to produce RGB data, usually by interpolation. For this processing the ISP needs to know the order and size of the sensor's rawRGB output: the order is normally set by configuring the ISP's pattern register, and the size is normally configured in the ISP's input format control register.
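As a toy illustration of the interpolation (demosaicing) step the ISP performs, and far simpler than what a real ISP does, the following sketch collapses each 2*2 RGGB block into one RGB pixel (the names and approach are illustrative, not from the article):

#include <cstdint>
#include <vector>

struct Rgb { uint8_t r, g, b; };

// Toy demosaic for an RGGB Bayer pattern: each 2x2 raw block
//   R G
//   G B
// becomes one RGB pixel (half-resolution output). A real ISP instead
// interpolates a full-resolution value for every pixel.
std::vector<Rgb> DemosaicRggb2x2(const uint8_t* raw, int width, int height) {
    std::vector<Rgb> out((width / 2) * (height / 2));
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            uint8_t r  = raw[y * width + x];            // top-left: R
            uint8_t g1 = raw[y * width + x + 1];        // top-right: G
            uint8_t g2 = raw[(y + 1) * width + x];      // bottom-left: G
            uint8_t b  = raw[(y + 1) * width + x + 1];  // bottom-right: B
            out[(y / 2) * (width / 2) + x / 2] = {r, static_cast<uint8_t>((g1 + g2) / 2), b};
        }
    }
    return out;
}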