YUV

OpenCV for Android: Convert Camera preview from YUV to RGB with Imgproc.cvtColor

Submitted by 我是研究僧i on 2019-12-06 11:27:57
Question: I get a runtime error when I try to convert the camera preview's YUV byte array to an RGB(A) byte array with Imgproc.cvtColor(mYUV_Mat, mRgba_Mat, Imgproc.COLOR_YUV420sp2RGBA, 4) in onPreviewFrame(byte[] data, Camera camera).

Preview.java:

    mCamera.setPreviewCallback(new PreviewCallback() {
        public void onPreviewFrame(byte[] data, Camera camera) {
            // Pass YUV data to draw-on-top companion
            System.arraycopy(data, 0, mDrawOnTop.mYUVData, 0, data.length);
            mDrawOnTop.invalidate();
        }
    });

DrawOnTop.java:
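A common cause of this runtime error is the source Mat having the wrong geometry: for COLOR_YUV420sp2RGBA, OpenCV expects a single-channel CV_8UC1 Mat of (height * 3 / 2) rows by width columns, matching a data.length of width * height * 3 / 2. As a reference for what the conversion computes, here is a pure-Java sketch of NV21 (YUV420sp) to ARGB using BT.601 full-range coefficients. The class and method names are illustrative, and this is not OpenCV's actual implementation:

```java
public class Nv21ToArgb {
    // Converts an NV21 buffer (Y plane, then interleaved V/U) to packed ARGB ints.
    // nv21.length must be width * height * 3 / 2.
    public static int[] convert(byte[] nv21, int width, int height) {
        int frameSize = width * height;
        int[] argb = new int[frameSize];
        for (int y = 0; y < height; y++) {
            for (int x = 0; x < width; x++) {
                int yVal = nv21[y * width + x] & 0xFF;
                // One V/U pair is shared by a 2x2 block of luma samples.
                int uvIndex = frameSize + (y / 2) * width + (x & ~1);
                int v = (nv21[uvIndex] & 0xFF) - 128;      // NV21 stores V first
                int u = (nv21[uvIndex + 1] & 0xFF) - 128;
                int r = clamp(Math.round(yVal + 1.402f * v));
                int g = clamp(Math.round(yVal - 0.344f * u - 0.714f * v));
                int b = clamp(Math.round(yVal + 1.772f * u));
                argb[y * width + x] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return argb;
    }

    static int clamp(int c) {
        return c < 0 ? 0 : (c > 255 ? 255 : c);
    }
}
```

With Y = U = V = 128, the chroma offsets cancel and every output pixel is mid-gray, which is a quick sanity check for a conversion routine like this.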

Will all phones support YUV 420 (Semi) Planar color format in h.264 encoder?

Submitted by ◇◆丶佛笑我妖孽 on 2019-12-06 08:33:15
Preamble: this may sound like a very specific question, but it is actually a go / no-go for building an API 16+ Android application using MediaCodec that is compatible with most phones. I have an application with an H.264 MediaCodec that receives data from a buffer, not a surface, since I'm doing a lot of manipulations on the image. When creating the Encoder, I iterate through the phone's list of available encoders to make sure I'm using a proprietary encoder if there is one. This part is not a problem. The problem is that each encoder has its own color format preference. This may lead to color
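The selection step described above boils down to scanning the encoder's CodecCapabilities.colorFormats array for a YUV420 layout the app can produce. A minimal sketch of that logic, with the two relevant constants duplicated from android.media.MediaCodecInfo.CodecCapabilities so it can run off-device (the picker itself is an illustrative helper, not an Android API):

```java
public class ColorFormatPicker {
    // Values as documented in MediaCodecInfo.CodecCapabilities.
    public static final int COLOR_FormatYUV420Planar     = 19;  // I420-style
    public static final int COLOR_FormatYUV420SemiPlanar = 21;  // NV12-style

    // Prefer semi-planar, fall back to planar, return -1 if the encoder
    // offers neither (e.g. only COLOR_FormatSurface).
    public static int pick(int[] supportedColorFormats) {
        boolean planarSeen = false;
        for (int f : supportedColorFormats) {
            if (f == COLOR_FormatYUV420SemiPlanar) return COLOR_FormatYUV420SemiPlanar;
            if (f == COLOR_FormatYUV420Planar) planarSeen = true;
        }
        return planarSeen ? COLOR_FormatYUV420Planar : -1;
    }
}
```

In practice many encoders advertise only one of the two, so buffer-fed code usually needs to be prepared to repack its frames into whichever format pick() returns.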

Convert yuv sequence to bmp images

Submitted by 左心房为你撑大大i on 2019-12-06 06:23:51
I have YUV sequences and I want to convert them to BMP images and save them to a folder on my computer. I used the yuv2bmp m-file in this link. Although the YUV file is only 44 MB, Matlab threw a memory error. How can I overcome this problem? Could you help me please? Best regards... As this question doesn't have a fast answer, I'll put here some links that may be helpful to you, although all of them refer to implementations in C, not Matlab: Converting Between YUV and RGB; some sample code in C; that one in Delphi is pretty good. This web site indeed is a very nice web site for those that like
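Whatever the language, the usual fix for the memory error is to stop loading the whole sequence at once and instead stream one frame at a time into a reusable buffer. A sketch of that pattern (the class name and frame-counting example are illustrative; the conversion/saving step would go where the counter is incremented):

```java
import java.io.BufferedInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;

public class YuvFrameReader {
    // Reads a raw YUV420 stream one frame at a time; only one frame
    // (width * height * 3 / 2 bytes) is ever resident in memory.
    public static int countFrames(InputStream src, int width, int height) {
        int frameSize = width * height * 3 / 2;  // YUV420: 12 bits per pixel
        byte[] frame = new byte[frameSize];
        int count = 0;
        try (DataInputStream in = new DataInputStream(new BufferedInputStream(src))) {
            while (true) {
                try {
                    in.readFully(frame);         // one complete frame
                } catch (EOFException end) {
                    break;                       // no complete frame left
                }
                count++;                         // convert/save this frame here, then reuse the buffer
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
        return count;
    }
}
```

A 44 MB file processed this way needs only about 1.5 bytes per pixel of working memory per frame, regardless of the sequence length.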

Android MediaCodec output format: GLES External Texture (YUV / NV12) to GLES Texture (RGB)

Submitted by 梦想的初衷 on 2019-12-06 04:42:41
I am currently trying to develop a video player on Android, but am struggling with color formats. Context: I extract and decode a video through the standard combination of MediaExtractor/MediaCodec. Because I need the extracted frames to be available as OpenGL ES textures (RGB), I set up my decoder (MediaCodec) so that it feeds an external GLES texture (GL_TEXTURE_EXTERNAL_OES) through a SurfaceTexture. I know the data output by my HW decoder is in the NV12 (YUV420SemiPlanar) format, and I need to convert it to RGB by rendering it (with a fragment shader doing the conversion). MediaCodec
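One detail worth noting about the SurfaceTexture route: when sampling a GL_TEXTURE_EXTERNAL_OES texture, the driver performs the NV12-to-RGB conversion implicitly, so the fragment shader itself needs no YUV math. A minimal external-texture fragment shader, held as a Java string constant the way Android GLES code typically embeds shaders (varying and uniform names here are illustrative):

```java
public class ExternalOesShader {
    // Sampling sTexture already yields RGB; no colorspace matrix is needed
    // in the shader when the source is a SurfaceTexture-backed external texture.
    public static final String FRAGMENT_SHADER =
            "#extension GL_OES_EGL_image_external : require\n"
            + "precision mediump float;\n"
            + "varying vec2 vTexCoord;\n"
            + "uniform samplerExternalOES sTexture;\n"
            + "void main() {\n"
            + "    gl_FragColor = texture2D(sTexture, vTexCoord);\n"
            + "}\n";
}
```

An explicit YUV-to-RGB shader only becomes necessary when reading raw NV12 bytes out of MediaCodec's ByteBuffer output instead of rendering through a Surface.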

Converting to YUV / YCbCr colour space - many versions

Submitted by 扶醉桌前 on 2019-12-06 03:42:30
Question: There are many different colour conversions to YUV, but they all give different results! Which one is officially correct? This is the output from my test program. The input is R=128, G=50, B=50 (max value is 255). The table shows the converted YUV values and the re-converted RGB values (which don't match the original).

    ./ColourConversion.exe 128 50 50

      Y    U    V    R    G    B   Name
    ===============================================================================
      0    0    0  128   50   50   a) Original RGB Values
     79  116
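Much of the disagreement between converters comes down to full-range BT.601 (the JPEG convention, Y in 0..255) versus studio-range BT.601 (Y in 16..235). A small sketch computing the luma of the question's input both ways (class and method names are illustrative):

```java
public class YuvVariants {
    // Full-range (JPEG) BT.601 luma: Y spans 0..255.
    public static int fullRangeY(int r, int g, int b) {
        return (int) Math.round(0.299 * r + 0.587 * g + 0.114 * b);
    }

    // Studio-range BT.601 luma: the same weighted sum, scaled into 16..235.
    public static int studioRangeY(int r, int g, int b) {
        return (int) Math.round(16 + (219.0 / 255.0) * (0.299 * r + 0.587 * g + 0.114 * b));
    }
}
```

For R=128, G=50, B=50 these give Y = 73 and Y = 79 respectively; the 79 at the start of the second table row above looks consistent with a studio-range converter. Both are "officially correct", but for different target ranges, which is why round-tripping through mismatched variants fails to reproduce the original RGB.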

RGB to YUV using GLSL [closed]

Submitted by 旧街凉风 on 2019-12-06 03:04:41
I am looking for sample GLSL fragment shader code that can convert an RGB frame (say, with ARGB pixel format) to YUV (say, YUV420). Imagine an RGB frame of size 1920x1080; I would like to use a fragment shader to convert it to a YUV frame. Can you point me to code that can be compiled and run on an Ubuntu box?

miguelao: For future reference, a bunch of colorspace conversions in GLSL shaders can be found in the Gstreamer gst-plugins-gl codebase :)

user1118321: First, you should know that your question is poorly phrased. Nobody is going to write you sample code. You can use a search engine (or even search this site) for
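The per-pixel math such a shader would implement is just a 3x3 matrix multiply plus an offset (a mat3 and a vec3 in GLSL). A reference version in plain Java, using full-range BT.601 coefficients; names are illustrative and other coefficient sets (studio-range, BT.709) exist:

```java
public class RgbToYuv {
    // Full-range BT.601 RGB -> YUV; this is the arithmetic one would port
    // into a fragment shader, with clamping handled by the output format.
    public static int[] convert(int r, int g, int b) {
        int y = clamp((int) Math.round( 0.299 * r + 0.587 * g + 0.114 * b));
        int u = clamp((int) Math.round(-0.169 * r - 0.331 * g + 0.500 * b + 128));
        int v = clamp((int) Math.round( 0.500 * r - 0.419 * g - 0.081 * b + 128));
        return new int[] { y, u, v };
    }

    static int clamp(int c) {
        return c < 0 ? 0 : (c > 255 ? 255 : c);
    }
}
```

Note that producing YUV420 (as opposed to YUV444) additionally requires downsampling U and V by 2 in each direction, which in a GPU pipeline is typically done in a second render pass at half resolution.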

OpenGL Colorspace Conversion

Submitted by 蹲街弑〆低调 on 2019-12-06 02:35:31
Does anyone know how to create a texture with a YUV colorspace so that we can get hardware-based YUV to RGB colorspace conversion without having to use a fragment shader? I'm using an NVidia 9400 and I don't see an obvious GL extension that does the trick. I've found examples of how to use a fragment shader, but the project I'm working on currently only supports OpenGL 1.1, and I don't have time to convert it to 2.0 and perform all the necessary regression testing. This is also targeting Linux. On other platforms I've been using a MESA extension, but it doesn't function on the Nvidia card.

texture for YUV420 to RGB conversion in OpenGL ES

Submitted by ε祈祈猫儿з on 2019-12-06 01:54:23
Question: I have to convert and display YUV420P images in the RGB colorspace using the AMD GPU on a Freescale iMX53 processor (OpenGL ES 2.0, EGL). Linux OS, no X11. To achieve this I should be able to create an appropriate image holding the YUV420P data: this could be either a YUV420P/YV12 image type or three simple 8-bit images, one for each component (Y, U, V). glTexImage2D is excluded because it's slow; the YUV420P frames are the result of real-time video decoding at 25 FPS, and with glTexImage2D we can't
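For the three-texture approach mentioned above, each plane becomes its own single-channel (GL_LUMINANCE) texture: Y at full resolution, U and V at half resolution in each dimension. The byte offsets of the planes inside one contiguous YUV420P frame buffer can be sketched as follows (the helper is illustrative; it assumes even width and height with no row padding):

```java
public class Yuv420pPlanes {
    // Returns { yOffset, uOffset, vOffset, totalBytes } for one YUV420P frame.
    // Y is width x height; U and V are each (width/2) x (height/2).
    public static int[] offsets(int width, int height) {
        int ySize = width * height;
        int cSize = (width / 2) * (height / 2);
        return new int[] { 0, ySize, ySize + cSize, ySize + 2 * cSize };
    }
}
```

The fragment shader then samples all three textures at the same normalized coordinate and applies the YUV-to-RGB matrix, with the half-resolution chroma textures upsampled for free by the texture filtering hardware.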

how to adjust image saturation in YUV color space

Submitted by 六月ゝ 毕业季﹏ on 2019-12-05 22:38:53
I want to know how to adjust image saturation in the YUV color space; specifically, what should happen to the U component and the V component? You probably want to scale the U and V components around a center point of 128, for example:

    U = ((U - 128) * Scale_factor) + 128;
    V = ((V - 128) * Scale_factor) + 128;

(and remember to clamp the values back to a valid range)

Source: https://stackoverflow.com/questions/8427786/how-to-adjust-image-saturation-in-yuv-color-space
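The scaling idea in that answer can be made concrete as a small helper with the clamping included (the class and method names are illustrative):

```java
public class YuvSaturation {
    // Scales one chroma sample (U or V) around the neutral value 128 and
    // clamps the result back into 0..255. scale = 1 leaves the image
    // unchanged; scale = 0 produces grayscale; scale > 1 boosts saturation.
    public static int adjustChroma(int c, float scale) {
        int v = Math.round((c - 128) * scale) + 128;
        return v < 0 ? 0 : (v > 255 ? 255 : v);
    }
}
```

Applying adjustChroma to every U and every V sample while leaving Y untouched changes saturation without affecting brightness, which is the main reason to do this adjustment in YUV rather than RGB.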

Incorrect transformation of frames from YUV_420_888 format to NV21 within an image reader

Submitted by 為{幸葍}努か on 2019-12-05 21:51:26
I configured my code to get a stream of YUV_420_888 frames from my device's camera using an ImageReader object and the rest of the well-known camera2 API. Now I need to transform these frames to the NV21 pixel format and call a native function which expects a frame in this format to perform certain computations. This is the code I am using inside the ImageReader callback to rearrange the bytes of the frame:

    ImageReader.OnImageAvailableListener readerListener = new ImageReader.OnImageAvailableListener() {
        @Override
        public void onImageAvailable(ImageReader mReader) {
            Image image = null;
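The core of a correct YUV_420_888-to-NV21 rearrangement is: copy the Y plane first, then interleave the chroma bytes in V, U order. A sketch of the interleaving over plain byte arrays (it assumes the planes have already been read out of the Image with pixel stride 1 and no row padding; real Image planes must first be de-strided using each plane's getRowStride()/getPixelStride()):

```java
public class Nv21Packer {
    // Packs tightly-packed Y, U, V planes into NV21:
    // full Y plane, then alternating V and U chroma bytes.
    public static byte[] pack(byte[] y, byte[] u, byte[] v) {
        byte[] out = new byte[y.length + u.length + v.length];
        System.arraycopy(y, 0, out, 0, y.length);
        int p = y.length;
        for (int i = 0; i < v.length; i++) {
            out[p++] = v[i];   // NV21 puts V before U
            out[p++] = u[i];
        }
        return out;
    }
}
```

A frequent bug in this conversion is copying the chroma planes in U, V order (which yields NV12, not NV21) and produces the swapped-color output that "incorrect transformation" questions like this one usually describe.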