yuv

Convert YV12 to NV21 (YUV YCrCb 4:2:0)

匆匆过客 submitted on 2019-12-08 08:42:13
Question: How can you convert YV12 (FOURCC code: 0x32315659) to NV21 (FOURCC code: 0x3132564E) (YCrCb 4:2:0 semi-planar)? These are both common formats for Android video handling, but there is no example online converting directly between the two. You can go through RGB, but I assume that would be too inefficient. Ideally in C# or Java, but I can convert code from whatever else... The input is a byte[], and the width and height are known. I have been trying to follow the Wikipedia article but cannot get it
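
A direct conversion never needs to go through RGB: both formats share the same full-resolution Y plane, and only the chroma storage differs (YV12 keeps V and U as two separate quarter-size planes, NV21 interleaves them as VU pairs). A minimal Java sketch, assuming tightly packed planes with no row padding and even width/height (the method name is just for illustration):

    static byte[] yv12ToNv21(byte[] yv12, int width, int height) {
        int ySize = width * height;
        int chromaSize = ySize / 4;            // each chroma plane covers a quarter of the pixels
        byte[] nv21 = new byte[ySize + 2 * chromaSize];

        // The luma plane is identical in both layouts.
        System.arraycopy(yv12, 0, nv21, 0, ySize);

        int vOffset = ySize;                   // YV12: V plane comes right after Y
        int uOffset = ySize + chromaSize;      // YV12: U plane comes after V
        for (int i = 0; i < chromaSize; i++) {
            nv21[ySize + 2 * i]     = yv12[vOffset + i]; // V
            nv21[ySize + 2 * i + 1] = yv12[uOffset + i]; // U
        }
        return nv21;
    }

Note that Android camera YV12 buffers are required to use row strides aligned to 16 bytes, so a production version would copy row by row using the actual strides rather than assuming packed data.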

Convert yuv sequence to bmp images

走远了吗. submitted on 2019-12-08 01:45:00
Question: I have YUV sequences and I want to convert them to BMP images and save them to a folder on my computer. I used the yuv2bmp m-file in this link. Although the YUV file is only 44MB, Matlab threw a memory error. How can I overcome this problem? Could you help me please? Best regards... Answer 1: As this question doesn't have a fast answer, I put here some links that may be helpful to you, but all of them refer more to implementations in C, not Matlab. Converting Between YUV and RGB Some sample
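
Whatever language does the conversion, the usual way around the memory error is to read and convert one frame at a time rather than loading the whole sequence; a YUV 4:2:0 frame is width*height*3/2 bytes. A rough Java sketch of that streaming pattern (the method and the per-frame conversion are placeholders, not code from the linked m-file):

    static void convertSequence(File input, int width, int height) throws IOException {
        int frameSize = width * height * 3 / 2;           // one YUV 4:2:0 frame
        long frameCount = input.length() / frameSize;
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(input)))) {
            byte[] frame = new byte[frameSize];
            for (long i = 0; i < frameCount; i++) {
                in.readFully(frame);
                // convert this single frame to RGB and write it out as frame_<i>.bmp,
                // then reuse the same buffer for the next frame
            }
        }
    }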

OpenCV Error reading YUV buffer

假如想象 submitted on 2019-12-07 15:03:21
Question: I'm trying to get high-resolution uncompressed images through the camera of an Android phone (Samsung Galaxy S3), using OpenCV v2.4. I set the width and height using VideoCapture.set(Highgui.CV_CAP_PROP_FRAME_WIDTH, width) and the same for height, but whenever I go to a mid-high resolution, the following error starts appearing: ERROR reading YUV buffer: width=1600, height=1200, size=2880000, receivedSize=1036800 I'm guessing it means that the buffer is not big enough to store all the data, which I
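
The numbers in the error are consistent with the YUV 4:2:0 buffer size of width*height*3/2: OpenCV expects 1600*1200*1.5 = 2,880,000 bytes, while the 1,036,800 bytes actually received correspond to 691,200 pixels, i.e. a 960x720 frame, suggesting the camera quietly delivered a smaller resolution than the one requested. A small helper for sanity-checking the size (an illustrative sketch, not part of the OpenCV API):

    // Expected size of a YUV 4:2:0 frame (NV21, YV12, I420, ...):
    // one full-resolution Y plane plus two quarter-resolution chroma planes.
    static int yuv420FrameSize(int width, int height) {
        return width * height + 2 * ((width + 1) / 2) * ((height + 1) / 2);
    }
    // yuv420FrameSize(1600, 1200) == 2880000   (what OpenCV expects)
    // yuv420FrameSize(960, 720)   == 1036800   (what the camera delivered)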

OpenGL Colorspace Conversion

你离开我真会死。 submitted on 2019-12-07 12:29:01
Question: Does anyone know how to create a texture with a YUV colorspace so that we can get hardware-based YUV to RGB colorspace conversion without having to use a fragment shader? I'm using an NVidia 9400 and I don't see an obvious GL extension that seems to do the trick. I've found examples of how to use a fragment shader, but the project I'm working on currently only supports OpenGL 1.1 and I don't have time to convert it to 2.0 and perform all the regression testing necessary. This is also targeting

Android MediaCodec output format: GLES External Texture (YUV / NV12) to GLES Texture (RGB)

坚强是说给别人听的谎言 submitted on 2019-12-07 12:24:15
Question: I am currently trying to develop a video player on Android, but am struggling with color formats. Context: I extract and decode a video through the standard combination of MediaExtractor/MediaCodec. Because I need the extracted frames to be available as OpenGL ES textures (RGB), I set up my decoder (MediaCodec) so that it feeds an external GLES texture (GL_TEXTURE_EXTERNAL_OES) through a SurfaceTexture. I know the data output by my HW decoder is in the NV12 (YUV420SemiPlanar) format,
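
For reference, the usual way to get a plain RGB texture out of this setup (the approach used by ExtractMpegFramesTest and grafika) is to draw a full-screen quad that samples the external texture into an FBO with a regular GL_RGBA texture attached; the driver performs the YUV-to-RGB conversion when the external texture is sampled. A sketch of the fragment shader, with attribute/uniform names chosen here for illustration:

    // GLSL ES fragment shader: sampling the SurfaceTexture-backed external texture
    // already yields RGB, so rendering with it into a framebuffer object that has a
    // normal GL_RGBA texture attached produces an RGB copy of the decoded frame.
    private static final String FRAGMENT_SHADER_EXT =
            "#extension GL_OES_EGL_image_external : require\n"
            + "precision mediump float;\n"
            + "varying vec2 vTextureCoord;\n"
            + "uniform samplerExternalOES sTexture;\n"
            + "void main() {\n"
            + "    gl_FragColor = texture2D(sTexture, vTextureCoord);\n"
            + "}\n";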

Why doesn't the decoder of MediaCodec output a unified YUV format (like YUV420P)?

雨燕双飞 submitted on 2019-12-07 08:02:29
问题 "The MediaCodec decoders may produce data in ByteBuffers using one of the above formats or in a proprietary format. For example, devices based on Qualcomm SoCs commonly use OMX_QCOM_COLOR_FormatYUV420PackedSemiPlanar32m (#2141391876 / 0x7FA30C04)." This make it difficult even not possible to deal with the output buffer.Why not use a unified YUV format?And why there are so many YUV color formats? @fadden,I find it possible to decode to Surface and get the RGB buffer(like http://bigflake.com

Out of Memory when using compressToJpeg on multiple YuvImages one at a time

霸气de小男生 submitted on 2019-12-07 07:33:36
Question: I am building an app that buffers N camera frames, and when the user taps a button it saves a photo using all the saved frames, applying an effect. I am saving the photo and processing the frames in an AsyncTask. When I execute it, I remove everything from the screen and leave only a TextView to display the progress of saving the photo. Currently the AsyncTask doInBackground looks like this: protected Void doInBackground(Integer... params) { int w = mBuffer.get(0).getWidth(); int h = mBuffer
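
Without the rest of the method it is hard to say exactly where the memory goes, but the usual pattern for this kind of task is to compress each buffered frame straight from its YUV bytes to its own file, so that at most one frame's worth of extra data is alive at any time. A hedged sketch, where frames, width, height and outputDir are hypothetical names standing in for the buffered NV21 data in the question:

    for (int i = 0; i < frames.size(); i++) {
        byte[] nv21 = frames.get(i);
        YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21, width, height, null);
        File outFile = new File(outputDir, "frame_" + i + ".jpg");
        try (FileOutputStream out = new FileOutputStream(outFile)) {
            // Compress straight from the YUV buffer to disk; no intermediate Bitmap
            // is allocated, so memory use stays flat no matter how many frames there are.
            yuv.compressToJpeg(new Rect(0, 0, width, height), 90, out);
        } catch (IOException e) {
            Log.e("SaveFrames", "failed to write " + outFile, e);
        }
        // publishProgress(...) could go here to keep the TextView updated
    }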

How to properly save frames from mp4 as png files using ExtractMpegFrames.java?

十年热恋 submitted on 2019-12-07 04:01:23
I am trying to get all frames from an mp4 file using the ExtractMpegFrames.java class found here: http://bigflake.com/mediacodec/ExtractMpegFramesTest.java.txt What I currently do is create a temp file (File.createTempFile) in a directory that stores all the frames, create a FileOutputStream, and do bm.compress(Bitmap.CompressFormat.PNG, 100, fOut), where fOut is the OutputStream for the file. Currently, the saved images look like this: https://imgur.com/a/XpsV2 Using the Camera2 API, I record a video and save it as an mp4. According to VLC, the color space for the video is Planar 4:2:0 YUV Full
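
For the saving step itself, the compress call is straightforward; if the colours in the PNG look wrong, the cause is more likely in how the Bitmap was filled from the decoder output than in the PNG encoding. A minimal sketch of the write, where bitmap and outFile stand in for the objects created by the surrounding ExtractMpegFrames code:

    try (BufferedOutputStream bos = new BufferedOutputStream(new FileOutputStream(outFile))) {
        // PNG is lossless, so the quality argument (100 here) is ignored by the encoder.
        bitmap.compress(Bitmap.CompressFormat.PNG, 100, bos);
    } catch (IOException e) {
        Log.e("ExtractFrames", "failed to write " + outFile, e);
    }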

RGB to YCbCr Conversion problems

懵懂的女人 submitted on 2019-12-07 02:57:37
Question: I need to convert an RGB image to the YCbCr colour space, but I have some colour shift problems; I used all the formulas and got the same result. Formula in Python:
    cbcr[0] = int( 0.299*rgb[0]  + 0.587*rgb[1]  + 0.114*rgb[2])         #Y
    cbcr[1] = int(-0.1687*rgb[0] - 0.3313*rgb[1] + 0.5*rgb[2]    + 128)  #Cb
    cbcr[2] = int( 0.5*rgb[0]    - 0.4187*rgb[1] - 0.0813*rgb[2] + 128)  #Cr
I know that I should get the same image with a different way of recording the data, but I got a wrong colour result. http://i.imgur.com/zHuv8yq.png Original http
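
Two things commonly cause this kind of shift even when the coefficients are correct: truncating with int() instead of rounding, and feeding the channels in the wrong order (for example, OpenCV loads images as BGR rather than RGB). A Java sketch of the same full-range JFIF conversion with rounding and clamping (helper names are illustrative):

    // JFIF full-range RGB -> YCbCr, same coefficients as in the question,
    // but rounded and clamped to [0, 255] instead of truncated.
    static int[] rgbToYCbCr(int r, int g, int b) {
        int y  = clamp((int) Math.round( 0.299  * r + 0.587  * g + 0.114  * b));
        int cb = clamp((int) Math.round(-0.1687 * r - 0.3313 * g + 0.5    * b + 128));
        int cr = clamp((int) Math.round( 0.5    * r - 0.4187 * g - 0.0813 * b + 128));
        return new int[] { y, cb, cr };
    }

    static int clamp(int v) {
        return v < 0 ? 0 : (v > 255 ? 255 : v);
    }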