yuv

How to deal with RGB to YUV conversion

不羁岁月 submitted on 2019-12-17 18:18:28
Question: The formula says:

Y = 0.299 * R + 0.587 * G + 0.114 * B
U = -0.14713 * R - 0.28886 * G + 0.436 * B
V = 0.615 * R - 0.51499 * G - 0.10001 * B

What if, for example, the U variable becomes negative? Take U = -0.14713 * R - 0.28886 * G + 0.436 * B and assume maximum values for R and G (both 1.0) with B = 0. I am interested in implementing this conversion function in OpenCV, so how should negative values be dealt with? By using a float image? In any case, please explain; maybe I am misunderstanding something.

Answer 1: You …
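One common way to handle the negative U/V range, sketched below: bias U and V by +128 so they fit an unsigned 8-bit range, then clamp, which is the usual convention for 8-bit YCbCr storage (e.g. JPEG-style buffers). The helper names and the rounding choice here are illustrative, not from the question.

```java
public class RgbToYuv {
    static int clamp(int v) { return Math.max(0, Math.min(255, v)); }

    // r, g, b in [0, 255]; returns {Y, U, V}, each stored in [0, 255]
    static int[] rgbToYuv(int r, int g, int b) {
        int y = clamp((int) Math.round( 0.299   * r + 0.587   * g + 0.114   * b));
        // add the 128 bias BEFORE clamping, otherwise negative chroma is lost
        int u = clamp((int) Math.round(-0.14713 * r - 0.28886 * g + 0.436   * b + 128));
        int v = clamp((int) Math.round( 0.615   * r - 0.51499 * g - 0.10001 * b + 128));
        return new int[] { y, u, v };
    }

    public static void main(String[] args) {
        // R = G = 255, B = 0: raw U is about -111, stored as 128 - 111 = 17
        int[] yuv = rgbToYuv(255, 255, 0);
        System.out.println(yuv[0] + " " + yuv[1] + " " + yuv[2]);
    }
}
```

With this layout a plain 8-bit unsigned image suffices; a float image also works, but then you simply keep the signed values and skip the bias.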

Converting preview frame to bitmap

时光毁灭记忆、已成空白 submitted on 2019-12-17 06:46:08
Question: I know this subject has been on the board many times, but I cannot get it to work anyhow... I want to save frames from the preview as JPEG files. It looks more or less like this (the code is simplified, without additional logic, exception handling, etc.):

public void onPreviewFrame(byte[] data, Camera camera) {
    int width = camera.getParameters().getPreviewSize().width;
    int height = camera.getParameters().getPreviewSize().height;
    final int[] rgb = decodeYUV420SP(data, width, height);
    Bitmap bmp = Bitmap…
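The decodeYUV420SP() the snippet calls is not shown; a widely circulated pure-Java version is sketched below. It converts an NV21 (YUV420 semi-planar) preview buffer to packed ARGB ints using the common fixed-point Rec. 601 approximation, and runs outside Android too since it is plain integer math.

```java
public class Yuv420SpDecoder {
    public static int[] decodeYUV420SP(byte[] yuv420sp, int width, int height) {
        final int frameSize = width * height;
        int[] rgb = new int[frameSize];
        for (int j = 0, yp = 0; j < height; j++) {
            int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
            for (int i = 0; i < width; i++, yp++) {
                int y = (0xff & yuv420sp[yp]) - 16;
                if (y < 0) y = 0;
                if ((i & 1) == 0) {               // NV21 stores V first, then U
                    v = (0xff & yuv420sp[uvp++]) - 128;
                    u = (0xff & yuv420sp[uvp++]) - 128;
                }
                int y1192 = 1192 * y;
                int r = y1192 + 1634 * v;
                int g = y1192 - 833 * v - 400 * u;
                int b = y1192 + 2066 * u;
                r = Math.max(0, Math.min(262143, r));   // 18-bit intermediate range
                g = Math.max(0, Math.min(262143, g));
                b = Math.max(0, Math.min(262143, b));
                rgb[yp] = 0xff000000 | ((r << 6) & 0xff0000)
                        | ((g >> 2) & 0xff00) | ((b >> 10) & 0xff);
            }
        }
        return rgb;
    }

    public static void main(String[] args) {
        // 2x2 frame, Y=128 everywhere, neutral chroma: a uniform gray
        byte[] gray = { (byte) 128, (byte) 128, (byte) 128, (byte) 128, (byte) 128, (byte) 128 };
        System.out.printf("%08X%n", decodeYUV420SP(gray, 2, 2)[0]); // prints FF828282
    }
}
```

The resulting int[] can then be handed to Bitmap.createBitmap(rgb, width, height, Bitmap.Config.ARGB_8888) as the question intends.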

Convert bitmap array to YUV (YCbCr NV21)

混江龙づ霸主 submitted on 2019-12-17 04:26:12
Question: How can I convert the Bitmap returned by BitmapFactory.decodeFile() to YUV format (similar to what the camera's onPreviewFrame() returns in a byte array)?

Answer 1: Here is some code that actually works:

// untested function
byte[] getNV21(int inputWidth, int inputHeight, Bitmap scaled) {
    int[] argb = new int[inputWidth * inputHeight];
    scaled.getPixels(argb, 0, inputWidth, 0, 0, inputWidth, inputHeight);
    byte[] yuv = new byte[inputWidth * inputHeight * 3 / 2];
    encodeYUV420SP(yuv, argb, inputWidth, inputHeight);
    …
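The encodeYUV420SP() the answer relies on is truncated here; a possible counterpart is sketched below. It packs ARGB pixels into NV21: a full-resolution Y plane followed by interleaved V/U pairs subsampled 2x2. The fixed-point constants are the common Rec. 601 approximation seen in Android samples; treat this as a sketch, not the answer's exact code.

```java
public class Nv21Encoder {
    public static void encodeYUV420SP(byte[] yuv420sp, int[] argb, int width, int height) {
        final int frameSize = width * height;
        int yIndex = 0, uvIndex = frameSize;
        for (int j = 0; j < height; j++) {
            for (int i = 0; i < width; i++) {
                int p = argb[j * width + i];
                int r = (p >> 16) & 0xff, g = (p >> 8) & 0xff, b = p & 0xff;
                int y = (( 66 * r + 129 * g +  25 * b + 128) >> 8) + 16;
                int u = ((-38 * r -  74 * g + 112 * b + 128) >> 8) + 128;
                int v = ((112 * r -  94 * g -  18 * b + 128) >> 8) + 128;
                yuv420sp[yIndex++] = (byte) Math.max(0, Math.min(255, y));
                if (j % 2 == 0 && i % 2 == 0) {   // one V/U pair per 2x2 block
                    yuv420sp[uvIndex++] = (byte) Math.max(0, Math.min(255, v));
                    yuv420sp[uvIndex++] = (byte) Math.max(0, Math.min(255, u));
                }
            }
        }
    }

    public static void main(String[] args) {
        int[] black = { 0xFF000000, 0xFF000000, 0xFF000000, 0xFF000000 };
        byte[] nv21 = new byte[6];               // 2*2 luma + one V/U pair
        encodeYUV420SP(nv21, black, 2, 2);
        System.out.println(nv21[0] + " " + (nv21[4] & 0xff)); // 16 128
    }
}
```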

Will all phones support YUV 420 (Semi) Planar color format in h.264 encoder?

感情迁移 submitted on 2019-12-13 12:33:28
Question: Preamble: this may sound like a very specific question, but it is actually a go/no-go decision for building an API 16+ Android application using MediaCodec that is compatible with most phones. I have an application with an H.264 MediaCodec that receives data from a buffer, not a Surface, since I do a lot of manipulation on the image. When creating the encoder, I iterate through the phone's list of available encoders to make sure I use a proprietary encoder, if any. This part is not …

How can I create a YUV422 frame from a JPEG or other image on Ubuntu

会有一股神秘感。 submitted on 2019-12-13 11:50:12
Question: I want to create a sample YUV422 frame on Ubuntu from any image, so that I can write a YUV422-to-RGB888 function for the sake of learning. I would really like to be able to use a trusted tool to create the sample and convert it back to a JPEG. I have tried ImageMagick but am clearly doing something wrong:

convert -size 640x480 -depth 24 test.jpg -colorspace YUV -size 640x480 -depth 16 -sampling-factor 4:2:2 tmp422.yuv
convert -colorspace YUV -size 640x480 -depth 16 -sampling-factor 4:2:2 tmp422.yuv -size …
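The learning-exercise half of the question, the YUV422-to-RGB888 function itself, could look like the sketch below. It assumes a packed YUYV (YUY2) layout, where 4 bytes [Y0, U, Y1, V] cover two horizontally adjacent pixels sharing one U/V pair, and uses full-range BT.601 coefficients for clarity. Both the byte order (YUYV vs. UYVY) and the range convention are assumptions that depend on how the .yuv sample was actually produced.

```java
public class Yuv422ToRgb {
    static int clamp(double v) { return (int) Math.max(0, Math.min(255, Math.round(v))); }

    // yuyv.length == width * height * 2; returns packed 0xRRGGBB ints
    public static int[] toRgb888(byte[] yuyv, int width, int height) {
        int[] rgb = new int[width * height];
        for (int p = 0, b = 0; p < rgb.length; p += 2, b += 4) {
            int y0 = yuyv[b]     & 0xff, u = (yuyv[b + 1] & 0xff) - 128;
            int y1 = yuyv[b + 2] & 0xff, v = (yuyv[b + 3] & 0xff) - 128;
            rgb[p]     = pixel(y0, u, v);   // both pixels reuse the same chroma
            rgb[p + 1] = pixel(y1, u, v);
        }
        return rgb;
    }

    static int pixel(int y, int u, int v) {
        int r = clamp(y + 1.402 * v);
        int g = clamp(y - 0.344136 * u - 0.714136 * v);
        int b = clamp(y + 1.772 * u);
        return (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        byte[] gray = { 100, (byte) 128, 100, (byte) 128 }; // 2x1 image, neutral chroma
        System.out.printf("%06X%n", toRgb888(gray, 2, 1)[0]); // prints 646464
    }
}
```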

How to extract the Y, U, and V components from a given YUV file using Matlab? Each component is used for further pixel-level manipulation

跟風遠走 submitted on 2019-12-13 05:45:47
Question: Hey guys. I'm currently playing with YUV files. Do you have any suggestions on how to extract the Y, U, and V components from a YUV video? I found a piece of code, shown below, but I don't know which part holds the components I want. Thanks.

% function mov = loadFileYuv(fileName, width, height, idxFrame)
function [mov, imgRgb] = loadFileYuv(fileName, width, height, idxFrame)
% load RGB movie [0, 255] from YUV 4:2:0 file
fileId = fopen(fileName, 'r');
subSampleMat = [1, 1; 1, 1];
…
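What the Matlab loader is reading can be stated in a few lines of code: a YUV 4:2:0 planar (I420) frame is just three planes concatenated, so "extracting" Y, U, and V is slicing the byte buffer. The sketch below (in Java rather than Matlab, purely for illustration) assumes the common I420 layout: a full-resolution Y plane, then a quarter-resolution U plane, then a quarter-resolution V plane.

```java
public class I420Planes {
    public static byte[][] split(byte[] frame, int width, int height) {
        int ySize = width * height;
        int cSize = ySize / 4;                    // 4:2:0: chroma is 2x2 subsampled
        byte[] y = java.util.Arrays.copyOfRange(frame, 0, ySize);
        byte[] u = java.util.Arrays.copyOfRange(frame, ySize, ySize + cSize);
        byte[] v = java.util.Arrays.copyOfRange(frame, ySize + cSize, ySize + 2 * cSize);
        return new byte[][] { y, u, v };
    }

    public static void main(String[] args) {
        // toy 2x2 frame: Y = {1,2,3,4}, U = {5}, V = {6}
        byte[][] planes = split(new byte[] { 1, 2, 3, 4, 5, 6 }, 2, 2);
        System.out.println(planes[1][0] + " " + planes[2][0]); // 5 6
    }
}
```

In the Matlab snippet, the values read right after fopen in that plane order are the components the question is after.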

grab frame from video in Android

蓝咒 submitted on 2019-12-13 05:37:25
Question: I've been looking at different ways of grabbing a YUV frame from a video stream, but most of what I've seen relies on getting the width and height from previewSize. However, a phone can shoot video at 720p while many phones can only display the preview at a lower resolution (e.g. 800x480), so is it possible to grab a screenshot closer to 1920x1080 (if the video is being shot at 720p)? Or am I forced to use the preview resolution (800x480 on some phones)? Thanks.

Answer 1: Yes, you can. Conditions …

I need to create a bitmap in c code in NDK

与世无争的帅哥 submitted on 2019-12-13 05:13:33
Question: My issue is that, for a video call, I receive the frames in my C code as an I420 byte array, which I then convert to NV21 and send on to create a bitmap. But because I need to create a YUV image from the byte array, and then a bitmap from that, I have conversion overhead that causes delays and a loss in quality. I am wondering if there is another way to do this, so that I can create the bitmap directly in the C code, and maybe even add it to the bitmap, or a …
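The I420-to-NV21 step the question describes is cheap in itself, since both are YUV 4:2:0 and only the chroma storage differs: I420 keeps separate U and V planes, while NV21 interleaves them as V,U pairs after the Y plane. A sketch of that conversion (shown in Java for illustration, though the question does it in C):

```java
public class I420ToNv21 {
    public static byte[] convert(byte[] i420, int width, int height) {
        int ySize = width * height, cSize = ySize / 4;
        byte[] nv21 = new byte[ySize + 2 * cSize];
        System.arraycopy(i420, 0, nv21, 0, ySize);             // Y plane is identical
        for (int i = 0; i < cSize; i++) {
            nv21[ySize + 2 * i]     = i420[ySize + cSize + i]; // V
            nv21[ySize + 2 * i + 1] = i420[ySize + i];         // U
        }
        return nv21;
    }

    public static void main(String[] args) {
        // toy 2x2 frame: Y = {1,2,3,4}, U = {5}, V = {6}
        byte[] nv21 = convert(new byte[] { 1, 2, 3, 4, 5, 6 }, 2, 2);
        System.out.println(java.util.Arrays.toString(nv21)); // [1, 2, 3, 4, 6, 5]
    }
}
```

The expensive part is the later YUV-to-bitmap stage, which is what the question hopes to move into native code.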

Video from pipe->YUV with libAV->RGB with sws_scale->Draw with Qt

孤者浪人 submitted on 2019-12-13 04:23:17
Question: I need to decode video from a pipe or socket, convert it to a set of images, and draw them with Qt (4.8.5!!). I'm using the default libAV example and adding what I need to it. Here is my code:

AVCodec *codec;
AVCodecContext *codecContext = NULL;
int frameNumber, got_picture, len;
FILE *f;
AVFrame *avFrame, *avFrameYUV, *avFrameRGB;
uint8_t inbuf[INBUF_SIZE + FF_INPUT_BUFFER_PADDING_SIZE];
AVPacket avpkt;

av_init_packet(&avpkt);
f = fopen("/tmp/test.mpg", "rb");
if (!f) {
    fprintf(stderr, "could not open …

YUV to RGB conversion. RGB file structure?

♀尐吖头ヾ submitted on 2019-12-12 11:20:08
Question: I have a problem converting a YUV file to an RGB file. When I am done, I can't open the .rgb file with GIMP or other viewers, but I can open downloaded .rgb files. Please tell me: what is the structure of an RGB file? (Does it contain a header?) Is it: 1) all R values, then all G values, then all B values, or 2) a loop of (one R, one G, one B) * pixel count?

void convert_YUV_to_RGB(unsigned char Y1, unsigned char Y2, unsigned char Y3, unsigned char Y4, unsigned char U, unsigned char V) {
    /* //V1 for …
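A sketch of what a function with that signature typically produces: one 4:2:0 block of four luma samples sharing one U/V pair, emitted as headerless, interleaved R,G,B bytes per pixel (the question's option 2, which is the usual layout for raw .rgb dumps). Written in Java for illustration; the full-range BT.601 coefficients are an assumption, since the question's own constants are truncated.

```java
public class YuvBlockToRgb {
    static int clamp(double v) { return (int) Math.max(0, Math.min(255, Math.round(v))); }

    // Returns 12 bytes: interleaved R,G,B for each of the four pixels.
    public static byte[] convert(int y1, int y2, int y3, int y4, int u, int v) {
        int[] ys = { y1, y2, y3, y4 };
        byte[] out = new byte[12];
        int cb = u - 128, cr = v - 128;           // chroma is stored with a 128 bias
        for (int i = 0; i < 4; i++) {
            out[3 * i]     = (byte) clamp(ys[i] + 1.402 * cr);
            out[3 * i + 1] = (byte) clamp(ys[i] - 0.344136 * cb - 0.714136 * cr);
            out[3 * i + 2] = (byte) clamp(ys[i] + 1.772 * cb);
        }
        return out;
    }

    public static void main(String[] args) {
        byte[] rgb = convert(50, 50, 50, 50, 128, 128); // neutral chroma: a gray block
        System.out.println(rgb[0] + " " + rgb[1] + " " + rgb[2]); // 50 50 50
    }
}
```

Note that viewers generally cannot infer width, height, or layout from a raw, headerless .rgb file; in GIMP such files must be opened as "raw image data" with the dimensions supplied manually, which may be why the converted file would not open.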