I am a beginner with OpenCV. In my new OpenCV project I have to capture video frames from a camera device and pass them to OpenCV for processing, but now my came
I asked a similar question here recently: OpenCV capture YUYV from camera without RGB conversion.
An OpenCV VideoCapture object is the easiest way to capture video, but it automatically converts the frames to BGR format. You can disable this conversion with videoCapture.set(CV_CAP_PROP_CONVERT_RGB, false) (spelled cv::CAP_PROP_CONVERT_RGB in newer OpenCV versions).
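A minimal sketch of that approach (the camera index and whether the backend honors the property are assumptions; it requires an attached camera, so whether you actually get raw YUYV back depends on your driver):

```cpp
#include <opencv2/opencv.hpp>
#include <cstdio>

int main() {
    // Open the default camera; index 0 is an assumption.
    cv::VideoCapture cap(0);
    if (!cap.isOpened()) {
        std::fprintf(stderr, "failed to open camera\n");
        return 1;
    }

    // Ask OpenCV not to convert frames to BGR. Not every backend
    // honors this property, so check the return value.
    if (!cap.set(cv::CAP_PROP_CONVERT_RGB, false)) {
        std::fprintf(stderr, "backend ignored CAP_PROP_CONVERT_RGB\n");
    }

    cv::Mat frame;
    cap >> frame;  // for a YUYV camera this is typically a 2-channel Mat
    std::printf("type=%d cols=%d rows=%d\n",
                frame.type(), frame.cols, frame.rows);
    return 0;
}
```

Note that this is exactly the call that failed silently on my camera below, so always verify the format of the Mat you get back.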
For some reason that particular setting didn't work with my camera, so I was forced to use the V4L2 library to read frames instead (this is the same library that VideoCapture uses on Linux). Sample video capture code for V4L2: http://linuxtv.org/downloads/v4l-dvb-apis/capture-example.html. I then stored the frames in a cv::Mat for processing.
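Once V4L2 hands you a filled buffer (after VIDIOC_DQBUF in the capture loop from the example above), you can wrap it in a cv::Mat without copying. A sketch, assuming a packed YUYV buffer whose width, height, and pointer come from your own capture code; the function names here are hypothetical:

```cpp
#include <opencv2/opencv.hpp>

// Wrap a V4L2 YUYV buffer in a cv::Mat without copying.
// `data` and the dimensions come from your V4L2 capture loop;
// the buffer must outlive the Mat (the Mat does not own it).
cv::Mat wrapYuyvBuffer(void* data, int width, int height) {
    // YUYV packs two pixels into four bytes, i.e. two bytes per
    // pixel, which maps onto a 2-channel 8-bit Mat.
    return cv::Mat(height, width, CV_8UC2, data);
}

// Convert the wrapped frame to BGR for the rest of the pipeline.
cv::Mat yuyvToBgr(const cv::Mat& yuyv) {
    cv::Mat bgr;
    cv::cvtColor(yuyv, bgr, cv::COLOR_YUV2BGR_YUYV);
    return bgr;
}
```

If you only need OpenCV operations that don't care about color semantics (cropping, copying, file I/O), you can skip the cvtColor step and work on the 2-channel Mat directly.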
You can store data of any format in an OpenCV Mat, but some operations make assumptions about what format the data is in. For example, imshow assumes the data is in either BGR or greyscale format; if your data isn't in one of those formats, your image will look wrong.
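To see why imshow misrenders raw YUYV, here is the per-macropixel arithmetic a proper conversion has to apply, as a plain C++ sketch with no OpenCV dependency. The coefficients are the full-range BT.601 ones; OpenCV's own conversion may use slightly different constants, so treat the exact values as an assumption:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>

// One YUYV macropixel (4 bytes) carries two pixels that share U and V.
struct Bgr { uint8_t b, g, r; };

static uint8_t clamp8(double v) {
    return static_cast<uint8_t>(std::min(255.0, std::max(0.0, v)));
}

// Full-range BT.601 YUV -> BGR for a single luma sample.
static Bgr yuvToBgr(uint8_t y, uint8_t u, uint8_t v) {
    const double d = u - 128.0, e = v - 128.0;
    return {
        clamp8(y + 1.772 * d),                    // B
        clamp8(y - 0.344136 * d - 0.714136 * e),  // G
        clamp8(y + 1.402 * e),                    // R
    };
}

// Expand one 4-byte YUYV macropixel into two BGR pixels.
std::array<Bgr, 2> yuyvMacropixelToBgr(const uint8_t yuyv[4]) {
    const uint8_t y0 = yuyv[0], u = yuyv[1];
    const uint8_t y1 = yuyv[2], v = yuyv[3];
    return { yuvToBgr(y0, u, v), yuvToBgr(y1, u, v) };
}
```

An imshow call that skips this step just reinterprets the Y/U/Y/V bytes as if they were B/G/R samples, which is exactly why the picture looks wrong.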
Short answer: yes, you can present YUV data to OpenCV by converting it to a Mat. Please see my answer to a related question.
If your YUV data is in the form of a raw video file, use file I/O to read it one frame at a time (as a char array), and convert each frame to a Mat using the method I describe in the referenced answer above.
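The frame-at-a-time reading can be sketched like this. A raw YUV file has no header, so you must know the frame size up front: for packed YUYV it is width * height * 2 bytes, for planar YUV420 it is width * height * 3 / 2. The function name and the dropped-trailing-frame policy are my assumptions:

```cpp
#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// Read raw frames of `frameSize` bytes from a headerless YUV file.
// Returns one buffer per frame; a trailing partial frame is dropped.
std::vector<std::vector<char>> readYuvFrames(const std::string& path,
                                             std::size_t frameSize) {
    std::vector<std::vector<char>> frames;
    std::ifstream in(path, std::ios::binary);
    std::vector<char> frame(frameSize);
    while (in.read(frame.data(), static_cast<std::streamsize>(frameSize))) {
        frames.push_back(frame);  // wrap each buffer in a cv::Mat from here
    }
    return frames;
}
```

Each returned buffer can then be wrapped in a cv::Mat (CV_8UC2 for YUYV) exactly as with a camera buffer.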
When your camera hardware does work and you use OpenCV capture (for example VideoCapture), it will convert the YUV stream to BGR. There is no way to get a raw YUV stream directly into OpenCV from a camera. If you need to work on the raw YUV stream (e.g., to interpret the payload in a custom manner), the only way is to use a library like V4L2 directly to read the YUV stream from the camera, convert it into a Mat, and then use the rest of the OpenCV functions. But this goes off topic.