I am a beginner with OpenCV. In my new OpenCV project I have to capture video frames from a camera device and pass them to OpenCV for processing, but now my camera …
The short answer is: yes, you can present YUV data to OpenCV by converting it to a Mat. Please see my answer to a related question.
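As a minimal sketch of the idea, here is how a raw YUV buffer can be wrapped in a Mat and converted to BGR. It assumes the buffer is in I420 (planar YUV 4:2:0) layout; other layouts (NV12, YUYV, ...) need a different `COLOR_YUV2BGR_*` code, and the function name is just illustrative:

    #include <opencv2/opencv.hpp>

    // Wrap a raw I420 buffer in a Mat and convert it to BGR.
    // I420 stores a full-resolution Y plane followed by quarter-size U and V
    // planes, so the buffer holds height * 3/2 rows of 'width' bytes.
    cv::Mat yuv420ToBgr(const unsigned char* data, int width, int height)
    {
        cv::Mat yuv(height + height / 2, width, CV_8UC1,
                    const_cast<unsigned char*>(data));
        cv::Mat bgr;
        cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_I420);
        return bgr;
    }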
If your YUV data is in the form of a raw video, use file I/O to read from the YUV video file one frame at a time (as a char array), and convert each frame to a Mat using the method I describe in the referenced answer above; a sketch of that loop follows.
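A rough sketch of that file-reading loop, again assuming I420 data; the file name, resolution, and window name are placeholders (a raw .yuv file has no header, so you must know the width, height, and pixel format in advance):

    #include <opencv2/opencv.hpp>
    #include <fstream>
    #include <vector>

    int main()
    {
        const int width = 640, height = 480;              // assumed resolution
        const size_t frameSize = width * height * 3 / 2;  // bytes per I420 frame

        std::ifstream in("video.yuv", std::ios::binary);  // hypothetical file name
        std::vector<unsigned char> buffer(frameSize);

        // Read one frame's worth of bytes at a time, wrap it in a Mat,
        // and convert to BGR for the rest of the OpenCV pipeline.
        while (in.read(reinterpret_cast<char*>(buffer.data()), frameSize)) {
            cv::Mat yuv(height + height / 2, width, CV_8UC1, buffer.data());
            cv::Mat bgr;
            cv::cvtColor(yuv, bgr, cv::COLOR_YUV2BGR_I420);

            cv::imshow("frame", bgr);
            if (cv::waitKey(30) >= 0) break;              // stop on any key press
        }
        return 0;
    }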
When your camera hardware does work and you use OpenCV capture (for example VideoCapture), it will convert the YUV stream to BGR for you. There is no way to get the raw YUV stream directly into OpenCV from a camera. If you need to work on the raw YUV stream (e.g., to interpret the payload in a custom manner), the only way is to use a library such as V4L2 directly to read the YUV stream from the camera, convert it into a Mat, and then use the rest of the OpenCV functions. But that goes beyond the scope of this question.
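For completeness, this is the usual VideoCapture path when the camera is supported: frames arrive already converted to BGR, so no manual YUV handling is needed (device index 0 and the window name are assumptions):

    #include <opencv2/opencv.hpp>

    int main()
    {
        cv::VideoCapture cap(0);        // open the default camera
        if (!cap.isOpened()) return -1;

        cv::Mat frame;                  // delivered as 8-bit BGR by default
        while (cap.read(frame)) {
            cv::imshow("camera", frame);
            if (cv::waitKey(30) >= 0) break;
        }
        return 0;
    }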