v4l2

OpenCV 3.2 with Python 3 VideoCapture changes settings of v4l2

有些话、适合烂在心里 submitted on 2019-12-24 08:48:54
Question: I am working with OpenCV 3.2 and Python 3 on an SBC OXU4. I have a true 5 MPx webcam connected to the SBC, and I want to take a 2592x1944 picture from it. If I use Cheese, I can take a picture at this resolution, and I can save pictures with the command-line program streamer -t 4 -r 4 -s 2592x1944 -o b0.jpeg. But when I take a picture with OpenCV 3.2 like this:

#!/usr/bin/env python3
import cv2
import os
import time

capture1 = cv2.VideoCapture(2)
if capture1.isOpened():
    capture1.set(3, 2592)
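The bare numbers in capture1.set(3, 2592) are OpenCV property ids (CAP_PROP_FRAME_WIDTH is 3 and CAP_PROP_FRAME_HEIGHT is 4 in OpenCV 3.x). A hedged sketch, not the asker's code, of a helper that requests a resolution and reads back what the driver actually accepted; V4L2 silently clamps unsupported sizes, which is one way this symptom shows up:

```python
# Sketch: request a capture resolution and read back what the driver granted.
# Works with cv2.VideoCapture, but anything exposing set()/get() fits.

CAP_PROP_FRAME_WIDTH = 3   # cv2.CAP_PROP_FRAME_WIDTH in OpenCV 3.x
CAP_PROP_FRAME_HEIGHT = 4  # cv2.CAP_PROP_FRAME_HEIGHT in OpenCV 3.x

def request_resolution(cap, width, height):
    """Ask the device for width x height; return the size it actually accepted."""
    cap.set(CAP_PROP_FRAME_WIDTH, width)
    cap.set(CAP_PROP_FRAME_HEIGHT, height)
    # V4L2 may clamp the request to the nearest supported mode, so read
    # the values back instead of trusting the request.
    return (int(cap.get(CAP_PROP_FRAME_WIDTH)),
            int(cap.get(CAP_PROP_FRAME_HEIGHT)))
```

With a real camera this would be request_resolution(cv2.VideoCapture(2), 2592, 1944); if the returned tuple differs from the request, the driver rejected or clamped the mode.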

A misunderstanding of V4L2

佐手、 submitted on 2019-12-24 01:09:38
Question: I have a small problem with the sizes of my buffers in a C++ program. I grab YUYV images from a camera using V4L2 (an example is available here). I want to take one image and put it into my own image structure. Here is the buffer given by the V4L2 structure and its size: (uchar*)buffers_[buf.index].start, buf.bytesused. In my structure, I create a new buffer (mybuffer) with a size of width*height*byteSize (byte size is 4, since I grab YUYV or YUV422 images). The problem is that I was expecting
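The likely source of the mismatch: YUYV (YUV 4:2:2) stores 2 bytes per pixel on average, not 4, because each pair of pixels shares one U and one V sample (Y0 U Y1 V). A small sketch of the arithmetic:

```python
def yuyv_buffer_size(width, height):
    """YUYV packs 2 pixels into 4 bytes (Y0 U Y1 V): 2 bytes per pixel."""
    return width * height * 2

def four_byte_buffer_size(width, height):
    """4 bytes per pixel -- the size the question apparently expected."""
    return width * height * 4
```

For a 640x480 frame, buf.bytesused from V4L2 will be 614400 bytes, exactly half of the 1228800 a 4-byte-per-pixel buffer would need.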

imx6 multi-layer split-screen video display

别来无恙 submitted on 2019-12-23 18:15:15
//main.c
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <stdint.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/time.h>
#include <unistd.h>
#include <asm/types.h>
#include <linux/videodev2.h>
#include <sys/mman.h>
#include <math.h>
#include <string.h>
#include <malloc.h>
#include <signal.h>
#include "mxcfb.h"
#include "ipu.h"
#include "g2d.h"

#define G2D_CACHEABLE 1
#define TFAIL -1
#define TPASS 0
#define NUMBER_BUFFERS 3

char v4l_capture

OpenCV capture YUYV from camera without RGB conversion

跟風遠走 submitted on 2019-12-22 08:52:34
Question: I am trying to use OpenCV/C++ to capture the left and right images from a LI-USB30_V024 stereo camera without automatically converting them to RGB. The camera outputs images in YUYV format. I have tried using videoCapture.set(CV_CAP_PROP_CONVERT_RGB, false), but I get the message "HIGHGUI ERROR: V4L: Property (16) not supported by device". The reason I want to avoid the conversion to RGB is that the camera packs the left and right video together into a single YUYV image. Both cameras are
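Since the left and right views share one YUYV frame, the raw buffer can be split without ever converting to RGB. A sketch, assuming (this is a guess about this particular camera's packing) that the two views sit side by side in each row; splitting on an even pixel boundary keeps the Y0 U Y1 V groups intact:

```python
import numpy as np

def split_stereo_yuyv(frame, width, height):
    """Split a side-by-side YUYV frame into left and right halves.

    frame: flat buffer of height*width*2 bytes (YUYV = 2 bytes per pixel).
    Returns two (height, width) byte arrays, each covering width/2 pixels.
    """
    img = np.frombuffer(bytes(frame), dtype=np.uint8).reshape(height, width * 2)
    half = width  # width/2 pixels * 2 bytes per pixel = width bytes
    return img[:, :half].copy(), img[:, half:].copy()
```

Each half is still valid YUYV data and can be decoded or displayed independently.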

OpenCV output on V4l2

空扰寡人 submitted on 2019-12-22 08:23:15
Question: I wanted to know if I can use OpenCV to write to a v4l2 device. I would take a picture, apply small changes with the features of OpenCV, and then send it to a v4l2 device. I searched the web, but there are a lot of examples on how to read from a V4L2 device and I found nothing about writing to v4l2. Can someone help me? Answer 1: The question is 8 months old, but if you still need an answer (I suppose your OS is Linux): 1. Install the v4l2loopback module. 1.1. Load and configure it, i.e.
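Before OpenCV frames can be written to such a device, they usually have to match the pixel format the device was configured for. As a hedged sketch (the exact format depends on how the loopback was set up), here is the packing step from separate 4:2:2 Y/U/V samples into the interleaved YUYV byte order a v4l2 writer would emit:

```python
import numpy as np

def pack_yuyv(y, u, v):
    """Interleave full-res Y with half-horizontal-res U and V as Y0 U Y1 V.

    y: (h, w) uint8; u, v: (h, w//2) uint8 (one chroma pair per 2 pixels).
    Returns the flat byte buffer expected for V4L2_PIX_FMT_YUYV.
    """
    h, w = y.shape
    out = np.empty((h, w * 2), dtype=np.uint8)
    out[:, 0::4] = y[:, 0::2]  # Y0
    out[:, 1::4] = u           # U shared by the pixel pair
    out[:, 2::4] = y[:, 1::2]  # Y1
    out[:, 3::4] = v           # V shared by the pixel pair
    return out.tobytes()
```

With OpenCV, the Y/U/V planes could come from cv2.cvtColor into a YUV colorspace; the packing above is the part OpenCV does not do for you.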

How can I capture audio AND video simultenaous with ffmpeg from a linux USB capture device

随声附和 submitted on 2019-12-21 04:57:06
Question: I am capturing video by means of a USB Terratec Grabster AV350 (which is based on the em2860 chip). I do not succeed in getting the audio when the capture is played back. If I play the captured video with vlc or ffplay, I get only 3 seconds of sound and then silence for the rest of the video. During the capture I don't get any errors, and at the end it reports the size of the video and audio captured. I'm using the ffmpeg command for this: ffmpeg -f alsa -ac 2 -i hw:3 -f video4linux2 -i /dev
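For reference, an audio-plus-video grab of this shape is one ffmpeg invocation with two -i inputs. A sketch that only assembles the argument list in Python; the ALSA device hw:3 comes from the question, while the video device and output path are parameters because the original command is truncated:

```python
def build_ffmpeg_grab_cmd(alsa_device, video_device, output):
    """Assemble an ffmpeg command capturing ALSA audio plus V4L2 video.

    Mirrors the question's flags: -f alsa -ac 2 -i <audio> -f video4linux2
    -i <video> <output>. The output path is a placeholder, not from the post.
    """
    return ["ffmpeg",
            "-f", "alsa", "-ac", "2", "-i", alsa_device,
            "-f", "video4linux2", "-i", video_device,
            output]
```

The list form can be handed straight to subprocess.call(), which avoids shell-quoting surprises with device names.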

How to write/pipe to a virtual webcam created by V4L2loopback module?

无人久伴 submitted on 2019-12-18 12:41:23
Question: I have written an application that reads from a webcam and processes the frames using OpenCV on Linux. Now I want to pipe the output of my application to a virtual webcam created by the V4L2loopback module, so that other applications are able to read it. I have written the application in C. I am not sure how to approach this. Could you please give me any hints? Answer 1: I found an answer on the old V4L2loopback module's page on Google Code. http://code.google.com/p
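The usual pattern is simply to open the loopback node and write raw frames of exactly the negotiated size. A hedged Python sketch of that write loop; the device path and the RGB24 format are assumptions, and on a real device the format must first be configured through the V4L2 ioctls before writes are accepted:

```python
import os

BYTES_PER_PIXEL = {"RGB24": 3, "BGR24": 3, "YUYV": 2, "GREY": 1}

def frame_size(width, height, fmt):
    """Byte size of one raw frame; a mismatch here is a format mismatch."""
    return width * height * BYTES_PER_PIXEL[fmt]

def stream_frames(device_path, frames, width, height, fmt="RGB24"):
    """Write an iterable of raw frame buffers to a v4l2loopback node."""
    expected = frame_size(width, height, fmt)
    fd = os.open(device_path, os.O_WRONLY)
    try:
        for frame in frames:
            assert len(frame) == expected, "frame size != negotiated size"
            os.write(fd, frame)
    finally:
        os.close(fd)
```

The same loop works from C with open()/write(); the only real requirement is that every write() carries one complete frame in the configured format.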

V4L2 difference between JPEG and MJPEG pixel formats

北战南征 submitted on 2019-12-11 11:03:40
Question: What is the difference between these two pixel formats in the v4l2 API: V4L2_PIX_FMT_JPEG and V4L2_PIX_FMT_MJPEG? To me it seems that both should return JPEG images when packets are read from a webcam. Answer 1: I am interested in this question as well, and I hope somebody can post an answer with some detail. Perhaps my observations below are useful for finding an answer. I noticed that there are some differences between the two settings. The Pi NoIR camera (which I use on a Raspberry Pi 1 B) supports
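One concrete, checkable difference: the two are distinct fourcc codes, so a driver advertises and implements them independently. v4l2's fourcc macro packs four ASCII characters little-endian, which is easy to reproduce:

```python
def v4l2_fourcc(a, b, c, d):
    """Python version of the v4l2_fourcc() macro from linux/videodev2.h."""
    return ord(a) | (ord(b) << 8) | (ord(c) << 16) | (ord(d) << 24)

# The kernel defines the two formats with different codes:
V4L2_PIX_FMT_JPEG = v4l2_fourcc('J', 'P', 'E', 'G')   # 0x4745504a
V4L2_PIX_FMT_MJPEG = v4l2_fourcc('M', 'J', 'P', 'G')  # 0x47504a4d
```

Both carry JPEG-compressed payloads; a commonly reported practical difference is that MJPEG frames from some drivers omit the Huffman tables a standalone JPEG file carries, so they need those tables reinserted before a generic decoder accepts them.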

How can I change webcam properties that OpenCV doesn't support but v4l2 API does?

人走茶凉 submitted on 2019-12-10 17:14:00
Question: I'm using OpenCV 3.1 and Python 2.7 to capture video frames from my webcam, a Logitech C270. I'm also using video4linux2 (v4l2) to set the properties of my camera, but this has led to a few problems. My OS is Ubuntu 15.04. The specific property I'm trying to change is absolute_exposure. I'm able to change it manually using the v4l2 API via the terminal with the command v4l2-ctl --set-ctrl exposure_absolute=40, and it works nicely, but I need to write a script for this task. Using OpenCV's set(cv2.CAP_PROP
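A script can simply shell out to the same v4l2-ctl invocation that already works from the terminal. A minimal sketch; the /dev/video0 device path is an assumption, and on many UVC cameras exposure_auto must be switched to manual mode before exposure_absolute takes effect:

```python
import subprocess

def v4l2_set_ctrl_cmd(device, control, value):
    """Build the v4l2-ctl argument list for setting one control."""
    return ["v4l2-ctl", "-d", device,
            "--set-ctrl", "{}={}".format(control, value)]

def set_v4l2_control(device, control, value):
    """Run the command; returns v4l2-ctl's exit status (0 on success)."""
    return subprocess.call(v4l2_set_ctrl_cmd(device, control, value))
```

Usage would look like set_v4l2_control("/dev/video0", "exposure_absolute", 40), mirroring the terminal command from the question.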

Gstreamer message to signal new frame from video source (webcam)

你离开我真会死。 submitted on 2019-12-10 15:28:49
Question: I am trying to save a stream from a webcam as a series of images using GStreamer. I have written this code so far:

#!/usr/bin/python
import sys, os
import pygtk, gtk, gobject
import pygst
pygst.require("0.10")
import gst

def __init__(self):
    #....
    # Code to create a gtk Window
    #....
    self.player = gst.Pipeline("player")
    source = gst.element_factory_make("v4l2src", "video-source")
    sink = gst.element_factory_make("xvimagesink", "video-output")
    caps = gst.Caps("video/x-raw-yuv, width=640, height=480"
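The pipeline the snippet builds element by element can also be expressed as a single gst-launch-style description and handed to gst.parse_launch(). A sketch that only assembles the string, so it runs without the long-obsolete pygst 0.10 runtime; the identity element is one common way to get a per-frame notification, since its "handoff" signal fires once per buffer:

```python
def build_pipeline_desc(device, width, height):
    """gst-launch-0.10 style description matching the question's elements,
    with an identity element inserted as a per-frame tap point."""
    caps = "video/x-raw-yuv, width={}, height={}".format(width, height)
    return "v4l2src device={} ! {} ! identity name=frame-tap ! xvimagesink".format(
        device, caps)
```

With pygst, connecting a callback to the "handoff" signal of the element named frame-tap gives one callback per captured frame, which is the hook the question is looking for.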