v4l2

Camera Driver & V4L2 Driver Architecture Overview

隐身守侯 submitted on 2019-12-10 06:52:41
Based on the Marvell PXA920.

1. Camera basics

1.1. Mobile-phone cameras
A phone's digital-camera feature means the phone shoots still images or short video clips through a built-in digital camera; as a new add-on feature, it has developed rapidly. Phone cameras are either built in or external: a built-in camera sits inside the handset and is more convenient, while an external one connects over a data cable or the connector on the bottom of the handset and provides all of a digital camera's shooting functions. Phone cameras are still at an early stage: no handset sold domestically yet offers optical zoom, though such models should gradually reach the market as the feature matures, and most handsets already offer digital zoom. Typical features today include still capture, burst shooting, video clips, rotatable lenses, automatic white balance, and a built-in flash. Shooting quality depends directly on the screen material, the screen resolution, and the sensor's pixel count and material.

1.2. Camera specifications

1.2.1. Image compression: JPEG (Joint Photographic Experts Group), a lossy still-image compression format. The higher the compression ratio, the worse the image quality; it suits cases where precision requirements are modest and storage space is limited. Most digital cameras currently use the JPEG format.

1.2.2. Image noise: spurious interference in the image, visible as fixed colored specks.

1.2.3. Viewing angle: works on a principle similar to imaging in the human eye

Image capture on Linux with the v4l2 framework

送分小仙女□ submitted on 2019-12-10 06:30:18
Capturing images from a laptop's built-in webcam under Linux seemed daunting at first: how do you drive the built-in camera at all? The internet shows it is no big deal. Check the camera with lsusb; if the short listing is unclear, use lsusb -v | less for details. Once you confirm the camera is a USB device, the v4l2 framework applies. V4L2 (Video For Linux Two) is the unified interface the kernel provides for applications to access audio and video drivers. Its image-capture sequence is: open the device -> query and set device properties -> set the frame format -> choose an input/output method (buffer management) -> loop to fetch data -> close the device. An introduction to the V4L2 interface is at http://blog.csdn.net/g_salamander/article/details/8107692 , and sample V4L2 capture source is at http://download.csdn.net/detail/xiaohouye/9499342 . Along the way, note which video output formats the camera supports: you can query them via ioctl using the commands V4L2 provides, or take the lazy route of installing luvcview and running luvcview -L to list the camera's output details. Since my camera outputs YUV422, the captured data still has to be converted to an image format: first YUV422 to RGB, then RGB to BMP. Source: oschina Link: https://my.oschina.net/u
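The YUV422-to-RGB step above can be sketched in Python. This is a minimal sketch of BT.601 conversion assuming a packed YUYV buffer (byte order Y0 U Y1 V per pixel pair) and an even frame width; it is not the code from the linked download:

```python
import numpy as np

def yuyv_to_rgb(frame, width, height):
    """Convert a packed YUYV (YUV 4:2:2) buffer to an RGB uint8 array (BT.601)."""
    # Each pixel occupies 2 bytes; bytes alternate Y0 U Y1 V across a pixel pair.
    yuyv = np.frombuffer(frame, dtype=np.uint8).reshape(height, width, 2).astype(np.float32)
    y = yuyv[:, :, 0]
    u = yuyv[:, 0::2, 1].repeat(2, axis=1)  # U is shared by each pixel pair
    v = yuyv[:, 1::2, 1].repeat(2, axis=1)  # V is shared by each pixel pair
    r = y + 1.402 * (v - 128.0)
    g = y - 0.344 * (u - 128.0) - 0.714 * (v - 128.0)
    b = y + 1.772 * (u - 128.0)
    return np.clip(np.dstack((r, g, b)), 0.0, 255.0).astype(np.uint8)

# A 2x2 mid-gray frame (Y = U = V = 128) should come out as mid-gray RGB.
gray = yuyv_to_rgb(bytes([128] * 8), 2, 2)
```

From the resulting RGB array, writing a BMP is a matter of an image library or hand-rolling the 54-byte BMP header.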

Where does the v4l2_buffer->timestamp value start counting?

我的未来我决定 submitted on 2019-12-09 13:53:11
Question: I am trying to use v4l2_buffer's timestamp value (type timeval) to synchronize images captured from a UVC webcam with external events. However, the timestamp matches neither the system time nor the uptime:

printf("image captured at %ld, %ld\n", buffer->timestamp.tv_sec, buffer->timestamp.tv_usec);
struct timeval tv;
gettimeofday(&tv, 0);
printf("current time %ld, %ld\n", tv.tv_sec, tv.tv_usec);

results in

image captured at 367746, 476270
current time 1335083395, 11225

My uptime is 10
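A value like 367746 seconds is consistent with the buffer being stamped from the monotonic clock (roughly seconds since boot) rather than the wall clock that gettimeofday reports. Assuming that is the source, the two can be compared by adding the current wall-vs-monotonic offset; a sketch in Python:

```python
import time

def monotonic_to_wall(ts):
    """Map a monotonic (since-boot) timestamp onto the wall clock.

    Assumes the buffer timestamp comes from the monotonic clock; the offset is
    sampled at call time, so it drifts if the system clock is stepped between
    capture and conversion.
    """
    return ts + (time.time() - time.monotonic())

wall = monotonic_to_wall(time.monotonic())  # converting "now" should give ~now
```

The offset should be sampled once, close to capture time, and reused for a whole run so frames stay mutually consistent.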

Could not join Multicast group : No such Device

六月ゝ 毕业季﹏ submitted on 2019-12-08 05:47:31
Question: I would like to stream camera data over UDP multicast using GStreamer. For that I used the pipeline below:

gst-launch-1.0 v4l2src ! videoconvert ! video/x-raw,width=720,height=576,framerate=25/1 ! x264enc ! mpegtsmux ! rtpmp2tpay ! udpsink host=224.1.1.1 port=9090 auto-multicast=true sync=true async=false qos=true

But I am getting the error below:

could not get/set settings from/on resource : gstmultiudpsink.c(948): gst_multiudpsink_configure_client (): Could not join Multicast group : No such Device

Processing .Raw images using ffmpeg or OpenCV

孤人 submitted on 2019-12-08 05:22:39
Question: After reading the Wikipedia page on the raw image format, which is the digital negative of an image: "To be viewed or printed, the output from a camera's image sensor has to be processed, that is, converted to a photographic rendering of the scene, and then stored in a standard raster graphics format such as JPEG. This processing, whether done in-camera or later in a raw-file converter, involves a number of operations, typically including" I have some .raw files grabbed from my Logitech c920 using v4l2
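Before handing such a file to ffmpeg or OpenCV it helps to verify its size against the expected geometry, because a raw frame carries no header: width, height, and bytes per pixel must all be supplied by hand. A small sketch in Python (the default of 2 bytes per pixel assumes a packed YUYV dump such as the C920 produces in that mode; adjust for your actual format):

```python
import numpy as np
import os
import tempfile

def load_raw(path, width, height, bpp=2):
    """Load a headerless raw frame as a (height, width, bpp) uint8 array."""
    data = np.fromfile(path, dtype=np.uint8)
    expected = width * height * bpp
    if data.size != expected:
        raise ValueError(f"got {data.size} bytes, expected {expected}")
    return data.reshape(height, width, bpp)

# Demo on a synthetic 4x2 frame written to a temporary file.
with tempfile.NamedTemporaryFile(suffix=".raw", delete=False) as f:
    f.write(bytes(range(16)))
    name = f.name
frame = load_raw(name, 4, 2)
os.unlink(name)
```

If the size check fails, the guessed resolution or pixel format is wrong, which is also the usual cause of "shredded" output from ffmpeg's rawvideo demuxer.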

Two webcams on one USB 2.0 hub - works in Windows but not Linux

让人想犯罪 __ submitted on 2019-12-08 04:30:45
Question: The OpenCV code below grabs simultaneous images from two cameras. It works fine in Windows, with both cameras attached to one USB 2.0 hub. When I try the same code in Linux, there is only enough bandwidth for one camera at a time. I have also tried viewing the two streams at once with guvcview, with the same issue. What I need is some way to force the webcams to work together, possibly by limiting the amount of bandwidth the driver requests.

capture = cv.CaptureFromCAM(0)
capture2 = cv.CaptureFromCAM(1
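The symptom is consistent with how the Linux uvcvideo driver reserves isochronous bandwidth: each uncompressed stream reserves its worst-case rate up front, and a USB 2.0 bus has only on the order of 24 MB/s of usable isochronous payload, so two uncompressed VGA streams may not both be schedulable even though Windows packs them more optimistically. A back-of-the-envelope check (the 24 MB/s ceiling is an approximation, not a specified constant):

```python
def stream_rate(width, height, bytes_per_pixel, fps):
    """Worst-case payload rate in bytes/second for one uncompressed stream."""
    return width * height * bytes_per_pixel * fps

USB2_ISOC_BYTES_PER_SEC = 24 * 1000 * 1000  # rough usable isochronous ceiling

one = stream_rate(640, 480, 2, 30)            # YUYV VGA at 30 fps
fits_one = one <= USB2_ISOC_BYTES_PER_SEC     # one stream fits
fits_two = 2 * one <= USB2_ISOC_BYTES_PER_SEC # two streams do not
```

The usual workarounds follow from the arithmetic: request MJPEG instead of a raw format, or drop the resolution or frame rate until two streams fit within the budget.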

Undefined references to libraries: how can I find the right path?

送分小仙女□ submitted on 2019-12-07 16:31:27
Question: I am trying to compile a v4l2 example on Ubuntu but I am getting the following error:

guilherme@notedev01:~/Downloads/V4l2_samples-0.4.1$ make
gcc -O2 -L/usr/include -lX11 -lXext -o viewer viewer.c
/tmp/ccUjnjWQ.o: In function `image_destroy':
viewer.c:(.text+0x234): undefined reference to `XDestroyImage'
viewer.c:(.text+0x256): undefined reference to `XFreeGC'
viewer.c:(.text+0x277): undefined reference to `XShmDetach'
viewer.c:(.text+0x2ac): undefined reference to `XFreePixmap'
/tmp

Cannot turn off/on CameraCapture using Python/opencv: Device or resource busy

笑着哭i submitted on 2019-12-07 11:53:42
Question: When I try to re-open OpenCV's CameraCapture from Python I get:

libv4l2: error setting pixformat: Device or resource busy
HIGHGUI ERROR: libv4l unable to ioctl S_FMT
libv4l2: error setting pixformat: Device or resource busy
libv4l1: error setting pixformat: Device or resource busy
HIGHGUI ERROR: libv4l unable to ioctl VIDIOCSPICT

Although my application runs in a bigger context using PyQt and various other modules, I was able to isolate the problem. So when I hit "r" (reload) the capture
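"Device or resource busy" usually means the previous capture object still holds the /dev/videoN node, so it must be released (or garbage-collected) before a new one is opened. The discipline can be illustrated with a dummy stand-in; the Capture class below is hypothetical, not the OpenCV API, where the equivalent is releasing or deleting the old capture before re-creating it:

```python
class Capture:
    """Hypothetical stand-in for a camera handle with exclusive device access."""
    _open_devices = set()

    def __init__(self, device):
        if device in Capture._open_devices:
            # Mirrors the kernel's EBUSY when the node is already held.
            raise OSError("Device or resource busy")
        Capture._open_devices.add(device)
        self.device = device

    def release(self):
        Capture._open_devices.discard(self.device)

cap = Capture(0)
cap.release()     # free the device node first...
cap = Capture(0)  # ...then re-opening succeeds
cap.release()
```

The practical point: make the reload handler release the old capture explicitly rather than relying on the garbage collector, which may keep the file descriptor open longer than expected.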

Zero shutter lag in Android camera

▼魔方 西西 submitted on 2019-12-07 03:38:55
Question: With normal shutter lag, the sensor driver hands the captured image buffer to the v4l2 layer; there the JPEG (hardware) stage adds extra header data (EXIF info and a thumbnail), and this layer passes the image buffer to the preview heap (in the HAL layer) for further processing. But what is the picture-taking process in the zero-shutter-lag case? Is it the same as with normal shutter lag? How can the time between the take-picture call and image processing be reduced? If not, please explain. Answer 1: To achieve zero shutter lag, the camera
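The usual zero-shutter-lag design, sketched here generically rather than as any particular vendor's HAL: the sensor streams full-resolution frames into a small ring buffer continuously, and when the shutter fires the pipeline selects the already-captured frame whose timestamp is closest to the press, so the user-visible lag is near zero. A minimal sketch in Python:

```python
from collections import deque

class ZslRingBuffer:
    """Keep the last N frames; on shutter, return the one nearest the press time."""

    def __init__(self, depth=4):
        self.frames = deque(maxlen=depth)  # (timestamp, frame) pairs

    def push(self, timestamp, frame):
        # Oldest frame is dropped automatically once depth is exceeded.
        self.frames.append((timestamp, frame))

    def take_picture(self, shutter_time):
        # Pick the buffered frame closest in time to the shutter press.
        return min(self.frames, key=lambda tf: abs(tf[0] - shutter_time))[1]

buf = ZslRingBuffer(depth=3)
for t in range(6):               # frames arrive at t = 0..5; only 3, 4, 5 are kept
    buf.push(t, f"frame@{t}")
picked = buf.take_picture(4.2)   # shutter pressed at t = 4.2
```

The buffer depth trades memory against how far back in time the shutter can reach; real implementations also have to hold the matching sensor metadata for each buffered frame so the JPEG stage can process it later.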

Could not join Multicast group : No such Device

瘦欲@ submitted on 2019-12-06 16:30:57
I would like to stream camera data over UDP multicast using GStreamer. For that I used the pipeline below:

gst-launch-1.0 v4l2src ! videoconvert ! video/x-raw,width=720,height=576,framerate=25/1 ! x264enc ! mpegtsmux ! rtpmp2tpay ! udpsink host=224.1.1.1 port=9090 auto-multicast=true sync=true async=false qos=true

But I am getting the error below:

could not get/set settings from/on resource : gstmultiudpsink.c(948): gst_multiudpsink_configure_client (): Could not join Multicast group : No such Device

However, the same pipeline works on an Ubuntu 14.10 64-bit PC. It did not work on a RHEL7 64-bit PC. These
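"No such device" (ENODEV) on a multicast join typically means the kernel has no interface with a route covering 224.0.0.0/4, which would be a host configuration difference between the RHEL7 box and the Ubuntu one rather than a pipeline problem. Under the hood the sink performs an IP_ADD_MEMBERSHIP setsockopt; a minimal Python sketch of building that request (the actual join is commented out since it needs a multicast-capable interface, and the route command and interface name in the comment are examples):

```python
import socket
import struct

def multicast_membership(group, interface="0.0.0.0"):
    """Pack an ip_mreq for IP_ADD_MEMBERSHIP: group address + local interface."""
    return struct.pack("4s4s", socket.inet_aton(group), socket.inet_aton(interface))

mreq = multicast_membership("224.1.1.1")
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
# ENODEV here means no interface has a multicast route; something like
#   ip route add 224.0.0.0/4 dev eth0
# (interface name is an example) is the usual fix on the failing host.
```

Passing a specific interface address instead of 0.0.0.0 pins the join to that interface, which is the same idea as setting an explicit multicast interface on the GStreamer sink.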