Low-latency video streaming from C++ OpenCV application in Windows [closed]

Submitted by 风格不统一 on 2019-12-13 10:41:02

Question


There are quite a lot of questions on this topic, but most of them involve undesired protocols: HTML5, WebRTC, etc.

Basically, the problem can be formulated as follows: how do I stream my own cv::Mat images over either RTSP or MJPEG [AFAIK it is better for real-time streaming] on Windows? Nearly everything I can find assumes a Linux OS and is simply not applicable to this project.
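
For illustration, here is a minimal single-client sketch of the MJPEG option (illustrative code, not from the original post), assuming OpenCV 3.x and Winsock; port 8080 and camera index 0 are arbitrary choices. Most browsers and ffplay can render a multipart/x-mixed-replace stream served this way:

    // Minimal MJPEG-over-HTTP server: JPEG-encode each cv::Mat and push it
    // to one connected client as a multipart stream. Windows / Winsock.
    #include <opencv2/opencv.hpp>
    #include <winsock2.h>
    #include <string>
    #include <vector>
    #pragma comment(lib, "ws2_32.lib")

    int main() {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);

        SOCKET server = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = INADDR_ANY;
        addr.sin_port = htons(8080);           // arbitrary port
        bind(server, (sockaddr*)&addr, sizeof(addr));
        listen(server, 1);
        SOCKET client = accept(server, nullptr, nullptr);

        // One HTTP header up front, then an endless sequence of JPEG parts.
        std::string header =
            "HTTP/1.0 200 OK\r\n"
            "Content-Type: multipart/x-mixed-replace; boundary=frame\r\n\r\n";
        send(client, header.c_str(), (int)header.size(), 0);

        cv::VideoCapture cap(0);               // replace with your own cv::Mat source
        cv::Mat frame;
        std::vector<uchar> jpeg;
        std::vector<int> params = { cv::IMWRITE_JPEG_QUALITY, 80 };

        while (cap.read(frame)) {
            cv::imencode(".jpg", frame, jpeg, params);
            std::string part =
                "--frame\r\n"
                "Content-Type: image/jpeg\r\n"
                "Content-Length: " + std::to_string(jpeg.size()) + "\r\n\r\n";
            if (send(client, part.c_str(), (int)part.size(), 0) == SOCKET_ERROR) break;
            if (send(client, (const char*)jpeg.data(), (int)jpeg.size(), 0) == SOCKET_ERROR) break;
            send(client, "\r\n", 2, 0);
        }
        closesocket(client);
        closesocket(server);
        WSACleanup();
        return 0;
    }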

FFmpeg's piping sort of worked, but the delay was about 10 seconds. I could take it down to 3-4 seconds using one of the incomprehensibly long parameter lists that the FFmpeg team loves so much, but that is still not enough, because the project under consideration is a surveillance app with an active user controlling the camera, so I need to be as close to real time as possible.
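
The piping approach mentioned above looks roughly like this (a sketch; camera index, resolution, and frame rate are placeholders, and the ffmpeg arguments mirror the command in the EDIT below):

    // Pipe raw BGR frames from OpenCV into ffmpeg's stdin; ffmpeg does the
    // encoding and RTP output. Windows: _popen/_pclose from <stdio.h>.
    #include <opencv2/opencv.hpp>
    #include <stdio.h>

    int main() {
        const int W = 640, H = 480;            // placeholder resolution
        cv::VideoCapture cap(0);

        // -f rawvideo tells ffmpeg to expect headerless bgr24 frames on stdin.
        FILE* ff = _popen(
            "ffmpeg -f rawvideo -pix_fmt bgr24 -s 640x480 -r 30 -i - "
            "-c:v libx264 -preset ultrafast -tune zerolatency "
            "-f rtp rtp://127.0.0.1:1234 -sdp_file stream.sdp", "wb");
        if (!ff) return 1;

        cv::Mat frame;
        while (cap.read(frame)) {
            // Force the size ffmpeg was told about, in case the camera differs.
            cv::resize(frame, frame, cv::Size(W, H));
            fwrite(frame.data, 1, frame.total() * frame.elemSize(), ff);
        }
        _pclose(ff);
        return 0;
    }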

Another issue is that the solution should not eat up all the CPU cores, because they are already overloaded with object-tracking algorithms.

Thanks in advance for any help!

EDIT: This is the command I used to retranslate the stream directly, without any preprocessing; it yielded about 4 seconds of delay on localhost:

    ffmpeg -re -an -f mjpeg -i http://..addr.../video.mjpg -vcodec libx264 -tune zerolatency -f rtp rtp://127.0.0.1:1234 -sdp_file stream.sdp
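
One variable worth ruling out when measuring is the player's own buffering. If the delay was measured with ffplay, its input buffering can be reduced with flags along these lines (availability depends on the ffmpeg version):

    ffplay -protocol_whitelist file,udp,rtp -fflags nobuffer -flags low_delay -i stream.sdp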


Answer 1:


First you have to find out where your latency is coming from.

There are 4 basic sources of latency (see the measurement sketch after this list):

  1. Video Capture
  2. Encoding
  3. Transmission
  4. Decoding (Player)
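
A simple way to attribute the delay (an illustration, not a full profiler) is to stamp the wall clock onto each frame right before encoding and compare it with a clock displayed next to the player window:

    // Stamp the current time (ms since epoch) onto a frame before encoding.
    // The end-to-end latency is the difference between this stamp and the
    // wall clock at the moment the frame appears in the player.
    #include <opencv2/opencv.hpp>
    #include <chrono>
    #include <string>

    void stampFrame(cv::Mat& frame) {
        auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::system_clock::now().time_since_epoch()).count();
        cv::putText(frame, std::to_string(ms), cv::Point(10, 30),
                    cv::FONT_HERSHEY_SIMPLEX, 0.8, cv::Scalar(0, 255, 0), 2);
    }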

Since you are measuring on localhost, we can treat transmission as 0 seconds. If your video resolution and frame rate are not gargantuan, decoding time will also be close to zero.

We should now focus on the first 2 items: capture and encoding.

The "problem" here is that libx264 is a software encoder. So, it uses CPU power AND needs the data in the main memory, not in the GPU memory where the image is first created.

So, when FFmpeg captures a frame, it has to pass through the layers of the OS to move the data from video memory to main memory.

Unfortunately, you won't get results much better than 2 or 3 seconds if you use libx264.

I suggest you take a look at the NVIDIA Capture SDK: https://developer.nvidia.com/capture-sdk

If you are using a capable GPU, you can then capture and encode each frame from the back buffer or intra-frame buffer directly on the GPU. You can then use ffmpeg to send it as you please.
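
If your ffmpeg build was compiled with NVENC support, the command from the question can be adapted to the hardware encoder along these lines (an illustration; preset and option names vary across ffmpeg versions):

    ffmpeg -re -an -f mjpeg -i http://..addr.../video.mjpg -c:v h264_nvenc -preset llhq -zerolatency 1 -f rtp rtp://127.0.0.1:1234 -sdp_file stream.sdp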



Source: https://stackoverflow.com/questions/47281258/low-latency-video-streaming-from-c-opencv-application-in-windows
