video-encoding

ParamValidationExt error with WelsInitEncoderExt failed while setting up OpenH264 encoder

倖福魔咒の submitted on 2019-12-23 02:36:20

Question: Scenario: I am using OpenH264 in my app to encode into a video_file.mp4. Environment: Platform: macOS Sierra; Compiler: Clang++. The code: the following is the crux of what I have:

void EncodeVideoFile() {
    ISVCEncoder * encoder_;
    std::string video_file_name = "/Path/to/some/folder/video_file.mp4";
    EncodeFileParam * pEncFileParam;
    SEncParamExt * pEnxParamExt;
    float frameRate = 1000;
    EUsageType usageType = EUsageType::CAMERA_VIDEO_REAL_TIME;
    bool denoise = false;
    bool lossless = true;
    bool
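
The error comes from OpenH264's internal parameter check (ParamValidationExt), which runs inside InitializeExt() when the encoder is configured from an SEncParamExt. Below is a minimal sketch of that initialization path, not the asker's code: the resolution and bitrate values are hypothetical, and starting from GetDefaultParams() keeps every field that is not explicitly overridden at a value the validator accepts.

```cpp
// Minimal sketch of the OpenH264 InitializeExt() path (not the asker's code).
// Width/height/bitrate are hypothetical placeholders.
#include <cstring>
#include <cstdio>
#include "wels/codec_api.h"   // OpenH264 public API header

int main() {
    ISVCEncoder* encoder = nullptr;
    if (WelsCreateSVCEncoder(&encoder) != 0 || encoder == nullptr) {
        std::fprintf(stderr, "WelsCreateSVCEncoder failed\n");
        return 1;
    }

    SEncParamExt param;
    std::memset(&param, 0, sizeof(param));
    encoder->GetDefaultParams(&param);        // start from validated defaults

    param.iUsageType     = CAMERA_VIDEO_REAL_TIME;
    param.iPicWidth      = 640;               // hypothetical
    param.iPicHeight     = 480;               // hypothetical
    param.iTargetBitrate = 1000000;           // hypothetical
    param.fMaxFrameRate  = 30.0f;             // the 1000 fps in the question is worth double-checking

    // InitializeExt() runs the parameter validation; out-of-range or
    // inconsistent fields make it return a non-success code.
    int rv = encoder->InitializeExt(&param);
    if (rv != cmResultSuccess) {
        std::fprintf(stderr, "InitializeExt failed: %d\n", rv);
    }

    WelsDestroySVCEncoder(encoder);
    return rv == cmResultSuccess ? 0 : 1;
}
```

Two hedged notes on general OpenH264 behaviour: the 1000 fps frame rate in the snippet is the first value I would re-check against the validator, and the library emits a raw Annex-B H.264 elementary stream, so writing straight to a .mp4 path does not by itself produce a playable MP4 container.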

FFMPEG Motion Compensation and Search

空扰寡人 submitted on 2019-12-23 01:45:19

Question: I'm trying to modify the motion estimation (motion search) part of FFMPEG. What I want to do is extend the search space so that whenever a macroblock hits the right-most edge of the frame, the search still moves the block toward the left-most edge as if the two edges were connected (in my example videos, the right edge is in fact a continuation of the left edge). Can someone point me to where exactly I can modify this within the FFMPEG source code, or in x265 or x264? I took H265 as an example from here. It has a motion
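
For orientation, hedged against my reading of the two code bases: x264's full-pel search lives in encoder/me.c (x264_me_search_ref) and FFmpeg's native motion estimation in libavcodec/motion_est.c, and both restrict candidate vectors to the padded frame area, which is the logic a wrap-around search would have to replace. The standalone sketch below is not taken from either project; it only illustrates the intended cost metric, a block SAD whose reference x coordinate wraps around the frame width.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Block SAD whose reference x coordinate wraps around the frame width,
// so a candidate block that runs off the right edge continues on the left.
static int sad_wrap_x(const uint8_t* cur, const uint8_t* ref,
                      int stride, int width,   // line stride / visible width
                      int bx, int by,          // block position in the current frame
                      int mvx, int mvy,        // candidate motion vector (full-pel)
                      int bw, int bh) {        // block size
    int sad = 0;
    for (int y = 0; y < bh; ++y) {
        const uint8_t* c = cur + (by + y) * stride + bx;
        const uint8_t* r = ref + (by + y + mvy) * stride;  // vertical clamping omitted
        for (int x = 0; x < bw; ++x) {
            int rx = (bx + x + mvx) % width;                // wrap instead of clamp
            if (rx < 0) rx += width;
            sad += std::abs(int(c[x]) - int(r[rx]));
        }
    }
    return sad;
}

int main() {
    const int w = 64, h = 16;
    std::vector<uint8_t> cur(w * h, 100), ref(w * h, 100);
    // A +8 vector pushes an 8x8 block at x=56 past the right edge;
    // the cost function reads the wrapped pixels at x=0..7 instead of clamping.
    std::printf("SAD = %d\n",
                sad_wrap_x(cur.data(), ref.data(), w, w, 56, 0, 8, 0, 8, 8));
    return 0;
}
```

Vertical coordinates are left unclamped for brevity; a real integration would also have to handle sub-pel interpolation across the seam.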

How to continuously extract video frames from streaming RTMP using avconv / ffmpeg?

大憨熊 submitted on 2019-12-22 10:29:45

Question: We're dealing with streaming video over RTMP, and my goal is to extract frames from the stream at a given interval, e.g. every 1 second. Currently I run a command in a loop which grabs one frame and exports it as base64 JPEG: avconv -i <URL> -y -f image2 -ss 3 -vcodec mjpeg -vframes 1 -s sqcif /dev/stdout 2>/dev/null | base64 -w 0 But each of these processes is slow (it takes a few seconds, which adds even more delay to streaming video that is not real-time already). I am wondering if there is a
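
Most of the per-frame cost is the repeated connect/probe/seek work, so one alternative is to keep a single process attached to the stream and let it emit frames periodically. A hedged example using ffmpeg's fps filter (avconv's option set is similar but not identical; <URL> stays a placeholder):

```sh
# One long-running process: writes one JPEG per second of stream time.
ffmpeg -i <URL> -vf fps=1 -q:v 2 -f image2 frame_%06d.jpg

# Or keep overwriting a single file that another process can poll and encode to base64.
ffmpeg -i <URL> -vf fps=1 -q:v 2 -update 1 -y latest.jpg
```

The base64 step can then be done by a separate watcher on the output files, without paying the stream setup cost for every frame.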

Multiple CUDA contexts for one device - any sense?

心已入冬 submitted on 2019-12-22 04:59:08

Question: I thought I had a grasp of this, but apparently I do not :) I need to perform parallel H.264 stream encoding with NVENC from frames that are not in any of the formats accepted by the encoder, so I have the following code pipeline: a callback informing that a new frame has arrived is invoked; I copy the frame to CUDA memory and perform the needed color-space conversions (only the first cuMemcpy is synchronous, so I can return from the callback; all pending operations are pushed in a dedicated
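
On the core question (whether several contexts on one device make sense): two contexts on the same GPU do not run side by side, the driver time-slices between them, so they add context-switch overhead rather than parallelism. The usual pattern is one context (or the device's primary context) shared by all encode sessions, with a CUDA stream per session providing the overlap. A minimal driver-API sketch of that layout, with hypothetical buffer sizes and the conversion kernels and NVENC calls omitted:

```cpp
// Sketch: one context per device, one stream per encode session.
// Driver-API calls only; kernel launches and NVENC calls are omitted.
#include <cuda.h>
#include <vector>

int main() {
    cuInit(0);
    CUdevice dev;   cuDeviceGet(&dev, 0);
    CUcontext ctx;  cuCtxCreate(&ctx, 0, dev);        // single shared context

    const int    kSessions   = 2;
    const size_t kFrameBytes = 1920 * 1080 * 3 / 2;   // hypothetical NV12 frame

    std::vector<CUstream>    streams(kSessions);
    std::vector<CUdeviceptr> frames(kSessions);
    void* pinned = nullptr;
    cuMemAllocHost(&pinned, kFrameBytes);             // pinned host staging buffer

    for (int i = 0; i < kSessions; ++i) {
        cuStreamCreate(&streams[i], CU_STREAM_NON_BLOCKING);
        cuMemAlloc(&frames[i], kFrameBytes);
        // Async upload + (omitted) color-conversion kernel + encode are all
        // queued on this session's stream, so sessions overlap in one context.
        cuMemcpyHtoDAsync(frames[i], pinned, kFrameBytes, streams[i]);
    }

    for (int i = 0; i < kSessions; ++i) {
        cuStreamSynchronize(streams[i]);
        cuMemFree(frames[i]);
        cuStreamDestroy(streams[i]);
    }
    cuMemFreeHost(pinned);
    cuCtxDestroy(ctx);
    return 0;
}
```

As far as the NVENC API is concerned, several encode sessions can be opened against the same CUDA context, so a separate context per session is not required for concurrency.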

Capture video from vlc command line with a stop time

爱⌒轻易说出口 submitted on 2019-12-21 23:06:03

Question: I'm trying to capture a video from an RTP stream to my PC (Ubuntu 12.04 LTS). I'm using VLC from the command line as below: cvlc -vvv rtp://address:port --start-time=00 --stop-time=300 --sout file/ts:test.ts but VLC ignores the --stop-time option and continues to download video beyond the 300 seconds specified. Does anyone know the reason for this, and a possible solution? Thanks. Answer 1: If you know the start-time and the end-time you can compute the record time. You can afterward use the
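
The cut-off answer appears to be heading toward VLC's --run-time option, which limits the capture by wall-clock duration rather than by stream position (--stop-time is of little use on a live input without seekable timestamps). A hedged version of the command, keeping the original placeholders:

```sh
# Record for 300 seconds, then exit; vlc://quit is queued as the next item so cvlc stops.
cvlc -vvv rtp://address:port --sout file/ts:test.ts --run-time=300 vlc://quit
```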

Multi bitrate live HLS with FFmpeg on Windows

廉价感情. submitted on 2019-12-21 17:27:34

Question: I am trying to encode a live stream into Apple HLS for iPhone on Windows. I was looking at different options; Wowza can do it, but as far as I can see it doesn't support CDN distribution of HLS, and it costs a lot of money. What I did find was this site: http://www.espend.de/artikel/iphone-ipad-ipod-http-streaming-segmenter-and-m3u8-windows.html I can now set up a single-bitrate stream easily, but my goal is an adaptive multi-bitrate live stream. Is it possible? For VOD content it can easily
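
For reference, newer FFmpeg builds can produce the variant playlists and the master playlist from a single process through the hls muxer. A hedged sketch (input, resolutions, and bitrates are placeholders; var_stream_map and master_pl_name require a reasonably recent FFmpeg; Unix-style line continuations are only for readability):

```sh
ffmpeg -i <live input> \
  -filter_complex "[0:v]split=2[v1][v2];[v1]scale=1280:720[v720];[v2]scale=640:360[v360]" \
  -map "[v720]" -c:v:0 libx264 -b:v:0 2500k -map 0:a -c:a:0 aac -b:a:0 128k \
  -map "[v360]" -c:v:1 libx264 -b:v:1 800k  -map 0:a -c:a:1 aac -b:a:1 96k \
  -f hls -hls_time 6 -hls_list_size 10 -hls_flags delete_segments \
  -var_stream_map "v:0,a:0 v:1,a:1" -master_pl_name master.m3u8 \
  stream_%v.m3u8
```

Each variant gets its own stream_0.m3u8/stream_1.m3u8 plus segments, and master.m3u8 references both, which is the playlist an HLS client uses for adaptive switching.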

iPhone: HTTP live streaming without any server side processing

爱⌒轻易说出口 submitted on 2019-12-21 06:17:42

Question: I want to be able to live-stream the frames/video FROM the iPhone camera to the internet. I've seen in a thread (streaming video FROM an iPhone) that it's possible using AVCaptureSession's beginConfiguration and commitConfiguration, but I don't know how to start designing this task. There are already a lot of tutorials about how to stream video TO the iPhone, which is not what I am looking for. Could you give me any ideas that could help me further? Answer 1: That's a tricky

Performance variation of encoder using MediaCodec encode from surface

偶尔善良 submitted on 2019-12-21 05:57:23

Question: I render a texture to both the display and a codec input surface (from which an encoder consumes it). There is a large performance difference between rendering the texture to the display surface and rendering it to the encoder's input surface, but only on some devices, such as the Galaxy S3 (~10 times slower when rendering to the encoder surface). On other devices (S4, Nexus 4, etc.) the speed is similar. A similar scenario can be demonstrated with Grafika and the "Record GL app" activity (FBO blit x2). The fps

Encode video using ffmpeg from javacv on Android causes native code crash

二次信任 submitted on 2019-12-21 05:43:06

Question: NOTE: I have updated this since originally asking the question, to reflect some of what I have learned about loading live camera images into the ffmpeg libraries. I am using ffmpeg from javacv, compiled for Android, to encode/decode video for my application. (Note that originally I was trying to use ffmpeg-java, but it has some incompatible libraries.) Original problem: the problem I've run into is that I am currently getting each frame as a Bitmap (just a plain android.graphics.Bitmap)