ffmpeg

nginx+ffmpeg+jwplayer

Submitted by 百般思念 on 2021-02-13 08:36:09
Playing back the video feed from surveillance cameras. RTSP (Real Time Streaming Protocol) is an application-layer protocol in the TCP/IP protocol suite, submitted as an IETF RFC standard by Columbia University, Netscape and RealNetworks. The protocol defines how a one-to-many application can efficiently deliver multimedia data over an IP network. Architecturally, RTSP sits on top of RTP and RTCP and uses TCP or RTP to carry the data. Compared with RTSP, HTTP transfers HTML while RTP transfers multimedia data; HTTP requests are issued by the client and answered by the server, whereas with RTSP both the client and the server can issue requests, i.e. RTSP can be bidirectional. (Never mind the formal definition; the point is that this is the camera's protocol and a web page cannot play it directly.) After a few days of discussion with the backend team, the approach we settled on was: an nginx server, ffmpeg for transcoding, and jwplayer for playback. Step 1: FFmpeg download: http://ffmpeg.zeranoe.com/builds/ . Download and extract the FFmpeg folder, then add d:\ffmpeg\bin to the existing value of the "Path" environment variable. Verify with ffmpeg -version; if a version number is printed, the setup succeeded. Step 2: Download the Windows stable version of Nginx from the official site, install the nginx server, and configure the config file and mime.types. 1. In nginx\conf\nginx.conf: http { include mime
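
The excerpt cuts off inside nginx.conf, so the following is only a minimal sketch of the pipeline described above, assuming an nginx build that includes the nginx-rtmp module and a camera reachable at rtsp://192.168.1.64/stream (both the module and the address are assumptions, not from the post): ffmpeg pulls the RTSP feed, re-encodes it, and pushes it to nginx as RTMP, which jwplayer can then play.

# ffmpeg side: pull RTSP from the camera, encode to H.264 and push to nginx as RTMP
ffmpeg -rtsp_transport tcp -i rtsp://192.168.1.64/stream \
       -c:v libx264 -preset veryfast -tune zerolatency -an \
       -f flv rtmp://localhost/live/cam1

# nginx side: rtmp block added to nginx.conf (requires the nginx-rtmp module)
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            record off;
        }
    }
}

jwplayer would then be pointed at rtmp://<server>/live/cam1, or at an HLS variant if the rtmp application also enables hls.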

Why AVCodecContext extradata is NULL?

Submitted by 拜拜、爱过 on 2021-02-11 18:20:02
Question: I am trying to decode h264 video using ffmpeg and the stagefright library. I'm using this example. The example shows how to decode mp4 files, but I want to decode only h264 video. Here is a piece of my code: AVFormatSource::AVFormatSource(const char *videoPath) { av_register_all(); mDataSource = avformat_alloc_context(); avformat_open_input(&mDataSource, videoPath, NULL, NULL); for (int i = 0; i < mDataSource->nb_streams; i++) { if (mDataSource->streams[i]->codec->codec_type == AVMEDIA_TYPE_VIDEO)
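
The code above is cut off, but one common reason for a NULL extradata (an assumption, not something stated in the question) is that a raw Annex-B .h264 elementary stream carries its SPS/PPS in-band, so the demuxer has no container header to copy them from, whereas an MP4 stores them as avcC extradata. A quick way to test that theory from the command line, using a hypothetical file name:

# remux the raw elementary stream into MP4 without re-encoding; the muxer builds avcC extradata
ffmpeg -i raw_stream.h264 -c:v copy wrapped.mp4
# newer ffprobe builds report an extradata_size field per stream
ffprobe -v error -show_streams wrapped.mp4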

Azure Functions + Azure Batch实现MP3音频转码方案

Submitted by 无人久伴 on 2021-02-11 14:58:49
Customer requirement: The customer runs an online music playback system that streams MP3 songs to mobile users at different bitrates depending on network conditions, and offers a preview of at most thirty seconds for songs the user has not purchased. One very clear requirement of such a system is that, periodically, a batch of high-bitrate music files obtained from the music labels has to be converted into MP3 files at various lower bitrates plus the preview clips. Since neither the number of files received from the labels nor their arrival time is predictable, permanently deploying a large fleet of transcoding servers for the system is clearly a waste of resources; on the other hand, without enough transcoding servers on hand, a large batch of files cannot be processed quickly, which is unacceptable in today's internet era where time matters even more than money. This is exactly where a highly elastic, pay-per-use public cloud platform becomes a very good fit.

Technology selection: Use Azure Functions + Azure Batch + Azure Blob Storage. The whole solution is built on PaaS, so there are no servers to manage and none of the routine patching and security maintenance that servers require.

Solution architecture diagram:

Implementation: Use an Azure Function to watch for changes to Blob files. A major advantage of Azure Functions is the variety of trigger types it offers (HTTP Trigger, Blob Trigger, Timer Trigger, Queue Trigger, ...); here the Blob Trigger is exactly what we need to monitor changes to Blob files. The first step is to create an Azure Functions project
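
The excerpt stops at the point where the Functions project is created, so as a small illustration of the transcoding work the Batch nodes would run, here is a hedged sketch of the ffmpeg commands (file names and bitrates are placeholders, not from the article) that turn one high-bitrate source into a low-bitrate MP3 and a 30-second preview:

# full-length rendition at a lower bitrate (96 kbit/s is an assumed target)
ffmpeg -i master_track.flac -codec:a libmp3lame -b:a 96k track_96k.mp3
# 30-second preview clip, as described in the requirement
ffmpeg -i master_track.flac -t 30 -codec:a libmp3lame -b:a 64k track_preview.mp3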

FFmpeg pad filter calculating wrong width

Submitted by 梦想与她 on 2021-02-11 14:32:42
Question: I'm using ffmpeg.exe -i in.jpg -filter_complex "[0:v]pad=iw:ih+10:0:0" out.jpg to add a padding of 10px at the bottom of images and videos. In most cases it works as expected, but with some inputs the width is off by 1px, resulting in failure with this error: [Parsed_pad_0 @ 000002ba70617c40] Input area 0:0:623:640 not within the padded area 0:0:622:650 or zero-sized [Parsed_pad_0 @ 000002ba70617c40] Failed to configure input pad on Parsed_pad_0 Error reinitializing filters! Failed to inject frame into
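
The accepted fix is not included in this excerpt, but a common workaround for off-by-one pad errors (an assumption, not the thread's answer) is to normalize the frame to even dimensions before padding, so that the size pad is configured for and the size of the frame it actually receives cannot disagree by a rounding step:

# round the odd-sized input (e.g. 623x640) up to even dimensions, then pad as before
ffmpeg -i in.jpg -filter_complex "[0:v]scale=ceil(iw/2)*2:ceil(ih/2)*2,pad=iw:ih+10:0:0" out.jpg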

Ffmpeg on idle python Mac

Submitted by 此生再无相见时 on 2021-02-11 14:16:13
Question: I am trying to use FFmpeg from IDLE (Python 3.7.4) on macOS Catalina. I ran brew install ffmpeg and it installed successfully. However, when I go to IDLE and run my script (the script converts a .mp3 file to a .wav): from os import path from pydub import AudioSegment src = "transcript.mp3" dst = "test.wav" sound = AudioSegment.from_mp3(src) sound.export(dst, format="wav") This is what I get in return: Warning (from warnings module): File "/Library/Frameworks/Python.framework/Versions/3.7/lib
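
The warning text is cut off above, but if it turns out to be pydub failing to locate ffmpeg (a common symptom when IDLE is launched from Finder and so does not inherit the Homebrew PATH; this is only an assumption here), a quick check from a terminal looks like this:

# confirm where Homebrew placed ffmpeg (/usr/local/bin on Intel Macs, /opt/homebrew/bin on Apple Silicon)
which ffmpeg
ffmpeg -version
# launch IDLE from the same shell so the interpreter inherits this PATH
python3 -m idlelib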

Merge and Concat Multiple Audio and Video files using FFMPEG

Submitted by 纵饮孤独 on 2021-02-11 13:52:47
Question: I have a script at present that produces several audio-only and video-only files. My goal is to take these, concatenate the video clips, and merge in the audio clips. I thought this would be easy, but I have not been able to find a good example to work from. FFmpeg.org suggests that the movie/amovie input syntax could be the best option, but I am looking for more documentation. FFmpeg's information on the topic is at https://www.ffmpeg.org/ffmpeg-filters.html#Examples-124. It suggests that to "..
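
No answer is included in this excerpt, so the following is only a sketch of one common way to do this with the concat filter (file names and the clip count are assumptions): concatenate the video-only inputs into one video stream, concatenate the audio-only inputs into one audio stream, then mux the two together.

# two video-only and two audio-only inputs, joined end to end and then muxed together
ffmpeg -i v1.mp4 -i v2.mp4 -i a1.m4a -i a2.m4a -filter_complex \
  "[0:v][1:v]concat=n=2:v=1:a=0[v];[2:a][3:a]concat=n=2:v=0:a=1[a]" \
  -map "[v]" -map "[a]" -c:v libx264 -c:a aac merged.mp4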

FFMPEG - Width/ Height Not Divisible by 2 (Scaling to Generate MBR Output)

Submitted by 允我心安 on 2021-02-11 13:44:18
Question: I am trying to generate multiple variants of the videos in my library (MP4 format) and have renditions planned ranging from 1080p down to 240p, plus popular sizes in between. For that I take a video with an AxB resolution and run it through a bash script which scales it to the following target sizes: 426x240, 640x360, 842x480, 1280x720, 1920x1080, with different bitrates of course, and then saves it as MP4 again. Now, this works just fine if the source video has height and width divisible by 2, but
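
The question is cut off before the error, but the usual "width/height not divisible by 2" failure comes from libx264 refusing odd dimensions when one side is computed automatically. A hedged sketch of the common fix (assuming the script uses scale with an automatic dimension; bitrates below are placeholders) is to use -2 instead of -1, which forces the computed side to be divisible by 2:

# fix the height per rendition and let ffmpeg pick an even width that preserves aspect ratio
ffmpeg -i input.mp4 -vf "scale=-2:240" -c:v libx264 -b:v 400k  -c:a aac -b:a 64k  out_240p.mp4
ffmpeg -i input.mp4 -vf "scale=-2:720" -c:v libx264 -b:v 2800k -c:a aac -b:a 128k out_720p.mp4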

ffmpeg - Dynamic letters and random position watermark to video?

Submitted by 余生长醉 on 2021-02-11 12:51:49
Question: I am making an online course, and to discourage pirated distribution I thought of putting watermarks on the videos (including personal user information) so they cannot be uploaded to sharing websites. Now the hard part: I would like to move the watermark during the video, between 3 or 4 random positions, every 30 seconds. Is that possible with ffmpeg? Answer 1: Edit: this is an adaptation of the answer in LN's link, which will randomize the position every 30 seconds with no repeats: ffmpeg -i input.mp4 \ -vf \ "drawtext=fontfile
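
The answer's command is cut off above, so the following is not the thread's solution but a minimal sketch of the general idea: drawtext with x/y expressions that depend on floor(t/30), so the text jumps to a different spot every 30 seconds. This variant cycles through three fixed positions rather than truly random ones, and the font path and watermark text are placeholders:

# move the watermark between three positions, switching every 30 seconds
ffmpeg -i input.mp4 -vf "drawtext=fontfile=/path/to/font.ttf:text='user 12345':fontsize=36:fontcolor=white@0.5:x=(w-text_w)*mod(floor(t/30)\,3)/2:y=(h-text_h)*mod(floor(t/30)+1\,3)/2" -c:a copy watermarked.mp4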