h.264

h264 extracting frames works only on certain operating system / ffmpeg version

Posted by 倖福魔咒の on 2019-12-24 18:54:17
Question: I have received parsed H.264 data from my phone, and I am trying to extract frames from the data. I used the following ffmpeg commands: ffmpeg -i temp.h264 -ss 5 -pix_fmt yuv420p -vframes 1 foo.yuv and ffmpeg -s 1280:720 -pix_fmt yuv420p -i foo.yuv output.jpg. This produces the right output image on Ubuntu (KDE neon User Edition 5.12) with ffmpeg version 2.8.14. However, it does not work on macOS High Sierra (10.13.4) with ffmpeg version 4.0, and instead shows an output message: Output file…
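
A hedged sketch of what usually resolves this on newer ffmpeg builds (file names follow the question; the exact failure on ffmpeg 4.0 is an assumption, since the error message above is cut off): force the demuxers explicitly and use WxH size syntax.

    # Force the H.264 demuxer for the headerless elementary stream,
    # then grab one frame directly as JPEG (skips the YUV intermediate).
    ffmpeg -f h264 -i temp.h264 -ss 5 -frames:v 1 output.jpg

    # If the raw YUV intermediate is needed, declare its format explicitly
    # on input; note the size is written 1280x720, not 1280:720.
    ffmpeg -f rawvideo -pix_fmt yuv420p -s 1280x720 -i foo.yuv output.jpg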

Wrong presentation time in H.264 streams [Live555 OpenRtspClient]

Posted by 生来就可爱ヽ(ⅴ<●) on 2019-12-24 17:52:54
Question: I modified the OpenRtspClient so that, instead of writing frames to a file, I now collect them in a queue together with their incoming presentation times. I then hand the H.264 frames to an MP4 muxer [Geraint Davies' MP4 mux filter] and finally write the muxed data to a file, so I am able to save the H.264 stream into an MP4 container. The problem is that some of the recordings [NOT all of them] have a wrong duration: a 10-minute recording, for example, appears to be a 12-hour stream. VLC plays the 10 minutes that…
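
A small diagnostic sketch, separate from the asker's filter graph (file name is a placeholder): dumping the per-packet presentation times of a bad recording often reveals a single discontinuous jump, which distinguishes a source-timestamp glitch from a muxer timescale bug.

    # Print each video packet's presentation time; a 10-minute file that
    # reports 12 hours will show an abrupt jump somewhere in this column.
    ffprobe -v error -select_streams v:0 \
      -show_entries packet=pts_time -of csv=p=0 recorded.mp4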

x264 IDR access unit with a SPS and a PPS

Posted by ぐ巨炮叔叔 on 2019-12-24 16:26:24
Question: I am trying to encode video in H.264 so that, when split with Apple's HTTP Live Streaming media file segmenter, it will pass the media file validator. I am getting two errors on the split MPEG-TS file: WARNING: Media segment contains a video track but does not contain any IDR access unit with a SPS and a PPS. WARNING: 7 samples (17.073 %) do not have timestamps in track 257 (avc1). After hours of research I think the "IDR" warning relates to not having keyframes in the right place in the segmented…
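
A hedged re-encode sketch that typically clears the IDR warning, assuming the segmenter cuts every 3 seconds (the file names and the 3-second target are assumptions, not from the question): force scene-cut-free IDRs at a fixed interval so every segment boundary lands on an IDR, and mux to MPEG-TS, which carries SPS/PPS in-band.

    # Force an IDR exactly every 3 seconds and disable scene-cut keyframes
    # so segment boundaries always start on an IDR with parameter sets.
    ffmpeg -i source.mov -c:v libx264 \
      -force_key_frames "expr:gte(t,n_forced*3)" -sc_threshold 0 \
      -c:a copy -f mpegts for_segmenter.ts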

How to integrate Live555 in XCode (iOS SDK)

Posted by 烈酒焚心 on 2019-12-24 09:50:04
Question: I have to implement live streaming from iPhone to a Wowza server using RTSP/H.264. I searched and found the Live555 library. I created the .a files along with the include headers, but I am not able to use them in Xcode: when I do, the compiler starts reporting errors on the C++ keyword "class", probably because of the .hh files. Does anyone have an idea how to include live555 in an iOS application? Thanks in advance... Source: https://stackoverflow.com/questions/19142363/how-to-integrate
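
The "class" keyword errors suggest the Live555 .hh headers are being compiled as plain C/Objective-C rather than C++. A common workaround (general Xcode practice, not Live555-specific documentation) is to compile any file that includes them as Objective-C++:

    # Renaming the importing implementation file from .m to .mm makes
    # Xcode run it through the Objective-C++ compiler, which understands
    # the C++ in Live555's headers. The file name here is hypothetical.
    mv RTSPClientWrapper.m RTSPClientWrapper.mm

Keeping all live555 includes confined to .mm files, behind a plain-C or Objective-C interface, stops the errors from leaking into the rest of the project.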

How to stream h.264 video to WOWZA using RTSP with live555

Posted by 风流意气都作罢 on 2019-12-24 06:45:38
Question: I am new to video capture, H.264 encoding, and the WOWZA server. I have checked many solutions on Stack Overflow and Google but have not found one I can use. Basic functionality: continuously capture from the iPhone (video should be H.264 encoded), use the live555 library to generate an RTSP URL, and send that video to a WOWZA server for live broadcast. Note: the video should play continuously on the server from the iPhone device without major delay. My question: how to capture video which is encoded in h…
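
Before wiring up live555 on the device, it can help to confirm the Wowza application accepts an H.264 RTSP publish at all. A hedged sanity check from a desktop machine (host, port, application, and stream names are placeholders):

    # Push a pre-encoded file to Wowza over RTSP in real time (-re); if
    # this plays in Wowza's test player, the server side is configured
    # correctly and the remaining work is on the iPhone capture side.
    ffmpeg -re -i sample.mp4 -c:v libx264 -an \
      -f rtsp rtsp://wowza-host:1935/live/myStream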

Where is the CLSID for Media Foundation H264 Encoder?

Posted by 主宰稳场 on 2019-12-24 05:26:26
Question: The Media Foundation H264 Encoder MFT documentation does not mention a CLSID for the encoder. Other encoder class IDs, and the H264 Decoder MFT class ID, are defined in \Program Files (x86)\Microsoft SDKs\7.1\Include\wmcodecdsp.h or \Program Files (x86)\Windows Kits\8.x\Include\um\wmcodecdsp.h. I see this codec when I enumerate the devices and can obtain the CLSID, which is {6ca50344-051a-4ded-9779-a43305165e35}, from the enumerated list, but I cannot find a named GUID, which I would expect to…
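
A hedged way to confirm what a given machine has registered (run in PowerShell or cmd; the registry location is the standard MFT registration area, though the exact layout can vary by Windows version):

    # Search the Media Foundation transform registrations for the encoder;
    # the matching key name should be the CLSID the enumeration returned.
    reg query "HKLM\SOFTWARE\Classes\MediaFoundation\Transforms" /s /f "H264 Encoder"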

Format of H.264 decoder configuration record taken from .mp4

Posted by 蹲街弑〆低调 on 2019-12-24 01:54:19
Question: I am inspecting the decoder configuration record contained in .mp4 video files recorded on Android devices. Some devices write strange or incorrect parameters into the record. Here is a sample from a Galaxy Player 4.0 which is incorrect:
DecoderConfigurationRecord: 010283f2ffe100086742000de90283f201000568ce010f20
pictureParameterSetNALUnits: 68ce010f20
AVCLevelIndication: 242
AVCProfileIndication: 2
sequenceParameterSetNALUnits: 6742000de90283f2
lengthSizeMinusOne: 3…
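
Note that a profile indication of 2 is not a defined AVC profile (Baseline is 66, Main is 77, High is 100), which supports the reading that the record itself is malformed rather than the parser. For cross-checking such records straight from the file, a rough sketch (the file name is a placeholder; byte offsets differ per file, this just locates the box fourcc in a hex dump):

    # Dump the bytes around the 'avcC' box; the first bytes after the
    # fourcc are configurationVersion, AVCProfileIndication,
    # profile_compatibility, and AVCLevelIndication, in that order.
    xxd recorded.mp4 | grep -A 2 avcC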

video streaming infrastructure

Posted by 杀马特。学长 韩版系。学妹 on 2019-12-23 22:45:16
Question: We would like to set up a live video-chat web site and are looking for basic architectural advice and/or a recommendation for a particular framework to use. Here are the basic features of the site: most streams will be broadcast live by a single person with a webcam and viewed by typically 1-10 people, although there could be up to 100+ viewers on the high side. Audio and video do not have to be super-high quality, but they do need to be "good enough". The main point is to convey the…

How to encode h.264 live stream to RTP packet with Java

Posted by 可紊 on 2019-12-23 22:21:52
Question: I am developing an application for Android, and I need to decode in real time a video stream from the camera that is encoded with the h.264 codec, convert the frame data to RTP packets, and send the packets to a server. To start, I may implement this on a PC, reading video from a pre-recorded file (MP4 with h.264) on the HDD, to simplify development and debugging. Is there a ready-made solution? Any ideas? Thanks! Answer 1: See Spydroid. It pipes the camera input into the H.264 encoder and turns the output into…
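
For the PC debugging stage the asker describes, a hedged command-line sketch (address and file name are placeholders) that packetizes the pre-recorded file as RTP without re-encoding; comparing its packets in Wireshark against a hand-written Java packetizer is a quick correctness check:

    # Stream the H.264 track of an MP4 as RTP (RFC 6184 payload) in real
    # time; ffmpeg prints the matching SDP, which the receiver needs.
    ffmpeg -re -i prerecorded.mp4 -an -c:v copy -bsf:v h264_mp4toannexb \
      -f rtp rtp://127.0.0.1:5004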

openCV VideoCapture doesn't work with gstreamer x264

Posted by …衆ロ難τιáo~ on 2019-12-23 18:08:49
Question: I'd like to display an RTP/VP8 video stream that comes from GStreamer in OpenCV. I already have a working solution, implemented like this: gst-launch-0.10 udpsrc port=6666 ! "application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)VP8-DRAFT-IETF-01,payload=(int)120" ! rtpvp8depay ! vp8dec ! ffmpegcolorspace ! ffenc_mpeg4 ! filesink location=videoStream Basically it grabs incoming data from a UDP socket, depacketizes the RTP, decodes the VP8, passes it to ffmpegcolorspace…
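
One commonly suggested variant of the pipeline above (hedged; the path is a placeholder): write into a FIFO instead of a regular file, so cv::VideoCapture can read it as a continuous stream rather than an ever-growing file on disk.

    # Create a named pipe and point the same pipeline's filesink at it;
    # OpenCV then opens /tmp/videoStream like an ordinary video file.
    mkfifo /tmp/videoStream
    gst-launch-0.10 udpsrc port=6666 ! \
      "application/x-rtp,media=(string)video,clock-rate=(int)90000,encoding-name=(string)VP8-DRAFT-IETF-01,payload=(int)120" ! \
      rtpvp8depay ! vp8dec ! ffmpegcolorspace ! ffenc_mpeg4 ! \
      filesink location=/tmp/videoStream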