video-encoding

need to create a webm video from RGB frames

旧城冷巷雨未停 submitted on 2019-12-21 05:22:12
Question: I have an app that generates a bunch of JPEGs that I need to turn into a WebM video. I'm trying to get the RGB data from the JPEGs into the vpxenc sample. I can see the basic shapes from the original JPEGs in the output video, but everything is tinted green (even pixels that should be black are about halfway green) and every other scanline has some garbage in it. I'm trying to feed it VPX_IMG_FMT_YV12 data, which I'm assuming is structured like so: for each frame, 8-bit Y data, then 8-bit averages of …
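For orientation, the layout the question is assuming can be sketched explicitly. This is a minimal illustration of the standard YV12 layout (full-size Y plane, then quarter-size V and U planes), not the asker's actual conversion code; `width`, `height` and the class name are hypothetical, and it assumes even dimensions with no row padding (stride equals width).

```java
// Minimal sketch of a YV12 buffer layout (planar 4:2:0, plane order Y, V, U),
// assuming even width/height and stride == width. Each chroma sample covers
// a 2x2 block of luma pixels.
public final class Yv12Layout {
    public static byte[] allocate(int width, int height) {
        // Y plane: width*height bytes; V and U planes: (width/2)*(height/2) each.
        return new byte[width * height + 2 * (width / 2) * (height / 2)];
    }

    // Index of the luma sample for pixel (x, y).
    public static int yIndex(int width, int x, int y) {
        return y * width + x;
    }

    // Index of the V (Cr) sample covering pixel (x, y).
    public static int vIndex(int width, int height, int x, int y) {
        int vBase = width * height;
        return vBase + (y / 2) * (width / 2) + (x / 2);
    }

    // Index of the U (Cb) sample covering pixel (x, y).
    public static int uIndex(int width, int height, int x, int y) {
        int uBase = width * height + (width / 2) * (height / 2);
        return uBase + (y / 2) * (width / 2) + (x / 2);
    }
}
```

A green tint with garbage on alternating scanlines is often a sign that the chroma planes are the wrong size, in the wrong order, or that the buffer stride does not match the image width, so checking the conversion against a layout like the one above is a reasonable first step.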

Video recording to a circular buffer on Android

跟風遠走 submitted on 2019-12-20 22:05:44
Question: I'm looking for the best way (if any...) to capture continuous video to a circular buffer on the SD card, allowing the user to capture events after they have happened. The standard video recording API only lets you write directly to a file, and when you reach the limit (set by the user, or the capacity of the SD card) you have to stop and restart the recording. This creates a window of up to 2 seconds during which nothing is recorded. This is what some existing apps like DailyRoads …
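The "circular buffer" the question refers to is simply a fixed-capacity ring that keeps the most recent data and overwrites the oldest. A minimal in-memory sketch of that idea follows; it is purely conceptual, does not touch the Android camera or recording APIs, and the class and its chunk granularity are hypothetical.

```java
import java.util.ArrayDeque;

// Minimal ring buffer retaining roughly the last N encoded chunks
// (e.g. short video segments or frames). Illustrative only.
public final class ChunkRingBuffer {
    private final ArrayDeque<byte[]> chunks = new ArrayDeque<>();
    private final int maxChunks;

    public ChunkRingBuffer(int maxChunks) {
        this.maxChunks = maxChunks;
    }

    public synchronized void add(byte[] chunk) {
        if (chunks.size() == maxChunks) {
            chunks.removeFirst();   // drop the oldest chunk to make room
        }
        chunks.addLast(chunk);
    }

    // Snapshot of everything currently buffered, oldest first; this is what
    // would be written out when the user decides to keep an event.
    public synchronized byte[][] snapshot() {
        return chunks.toArray(new byte[0][]);
    }
}
```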

Recommendation on the best quality/performance H264 encoder for video encoding?

喜你入骨 submitted on 2019-12-20 10:55:40
Question: I am looking for a video encoder that is fast, needs little CPU power, and produces very good quality MP4 video. The input videos can be in any format and are uploaded by users. The only thing I know is the FFmpeg library. Is there anything better? The program must have a batch utility (exe), which is what I am interested in. I would appreciate it if you would kindly share your knowledge. Thanks. Answer 1: Use x264. It's fast and flexible enough to suit your needs. Other H.264 video encoders are junk …

Video conversion in java [closed]

为君一笑 submitted on 2019-12-19 04:24:44
Question: Is there any framework or open source project for Java that does video conversion from any video format to any other video format? Something similar to the Panda video conversion framework. Answer 1: Do you mean JMF (Java Media Framework)? Check this out. Answer 2: Along the lines of JMF, perhaps this will turn out to be better …
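The answers point at JMF. As a hedged illustration of a commonly used alternative (not the JMF approach the answers suggest), conversion from Java is often done by driving an external ffmpeg binary; the sketch below assumes ffmpeg is installed and on the PATH, and the file names are placeholders.

```java
import java.io.IOException;

// Illustrative sketch: convert a video by invoking an external ffmpeg binary
// from Java. Not the JMF-based approach suggested in the answers.
public final class FfmpegConvert {
    public static void convert(String input, String output)
            throws IOException, InterruptedException {
        Process p = new ProcessBuilder(
                "ffmpeg", "-y",          // overwrite output if it exists
                "-i", input,             // input file in (almost) any format
                "-c:v", "libx264",       // encode video as H.264
                "-c:a", "aac",           // encode audio as AAC
                output)                  // e.g. "out.mp4"
                .inheritIO()             // show ffmpeg's progress and errors
                .start();
        int exit = p.waitFor();
        if (exit != 0) {
            throw new IOException("ffmpeg exited with code " + exit);
        }
    }

    public static void main(String[] args) throws Exception {
        convert("input.avi", "output.mp4");   // placeholder file names
    }
}
```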

Why low qmax value improve video quality?

杀马特。学长 韩版系。学妹 submitted on 2019-12-18 18:26:15
Question: Maybe my question doesn't make sense because I am missing something, but please explain what I am missing; I have read posts and the wiki and it is still not clear to me. As I understand it, setting a low value for qmax improves quality by increasing the bitrate. But doesn't lowering Q (the quantization) decrease the number of quantization levels, and therefore the bitrate, which would mean a degradation in quality? Or does lowering Q in ffmpeg mean increasing the number of quantization levels? If the …
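For orientation, the relationship the question is circling around can be written down explicitly. This is the standard scalar-quantization view used in video coding generally, not anything specific to ffmpeg's implementation:

```latex
% A transform coefficient c is quantized with step size Q_step:
\[
  \hat{c} \;=\; \operatorname{round}\!\left(\frac{c}{Q_{\text{step}}}\right),
  \qquad
  \text{smaller } Q_{\text{step}}
  \;\Rightarrow\; \text{finer quantization (more levels preserved)}
  \;\Rightarrow\; \text{higher bitrate and quality.}
\]
```

In this view a lower quantizer value corresponds to a smaller step size, so the encoder discards less information per coefficient; qmax is an upper bound on the quantizer that rate control is allowed to use.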

H264 NAL unit prefixes

你离开我真会死。 submitted on 2019-12-18 12:39:02
Question: I need some clarification on H.264 NAL unit start-code prefixes (00 00 00 01 and 00 00 01). I am using the Intel Media SDK to generate H.264 and pack it into RTP. The issue is that so far I was looking only for 00 00 00 01 as a unit separator, and was basically only able to find AUD, SPS, PPS and SEI units in the bitstream. Looking at the memory I saw that after the SEI there was a byte sequence 00 00 01 25, which could be the start of an IDR unit, but my search algorithm did not detect it because of a …
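An Annex-B stream may contain both the 3-byte (00 00 01) and the 4-byte (00 00 00 01) start codes, so a scanner has to accept both. A minimal sketch follows; the class and method names are hypothetical, not the asker's code.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal Annex-B scanner: returns the offset of the first byte of each NAL
// unit payload (the byte right after a 00 00 01 or 00 00 00 01 start code).
public final class NalScanner {
    public static List<Integer> nalOffsets(byte[] data) {
        List<Integer> offsets = new ArrayList<>();
        for (int i = 0; i + 3 < data.length; i++) {
            if (data[i] == 0 && data[i + 1] == 0) {
                if (data[i + 2] == 1) {
                    offsets.add(i + 3);            // 3-byte start code 00 00 01
                    i += 2;
                } else if (data[i + 2] == 0 && data[i + 3] == 1) {
                    offsets.add(i + 4);            // 4-byte start code 00 00 00 01
                    i += 3;
                }
            }
        }
        return offsets;
    }

    // The NAL unit type is the low 5 bits of the first payload byte,
    // e.g. 0x25 & 0x1F == 5, a coded slice of an IDR picture.
    public static int nalType(byte[] data, int offset) {
        return data[offset] & 0x1F;
    }
}
```

This matches the byte sequence observed in the question: 00 00 01 25 is a 3-byte start code followed by a NAL header of 0x25, whose low five bits give type 5 (IDR slice).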

ffmpeg::avcodec_encode_video setting PTS h264

江枫思渺然 submitted on 2019-12-18 12:26:40
Question: I'm trying to encode video as H.264 using libavcodec. ffmpeg::avcodec_encode_video(codec, output, size, avframe); returns an error that I don't have the avframe->pts value set correctly. I have tried setting it to 0, 1, AV_NOPTS_VALUE, and 90 kHz * framenumber, but I still get the error "non-strictly-monotonic PTS". The ffmpeg.c example sets packet.pts with ffmpeg::av_rescale_q(), but that is only called after you have encoded the frame! When used with the MP4V codec, avcodec_encode_video() sets the …

How to use Android MediaCodec encode Camera data(YUV420sp)

偶尔善良 submitted on 2019-12-18 12:19:35
Question: Thank you for your attention! I want to use the Android MediaCodec APIs to encode video frames acquired from the Camera; unfortunately, I have not managed to do that! I am still not familiar with the MediaCodec API. The following is my code; I need your help to figure out what I should do. 1. The Camera setting: Parameters parameters = mCamera.getParameters(); parameters.setPreviewFormat(ImageFormat.NV21); parameters.setPreviewSize(320, 240); mCamera.setParameters(parameters); 2. Set the encoder: private …
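For reference, a hedged sketch of the encoder-side configuration that pairs with a camera preview like the one above (320x240, NV21). The bitrate, frame rate, I-frame interval and color format below are assumptions rather than the asker's values, and NV21 from the camera generally still needs its chroma bytes swapped into the semi-planar (NV12) layout before being queued to the encoder.

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;

// Illustrative MediaCodec AVC encoder setup for 320x240 camera preview frames.
// Bitrate, frame rate and I-frame interval are placeholder values.
public final class AvcEncoderSketch {
    private MediaCodec encoder;

    public void prepare() throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 320, 240);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 500_000);
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 15);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420SemiPlanar);
        encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        encoder.start();
    }

    // The camera delivers NV21 (VU interleaved); COLOR_FormatYUV420SemiPlanar
    // is NV12 (UV interleaved), so the chroma bytes are swapped pairwise here.
    public static byte[] nv21ToNv12(byte[] nv21, int width, int height) {
        byte[] nv12 = new byte[nv21.length];
        int ySize = width * height;
        System.arraycopy(nv21, 0, nv12, 0, ySize);
        for (int i = ySize; i < nv21.length; i += 2) {
            nv12[i] = nv21[i + 1];     // U
            nv12[i + 1] = nv21[i];     // V
        }
        return nv12;
    }
}
```

Note that the supported input color format varies by device and codec, so a real implementation would query the codec's capabilities rather than hard-code it as done here.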

Getting QualComm encoders to work via MediaCodec API

会有一股神秘感。 submitted on 2019-12-18 11:43:13
Question: I am trying to do hardware encoding (AVC) of an NV12 stream using the Android MediaCodec API. When using OMX.qcom.video.encoder.avc, the resolutions 1280x720 and 640x480 work fine, while the others (e.g. 640x360, 320x240, 800x480) produce output where the chroma component seems shifted (please see the snapshot). I have double-checked that the input image is correct by saving it to a JPEG file. This problem only occurs on Qualcomm devices (e.g. Samsung Galaxy S4). Does anyone have this working properly? Any additional …

Android MediaCodec Encode and Decode In Asynchronous Mode

此生再无相见时 submitted on 2019-12-18 10:29:22
Question: I am trying to decode a video from a file and encode it into a different format with MediaCodec in the new asynchronous mode supported in API level 21 and up (Android 5.0 Lollipop). There are many examples of doing this in synchronous mode on sites such as Big Flake and Google's Grafika, and in dozens of answers on Stack Overflow, but none of them use asynchronous mode. I do not need to display the video during the process. I believe that the general procedure is to read the file with a …
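The asynchronous mode in question replaces the dequeue loops with a MediaCodec.Callback registered before start(). A minimal decoder-side sketch follows; the MediaExtractor wiring and the encoder half are omitted, and feedDecoderInput/handleDecodedOutput are hypothetical helpers, not real API calls.

```java
import android.media.MediaCodec;
import android.media.MediaFormat;

// Minimal sketch of MediaCodec asynchronous mode (API 21+): a callback is set
// before start(), and all buffer handling happens in the callback methods.
final class AsyncDecoderSketch extends MediaCodec.Callback {
    @Override
    public void onInputBufferAvailable(MediaCodec codec, int index) {
        // Fill codec.getInputBuffer(index) from a MediaExtractor here and call
        // codec.queueInputBuffer(index, ...). Hypothetical helper:
        // feedDecoderInput(codec, index);
    }

    @Override
    public void onOutputBufferAvailable(MediaCodec codec, int index,
                                        MediaCodec.BufferInfo info) {
        // Decoded data is in codec.getOutputBuffer(index); hand it to the
        // encoder side, then release the buffer back to the decoder.
        // handleDecodedOutput(codec, index, info);   // hypothetical helper
        codec.releaseOutputBuffer(index, false);
    }

    @Override
    public void onOutputFormatChanged(MediaCodec codec, MediaFormat format) {
        // The decoder reports its actual output format here.
    }

    @Override
    public void onError(MediaCodec codec, MediaCodec.CodecException e) {
        // Surface the error; the codec usually has to be reset or released.
    }

    static MediaCodec startDecoder(MediaFormat inputFormat) throws Exception {
        MediaCodec decoder = MediaCodec.createDecoderByType(
                inputFormat.getString(MediaFormat.KEY_MIME));
        decoder.setCallback(new AsyncDecoderSketch()); // must precede configure/start
        decoder.configure(inputFormat, null, null, 0);
        decoder.start();
        return decoder;
    }
}
```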