video-encoding

H.264 (or similar) encoder in C#?

╄→гoц情女王★ posted on 2020-01-01 08:53:32
Question: Does anyone know of an open-source H.264 encoder in C# (or any other managed language)? I might be able to make do with a Python implementation as well. The libraries that I've found (e.g. x264) are written in pretty low-level C (procedural, with lots of macros) and assembly, and tweaking them is turning out to be far more complex than I'd thought. My project has no concern for performance or compatibility; we just want to test how some ideas will impact the perception of the output video. We'd …

MediaCodec video streaming from camera: wrong orientation & color

早过忘川 posted on 2020-01-01 03:44:47
Question: I'm trying to stream video captured directly from the camera on Android devices. So far I have been able to capture each frame from the Android camera's onPreviewFrame(byte[] data, Camera camera) callback, encode the data, and then successfully decode it and render it to the surface. I used Android's MediaCodec for the encoding and decoding, but the color and the orientation of the video are not correct [rotated 90 degrees]. After searching for a while I found this YV12toYUV420PackedSemiPlanar function …
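The color half of this problem usually comes down to chroma-plane layout: the camera delivers YV12 (fully planar, with the V plane stored before the U plane), while many MediaCodec encoders expect YUV420 semi-planar input (one Y plane followed by interleaved U/V pairs). A minimal sketch of that reshuffle, written here in C++ for illustration (the indexing carries over directly to the Java callback); it assumes even dimensions and no row padding:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// YV12:  Y plane (w*h), then V plane (w*h/4), then U plane (w*h/4).
// YUV420 semi-planar (NV12):  Y plane, then interleaved U,V pairs.
std::vector<uint8_t> yv12ToYuv420SemiPlanar(const uint8_t* in, int width, int height) {
    const int frameSize = width * height;
    const int quarter   = frameSize / 4;            // size of one chroma plane
    std::vector<uint8_t> out(frameSize + 2 * quarter);

    std::memcpy(out.data(), in, frameSize);         // Y plane is copied as-is

    const uint8_t* vPlane = in + frameSize;         // V comes first in YV12
    const uint8_t* uPlane = in + frameSize + quarter;
    for (int i = 0; i < quarter; ++i) {
        out[frameSize + 2 * i]     = uPlane[i];     // U (Cb)
        out[frameSize + 2 * i + 1] = vPlane[i];     // V (Cr)
    }
    return out;
}
```

The 90-degree rotation is a separate issue: preview frames arrive in the sensor's landscape orientation, so they either have to be rotated pixel-by-pixel before encoding or flagged with an orientation hint in the output container.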

RGB-frame encoding - FFmpeg/libav

隐身守侯 posted on 2020-01-01 03:37:13
Question: I am learning video encoding and decoding in FFmpeg. I tried the code sample on this page (only the video encoding and decoding part). There the dummy image being created is in YCbCr format. How do I achieve similar encoding by creating RGB frames? I am stuck on two points: first, how do I create this RGB dummy frame? Second, how do I encode it, and which codec should I use? Most of them work with YUV420p only... EDIT: I have a YCbCr encoder and decoder as given on this page. The thing is, I have an RGB frame …
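The usual route is to generate the dummy frame in RGB24 and let libswscale do the color-space conversion, then feed the resulting YUV420P frame to the encoder exactly as in the tutorial. A minimal sketch under those assumptions (recent FFmpeg, error handling trimmed, the function name is mine):

```cpp
extern "C" {
#include <libavutil/frame.h>
#include <libswscale/swscale.h>
}

// Build an RGB24 gradient frame, then convert it to YUV420P for the encoder.
AVFrame* makeYuvFromRgb(int w, int h, int frameIndex) {
    AVFrame* rgb = av_frame_alloc();
    rgb->format = AV_PIX_FMT_RGB24;  rgb->width = w;  rgb->height = h;
    av_frame_get_buffer(rgb, 0);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            uint8_t* p = rgb->data[0] + y * rgb->linesize[0] + 3 * x;
            p[0] = uint8_t(x + frameIndex);  // R
            p[1] = uint8_t(y);               // G
            p[2] = uint8_t((x + y) / 2);     // B
        }

    AVFrame* yuv = av_frame_alloc();
    yuv->format = AV_PIX_FMT_YUV420P;  yuv->width = w;  yuv->height = h;
    av_frame_get_buffer(yuv, 0);

    SwsContext* ctx = sws_getContext(w, h, AV_PIX_FMT_RGB24,
                                     w, h, AV_PIX_FMT_YUV420P,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    sws_scale(ctx, rgb->data, rgb->linesize, 0, h, yuv->data, yuv->linesize);
    sws_freeContext(ctx);
    av_frame_free(&rgb);
    return yuv;  // pass to the encoder as in the original sample
}
```

A few encoders (e.g. ffv1 or libx264rgb) do accept RGB input directly, but converting to YUV420p keeps the widest codec compatibility.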

Overlaying video with ffmpeg

我们两清 posted on 2019-12-31 00:35:07
Question: I'm attempting to write a script that will merge 2 separate video files into 1 wider one, in which both videos play back simultaneously. I have it mostly figured out, but when I view the final output, the video that I'm overlaying plays back extremely slowly. Here's what I'm doing. Expand the left video to the final video dimensions: ffmpeg -i left.avi -vf "pad=640:240:0:0:black" left_wide.avi Then overlay the right video on top of the left one: ffmpeg -i left_wide.avi -vf "movie=right.avi [mv]; [in][mv] …

Compress a video on the client side (web) [closed]

荒凉一梦 posted on 2019-12-29 01:36:13
Question: [Closed as off-topic; not accepting answers. Closed 2 years ago.] I have to upload a video from a front-end web page to my Django back-end, and I need to compress the video before uploading it. So I need a library (for example a JavaScript library) that runs on the client side (in the browser) to compress the video and then call my AJAX function to upload it. Is this possible? Can you suggest …

How to convert RGB to YUV420p for the ffmpeg encoder?

寵の児 posted on 2019-12-28 01:55:21
Question: I want to make an .avi video file from bitmap images using C++ code. I wrote the following code: //Get RGB array data from bmp file uint8_t* rgb24Data = new uint8_t[3*imgWidth*imgHeight]; hBitmap = (HBITMAP) LoadImage( NULL, _T("myfile.bmp"), IMAGE_BITMAP, 0, 0, LR_LOADFROMFILE); GetDIBits(hdc, hBitmap, 0, imgHeight, rgb24Data, (BITMAPINFO*)&bmi, DIB_RGB_COLORS); /* Allocate the encoded raw picture. */ AVPicture dst_picture; avpicture_alloc(&dst_picture, AV_PIX_FMT_YUV420P, imgWidth, …
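Two details tend to trip this up: GetDIBits with a positive biHeight returns bottom-up BGR rows (not top-down RGB), and each DIB row is padded to a 4-byte boundary. Both can be absorbed in the libswscale call by passing a pointer to the last row together with a negative stride. A sketch under those assumptions (function name is mine; error handling trimmed; note that AVPicture/avpicture_alloc are deprecated in newer FFmpeg in favor of AVFrame):

```cpp
extern "C" {
#include <libavutil/frame.h>
#include <libswscale/swscale.h>
}

// Convert a bottom-up BGR24 DIB (as filled by GetDIBits) into an
// already-allocated YUV420P frame.
void dibToYuv420p(const uint8_t* dib, AVFrame* yuv, int w, int h) {
    const int stride = ((w * 3 + 3) / 4) * 4;     // DIB rows are 4-byte aligned
    const uint8_t* srcSlice[1] = { dib + (h - 1) * stride };
    const int srcStride[1]     = { -stride };     // negative stride flips vertically

    SwsContext* ctx = sws_getContext(w, h, AV_PIX_FMT_BGR24,
                                     w, h, AV_PIX_FMT_YUV420P,
                                     SWS_BILINEAR, nullptr, nullptr, nullptr);
    sws_scale(ctx, srcSlice, srcStride, 0, h, yuv->data, yuv->linesize);
    sws_freeContext(ctx);
}
```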

Android: cannot change encoding video size & how to encode H.264

风格不统一 posted on 2019-12-25 03:55:29
Question: I have an HTC Desire (Android 2.3.3, API level 9). I am trying to write a program to record 320x240 H.263 video. Without any settings the code works well and the output resolution is 177x144, but it always crashes when I set the video size. I have also found that only Android 3.0+ supports encoding H.264; I want to know how to do that on Android 2.1+. I would be grateful for a solution to either issue. Here is what I am doing, and the log: recorder = new MediaRecorder(); recorder.setAudioSource …

Android jcodec: how to set frame rate

时光怂恿深爱的人放手 posted on 2019-12-24 04:52:28
Question: I have a set of images and I would like to generate a slideshow as a video file. I am using jcodec. When I encode a frame, is it possible to specify that the frame has to be shown for a certain amount of time (e.g. 1 sec)? Answer 1: Yes, it's possible to specify the time for the frame. It's explained in https://github.com/jcodec/jcodec/issues/21#issuecomment-23095738 new MP4Packet( result, // ByteBuffer that contains the encoded frame i, // Presentation timestamp (think seconds) expressed in …

Can I use Amazon Elastic Transcoder to only create thumbnails?

我与影子孤独终老i posted on 2019-12-23 08:28:28
Question: I have a Rails app using Paperclip to upload and store videos on Amazon S3. I'm not particularly interested in converting the video files into another format or adding watermarks, nothing fancy. I just want to create thumbnails from the videos to use as poster images in my video players. I see that Amazon Elastic Transcoder allows free thumbnail creation (or rather, they don't charge for thumbnail creation), and since I'm already using Amazon services, I wanted to see if I can use this for …

Feeding D3D surfaces to Quick Sync encoder MFT does not work

扶醉桌前 posted on 2019-12-23 03:58:22
Question: I want to encode video using the "Intel® Quick Sync Video H.264 Encoder MFT". I'm using the MFT manually, without a paired decoder MFT or any other Media Foundation components. Feeding it normal buffers (IMFSamples with buffers created by MFCreateAlignedMemoryBuffer) works well. Now I'm investigating whether I can feed it ID3D11Texture2D surfaces as input (DXGI_FORMAT_NV12, 1280x720) in order to improve performance. I tried to pass IMFSample instances created with …
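The excerpt cuts off before the actual failure, but a common prerequisite when feeding D3D11 textures to a hardware MFT is that the transform must first be given a DXGI device manager; otherwise it has no device with which to interpret the surfaces. A minimal sketch of that plumbing (function names are mine; assumes a D3D11 device created with D3D11_CREATE_DEVICE_VIDEO_SUPPORT; error handling abbreviated):

```cpp
#include <d3d11.h>
#include <mfapi.h>
#include <mftransform.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Hand the encoder MFT a DXGI device manager so it can accept GPU surfaces.
HRESULT enableD3DInput(IMFTransform* mft, ID3D11Device* device,
                       ComPtr<IMFDXGIDeviceManager>& manager) {
    UINT resetToken = 0;
    HRESULT hr = MFCreateDXGIDeviceManager(&resetToken, &manager);
    if (SUCCEEDED(hr)) hr = manager->ResetDevice(device, resetToken);
    if (SUCCEEDED(hr))
        hr = mft->ProcessMessage(MFT_MESSAGE_SET_D3D_MANAGER,
                                 reinterpret_cast<ULONG_PTR>(manager.Get()));
    return hr;
}

// Wrap an ID3D11Texture2D in an IMFSample the MFT can consume.
HRESULT wrapTexture(ID3D11Texture2D* tex, IMFSample** outSample) {
    ComPtr<IMFMediaBuffer> buffer;
    HRESULT hr = MFCreateDXGISurfaceBuffer(__uuidof(ID3D11Texture2D), tex,
                                           0 /* subresource */, FALSE, &buffer);
    if (SUCCEEDED(hr)) hr = MFCreateSample(outSample);
    if (SUCCEEDED(hr)) hr = (*outSample)->AddBuffer(buffer.Get());
    return hr;
}
```

Linking against mfplat.lib (and mfuuid.lib for the IIDs) is also required.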