mediacodec

Android MediaCodec HEVC Supported Resolutions

别说谁变了你拦得住时间么 Submitted on 2019-12-23 18:29:44
Question: Does anyone know what the supported resolutions are for Android MediaCodec when decoding HEVC? Through trial and error I've found that the following work: 640x272, 720x304, 960x400, 1280x528, 1920x800, 2560x1072. And the following don't: 512x216, 3840x1600. Is there any official documentation?

Answer 1: I doubt that there is any official documentation - in practice, you can probably rely on the fact that the resolutions tested by the CTS work, but other resolutions can behave in any way. Judging
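One practical way to probe this at runtime, rather than guessing, is to ask the codec itself. A minimal sketch, assuming API 21+ and that the device exposes a "video/hevc" decoder; the HevcSizeCheck class name is only for illustration:

import android.media.MediaCodecInfo;
import android.media.MediaCodecList;

public class HevcSizeCheck {
    // Prints whether each candidate width x height is supported by the device's HEVC decoders,
    // along with the alignment the codec requires (many decoders want multiples of 16).
    public static void checkSizes(int[][] sizes) {
        MediaCodecList list = new MediaCodecList(MediaCodecList.REGULAR_CODECS);
        for (MediaCodecInfo info : list.getCodecInfos()) {
            if (info.isEncoder()) continue;
            for (String type : info.getSupportedTypes()) {
                if (!type.equalsIgnoreCase("video/hevc")) continue;
                MediaCodecInfo.VideoCapabilities caps =
                        info.getCapabilitiesForType(type).getVideoCapabilities();
                for (int[] s : sizes) {
                    System.out.println(info.getName() + " " + s[0] + "x" + s[1] + ": "
                            + caps.isSizeSupported(s[0], s[1])
                            + " (alignment " + caps.getWidthAlignment()
                            + "x" + caps.getHeightAlignment() + ")");
                }
            }
        }
    }
}

Calling checkSizes(new int[][]{{640, 272}, {512, 216}}) on the device in question should show whether the failing sizes fall outside the reported size range or violate the required alignment.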

mediacodec ExtractMpegFramesTest example mismatch

孤人 Submitted on 2019-12-23 16:42:55
Question: I tried to run this example from bigflake and I think that there is a mismatch. They write "ExtractMpegFramesTest.java (requires 4.1, API 16)", so the minimum API required is 16, but looking over the code they use "import android.opengl.EGL14;", which requires a minimum of API 17. Has anyone encountered this problem and succeeded in solving it? (i.e. succeeded in saving 10 frames on an Android 4.1 device)

Answer 1: I've updated the site to have two copies of the source file, one that uses EGL 1.0 and one that
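If a single APK has to run on both API 16 and API 17+, one option is to keep both copies in the project and pick one at runtime, so the EGL14-based class is never loaded on an API 16 device. A rough sketch; the class and method names below are illustrative stand-ins for whatever the two copies of the file are called:

import android.os.Build;

// Choose the EGL path at runtime.
if (Build.VERSION.SDK_INT >= 17) {
    new ExtractMpegFramesTestEgl14().testExtractMpegFrames();  // EGL14-based copy (API 17+)
} else {
    new ExtractMpegFramesTestEgl10().testExtractMpegFrames();  // EGL 1.0-based copy (API 16)
}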

How is the output surface of a Decoder passed to the input surface of an Encoder?

六月ゝ 毕业季﹏ Submitted on 2019-12-23 15:42:24
Question: I'm trying to understand how the surface-to-surface approach works with MediaCodec. In a ByteBuffer-only approach, decoded data is placed in OutputBuffers. This non-encoded data can be processed manually and then passed to the InputBuffers of an Encoder. If we look at an example from the Android MediaCodec CTS that uses a surface-to-surface approach to pass data between a decoder and an encoder, we configure the Decoder to output the decoded data onto a Surface called outputSurface, and we
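For reference, the basic wiring looks roughly like the sketch below, assuming the decoder and encoder MediaFormats are already prepared. In the simplest case the decoder renders straight into the Surface created by the encoder; the CTS test instead inserts its own OpenGL OutputSurface/InputSurface pair between the two so each frame can be edited on the way through.

import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;

import java.io.IOException;

class SurfaceToSurfaceSketch {
    static void wire(MediaFormat decoderFormat, MediaFormat encoderFormat) throws IOException {
        MediaCodec encoder = MediaCodec.createEncoderByType(
                encoderFormat.getString(MediaFormat.KEY_MIME));
        encoder.configure(encoderFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        // The encoder owns an input Surface; whatever is rendered onto it becomes encoder input.
        Surface encoderInput = encoder.createInputSurface();
        encoder.start();

        MediaCodec decoder = MediaCodec.createDecoderByType(
                decoderFormat.getString(MediaFormat.KEY_MIME));
        // Handing the encoder's input Surface to the decoder makes decoded frames flow
        // decoder -> Surface -> encoder without ever touching ByteBuffers in app code.
        decoder.configure(decoderFormat, encoderInput, null, 0);
        decoder.start();
        // ...feed extractor data to the decoder and drain the encoder as usual...
    }
}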

Android audio too fast on some devices with MediaCodec and AudioTrack

ぃ、小莉子 Submitted on 2019-12-23 12:43:18
Question: I am decoding audio using MediaExtractor, MediaCodec, and AudioTrack. I am configuring the AudioTrack using the MediaFormat returned by MediaCodec.getOutputFormat() after receiving MediaCodec.INFO_OUTPUT_FORMAT_CHANGED from the MediaCodec. On some devices this results in sped-up audio, while the MediaFormat returned by the MediaExtractor works correctly. (On other devices, the reverse is true.) Here are some details: the audio files in question are largely 22050 Hz mono MP3s. The
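Sped-up playback like this is typically a sample-rate or channel-count mismatch between the format the AudioTrack was built from and what the decoder actually emits (for example, 44100 Hz configured for a 22050 Hz MP3). A minimal sketch of building the AudioTrack from the format delivered with INFO_OUTPUT_FORMAT_CHANGED, shown only to make the relevant keys explicit:

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioTrack;
import android.media.MediaFormat;

class AudioTrackFromFormat {
    static AudioTrack build(MediaFormat format) {
        int sampleRate = format.getInteger(MediaFormat.KEY_SAMPLE_RATE);
        int channels = format.getInteger(MediaFormat.KEY_CHANNEL_COUNT);
        int channelConfig = (channels == 1)
                ? AudioFormat.CHANNEL_OUT_MONO : AudioFormat.CHANNEL_OUT_STEREO;
        int minBuf = AudioTrack.getMinBufferSize(
                sampleRate, channelConfig, AudioFormat.ENCODING_PCM_16BIT);
        // When debugging, log this sampleRate next to the one reported by MediaExtractor;
        // the faster-playing source is the one whose rate is too high.
        return new AudioTrack(AudioManager.STREAM_MUSIC, sampleRate, channelConfig,
                AudioFormat.ENCODING_PCM_16BIT, minBuf * 2, AudioTrack.MODE_STREAM);
    }
}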

Does MediaCodec in Android support audio encoding?

ε祈祈猫儿з Submitted on 2019-12-23 05:28:08
Question: I am trying to build a video system. I get the data from the Camera through Preview, then I give the data to MediaCodec and let it do the video encoding job. Now I am focusing on the audio part. I am not sure which API to use for the audio capture and audio encoding. I've done some searching, but it seems that most of the demos use MediaRecorder. Since I've replaced MediaRecorder with MediaCodec, I think I need to find some new way to do the audio part. How do I capture the audio? Can MediaCodec do
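The usual pairing is AudioRecord for capture and a MediaCodec AAC encoder for compression, with MediaMuxer combining the audio and video tracks. A rough sketch, assuming the RECORD_AUDIO permission and 44.1 kHz mono; the buffer size and bitrate are placeholders:

import android.media.AudioFormat;
import android.media.AudioRecord;
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.media.MediaRecorder;

import java.io.IOException;

class AudioCaptureSketch {
    static void start() throws IOException {
        int sampleRate = 44100;
        int minBuf = AudioRecord.getMinBufferSize(sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
        AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC, sampleRate,
                AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, minBuf * 2);

        MediaFormat aac = MediaFormat.createAudioFormat("audio/mp4a-latm", sampleRate, 1);
        aac.setInteger(MediaFormat.KEY_AAC_PROFILE, MediaCodecInfo.CodecProfileLevel.AACObjectLC);
        aac.setInteger(MediaFormat.KEY_BIT_RATE, 64000);
        MediaCodec encoder = MediaCodec.createEncoderByType("audio/mp4a-latm");
        encoder.configure(aac, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);

        recorder.startRecording();
        encoder.start();
        // Loop: recorder.read(...) PCM into the encoder's input buffers, drain the encoder's
        // output buffers, and hand the encoded frames to a MediaMuxer next to the video track.
    }
}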

How to configure specific GOP size in MediaCodec with KEY_I_FRAME_INTERVAL parameter?

若如初见. Submitted on 2019-12-23 05:25:35
Question: My issue is the following: I know MediaFormat.KEY_I_FRAME_INTERVAL is the interval in seconds (an Integer) at which an I-frame appears. So if I set the value to 1 and my frame rate is 15, the GOP size is going to be 15, and if the frame rate is 30, the GOP size is going to be 30. Given that the MediaFormat.KEY_I_FRAME_INTERVAL parameter is an integer (I cannot set it to 0.5 to get a GOP size of 15 frames at a frame rate of 30), is there any solution
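If the integer-seconds granularity is the only obstacle, one workaround (not the only one) is to leave KEY_I_FRAME_INTERVAL large and explicitly request a sync frame from the encoder every N encoded frames via setParameters(), available since API 19. How promptly the encoder honors the request is device-dependent; a sketch:

import android.media.MediaCodec;
import android.os.Bundle;

class GopControlSketch {
    // Call this once every gopSize frames (e.g. every 15 frames at 30 fps) to approximate
    // a sub-second GOP even though KEY_I_FRAME_INTERVAL only takes whole seconds here.
    static void requestSyncFrame(MediaCodec encoder) {
        Bundle params = new Bundle();
        params.putInt(MediaCodec.PARAMETER_KEY_REQUEST_SYNC_FRAME, 0);
        encoder.setParameters(params);
    }
}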

Android Rotate Surface created using MediaCodec

一笑奈何 Submitted on 2019-12-23 04:51:29
Question: I am using the Camera2 APIs to capture video from the camera. I understand that we need to rotate the preview, as Camera2 does not have an equivalent of setDisplayOrientation(). I am able to rotate the preview using a Matrix. I am also using the MediaCodec APIs to encode the video. The encoded video, however, is inverted when I rotate the phone 180 degrees. I am out of ideas on how to rotate the encoded video. I tried KEY_ROTATION in MediaFormat while configuring, but I guess this is only while decoding
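If the goal is just that players show the video the right way up, an alternative to rotating the encoded frames themselves is to record the orientation in the MP4 container with MediaMuxer.setOrientationHint(), which must be called before start(). A sketch, with the output path and angle as placeholders:

import android.media.MediaMuxer;

import java.io.IOException;

class OrientationHintSketch {
    static MediaMuxer create(String outputPath, int degrees) throws IOException {
        MediaMuxer muxer = new MediaMuxer(outputPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        // degrees must be 0, 90, 180 or 270; players that honor the hint rotate on playback,
        // while the encoded frames themselves stay untouched.
        muxer.setOrientationHint(degrees);
        return muxer;
    }
}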

It takes too long time to mux h264 into mp4 file using mp4parser

怎甘沉沦 Submitted on 2019-12-23 04:50:38
Question: I'm using mp4parser to mux an h264 file and an aac file into an mp4 file, and the code is as below.

String h264Path = "path to my h264 file, generated by Android MediaCodec";
DataSource videoFile = new FileDataSourceImpl(h264Path);
H264TrackImpl h264Track = new H264TrackImpl(videoFile, "eng", 5, 1); // 5 fps. You can play with timescale and timetick to get non-integer fps; 23.976 is 24000/1001
Movie movie = new Movie();
movie.addTrack(h264Track);
Container out = new DefaultMp4Builder().build(movie);
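As a point of comparison, if the H.264 stream is coming out of MediaCodec anyway, writing it into the MP4 while encoding with the platform MediaMuxer avoids re-parsing the raw .h264 file afterwards. A rough sketch, assuming the MediaFormat comes from the encoder's INFO_OUTPUT_FORMAT_CHANGED callback:

import android.media.MediaCodec;
import android.media.MediaFormat;
import android.media.MediaMuxer;

import java.io.IOException;
import java.nio.ByteBuffer;

class SimpleMp4Writer {
    private final MediaMuxer muxer;
    private final int videoTrack;

    SimpleMp4Writer(String path, MediaFormat videoFormat) throws IOException {
        muxer = new MediaMuxer(path, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        videoTrack = muxer.addTrack(videoFormat);   // format from INFO_OUTPUT_FORMAT_CHANGED
        muxer.start();
    }

    // Call for every encoded buffer drained from the video MediaCodec.
    void writeSample(ByteBuffer encoded, MediaCodec.BufferInfo info) {
        muxer.writeSampleData(videoTrack, encoded, info);
    }

    void finish() {
        muxer.stop();
        muxer.release();
    }
}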

H264 format doesn't have audio: how to get audio with H264?

∥☆過路亽.° Submitted on 2019-12-23 04:46:02
Question: I am getting data from the Camera and I am trying to convert the NV21 data to the .H264 format. I did this with MediaCodec, but when I save the .H264 file it has no audio and it only plays video in VLC media player. I want to play the video with its audio. Can I do this with the .H264 format, or do I need another container format? How can I do this? I have shared my code below.

private synchronized void encode(byte[] data) {
    ByteBuffer[] inputBuffers = mMediaCodec.getInputBuffers();
    ByteBuffer[] outputBuffers
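A raw .H264 file is a video-only elementary stream, so it cannot carry audio by itself; the usual fix is to encode audio separately (for example AudioRecord feeding an AAC MediaCodec) and mux both tracks into an MP4 container. A sketch of just the muxing step, assuming both MediaFormats come from their encoders' INFO_OUTPUT_FORMAT_CHANGED:

import android.media.MediaFormat;
import android.media.MediaMuxer;

import java.io.IOException;

class AvMuxSketch {
    static MediaMuxer startMux(String outPath, MediaFormat videoFormat, MediaFormat audioFormat)
            throws IOException {
        MediaMuxer muxer = new MediaMuxer(outPath, MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);
        int videoTrack = muxer.addTrack(videoFormat);   // H.264 track from the video encoder
        int audioTrack = muxer.addTrack(audioFormat);   // AAC track from the audio encoder
        muxer.start();
        // Then write each encoded buffer with muxer.writeSampleData(videoTrack or audioTrack, buf, info)
        // and call muxer.stop() / muxer.release() once both encoders have signalled end-of-stream.
        return muxer;
    }
}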

Modify ExtractMpegFramesTest example to render decoded output on screen

蹲街弑〆低调 Submitted on 2019-12-23 03:44:09
Question: I'm trying to modify ExtractMpegFramesTest to do the rendering on screen and still use glReadPixels to extract the frames. I copied the relevant code for extracting the frames from ExtractMpegFramesTest (the CodecOutputSurface and STextureRender classes), and the frame extraction works as expected when rendered off screen. I have a TextureView with a SurfaceTextureListener, and when I receive onSurfaceTextureAvailable I get the SurfaceTexture and start the decoding process. I pass this
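For the on-screen half, the decoder can be pointed at a Surface built from the TextureView's SurfaceTexture once it becomes available; note that frames rendered this way are not readable with glReadPixels in the way the offscreen CodecOutputSurface is, which is exactly the tension the question is about. A sketch, with the decoder and MediaFormat assumed to be prepared elsewhere:

import android.graphics.SurfaceTexture;
import android.media.MediaCodec;
import android.media.MediaFormat;
import android.view.Surface;
import android.view.TextureView;

class OnScreenDecodeSketch {
    static void attach(TextureView textureView, final MediaCodec decoder, final MediaFormat format) {
        textureView.setSurfaceTextureListener(new TextureView.SurfaceTextureListener() {
            @Override
            public void onSurfaceTextureAvailable(SurfaceTexture st, int width, int height) {
                // Render decoded frames straight to the TextureView.
                decoder.configure(format, new Surface(st), null, 0);
                decoder.start();
            }
            @Override public void onSurfaceTextureSizeChanged(SurfaceTexture st, int w, int h) {}
            @Override public boolean onSurfaceTextureDestroyed(SurfaceTexture st) { return true; }
            @Override public void onSurfaceTextureUpdated(SurfaceTexture st) {}
        });
    }
}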