Question
I am developing an H.264 hardware-accelerated video decoder for Android. So far, I have come across several libraries: MediaCodec, Stagefright, OpenMax IL, OpenMax AL and FFmpeg. After a bit of research, I have found the following:
I found a great resource on using Stagefright with FFmpeg, but I cannot use FFmpeg because of its license; it is quite restrictive for distributed software. (Or is it possible to drop FFmpeg from this approach?)
I cannot use MediaCodec, as it is a Java API and I would have to call it via JNI from the C++ layer, which is relatively slow and which I am not allowed to do.
I cannot use OpenMax AL, as it only supports decoding an MPEG-2 transport stream via a buffer queue. This rules out passing raw H.264 NALUs, or other media formats for that matter.
That leaves Stagefright and OpenMax IL. I have learned that Stagefright uses the OpenMax (OMX) interface underneath. So should I go with Stagefright or OpenMax IL? Which is more promising?
Also, I came to know that the Android hardware-accelerated decoder is vendor-specific and every vendor has their own OMX interfacing API. Is that true? If so, do I need to write a hardware-vendor-specific implementation in the case of OpenMax IL? What about Stagefright? Is it hardware-agnostic or hardware-dependent? If there is no hardware-independent way of using Stagefright or OpenMax IL, I need to support at least Qualcomm's Snapdragon, Samsung's Exynos and NVIDIA's Tegra 4.
Note that I need to decode an H.264 Annex B stream and expect decoded frames back, which I will send to my video rendering pipeline. So basically, I only need the decoder module.
I am really confused. Please point me in the right direction. Thanks in advance!
EDIT
My software is for commercial purposes and its source code is private as well. I am also restricted by the client from using FFmpeg. :)
Answer 1:
You really should go for MediaCodec. Calling Java methods via JNI does have some overhead, but you should keep in mind what order of magnitude that overhead is. If you were calling a function per pixel, the overhead of JNI calls might be problematic; but with MediaCodec you only make a few function calls per frame, and there the overhead is negligible.
See e.g. http://git.videolan.org/?p=vlc.git;a=blob;f=modules/codec/omxil/mediacodec_jni.c;h=57df9889c97706436823a4960206e323565e221c;hb=b31df501269b56c65327be181cdca3df48946fb1 for an example of using MediaCodec from C code via JNI. Since others have gone down this path as well, I can assure you that the JNI overhead is not a reason to consider other APIs than MediaCodec.
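As a rough illustration of how small that per-frame JNI surface is, here is a minimal sketch of driving MediaCodec from C++ through JNI. This is not the VLC module's code: it assumes a decoder that has already been created, configured for "video/avc" and started on the Java side, a pre-allocated MediaCodec.BufferInfo object, and API 21+ for getInputBuffer(). The helper names (CodecApi, resolve, decodeFrame) and the 10 ms timeouts are made up for this example.

```cpp
#include <jni.h>
#include <cstdint>
#include <cstring>

// Method IDs for android.media.MediaCodec, resolved once and reused for
// every frame, so the per-frame cost is just the Call*Method invocations.
struct CodecApi {
    jmethodID dequeueInputBuffer;   // int dequeueInputBuffer(long timeoutUs)
    jmethodID getInputBuffer;       // ByteBuffer getInputBuffer(int index), API 21+
    jmethodID queueInputBuffer;     // void queueInputBuffer(int, int, int, long, int)
    jmethodID dequeueOutputBuffer;  // int dequeueOutputBuffer(BufferInfo, long)
    jmethodID releaseOutputBuffer;  // void releaseOutputBuffer(int index, boolean render)
};

static CodecApi resolve(JNIEnv* env, jobject codec) {
    jclass cls = env->GetObjectClass(codec);
    CodecApi api;
    api.dequeueInputBuffer  = env->GetMethodID(cls, "dequeueInputBuffer", "(J)I");
    api.getInputBuffer      = env->GetMethodID(cls, "getInputBuffer",
                                               "(I)Ljava/nio/ByteBuffer;");
    api.queueInputBuffer    = env->GetMethodID(cls, "queueInputBuffer", "(IIIJI)V");
    api.dequeueOutputBuffer = env->GetMethodID(cls, "dequeueOutputBuffer",
                                               "(Landroid/media/MediaCodec$BufferInfo;J)I");
    api.releaseOutputBuffer = env->GetMethodID(cls, "releaseOutputBuffer", "(IZ)V");
    env->DeleteLocalRef(cls);
    return api;
}

// Feed one Annex B access unit and drain at most one decoded buffer.
// This is the entire per-frame JNI traffic: about five method calls,
// so the overhead is paid per frame, not per pixel.
static void decodeFrame(JNIEnv* env, jobject codec, const CodecApi& api,
                        jobject bufferInfo,  // android.media.MediaCodec$BufferInfo
                        const uint8_t* accessUnit, size_t size, int64_t ptsUs) {
    jint inIndex = env->CallIntMethod(codec, api.dequeueInputBuffer, (jlong)10000);
    if (inIndex >= 0) {
        jobject buf = env->CallObjectMethod(codec, api.getInputBuffer, inIndex);
        // The input buffer is a direct ByteBuffer: copy the compressed
        // access unit straight into its backing storage.
        void* dst = env->GetDirectBufferAddress(buf);
        std::memcpy(dst, accessUnit, size);
        env->CallVoidMethod(codec, api.queueInputBuffer, inIndex, (jint)0,
                            (jint)size, (jlong)ptsUs, (jint)0);
        env->DeleteLocalRef(buf);
    }
    jint outIndex = env->CallIntMethod(codec, api.dequeueOutputBuffer,
                                       bufferInfo, (jlong)10000);
    if (outIndex >= 0) {
        // Hand the decoded frame to the rendering pipeline here, then release
        // the buffer back to the codec (render=false: we consume it ourselves).
        env->CallVoidMethod(codec, api.releaseOutputBuffer, outIndex, JNI_FALSE);
    }
}
```

A production decoder additionally has to handle INFO_OUTPUT_FORMAT_CHANGED and the other negative return codes of dequeueOutputBuffer, check for pending Java exceptions after each call, and attach its decoding thread to the VM; the VLC module linked above shows what that looks like in full.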
Using Stagefright or OMX directly is problematic; the ABI differs between platform versions (so you can either target only one version, or compile multiple times targeting different versions and package it all up in one package), and you would have to deal with a lot of device-specific quirks, while MediaCodec should (and on modern versions does) behave the same across all devices.
Answer 2:
I found a great resource on using Stagefright with FFmpeg, but I cannot use FFmpeg because of its license; it is quite restrictive for distributed software. (Or is it possible to drop FFmpeg from this approach?)
That's not true. FFmpeg is LGPL by default, so you can use it in your commercially redistributable application.
However, you might be using components of FFmpeg that are GPL-licensed, e.g. libx264 (which requires building FFmpeg with --enable-gpl). In that case, your program must be GPL-compliant.
But even that is not bad for distributed software: it just means you need to give your customers (who should be kings anyway) access to the source code of the application they are paying for, and that you are not allowed to restrict their freedoms. Not a bad deal, IMHO.
Also, I came to know that the Android hardware-accelerated decoder is vendor-specific and every vendor has their own OMX interfacing API. Is that true?
Obviously, yes. If you need hardware acceleration, someone has to write a program that makes your specific hardware accelerate something.
Source: https://stackoverflow.com/questions/32427289/developing-h264-hardware-decoder-android-stagefright-or-openmax-il