Question
I have a pretty good understanding of decoding with Android MediaCodec and feeding YUV through a Surface into an OpenGL texture. I would like to do something similar with Vulkan. However, I have not been successful in finding any documentation or sample code.
My question is: how would I wire up the following pipeline?
MediaCodec Video Decoder ⇨ Surface ⇨ texture ⇨ Vulkan
Details
- Video decoder is configured using MediaCodec#configure
- Surface is an Android Surface
OpenGL Comparison
For comparison, in the OpenGL case an Android Surface is constructed and used like so:
    int[] textures = new int[1];
    GLES20.glGenTextures(1, textures, 0);
    int textureId = textures[0];
    Surface surface = new Surface(new SurfaceTexture(textureId));
    mediaCodec.configure(format, surface, null, 0);  // format: the stream's MediaFormat
Answer 1:
Just FYI: I don't know Android well.
Creating a Vulkan "texture" is done with vkCreateImage. Unlike your OpenGL ES example, there is some extra work managing the memory explicitly (with vkAllocateMemory and vkBindImageMemory).
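To illustrate that extra work, here is a minimal sketch of the explicit path (assuming a plain RGBA sampled image; device, width, height and the findMemoryType() helper are placeholders I made up, and error checking plus the layout transition before sampling are omitted):

    #include <vulkan/vulkan.h>

    // Minimal sketch: vkCreateImage + vkAllocateMemory + vkBindImageMemory.
    // `device`, `width`, `height` and findMemoryType() are hypothetical and
    // assumed to exist in the surrounding code.
    VkImageCreateInfo imageInfo = {};
    imageInfo.sType         = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
    imageInfo.imageType     = VK_IMAGE_TYPE_2D;
    imageInfo.format        = VK_FORMAT_R8G8B8A8_UNORM;
    imageInfo.extent        = {width, height, 1};
    imageInfo.mipLevels     = 1;
    imageInfo.arrayLayers   = 1;
    imageInfo.samples       = VK_SAMPLE_COUNT_1_BIT;
    imageInfo.tiling        = VK_IMAGE_TILING_OPTIMAL;
    imageInfo.usage         = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT;
    imageInfo.sharingMode   = VK_SHARING_MODE_EXCLUSIVE;
    imageInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;

    VkImage image;
    vkCreateImage(device, &imageInfo, nullptr, &image);

    // Unlike glGenTextures, the backing memory is allocated and bound explicitly.
    VkMemoryRequirements memReq;
    vkGetImageMemoryRequirements(device, image, &memReq);

    VkMemoryAllocateInfo allocInfo = {};
    allocInfo.sType           = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
    allocInfo.allocationSize  = memReq.size;
    allocInfo.memoryTypeIndex = findMemoryType(memReq.memoryTypeBits,
                                               VK_MEMORY_PROPERTY_DEVICE_LOCAL_BIT);

    VkDeviceMemory memory;
    vkAllocateMemory(device, &allocInfo, nullptr, &memory);
    vkBindImageMemory(device, image, memory, 0);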
The next step would be the hard one. There is apparently no SurfaceTexture equivalent for Vulkan (yet). The work making that efficient from the Vulkan side was released quite recently, i.e. VK_KHX_external_memory and the associated Vulkan extensions. So hopefully an official Android SurfaceTexture for Vulkan is coming too.
That being said, you could implement a new SurfaceTexture yourself, or at least "import" a Vulkan Image into OpenGL ES. The problems with this are:
- the extensions are considered experimental (UPDATE: not anymore)
- the drivers probably do not support the extensions anyway (yet)
- it is probably not trivial to program, and it may also require messing with some low-level Linux APIs on Android.
So, the best way for now would be to create some shim which copies the data.
I would:
1) It seems MediaCodec is able to work with a ByteBuffer instead of a Surface, and a ByteBuffer can wrap a raw byte[].
2) Create a VkImage and a VkBuffer: the Image in Device memory (the resulting object we want) and the Buffer on the Host side (to facilitate copying).
3) Map (vkMapMemory) the Host-side Buffer and wrap it with the ByteBuffer used by MediaCodec.
4) Each time there is new data, copy it by submitting vkCmdCopyBufferToImage in Vulkan.
I omitted a lot of boilerplate (especially synchronization), but hopefully you get the idea; a rough sketch of steps 3 and 4 follows.
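Here is that sketch, written against the NDK AMediaCodec API rather than the Java-side ByteBuffer wrapping described above (codec, device, stagingBuffer, stagingMemory, image, cmd, width and height are placeholders; layout transitions, synchronization, queue submission and the planar YUV layout of real decoder output are glossed over):

    #include <cstring>
    #include <media/NdkMediaCodec.h>
    #include <vulkan/vulkan.h>

    // `codec`, `device`, `stagingBuffer`, `stagingMemory` (host-visible and
    // coherent), `image`, `cmd`, `width` and `height` are assumed to exist.
    AMediaCodecBufferInfo info;
    ssize_t idx = AMediaCodec_dequeueOutputBuffer(codec, &info, /*timeoutUs=*/10000);
    if (idx >= 0) {
        size_t bufSize = 0;
        uint8_t* decoded = AMediaCodec_getOutputBuffer(codec, idx, &bufSize);

        // Step 3: map the host-side staging buffer and copy the decoded frame in.
        void* mapped = nullptr;
        vkMapMemory(device, stagingMemory, 0, VK_WHOLE_SIZE, 0, &mapped);
        memcpy(mapped, decoded + info.offset, info.size);
        vkUnmapMemory(device, stagingMemory);  // or keep it persistently mapped

        AMediaCodec_releaseOutputBuffer(codec, idx, /*render=*/false);

        // Step 4: record a buffer-to-image copy into the device-local VkImage.
        VkBufferImageCopy region = {};
        region.imageSubresource.aspectMask = VK_IMAGE_ASPECT_COLOR_BIT;
        region.imageSubresource.layerCount = 1;
        region.imageExtent = {width, height, 1};
        vkCmdCopyBufferToImage(cmd, stagingBuffer, image,
                               VK_IMAGE_LAYOUT_TRANSFER_DST_OPTIMAL, 1, &region);
    }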
Answer 2:
This is currently not possible, as there is no way to import memory objects from outside Vulkan, or any SDK Vulkan object that can export a Surface. Take a look at VK_KHX_external_memory and related extensions for how parts of this might work in the future.
EDIT 2018-05-23: This is now possible using the VK_ANDROID_external_memory_android_hardware_buffer extension and the extensions it depends on. You can use AImageReader_newWithUsage to create an AImageReader compatible with GPU sampling. Get the ANativeWindow from that AImageReader and use it as the AMediaCodec's output surface. Then for each image you receive, get the AHardwareBuffer and import that into a VkDeviceMemory/VkImage pair using the extension.
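A rough sketch of that path (NDK throughout; error handling, the VkExternalFormatANDROID / VkSamplerYcbcrConversion setup needed for opaque YUV buffers, and image layout transitions are omitted, and codec, format, device, width, height and the pre-created image are placeholders):

    #define VK_USE_PLATFORM_ANDROID_KHR
    #include <vulkan/vulkan.h>
    #include <android/hardware_buffer.h>
    #include <media/NdkImageReader.h>
    #include <media/NdkMediaCodec.h>

    // 1) Create an AImageReader whose buffers can be sampled by the GPU.
    AImageReader* reader = nullptr;
    AImageReader_newWithUsage(width, height, AIMAGE_FORMAT_PRIVATE,
                              AHARDWAREBUFFER_USAGE_GPU_SAMPLED_IMAGE,
                              /*maxImages=*/3, &reader);

    // 2) Use its ANativeWindow as the decoder's output surface.
    ANativeWindow* window = nullptr;
    AImageReader_getWindow(reader, &window);
    AMediaCodec_configure(codec, format, window, /*crypto=*/nullptr, /*flags=*/0);

    // 3) For each decoded image, grab the AHardwareBuffer...
    AImage* aimage = nullptr;
    AImageReader_acquireLatestImage(reader, &aimage);
    AHardwareBuffer* hwBuffer = nullptr;
    AImage_getHardwareBuffer(aimage, &hwBuffer);

    // ...and import it as a VkDeviceMemory backing a VkImage. (The extension
    // entry point may need to be fetched with vkGetDeviceProcAddr.)
    VkAndroidHardwareBufferPropertiesANDROID props = {
        VK_STRUCTURE_TYPE_ANDROID_HARDWARE_BUFFER_PROPERTIES_ANDROID};
    vkGetAndroidHardwareBufferPropertiesANDROID(device, hwBuffer, &props);

    VkImportAndroidHardwareBufferInfoANDROID importInfo = {
        VK_STRUCTURE_TYPE_IMPORT_ANDROID_HARDWARE_BUFFER_INFO_ANDROID};
    importInfo.buffer = hwBuffer;

    // `image` is assumed to be a VkImage created with a
    // VkExternalMemoryImageCreateInfo naming the AHardwareBuffer handle type.
    VkMemoryDedicatedAllocateInfo dedicated = {
        VK_STRUCTURE_TYPE_MEMORY_DEDICATED_ALLOCATE_INFO};
    dedicated.pNext = &importInfo;
    dedicated.image = image;

    VkMemoryAllocateInfo allocInfo = {VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO};
    allocInfo.pNext          = &dedicated;
    allocInfo.allocationSize = props.allocationSize;
    // Any memory type allowed by the buffer will do; take the lowest set bit.
    allocInfo.memoryTypeIndex = static_cast<uint32_t>(__builtin_ctz(props.memoryTypeBits));

    VkDeviceMemory memory;
    vkAllocateMemory(device, &allocInfo, nullptr, &memory);
    vkBindImageMemory(device, image, memory, 0);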
Source: https://stackoverflow.com/questions/43507536/how-to-connect-android-mediacodec-surface-to-vulkan