Question
I am trying to use MediaCodec to record raw frames from an ImageReader's onImageAvailable callback, but I am unable to write working code. Most of the examples use the Camera 1 API or MediaRecorder. My aim is to capture individual frames, process them, and create an MP4 out of them.
Raw YUV frames
@Override
public void onImageAvailable(ImageReader reader) {
    Image i = reader.acquireLatestImage();
    processImage(i);
    i.close();
    Log.d("hehe", "onImageAvailable");
}
MediaCodec
MediaCodec codec = MediaCodec.createByCodecName(name);
MediaFormat mOutputFormat; // member variable
codec.setCallback(new MediaCodec.Callback() {
    @Override
    void onInputBufferAvailable(MediaCodec mc, int inputBufferId) {
        ByteBuffer inputBuffer = codec.getInputBuffer(inputBufferId);
        // fill inputBuffer with valid data
        …
        codec.queueInputBuffer(inputBufferId, …);
    }

    @Override
    void onOutputBufferAvailable(MediaCodec mc, int outputBufferId, …) {
        ByteBuffer outputBuffer = codec.getOutputBuffer(outputBufferId);
        MediaFormat bufferFormat = codec.getOutputFormat(outputBufferId); // option A
        // bufferFormat is equivalent to mOutputFormat
        // outputBuffer is ready to be processed or rendered.
        …
        codec.releaseOutputBuffer(outputBufferId, …);
    }

    @Override
    void onOutputFormatChanged(MediaCodec mc, MediaFormat format) {
        // Subsequent data will conform to new format.
        // Can ignore if using getOutputFormat(outputBufferId)
        mOutputFormat = format; // option B
    }

    @Override
    void onError(…) {
        …
    }
});
codec.configure(format, …);
mOutputFormat = codec.getOutputFormat(); // option B
codec.start();
// wait for processing to complete
codec.stop();
codec.release();
I am unable to relate this to the code given at https://developer.android.com/reference/android/media/MediaCodec . Please help.
Answer 1:
You have to create a queue, push your image buffers (built from the Image planes) into it, and consume them in void onInputBufferAvailable(MediaCodec mc, int inputBufferId).
1) Create a class to wrap the buffer data:
class MyData {
    byte[] buffer;
    long presentationTimeUs;
    // tells your encoder that this is the EOS (end of stream);
    // otherwise you cannot know when to stop
    boolean isEOS;

    public MyData(byte[] buffer, long presentationTimeUs, boolean isEOS) {
        // copy the data, because the Image is closed as soon as the callback returns
        this.buffer = new byte[buffer.length];
        System.arraycopy(buffer, 0, this.buffer, 0, buffer.length);
        this.presentationTimeUs = presentationTimeUs;
        this.isEOS = isEOS;
    }

    public byte[] getBuffer() {
        return buffer;
    }

    public void setBuffer(byte[] buffer) {
        this.buffer = buffer;
    }

    public long getPresentationTimeUs() {
        return presentationTimeUs;
    }

    public void setPresentationTimeUs(long presentationTimeUs) {
        this.presentationTimeUs = presentationTimeUs;
    }

    public boolean isEOS() {
        return isEOS;
    }

    public void setEOS(boolean EOS) {
        isEOS = EOS;
    }
}
2) Create the queue:
// the ImageReader and MediaCodec callbacks typically run on different
// threads, so a thread-safe queue is safer than a plain LinkedList
Queue<MyData> mQueue = new ConcurrentLinkedQueue<>();
3) Convert the Image planes to a byte array (byte[]) using native code:
Add native support to your Gradle file:
android {
    compileSdkVersion 27
    defaultConfig {
        ...
        externalNativeBuild {
            cmake {
                arguments "-DANDROID_STL=stlport_static"
                cppFlags "-std=c++11"
            }
        }
    }
    externalNativeBuild {
        cmake {
            path "CMakeLists.txt"
        }
    }
    ...
}
- Creating a function to convert the image planes to a byte array (native-yuv-to-buffer.cpp):
extern "C" JNIEXPORT jbyteArray JNICALL
Java_labs_farzi_camera2previewstream_MainActivity_yuvToBuffer (
JNIEnv *env,
jobject instance,
jobject yPlane,
jobject uPlane,
jobject vPlane,
jint yPixelStride,
jint yRowStride,
jint uPixelStride,
jint uRowStride,
jint vPixelStride,
jint vRowStride,
jint imgWidth,
jint imgHeight) {
    bbuf_yIn = static_cast<uint8_t *>(env->GetDirectBufferAddress(yPlane));
    bbuf_uIn = static_cast<uint8_t *>(env->GetDirectBufferAddress(uPlane));
    bbuf_vIn = static_cast<uint8_t *>(env->GetDirectBufferAddress(vPlane));
    buf = (uint8_t *) malloc(sizeof(uint8_t) * imgWidth * imgHeight +
                             2 * (imgWidth + 1) / 2 * (imgHeight + 1) / 2);
    bool isNV21;
    if (yPixelStride == 1) {
        // All pixels in a row are contiguous; copy one line at a time.
        for (int y = 0; y < imgHeight; y++)
            memcpy(buf + y * imgWidth, bbuf_yIn + y * yRowStride,
                   static_cast<size_t>(imgWidth));
    } else {
        // Highly improbable, but not disallowed by the API. In this case
        // individual pixels aren't stored consecutively but sparsely with
        // other data in between each pixel.
        for (int y = 0; y < imgHeight; y++)
            for (int x = 0; x < imgWidth; x++)
                buf[y * imgWidth + x] = bbuf_yIn[y * yRowStride + x * yPixelStride];
    }
    uint8_t *chromaBuf = &buf[imgWidth * imgHeight];
    int chromaBufStride = 2 * ((imgWidth + 1) / 2);
    if (uPixelStride == 2 && vPixelStride == 2 &&
        uRowStride == vRowStride && bbuf_uIn == bbuf_vIn + 1) {
        isNV21 = true;
        // The actual cb/cr planes happened to be laid out in
        // exact NV21 form in memory; copy them as is
        for (int y = 0; y < (imgHeight + 1) / 2; y++)
            memcpy(chromaBuf + y * chromaBufStride, bbuf_vIn + y * vRowStride,
                   static_cast<size_t>(chromaBufStride));
    } else if (vPixelStride == 2 && uPixelStride == 2 &&
               uRowStride == vRowStride && bbuf_vIn == bbuf_uIn + 1) {
        isNV21 = false;
        // The cb/cr planes happened to be laid out in exact NV12 form
        // in memory; copy them as is (SPtoI420 below handles both the
        // NV21 and the NV12 ordering).
        for (int y = 0; y < (imgHeight + 1) / 2; y++)
            memcpy(chromaBuf + y * chromaBufStride, bbuf_uIn + y * uRowStride,
                   static_cast<size_t>(chromaBufStride));
    } else {
        isNV21 = true;
        if (vPixelStride == 1 && uPixelStride == 1) {
            // Contiguous cb/cr planes; the input data was I420/YV12 or similar;
            // copy it into NV21 form
            for (int y = 0; y < (imgHeight + 1) / 2; y++) {
                for (int x = 0; x < (imgWidth + 1) / 2; x++) {
                    chromaBuf[y * chromaBufStride + 2 * x + 0] = bbuf_vIn[y * vRowStride + x];
                    chromaBuf[y * chromaBufStride + 2 * x + 1] = bbuf_uIn[y * uRowStride + x];
                }
            }
        } else {
            // Generic data copying into NV21
            for (int y = 0; y < (imgHeight + 1) / 2; y++) {
                for (int x = 0; x < (imgWidth + 1) / 2; x++) {
                    chromaBuf[y * chromaBufStride + 2 * x + 0] =
                            bbuf_vIn[y * vRowStride + x * vPixelStride];
                    chromaBuf[y * chromaBufStride + 2 * x + 1] =
                            bbuf_uIn[y * uRowStride + x * uPixelStride];
                }
            }
        }
    }
    uint8_t *I420Buff = (uint8_t *) malloc(sizeof(uint8_t) * imgWidth * imgHeight +
                                           2 * (imgWidth + 1) / 2 * (imgHeight + 1) / 2);
    SPtoI420(buf, I420Buff, imgWidth, imgHeight, isNV21);
    jbyteArray ret = env->NewByteArray(imgWidth * imgHeight * 3 / 2);
    env->SetByteArrayRegion(ret, 0, imgWidth * imgHeight * 3 / 2, (jbyte *) I420Buff);
    free(buf);
    free(I420Buff);
    return ret;
}
Adding a function to convert semi-planar to planar:
bool SPtoI420(const uint8_t *src, uint8_t *dst, int width, int height, bool isNV21) {
    if (!src || !dst) {
        return false;
    }

    unsigned int YSize = width * height;
    unsigned int UVSize = (YSize >> 1);

    // NV21: Y..Y + VUVU..VU ; NV12: Y..Y + UVUV..UV
    const uint8_t *pSrcY = src;
    const uint8_t *pSrcUV = src + YSize;

    // I420: Y..Y + U..U + V..V
    uint8_t *pDstY = dst;
    uint8_t *pDstU = dst + YSize;
    uint8_t *pDstV = dst + YSize + (UVSize >> 1);

    // copy Y
    memcpy(pDstY, pSrcY, YSize);

    // de-interleave U and V
    for (int k = 0; k < (UVSize >> 1); k++) {
        if (isNV21) {
            pDstV[k] = pSrcUV[k * 2];     // copy V
            pDstU[k] = pSrcUV[k * 2 + 1]; // copy U
        } else {
            pDstU[k] = pSrcUV[k * 2];     // copy U
            pDstV[k] = pSrcUV[k * 2 + 1]; // copy V
        }
    }
    return true;
}
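On the Java side, the JNI symbol above implies a matching native method declaration in MainActivity. A minimal sketch, assuming the library name matches the target defined in your CMakeLists.txt (the name here is a placeholder, not from the original answer):

// in labs.farzi.camera2previewstream.MainActivity
static {
    System.loadLibrary("native-yuv-to-buffer"); // assumed library/target name
}

// signature matching Java_labs_farzi_camera2previewstream_MainActivity_yuvToBuffer
public native byte[] yuvToBuffer(ByteBuffer yPlane, ByteBuffer uPlane, ByteBuffer vPlane,
                                 int yPixelStride, int yRowStride,
                                 int uPixelStride, int uRowStride,
                                 int vPixelStride, int vRowStride,
                                 int imgWidth, int imgHeight);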
4) Push your buffer to the queue:
private final ImageReader.OnImageAvailableListener mOnGetPreviewListener
        = new ImageReader.OnImageAvailableListener() {
    @Override
    public void onImageAvailable(ImageReader reader) {
        Image image = reader.acquireLatestImage();
        if (image == null)
            return;
        final Image.Plane[] planes = image.getPlanes();
        Image.Plane yPlane = planes[0];
        Image.Plane uPlane = planes[1];
        Image.Plane vPlane = planes[2];
        byte[] mBuffer = yuvToBuffer(yPlane.getBuffer(),
                uPlane.getBuffer(),
                vPlane.getBuffer(),
                yPlane.getPixelStride(),
                yPlane.getRowStride(),
                uPlane.getPixelStride(),
                uPlane.getRowStride(),
                vPlane.getPixelStride(),
                vPlane.getRowStride(),
                image.getWidth(),
                image.getHeight());
        // getTimestamp() is in nanoseconds; queueInputBuffer() expects microseconds
        mQueue.add(new MyData(mBuffer, image.getTimestamp() / 1000, false));
        image.close();
        Log.d("hehe", "onImageAvailable");
    }
};
5) Encode the data and save an H.264 video file (use VLC to play it):
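The callbacks below assume the encoder has already been created and configured, which the original answer does not show. A minimal sketch under stated assumptions: resolution, bitrate, the output file, and the mCallback object are placeholders, and COLOR_FormatYUV420Planar is chosen to match the I420 buffers produced by SPtoI420 (not every device codec supports it, so check the codec's capabilities):

MediaFormat format = MediaFormat.createVideoFormat(MediaFormat.MIMETYPE_VIDEO_AVC, 1280, 720);
format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
        MediaCodecInfo.CodecCapabilities.COLOR_FormatYUV420Planar);
format.setInteger(MediaFormat.KEY_BIT_RATE, 2_000_000);
format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);

MediaCodec codec = MediaCodec.createEncoderByType(MediaFormat.MIMETYPE_VIDEO_AVC);
codec.setCallback(mCallback); // the MediaCodec.Callback implementing the methods below
codec.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
codec.start();

FileOutputStream fos = new FileOutputStream(outputFile); // raw .h264 output, assumed path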
@Override
public void onInputBufferAvailable(@NonNull MediaCodec mc, int inputBufferId) {
    ByteBuffer inputBuffer = mc.getInputBuffer(inputBufferId);
    Log.d(TAG, "onInputBufferAvailable: ");
    // fill inputBuffer with valid data
    MyData data = mQueue.poll();
    if (data != null) {
        if (inputBuffer != null) {
            Log.e(TAG, "onInputBufferAvailable: " + data.getBuffer().length);
            inputBuffer.clear();
            inputBuffer.put(data.getBuffer());
            // if this is the EOS marker, raise the flag so the codec can finish the stream
            int flags = data.isEOS() ? MediaCodec.BUFFER_FLAG_END_OF_STREAM : 0;
            mc.queueInputBuffer(inputBufferId,
                    0,
                    data.getBuffer().length,
                    data.getPresentationTimeUs(),
                    flags);
        }
    } else {
        // no frame available yet; return an empty buffer to the codec
        mc.queueInputBuffer(inputBufferId,
                0,
                0,
                0,
                0);
    }
}
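When recording stops, push a final marker into the queue so the branch above can raise the EOS flag. A small sketch; the method name and how you obtain the last timestamp are assumptions:

// call this once when the user stops recording
private void signalEndOfStream(long lastPresentationTimeUs) {
    mQueue.add(new MyData(new byte[0], lastPresentationTimeUs, true));
}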
@Override
public void onOutputBufferAvailable(@NonNull MediaCodec codec, int index,
                                    @NonNull MediaCodec.BufferInfo info) {
    Log.d(TAG, "onOutputBufferAvailable: ");
    ByteBuffer outputBuffer = codec.getOutputBuffer(index);
    byte[] outData = new byte[info.size];
    if (outputBuffer != null) {
        // honor the offset/size reported by the codec
        outputBuffer.position(info.offset);
        outputBuffer.limit(info.offset + info.size);
        outputBuffer.get(outData);
        try {
            // fos is a FileOutputStream for the raw .h264 output file
            fos.write(outData);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    codec.releaseOutputBuffer(index, false);
}
6) Mux your track in void onOutputBufferAvailable(MediaCodec mc, int outputBufferId, …); the processing is similar to the synchronous-mode examples that you can find on the Internet.
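As a rough sketch of that muxing step under the same callback structure (mMuxer, mTrackIndex, and mMuxerStarted are assumed member variables, and the output path is a placeholder):

// created once, e.g. when the encoder is set up:
// mMuxer = new MediaMuxer("/sdcard/out.mp4", MediaMuxer.OutputFormat.MUXER_OUTPUT_MPEG_4);

@Override
public void onOutputFormatChanged(@NonNull MediaCodec codec, @NonNull MediaFormat format) {
    // the encoder reports its final format (including SPS/PPS) exactly once,
    // before the first encoded buffer; add the track here
    mTrackIndex = mMuxer.addTrack(format);
    mMuxer.start();
    mMuxerStarted = true;
}

@Override
public void onOutputBufferAvailable(@NonNull MediaCodec codec, int index,
                                    @NonNull MediaCodec.BufferInfo info) {
    ByteBuffer outputBuffer = codec.getOutputBuffer(index);
    // skip codec-config buffers: the muxer already got SPS/PPS via addTrack()
    if (outputBuffer != null && mMuxerStarted && info.size > 0
            && (info.flags & MediaCodec.BUFFER_FLAG_CODEC_CONFIG) == 0) {
        mMuxer.writeSampleData(mTrackIndex, outputBuffer, info);
    }
    codec.releaseOutputBuffer(index, false);
    if ((info.flags & MediaCodec.BUFFER_FLAG_END_OF_STREAM) != 0) {
        mMuxer.stop();
        mMuxer.release();
    }
}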
I hope my answer helps you.
Full example code here
Answer 2:
Why don't you try out this sample: https://github.com/googlesamples/android-Camera2Video
I think it will fulfill all of your requirements, and you can always reach out to me if you're unable to relate to the code in the sample mentioned above.
This sample uses the Camera2 API, and the conversion from raw YUV frames that you want can be done with it. So I hope you won't have any issues if you go through the given sample once and use its code for recording MP4 videos in your app.
For instance: a) You will have to implement a CameraDevice.StateCallback to receive events about changes in the camera device's state. Override its methods to set your CameraDevice instance, start the preview, and stop and release the camera.
b) When starting the preview, set up the MediaRecorder to accept the video format (a rough sketch follows this list).
c) Then, set up a CaptureRequest.Builder using createCaptureRequest(CameraDevice.TEMPLATE_RECORD) on your CameraDevice instance.
d) Then, implement a CameraCaptureSession.StateCallback, using the method createCaptureSession(surfaces, new CameraCaptureSession.StateCallback(){}) on your CameraDevice instance, where surfaces is a list consisting of the surface view of your TextureView and the surface of your MediaRecorder instance.
e) Use start() and stop() methods on your MediaRecorder instance to actually start and stop the recording.
f) Lastly, set up and clean up your camera device in onResume() and onPause().
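A rough sketch of steps b) and e); the file path and the encoding values are placeholder assumptions, not taken from the sample:

MediaRecorder recorder = new MediaRecorder();
recorder.setVideoSource(MediaRecorder.VideoSource.SURFACE);
recorder.setOutputFormat(MediaRecorder.OutputFormat.MPEG_4);
recorder.setOutputFile("/sdcard/video.mp4");   // placeholder path
recorder.setVideoEncodingBitRate(10_000_000);
recorder.setVideoFrameRate(30);
recorder.setVideoSize(1920, 1080);
recorder.setVideoEncoder(MediaRecorder.VideoEncoder.H264);
recorder.prepare();
// recorder.getSurface() goes into the createCaptureSession(...) surface list (step d)
recorder.start();
// ... record ...
recorder.stop();
recorder.reset();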
Happy coding.
Source: https://stackoverflow.com/questions/52289534/recording-video-using-mediacodec-with-camera2-api