Create video from screen grabs in Android

小蘑菇 2021-02-01 11:38

I would like to record user interaction in a video that people can then upload to their social media sites.

For example, the Talking Tom Cat android app has a little cam

1 Answer
  • 2021-02-01 11:57

    Use the MediaCodec API with CONFIGURE_FLAG_ENCODE to set it up as an encoder. No ffmpeg required :)

    You've already found how to grab the screen in the other question you linked to, now you just need to feed each captured frame to MediaCodec, setting the appropriate format flags, timestamp, etc.

    EDIT: Sample code for this was hard to find, but here it is, hat tip to Martin Storsjö. Quick API walkthrough:

    MediaFormat inputFormat = MediaFormat.createVideoFormat("video/avc", width, height);
    inputFormat.setInteger(MediaFormat.KEY_BIT_RATE, bitRate);
    inputFormat.setInteger(MediaFormat.KEY_FRAME_RATE, frameRate);
    inputFormat.setInteger(MediaFormat.KEY_COLOR_FORMAT, colorFormat);
    inputFormat.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 75);
    inputFormat.setInteger("stride", stride);
    inputFormat.setInteger("slice-height", sliceHeight);
    
    encoder = MediaCodec.createByCodecName("OMX.TI.DUCATI1.VIDEO.H264E"); // find the encoder name via the media codec list; it is chipset-specific
    
    encoder.configure(inputFormat, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
    encoder.start();
    encoderInputBuffers = encoder.getInputBuffers();
    encoderOutputBuffers = encoder.getOutputBuffers();
    
    byte[] inputFrame = new byte[frameSize];
    
    while ( ... have data ... ) {
        int inputBufIndex = encoder.dequeueInputBuffer(timeout);
    
        if (inputBufIndex >= 0) {
            ByteBuffer inputBuf = encoderInputBuffers[inputBufIndex];
            inputBuf.clear();
    
            // HERE: fill in the input frame in the correct color format, taking strides into account
            // This is an example for I420 (planar YUV 4:2:0)
            for (int row = 0; row < height; row++) {
                for (int col = 0; col < width; col++) {
                    inputFrame[ row * stride + col ] = ...; // Y[row][col]
                    inputFrame[ stride * sliceHeight + (row/2) * (stride/2) + col/2 ] = ...; // U[row][col]
                    inputFrame[ stride * sliceHeight * 5/4 + (row/2) * (stride/2) + col/2 ] = ...; // V[row][col]
                }
            }
    
            inputBuf.put(inputFrame);
    
            encoder.queueInputBuffer(
                inputBufIndex,
                0 /* offset */,
                inputFrame.length /* size */,
                presentationTimeUs,
                0);
        }
    
        int outputBufIndex = encoder.dequeueOutputBuffer(info, timeout);
    
        if (outputBufIndex >= 0) {
            ByteBuffer outputBuf = encoderOutputBuffers[outputBufIndex];
    
            // HERE: read the encoded data from outputBuf
    
            encoder.releaseOutputBuffer(
                outputBufIndex, 
                false);
        }
        else {
            // Handle change of buffers, format, etc
        }
    }
    
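    The index arithmetic in the fill loop above is easy to get wrong, so here is the same I420 layout factored into a small plain-Java helper. This is a sketch with names of my own choosing (it is not an Android API); it only encodes the offset math implied by stride and sliceHeight.

```java
// Computes byte offsets into an I420 (planar YUV 4:2:0) buffer laid out
// with an encoder-reported stride and sliceHeight. Class and method names
// are illustrative, not part of any Android API.
public class I420Layout {
    final int stride;      // bytes per luma row (>= width)
    final int sliceHeight; // rows allocated to the luma plane (>= height)

    public I420Layout(int stride, int sliceHeight) {
        this.stride = stride;
        this.sliceHeight = sliceHeight;
    }

    // Offset of luma sample Y[row][col].
    public int yOffset(int row, int col) {
        return row * stride + col;
    }

    // Offset of the U sample covering (row, col); the U plane starts after
    // the luma plane and has half the stride.
    public int uOffset(int row, int col) {
        return stride * sliceHeight + (row / 2) * (stride / 2) + col / 2;
    }

    // The V plane starts after the U plane, i.e. at 5/4 of the luma plane size.
    public int vOffset(int row, int col) {
        return stride * sliceHeight * 5 / 4 + (row / 2) * (stride / 2) + col / 2;
    }
}
```

    Note that both chroma offsets depend on row/2 and col/2, since one U and one V sample cover a 2x2 block of luma samples.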

    There are also some open issues.

    EDIT: You'd feed the data in as a byte buffer in one of the supported pixel formats, for example I420 or NV12. There is unfortunately no perfect way of determining which formats would work on a particular device; however it is typical for the same formats you can get from the camera to work with the encoder.
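    On a real device you would read the encoder's supported formats from MediaCodecInfo.getCapabilitiesForType(mime).colorFormats; the selection logic itself is plain Java. A minimal sketch of that selection, assuming you prefer I420 and fall back to NV12 (the constants 19 and 21 are the values Android's CodecCapabilities uses for COLOR_FormatYUV420Planar and COLOR_FormatYUV420SemiPlanar):

```java
// Picks a usable input color format from an encoder's reported list,
// preferring planar I420, then semi-planar NV12. The integer constants
// mirror Android's MediaCodecInfo.CodecCapabilities values; the class
// itself is an illustrative sketch, not an Android API.
public class ColorFormatPicker {
    static final int COLOR_FormatYUV420Planar = 19;     // I420
    static final int COLOR_FormatYUV420SemiPlanar = 21; // NV12

    // Returns the first preferred format the encoder supports, or -1 if
    // neither is offered (you would then need another conversion path).
    public static int pick(int[] supportedFormats) {
        int[] preferred = { COLOR_FormatYUV420Planar, COLOR_FormatYUV420SemiPlanar };
        for (int want : preferred) {
            for (int have : supportedFormats) {
                if (have == want) return want;
            }
        }
        return -1;
    }
}
```

    Whichever format you pick must match both the KEY_COLOR_FORMAT you set on the MediaFormat and the byte layout you write into the input buffer.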
