Recording video on Android using JavaCV (Updated 2014 02 17)

难免孤独 · asked 2021-02-04 19:27

I'm trying to record a video on Android using the JavaCV library. I need to record the video at 640x360.

I have installed everything as described in the README.txt file, and I …

3 Answers
  • 2021-02-04 19:43

    @Fabio Your code appears to be from this open-source Android Touch-To-Record library, which I have also used. Here is my modified version of the onPreviewFrame method, inside the CameraPreview class, which transposes and resizes each captured frame; without this, the captured video played sideways (the app was locked to portrait) and had a greenish output.

    I defined "yuvIplImage" as follows in my setCameraParams() method:

    IplImage yuvIplImage = IplImage.create(mPreviewSize.height, mPreviewSize.width, opencv_core.IPL_DEPTH_8U, 2);
    

    Also initialize your videoRecorder object as follows, passing the width as the height and vice versa:

    //call initVideoRecorder() method like this to initialize videoRecorder object of FFmpegFrameRecorder class.
    initVideoRecorder(strVideoPath, mPreview.getPreviewSize().height, mPreview.getPreviewSize().width, recorderParameters);
    
    //method implementation
    public void initVideoRecorder(String videoPath, int width, int height, RecorderParameters recorderParameters)
    {
        Log.e(TAG, "initVideoRecorder");
    
        videoRecorder = new FFmpegFrameRecorder(videoPath, width, height, 1);
        videoRecorder.setFormat(recorderParameters.getVideoOutputFormat());
        videoRecorder.setSampleRate(recorderParameters.getAudioSamplingRate());
        videoRecorder.setFrameRate(recorderParameters.getVideoFrameRate());
        videoRecorder.setVideoCodec(recorderParameters.getVideoCodec());
        videoRecorder.setVideoQuality(recorderParameters.getVideoQuality());
        videoRecorder.setAudioQuality(recorderParameters.getVideoQuality());
        videoRecorder.setAudioCodec(recorderParameters.getAudioCodec());
        videoRecorder.setVideoBitrate(1000000);
        videoRecorder.setAudioBitrate(64000);
    }
    

    This is my onPreviewFrame() method:

    @Override
    public void onPreviewFrame(byte[] data, Camera camera)
    {
    
        long frameTimeStamp = 0L;
    
        if(FragmentCamera.mAudioTimestamp == 0L && FragmentCamera.firstTime > 0L)
        {
            frameTimeStamp = 1000L * (System.currentTimeMillis() - FragmentCamera.firstTime);
        }
        else if(FragmentCamera.mLastAudioTimestamp == FragmentCamera.mAudioTimestamp)
        {
            frameTimeStamp = FragmentCamera.mAudioTimestamp + FragmentCamera.frameTime;
        }
        else
        {
            long l2 = (System.nanoTime() - FragmentCamera.mAudioTimeRecorded) / 1000L;
            frameTimeStamp = l2 + FragmentCamera.mAudioTimestamp;
            FragmentCamera.mLastAudioTimestamp = FragmentCamera.mAudioTimestamp;
        }
    
        synchronized(FragmentCamera.mVideoRecordLock)
        {
            if(FragmentCamera.recording && FragmentCamera.rec && lastSavedframe != null && lastSavedframe.getFrameBytesData() != null && yuvIplImage != null)
            {
                FragmentCamera.mVideoTimestamp += FragmentCamera.frameTime;
    
                if(lastSavedframe.getTimeStamp() > FragmentCamera.mVideoTimestamp)
                {
                    FragmentCamera.mVideoTimestamp = lastSavedframe.getTimeStamp();
                }
    
                try
                {
                    yuvIplImage.getByteBuffer().put(lastSavedframe.getFrameBytesData());
    
                    IplImage bgrImage = IplImage.create(mPreviewSize.width, mPreviewSize.height, opencv_core.IPL_DEPTH_8U, 4);// In my case, mPreviewSize.width = 1280 and mPreviewSize.height = 720
                    IplImage transposed = IplImage.create(mPreviewSize.height, mPreviewSize.width, yuvIplImage.depth(), 4);
                    IplImage squared = IplImage.create(mPreviewSize.height, mPreviewSize.height, yuvIplImage.depth(), 4);
    
                    int[] _temp = new int[mPreviewSize.width * mPreviewSize.height];
    
                    Util.YUV_NV21_TO_BGR(_temp, data, mPreviewSize.width,  mPreviewSize.height);
    
                    bgrImage.getIntBuffer().put(_temp);
    
                    opencv_core.cvTranspose(bgrImage, transposed);
                    opencv_core.cvFlip(transposed, transposed, 1);
    
                    opencv_core.cvSetImageROI(transposed, opencv_core.cvRect(0, 0, mPreviewSize.height, mPreviewSize.height));
                    opencv_core.cvCopy(transposed, squared, null);
                    opencv_core.cvResetImageROI(transposed);
    
                    videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());
                    videoRecorder.record(squared);
                }
                catch(com.googlecode.javacv.FrameRecorder.Exception e)
                {
                    e.printStackTrace();
                }
            }
    
            lastSavedframe = new SavedFrames(data, frameTimeStamp);
        }
    }
    

    This code uses a method, YUV_NV21_TO_BGR, which I found at this link.
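For reference, a typical NV21-to-BGR conversion of the kind YUV_NV21_TO_BGR performs is sketched below. The class and method names here are hypothetical, and the exact implementation behind the link may differ in its rounding constants and channel packing; this is the widely used integer-math variant.

```java
// Hypothetical sketch of an NV21 -> packed-BGR conversion. NV21 stores the
// full-resolution Y plane first, followed by interleaved V/U samples at
// quarter resolution (one V/U pair per 2x2 block of pixels).
public final class Nv21ToBgr {

    /** Converts an NV21 byte buffer into packed ints (B in the low byte). */
    public static void yuvNv21ToBgr(int[] bgr, byte[] yuv, int width, int height) {
        final int frameSize = width * height;
        for (int j = 0, yp = 0; j < height; j++) {
            int uvp = frameSize + (j >> 1) * width; // start of this row's V/U pairs
            int u = 0, v = 0;
            for (int i = 0; i < width; i++, yp++) {
                int y = (0xff & yuv[yp]) - 16;
                if (y < 0) y = 0;
                if ((i & 1) == 0) {          // V/U shared by each 2x2 pixel block
                    v = (0xff & yuv[uvp++]) - 128;
                    u = (0xff & yuv[uvp++]) - 128;
                }
                int y1192 = 1192 * y;        // fixed-point YUV -> RGB coefficients
                int r = y1192 + 1634 * v;
                int g = y1192 - 833 * v - 400 * u;
                int b = y1192 + 2066 * u;
                // Clamp the 18-bit intermediates before shifting down to 8 bits.
                if (r < 0) r = 0; else if (r > 262143) r = 262143;
                if (g < 0) g = 0; else if (g > 262143) g = 262143;
                if (b < 0) b = 0; else if (b > 262143) b = 262143;
                // Pack so the ints can be written into a 4-channel IplImage
                // via getIntBuffer(), as in the code above.
                bgr[yp] = 0xff000000 | ((r >> 10) << 16) | ((g >> 10) << 8) | (b >> 10);
            }
        }
    }
}
```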

    Basically this method resolves what I call "the Green Devil problem on Android", just like yours. I had the same issue and wasted almost 3-4 days on it. Before I added the YUV_NV21_TO_BGR method, whenever I simply took the transpose of the YuvIplImage, or more importantly applied any combination of transpose and flip (with or without resizing), the resulting video had a greenish output. The YUV_NV21_TO_BGR method saved the day. Thanks to @David Han from the Google Groups thread above.
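The cvTranspose plus cvFlip(..., 1) pair in the code above amounts to a 90-degree clockwise rotation. A minimal sketch on a plain int matrix (a hypothetical helper, not part of JavaCV) shows the equivalence:

```java
// Hypothetical sketch: transpose followed by a horizontal flip rotates a
// matrix 90 degrees clockwise, which is what cvTranspose + cvFlip(dst, dst, 1)
// do to the frame in the answer above.
public final class Rotate90 {
    public static int[][] rotateCw(int[][] src) {
        int rows = src.length, cols = src[0].length;
        int[][] dst = new int[cols][rows];
        // Transpose would give dst[c][r] = src[r][c]; reversing each row of
        // the transposed matrix (the horizontal flip) yields the combined map:
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                dst[c][rows - 1 - r] = src[r][c];
        return dst;
    }
}
```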

  • 2021-02-04 19:51

    Your camera most likely provides 640x480 preview frames. The fix is to crop each frame before it is recorded, like this:

    @Override
    public void onPreviewFrame(byte[] data, Camera camera) {
        /* get video data */
        if (yuvIplimage != null && recording) {
            ByteBuffer bb = yuvIplimage.getByteBuffer(); // resets the buffer
            final int startY = imageWidth*(imageHeight-finalImageHeight)/2;
            final int lenY = imageWidth*finalImageHeight;
            bb.put(data, startY, lenY);
            final int startVU = imageWidth*imageHeight + imageWidth*(imageHeight-finalImageHeight)/4;
            final int lenVU = imageWidth* finalImageHeight/2;
            bb.put(data, startVU, lenVU);
    
    //      Log.v(LOG_TAG, "Writing Frame");
            try {
                long t = 1000 * (System.currentTimeMillis() - startTime);
                if (t > recorder.getTimestamp()) {
                    recorder.setTimestamp(t);
                 }
                 recorder.record(yuvIplimage);
            } catch (FFmpegFrameRecorder.Exception e) {
                 Log.e(LOG_TAG, "problem with recorder():", e);
            }
        }
    }
    

    The preview frame is in the semi-planar NV21 (YVU) format: 640x480 luminance (Y) bytes, followed by 320x240 pairs of chroma (V and U) bytes. We copy into yuvIplimage first the relevant Y rows, and after that the relevant VU pairs. Note that this is easy and fast because the width you want is the same as the native width.

    Your camera and camera view should be initialized for 640x480, and the recorder for 640x360. Note that this efficient cropping is only possible when imageWidth == finalImageWidth.
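The offsets used in the crop above can be worked out explicitly. This hypothetical helper reproduces the same arithmetic; for the 640x480 to 640x360 case it skips 60 rows of Y at the top (and, implicitly, 60 at the bottom) plus the matching half-height band of VU pairs:

```java
// Hypothetical helper mirroring the crop arithmetic in the answer above.
// NV21 layout: width*height Y bytes, then width*height/2 interleaved VU bytes.
public final class Nv21Crop {
    // Byte offset of the first Y row to keep: skip (imageHeight - finalImageHeight)/2
    // full-width rows so the crop is vertically centered.
    public static int startY(int imageWidth, int imageHeight, int finalImageHeight) {
        return imageWidth * (imageHeight - finalImageHeight) / 2;
    }
    // Number of Y bytes to copy: one byte per kept pixel.
    public static int lenY(int imageWidth, int finalImageHeight) {
        return imageWidth * finalImageHeight;
    }
    // Byte offset of the first VU pair to keep: the VU plane starts after the
    // Y plane, and each VU row covers two image rows (hence the /4).
    public static int startVU(int imageWidth, int imageHeight, int finalImageHeight) {
        return imageWidth * imageHeight
                + imageWidth * (imageHeight - finalImageHeight) / 4;
    }
    // Number of VU bytes to copy: half a byte per kept pixel.
    public static int lenVU(int imageWidth, int finalImageHeight) {
        return imageWidth * finalImageHeight / 2;
    }
}
```

Together the two copied runs total 640 * 360 * 3 / 2 bytes, exactly the size of a 640x360 NV21 frame, which is why a single contiguous put() per plane suffices.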

    FIX: it turns out that IplImage.getByteBuffer() resets the buffer, so the fix is to use a temporary bb object, as shown.

    Note that you will probably want to overlay the preview with a frame that hides the margins you crop this way: these manipulations only change the recorded frames, not the CameraView.

  • 2021-02-04 20:03

    Use this link to resolve the issue. The issue is with the rotation of the image; the YUV image handling is already done there.
