How to save a YUV_420_888 image?

梦如初夏  2021-01-19 06:07

I built my own camera app with the camera2 API. I started with the sample "camera2Raw" and added YUV_420_888 support instead of JPEG. But now I am wondering how I can save the YUV_420_888 images.

2 Answers
  •  时光说笑    2021-01-19 06:47

    Typically, you don't save a YUV image as a file, and as such there are no built-in functions to do so. Moreover, there is no standard image file encoding for such YUV data. YUV is typically an intermediate form of data that is convenient for the camera pipeline and for conversion into other formats afterwards.
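
    (Side note, not from the original answer: if the goal is simply to end up with a viewable file, a common alternative is to repack the three planes into NV21 and let android.graphics.YuvImage compress the frame to JPEG. The sketch below is a minimal illustration of that idea; the class and method names (YuvSaver, saveAsJpeg, toNv21) are placeholders, and the code relies only on what the YUV_420_888 format guarantees: a Y pixel stride of 1 and half-resolution chroma planes.)

    import android.graphics.ImageFormat;
    import android.graphics.Rect;
    import android.graphics.YuvImage;
    import android.media.Image;

    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.nio.ByteBuffer;

    public class YuvSaver {

        // Repack a YUV_420_888 Image as NV21 and compress it to a JPEG file.
        public static void saveAsJpeg(Image image, String path, int quality) throws IOException {
            byte[] nv21 = toNv21(image);
            YuvImage yuv = new YuvImage(nv21, ImageFormat.NV21,
                    image.getWidth(), image.getHeight(), null);
            try (FileOutputStream out = new FileOutputStream(path)) {
                yuv.compressToJpeg(new Rect(0, 0, image.getWidth(), image.getHeight()),
                        quality, out);
            }
        }

        private static byte[] toNv21(Image image) {
            int width = image.getWidth();
            int height = image.getHeight();
            byte[] nv21 = new byte[width * height * 3 / 2];

            // Copy the Y plane row by row so any row-stride padding is dropped.
            ByteBuffer yBuf = image.getPlanes()[0].getBuffer();
            int yRowStride = image.getPlanes()[0].getRowStride();
            int pos = 0;
            for (int row = 0; row < height; row++) {
                yBuf.position(row * yRowStride);
                yBuf.get(nv21, pos, width);
                pos += width;
            }

            // Interleave V then U (NV21 order) at quarter resolution, using the
            // pixel and row strides reported for the chroma planes.
            ByteBuffer uBuf = image.getPlanes()[1].getBuffer();
            ByteBuffer vBuf = image.getPlanes()[2].getBuffer();
            int uvRowStride = image.getPlanes()[1].getRowStride();
            int uvPixelStride = image.getPlanes()[1].getPixelStride();
            for (int row = 0; row < height / 2; row++) {
                for (int col = 0; col < width / 2; col++) {
                    int offset = row * uvRowStride + col * uvPixelStride;
                    nv21[pos++] = vBuf.get(offset);
                    nv21[pos++] = uBuf.get(offset);
                }
            }
            return nv21;
        }
    }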

    If you're really intent on this, you can write the buffers for the three channels as unencoded byte data to a file and then open it elsewhere and reconstruct it. Make sure you also save the other important information, such as the row and pixel strides. This is what I do. Here are the relevant lines from a file-format switch statement I use, along with comments on the reasoning:

    File file = new File(SAVE_DIR, mFilename);
    FileOutputStream output = null;
    ByteBuffer buffer;
    byte[] bytes;
    boolean success = false;
    
    switch (mImage.getFormat()){
    
        (... other image data format cases ...)
    
        // YUV_420_888 images are saved in a format of our own devising. First write out the
        // information needed to reconstruct the image, as four ints: width, height, the U/V-plane
        // pixel stride, and the U/V-plane row stride. (U and V share the same strides; the Y plane
        // always has a pixel stride of 1.) Then directly place the three planes of byte data,
        // uncompressed.
        //
        // Note that the YUV_420_888 format does not guarantee the last pixel makes it into these
        // planes, so some case handling is necessary at the decoding end, based on the number of
        // bytes present. An alternative would be to also write, before each plane, how many bytes
        // follow in that plane. Perhaps in the future.
        case ImageFormat.YUV_420_888:
            // "prebuffer" simply contains the meta information about the following planes.
            ByteBuffer prebuffer = ByteBuffer.allocate(16);
            prebuffer.putInt(mImage.getWidth())
            .putInt(mImage.getHeight())
            .putInt(mImage.getPlanes()[1].getPixelStride())
            .putInt(mImage.getPlanes()[1].getRowStride());
    
            try {
                output = new FileOutputStream(file);
                output.write(prebuffer.array()); // write meta information to file
                // Now write the actual planes.
                for (int i = 0; i<3; i++){
                    buffer = mImage.getPlanes()[i].getBuffer();
                    bytes = new byte[buffer.remaining()]; // makes byte array large enough to hold image
                    buffer.get(bytes); // copies image from buffer to byte array
                    output.write(bytes);    // write the byte array to file
                }
                success = true;
            } catch (FileNotFoundException e) {
                e.printStackTrace();
            } catch (IOException e) {
                e.printStackTrace();
            } finally {
                Log.v(appFragment.APP_TAG,"Closing image to free buffer.");
                mImage.close(); // close this to free up buffer for other images
                if (null != output) {
                    try {
                        output.close();
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }
            break;
    }
    

    Because exactly how the data is interleaved is up to the device, it can be challenging to extract the Y, U, and V channels from this saved information later. To see a MATLAB implementation of how to read and extract a file like this, see here.
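
    (Also not from the original answer: as a rough plain-Java illustration of the decoding side, the sketch below reads the header and plane data back. The class name YuvDumpReader is a placeholder, and the assumptions are flagged in the comments: the Y row stride is not stored in the header, and the exact U/V plane lengths depend on the device layout, which is exactly the case analysis the answer alludes to.)

    import java.io.DataInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    public class YuvDumpReader {

        public static void main(String[] args) throws IOException {
            try (DataInputStream in = new DataInputStream(new FileInputStream(args[0]))) {
                // Header: four ints. DataInputStream.readInt() is big-endian, which
                // matches ByteBuffer's default byte order used when the file was written.
                int width = in.readInt();
                int height = in.readInt();
                int uvPixelStride = in.readInt();
                int uvRowStride = in.readInt();

                // Y plane: pixel stride is always 1. The header does not record the
                // Y row stride, so this sketch assumes it equals the width (often,
                // but not always, true).
                byte[] y = new byte[width * height];
                in.readFully(y);

                // The rest of the file is the U plane followed by the V plane. Their
                // exact lengths depend on the device layout (the last pixel may be
                // omitted), so splitting the remainder in half is only a first
                // approximation; a robust reader would branch on rest.length here.
                // readAllBytes() requires Java 9 or newer.
                byte[] rest = in.readAllBytes();
                int half = rest.length / 2;

                // Example lookup of one chroma sample using the recorded strides,
                // clamping the offset in case the dump stops short of the last pixel.
                int row = height / 4, col = width / 4;
                int offset = Math.min(row * uvRowStride + col * uvPixelStride, half - 1);
                int u = rest[offset] & 0xFF;
                int v = rest[half + offset] & 0xFF;
                System.out.println(width + "x" + height + ", sample U=" + u + " V=" + v);
            }
        }
    }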
