How to convert android.media.Image to bitmap object?

2020-12-02 23:34

In Android, I get an Image object from this camera tutorial: https://inducesmile.com/android/android-camera2-api-example-tutorial/. But I now want to loop through its pixels.

5 Answers
  • 2020-12-02 23:57

    YuvToRgbConverter is useful for converting an Image to a Bitmap.

    https://github.com/android/camera-samples/blob/master/Camera2Basic/utils/src/main/java/com/example/android/camera/utils/YuvToRgbConverter.kt

    Usage sample (the converter is constructed once with a Context and reused for every frame):

       // yuvToRgbConverter is created once, e.g. YuvToRgbConverter(context)
       val bmp = Bitmap.createBitmap(image.width, image.height, Bitmap.Config.ARGB_8888)
       yuvToRgbConverter.yuvToRgb(image, bmp)
    
  • 2020-12-03 00:04

    https://docs.oracle.com/javase/1.5.0/docs/api/java/nio/ByteBuffer.html#get%28byte[]%29

    According to the Java docs, the buffer.get method transfers bytes from this buffer into the given destination array. An invocation of the form src.get(a) behaves in exactly the same way as the invocation

     src.get(a, 0, a.length) 
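
    Applied to this question, that equivalence means you can copy a whole Image plane in one call. A minimal sketch, assuming image is the android.media.Image obtained from the reader:

        ByteBuffer src = image.getPlanes()[0].getBuffer();
        byte[] a = new byte[src.remaining()];
        src.get(a);   // same as src.get(a, 0, a.length)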
    
  • 2020-12-03 00:04

    Actually you have two questions in one: 1) How do you loop through android.media.Image pixels? 2) How do you convert android.media.Image to a Bitmap?

    The first is easy. Note that the Image object you get from the camera is just a YUV frame, where the Y and U+V components live in different planes. In many image-processing cases you need only the Y plane, i.e. the grayscale part of the image. To get it I suggest code like this:

        // grab the whole Y plane; allocate for its full size (about rowStride * height), not a single row
        Image.Plane yPlane = image.getPlanes()[0];
        ByteBuffer yBuffer = yPlane.getBuffer();
        byte[] yImage = new byte[yBuffer.remaining()];
        yBuffer.get(yImage);
    

    The yImage byte array now holds the gray pixels of the frame. In the same manner you can get the U+V parts too. Note that they can come U first and V after, or V first and then U, and they may be interleaved (which is the common case with the Camera2 API), so you get UVUV.... A sketch of indexing individual luma pixels follows below.
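
    To loop over the pixels of the Y plane you have to respect the row stride (which may be larger than the width) and the pixel stride. A minimal sketch, assuming image is the android.media.Image from the camera:

        Image.Plane yPlane = image.getPlanes()[0];
        ByteBuffer yBuffer = yPlane.getBuffer();
        int rowStride = yPlane.getRowStride();
        int pixelStride = yPlane.getPixelStride();   // usually 1 for the Y plane
        for (int y = 0; y < image.getHeight(); y++) {
            for (int x = 0; x < image.getWidth(); x++) {
                int luma = yBuffer.get(y * rowStride + x * pixelStride) & 0xFF;   // gray value 0..255
                // ... use luma ...
            }
        }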

    For debug purposes, I often write the frame to a file and open it with the Vooya app (Linux) to check the format.

    The second question is a little more complex. To get a Bitmap object I found a code example in the TensorFlow project here. The most interesting function for you is convertImageToBitmap, which returns the RGB values.

    To convert them to an actual Bitmap, do the following:

      int[] cachedRgbBytes = null;              // reusable output buffer of ARGB pixels
      byte[][] cachedYuvBytes = new byte[3][];  // reusable per-plane scratch buffers (layout assumed by the helper)
      cachedRgbBytes = ImageUtils.convertImageToBitmap(image, cachedRgbBytes, cachedYuvBytes);
      Bitmap rgbFrameBitmap = Bitmap.createBitmap(image.getWidth(), image.getHeight(), Bitmap.Config.ARGB_8888);
      rgbFrameBitmap.setPixels(cachedRgbBytes, 0, image.getWidth(), 0, 0, image.getWidth(), image.getHeight());
    

    Note: there are more options for converting YUV to RGB frames, so if you only need the pixel values, a Bitmap may not be the best choice, as it can consume more memory than you need just to get the RGB values. One framework-only alternative is sketched below.
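
    For example, if you have already interleaved the chroma planes into NV21 order (the nv21Bytes, width and height below are assumptions, not part of the original code), the framework's YuvImage class can do the conversion via an intermediate JPEG:

        YuvImage yuvImage = new YuvImage(nv21Bytes, ImageFormat.NV21, width, height, null);
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        yuvImage.compressToJpeg(new Rect(0, 0, width, height), 100, out);   // quality 100
        byte[] jpegBytes = out.toByteArray();
        Bitmap bmp = BitmapFactory.decodeByteArray(jpegBytes, 0, jpegBytes.length);

    This route is simple but lossy and relatively slow, so for per-frame processing the direct conversion above is usually preferable.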

  • 2020-12-03 00:04

    1-Store the path to the image file as a string variable. To decode the content of an image file, you need the file path stored within your code as a string. Use the following syntax as a guide:

    String picPath = "/mnt/sdcard/Pictures/mypic.jpg";
    

    2-Create a Bitmap object using BitmapFactory:

    Bitmap picBitmap = BitmapFactory.decodeFile(picPath);
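
    decodeFile returns null if the path does not exist or is not a decodable image, so it is worth checking before using the result (the imageView below is a hypothetical ImageView from your layout):

        if (picBitmap != null) {
            imageView.setImageBitmap(picBitmap);
        }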
    
  • 2020-12-03 00:11

    If you want to loop through all the pixels, you first need to convert the Image to a Bitmap object. From what I see in the tutorial's source code, the ImageReader delivers a JPEG-encoded Image (a single plane of JPEG bytes), so you can decode those bytes directly into a Bitmap.

        Image image = reader.acquireLatestImage();
        ByteBuffer buffer = image.getPlanes()[0].getBuffer();   // JPEG data lives in a single plane
        byte[] bytes = new byte[buffer.remaining()];
        buffer.get(bytes);
        Bitmap bitmapImage = BitmapFactory.decodeByteArray(bytes, 0, bytes.length, null);
    

    Once you have the Bitmap object, you can iterate through all of its pixels, for example as in the sketch below.
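
    A minimal sketch of that loop, reading the pixels back as ARGB ints (using android.graphics.Color to split the channels):

        int width = bitmapImage.getWidth();
        int height = bitmapImage.getHeight();
        int[] pixels = new int[width * height];
        bitmapImage.getPixels(pixels, 0, width, 0, 0, width, height);   // copy every pixel as an ARGB int
        for (int i = 0; i < pixels.length; i++) {
            int r = Color.red(pixels[i]);
            int g = Color.green(pixels[i]);
            int b = Color.blue(pixels[i]);
            // ... use r, g, b ...
        }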
