Question
In my application, we need to display video frames received from a server in our Android application.
The server sends video data at 50 frames per second, encoded in WebM, i.e. using libvpx to encode and decode the images.
After decoding, libvpx yields YUV data, which we can display over the image layout.
The current implementation is something like this: the JNI / native C++ code converts the YUV data to RGB data, and in the Android framework we call
public Bitmap createImgae(byte[] bits, int width, int height, int scan) {
    Bitmap bitmap = null;
    System.out.println("video: creating bitmap");
    //try {
    bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
    bitmap.copyPixelsFromBuffer(ByteBuffer.wrap(bits));
    //} catch (OutOfMemoryError ex) {
    //}
    System.out.println("video: bitmap created");
    return bitmap;
}
to create the bitmap image, and display it in the ImageView using the following code:
img = createImgae(imgRaw, imgInfo[0], imgInfo[1], 1);
if (img != null && !img.isRecycled()) {
    iv.setImageBitmap(img);
    //img.recycle();
    img = null;
    System.out.println("video: image displayed");
}
My query is: overall, this function takes approx. 40 ms. Is there any way to optimize it?
1 -- Is there any way to display the YUV data in the ImageView directly?
2 -- Is there any other way to create an image (Bitmap) from RGB data?
3 -- I believe I am creating a new image every time, but I suppose I should create the bitmap only once and supply a new buffer as and when frames are received (see the sketch below).
Please share your views.
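On point 3, a minimal sketch of the reuse idea, assuming every frame has the same dimensions and arrives as RGBA bytes matching the ARGB_8888 buffer layout (FrameDisplay and onFrame are illustrative names, not from the original code):

import java.nio.ByteBuffer;
import android.graphics.Bitmap;
import android.widget.ImageView;

class FrameDisplay {
    private final ImageView iv;
    private final Bitmap frameBitmap; // allocated once, refilled per frame

    FrameDisplay(ImageView iv, int width, int height) {
        this.iv = iv;
        this.frameBitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
        iv.setImageBitmap(frameBitmap);
    }

    // Call on the UI thread with each decoded RGBA frame.
    void onFrame(byte[] rgbaBytes) {
        frameBitmap.copyPixelsFromBuffer(ByteBuffer.wrap(rgbaBytes));
        iv.invalidate(); // same Bitmap object, so force the ImageView to redraw
    }
}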
Answer 1:
The following code may solve your problem, and it may take less time on YUV-format data because the YuvImage class comes with the Android SDK.
You can try this:
ByteArrayOutputStream out = new ByteArrayOutputStream();
YuvImage yuvImage = new YuvImage(data, ImageFormat.NV21, width, height, null);
yuvImage.compressToJpeg(new Rect(0, 0, width, height), 50, out);
byte[] imageBytes = out.toByteArray();
Bitmap image = BitmapFactory.decodeByteArray(imageBytes, 0, imageBytes.length);
iv.setImageBitmap(image);
or
void yourFunction(byte[] data, int mWidth, int mHeight) {
    int[] mIntArray = new int[mWidth * mHeight];

    // Decode the YUV data into an array of ARGB pixels
    decodeYUV420SP(mIntArray, data, mWidth, mHeight);

    // Build the bitmap from the decoded pixels
    Bitmap bmp = Bitmap.createBitmap(mIntArray, mWidth, mHeight, Bitmap.Config.ARGB_8888);

    // Display the bitmap
    iv.setImageBitmap(bmp);
}
public static void decodeYUV420SP(int[] rgba, byte[] yuv420sp, int width, int height) {
    final int frameSize = width * height;

    for (int j = 0, yp = 0; j < height; j++) {
        int uvp = frameSize + (j >> 1) * width, u = 0, v = 0;
        for (int i = 0; i < width; i++, yp++) {
            int y = (0xff & ((int) yuv420sp[yp])) - 16;
            if (y < 0)
                y = 0;
            if ((i & 1) == 0) {
                // NV21 interleaves V before U in the chroma plane
                v = (0xff & yuv420sp[uvp++]) - 128;
                u = (0xff & yuv420sp[uvp++]) - 128;
            }

            // Fixed-point YUV -> RGB; r/g/b end up in the range 0..262143 (18 bits)
            int y1192 = 1192 * y;
            int r = (y1192 + 1634 * v);
            int g = (y1192 - 833 * v - 400 * u);
            int b = (y1192 + 2066 * u);

            if (r < 0)
                r = 0;
            else if (r > 262143)
                r = 262143;
            if (g < 0)
                g = 0;
            else if (g > 262143)
                g = 262143;
            if (b < 0)
                b = 0;
            else if (b > 262143)
                b = 262143;

            // Pack as ARGB, which is what Bitmap.createBitmap(int[], ...) expects;
            // each channel is shifted down by 10 bits into 0..255.
            // (The original post had a buggy RGBA packing here, with red shifted
            // into the alpha byte and "| 0xff00" instead of "& 0xff00".)
            rgba[yp] = 0xff000000 | ((r << 6) & 0xff0000) | ((g >> 2) & 0xff00)
                    | ((b >> 10) & 0xff);
        }
    }
}
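Note that both snippets above assume NV21 input (a full Y plane followed by interleaved V/U). libvpx normally returns planar I420 (Y, then U, then V), so a repack is needed first. A minimal sketch, ignoring row-stride padding and assuming even dimensions (i420ToNv21 is an illustrative helper, not part of either API):

static byte[] i420ToNv21(byte[] y, byte[] u, byte[] v, int width, int height) {
    byte[] nv21 = new byte[width * height * 3 / 2];
    System.arraycopy(y, 0, nv21, 0, width * height);
    int chromaCount = (width / 2) * (height / 2);
    for (int i = 0, o = width * height; i < chromaCount; i++) {
        nv21[o++] = v[i]; // NV21 stores V first ...
        nv21[o++] = u[i]; // ... then U, for each 2x2 block
    }
    return nv21;
}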
Answer 2:
Create a bitmap once, after getting the width and height, in onCreate:
editedBitmap = Bitmap.createBitmap(widthPreview, heightPreview,
android.graphics.Bitmap.Config.ARGB_8888);
And in onPreviewFrame:
int[] rgbData = decodeGreyscale(aNv21Byte, widthPreview, heightPreview);
editedBitmap.setPixels(rgbData, 0, widthPreview, 0, 0, widthPreview, heightPreview);
And the greyscale decoder itself:
private int[] decodeGreyscale(byte[] nv21, int width, int height) {
    int pixelCount = width * height;
    int[] out = new int[pixelCount];
    for (int i = 0; i < pixelCount; ++i) {
        int luminance = nv21[i] & 0xFF;
        // Pack ARGB directly; no need to create a Color object per pixel:
        // out[i] = Color.argb(0xFF, luminance, luminance, luminance);
        out[i] = 0xff000000 | luminance << 16 | luminance << 8 | luminance;
    }
    return out;
}
And a bonus, for rotating the front-camera preview:
Matrix matrix = new Matrix(); // declaration missing in the original snippet
if (cameraId == CameraInfo.CAMERA_FACING_FRONT) {
    matrix.setRotate(270F);
}
finalBitmap = Bitmap.createBitmap(editedBitmap, 0, 0, widthPreview, heightPreview, matrix, true);
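Put together, the per-frame path looks roughly like this (a sketch; imageView is an assumed field, while editedBitmap, widthPreview and heightPreview are the fields initialized in onCreate above):

@Override
public void onPreviewFrame(byte[] data, Camera camera) {
    // Reuse the Bitmap allocated in onCreate; only its pixels change per frame.
    int[] rgbData = decodeGreyscale(data, widthPreview, heightPreview);
    editedBitmap.setPixels(rgbData, 0, widthPreview, 0, 0, widthPreview, heightPreview);
    imageView.setImageBitmap(editedBitmap);
}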
Answer 3:
Based on the accepted answer, I found a much faster way to do the YUV to RGB conversion using RenderScript's intrinsic conversion method. I found a direct example here: Yuv2RgbRenderScript.
It can be as simple as copying the convertYuvToRgbIntrinsic method from the RenderScriptHelper class to replace the decodeYUV420SP that Hitesh Patel gives in his answer. You will also need to initialize a RenderScript object (the example does that in the MainActivity class).
And don't forget to enable RenderScript in the project's Gradle configuration (the Android documentation shows how).
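A minimal sketch of the usual build.gradle entries, assuming the support-library RenderScript backport; the target API level here is an assumption, adjust it to your project:

android {
    defaultConfig {
        renderscriptTargetApi 18            // assumption: pick the API you build against
        renderscriptSupportModeEnabled true // use the support-library backport
    }
}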
Answer 4:
Another way would be using ScriptIntrinsicYuvToRGB; this is more efficient than encoding (and then decoding) a JPEG for every frame:
fun yuvByteArrayToBitmap(bytes: ByteArray, width: Int, height: Int): Bitmap {
    val rs = RenderScript.create(this)
    val yuvToRgbIntrinsic = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs))
    val yuvType = Type.Builder(rs, Element.U8(rs)).setX(bytes.size)
    val input = Allocation.createTyped(rs, yuvType.create(), Allocation.USAGE_SCRIPT)
    val rgbaType = Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height)
    val output = Allocation.createTyped(rs, rgbaType.create(), Allocation.USAGE_SCRIPT)
    input.copyFrom(bytes)
    yuvToRgbIntrinsic.setInput(input)
    yuvToRgbIntrinsic.forEach(output)
    val bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888)
    output.copyTo(bitmap)
    input.destroy()
    output.destroy()
    yuvToRgbIntrinsic.destroy()
    rs.destroy()
    return bitmap
}
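Note that this version creates and destroys the RenderScript context, the intrinsic and both Allocations on every call, which is wasteful at 50 frames per second. A minimal sketch of reusing them across frames, under the same input-format assumption (YuvToRgbConverter is an illustrative name, not a platform class):

import android.content.Context;
import android.graphics.Bitmap;
import android.renderscript.*;

class YuvToRgbConverter {
    private final RenderScript rs;
    private final ScriptIntrinsicYuvToRGB script;
    private Allocation input, output;
    private Bitmap bitmap;

    YuvToRgbConverter(Context context) {
        rs = RenderScript.create(context);
        script = ScriptIntrinsicYuvToRGB.create(rs, Element.U8_4(rs));
    }

    // Reuses the Allocations and the Bitmap as long as the frame size is stable.
    Bitmap convert(byte[] yuvBytes, int width, int height) {
        if (bitmap == null || bitmap.getWidth() != width || bitmap.getHeight() != height) {
            input = Allocation.createTyped(rs,
                    new Type.Builder(rs, Element.U8(rs)).setX(yuvBytes.length).create(),
                    Allocation.USAGE_SCRIPT);
            output = Allocation.createTyped(rs,
                    new Type.Builder(rs, Element.RGBA_8888(rs)).setX(width).setY(height).create(),
                    Allocation.USAGE_SCRIPT);
            bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
            script.setInput(input);
        }
        input.copyFrom(yuvBytes);
        script.forEach(output);
        output.copyTo(bitmap);
        return bitmap;
    }
}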
Source: https://stackoverflow.com/questions/9192982/displaying-yuv-image-in-android