Question
So I got the TensorFlow object detection API running on Android, and while going through the code I noticed that before the frames taken from the camera are processed, there is a conversion in CameraActivity.java that goes like this:
imageConverter =
    new Runnable() {
      @Override
      public void run() {
        ImageUtils.convertYUV420SPToARGB8888(bytes, previewWidth, previewHeight, rgbBytes);
      }
    };
I tried to look it up, and I understood the difference between the two formats, but I couldn't figure out why this conversion is necessary (or preferable).
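For context, my rough understanding of what the conversion does per pixel (a sketch based on the common fixed-point YUV-to-RGB approximation used in Android samples; the actual ImageUtils implementation is native code and its exact constants may differ) is something like:

```java
// Sketch of a YUV420SP (NV21) -> ARGB8888 conversion for one pixel.
// Assumption: standard integer BT.601-style approximation, not the
// exact native ImageUtils code.
public class YuvSketch {
  static int yuvToArgb(int y, int u, int v) {
    // Center chroma around zero; clamp luma offset at zero.
    int c = Math.max(y - 16, 0);
    int d = u - 128;
    int e = v - 128;

    // Fixed-point conversion (scaled by 1024, hence the >> 10).
    int r = clamp((1192 * c + 1634 * e) >> 10);
    int g = clamp((1192 * c - 833 * e - 400 * d) >> 10);
    int b = clamp((1192 * c + 2066 * d) >> 10);

    // Pack into ARGB8888 with full alpha.
    return 0xFF000000 | (r << 16) | (g << 8) | b;
  }

  static int clamp(int x) {
    return Math.min(Math.max(x, 0), 255);
  }

  public static void main(String[] args) {
    // A mid-gray pixel (Y=128, no chroma) should give equal R, G, B.
    System.out.println(Integer.toHexString(yuvToArgb(128, 128, 128)));
  }
}
```

So for every frame this touches every pixel once, which is why I'm wondering about the cost.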
Since this conversion happens in real time for every frame, is it going to affect the preview or the processing time?
Any information or explanation is much appreciated, even if it's basic.
Source: https://stackoverflow.com/questions/50720286/why-tensorflow-object-detection-api-uses-yuv420sp-to-argb8888-conversion