Question
I am trying to deploy the Android TensorFlow-Lite example, specifically the Detector Activity.
I have successfully deployed it on a tablet. The app works great: it detects objects and draws a bounding rectangle around each one, with a label and a confidence level.
I then set up my Raspberry Pi 3 Model B board, installed Android Things on it, connected via ADB, and deployed the same program from Android Studio. However, the screen attached to my RPi board stayed blank.
After looking through a Camera Demo for Android Things tutorial, I got the idea of enabling hardware acceleration to support the camera preview, so I added:
android:hardwareAccelerated="true"
to the <application> tag of the manifest.
I also added the following within the <application> tag:
<uses-library android:name="com.google.android.things" />
And an intent filter in my <activity> tag:
<intent-filter>
    <action android:name="android.intent.action.MAIN" />
    <category android:name="android.intent.category.IOT_LAUNCHER" />
    <category android:name="android.intent.category.DEFAULT" />
</intent-filter>
This makes the TensorFlow app launch automatically after boot.
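Putting those together, the relevant part of AndroidManifest.xml looks roughly like this (the activity name is assumed to be the demo's DetectorActivity; unrelated attributes are elided):

<application
    android:hardwareAccelerated="true"
    ... >

    <uses-library android:name="com.google.android.things" />

    <activity android:name="org.tensorflow.demo.DetectorActivity">
        <intent-filter>
            <action android:name="android.intent.action.MAIN" />
            <category android:name="android.intent.category.IOT_LAUNCHER" />
            <category android:name="android.intent.category.DEFAULT" />
        </intent-filter>
    </activity>
</application>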
I deployed the application again, but the problem persists: the camera preview session still fails to configure.
Here is the relevant code from the TensorFlow example:
private void createCameraPreviewSession() {
  try {
    final SurfaceTexture texture = textureView.getSurfaceTexture();
    assert texture != null;

    // We configure the size of default buffer to be the size of camera preview we want.
    texture.setDefaultBufferSize(previewSize.getWidth(), previewSize.getHeight());

    // This is the output Surface we need to start preview.
    final Surface surface = new Surface(texture);

    // We set up a CaptureRequest.Builder with the output Surface.
    previewRequestBuilder = cameraDevice.createCaptureRequest(CameraDevice.TEMPLATE_PREVIEW);
    previewRequestBuilder.addTarget(surface);

    LOGGER.e("Opening camera preview: " + previewSize.getWidth() + "x" + previewSize.getHeight());

    // Create the reader for the preview frames.
    previewReader =
        ImageReader.newInstance(
            previewSize.getWidth(), previewSize.getHeight(), ImageFormat.YUV_420_888, 2);

    previewReader.setOnImageAvailableListener(imageListener, backgroundHandler);
    previewRequestBuilder.addTarget(previewReader.getSurface());

    // Here, we create a CameraCaptureSession for camera preview.
    cameraDevice.createCaptureSession(
        Arrays.asList(surface, previewReader.getSurface()),
        new CameraCaptureSession.StateCallback() {

          @Override
          public void onConfigured(final CameraCaptureSession cameraCaptureSession) {
            // The camera is already closed
            if (null == cameraDevice) {
              return;
            }

            // When the session is ready, we start displaying the preview.
            captureSession = cameraCaptureSession;
            try {
              // Auto focus should be continuous for camera preview.
              previewRequestBuilder.set(
                  CaptureRequest.CONTROL_AF_MODE,
                  CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
              // Flash is automatically enabled when necessary.
              previewRequestBuilder.set(
                  CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);

              // Finally, we start displaying the camera preview.
              previewRequest = previewRequestBuilder.build();
              captureSession.setRepeatingRequest(
                  previewRequest, captureCallback, backgroundHandler);
            } catch (final CameraAccessException e) {
              LOGGER.e(e, "Exception!");
              LOGGER.e("camera access exception!");
            }
          }

          @Override
          public void onConfigureFailed(final CameraCaptureSession cameraCaptureSession) {
            showToast("Failed");
            LOGGER.e("configure failed!!");
          }
        },
        null);
  } catch (final CameraAccessException e) {
    LOGGER.e("camera access exception!");
    LOGGER.e(e, "Exception!");
  }
}
The error is logged from the onConfigureFailed override, and the relevant log lines leading up to it are:
11-12 14:02:40.677 1991-2035/org.tensorflow.demo E/CameraCaptureSession: Session 0: Failed to create capture session; configuration failed
11-12 14:02:40.679 1991-2035/org.tensorflow.demo E/tensorflow: CameraConnectionFragment: configure failed!!
However, I couldn't trace the Session 0 failure back to a stack trace.
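For anyone debugging the same failure, one way to see what the camera HAL actually supports is to dump the camera characteristics before opening the device. This is an untested sketch (it assumes it lives in the activity, with a TAG constant in scope; uses android.hardware.camera2.*, android.graphics.ImageFormat, android.util.Size, and android.util.Log):

// Untested sketch: log each camera's supported YUV output sizes and how many
// processed output streams it can drive at once.
private void dumpCameraCapabilities() {
  final CameraManager manager = (CameraManager) getSystemService(Context.CAMERA_SERVICE);
  try {
    for (final String cameraId : manager.getCameraIdList()) {
      final CameraCharacteristics characteristics =
          manager.getCameraCharacteristics(cameraId);

      // Maximum number of processed (non-stalling) output streams, e.g. the
      // preview Surface and the ImageReader would need at least 2.
      final Integer maxProcessed =
          characteristics.get(CameraCharacteristics.REQUEST_MAX_NUM_OUTPUT_PROC);
      Log.i(TAG, "Camera " + cameraId + " max processed output streams: " + maxProcessed);

      final StreamConfigurationMap map =
          characteristics.get(CameraCharacteristics.SCALER_STREAM_CONFIGURATION_MAP);
      for (final Size size : map.getOutputSizes(ImageFormat.YUV_420_888)) {
        Log.i(TAG, "Camera " + cameraId + " supports YUV_420_888 at " + size);
      }
    }
  } catch (final CameraAccessException e) {
    Log.e(TAG, "Failed to query camera characteristics", e);
  }
}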
Aside from turning on hardware acceleration and adding the manifest entries above, I have not tried anything else.
I have done my research and seen other examples, but they only take a photo at the click of a button; I need a working camera preview.
I also have the CameraDemoForAndroidThings example, but I don't know enough Kotlin to work out how it does this.
If anyone has managed to get a Java version of the TensorFlow Detector Activity running on a Raspberry Pi with Android Things, kindly let us know how you did it.
UPDATE:
Apparently, the camera can only support one stream configuration at a time. I inferred that I have to modify the createCaptureSession() call to use only one surface; my code now looks like this:
cameraDevice.createCaptureSession(
    // Arrays.asList(surface, previewReader.getSurface()),
    Arrays.asList(surface),
    new CameraCaptureSession.StateCallback() {

      @Override
      public void onConfigured(final CameraCaptureSession cameraCaptureSession) {
        // The camera is already closed
        if (null == cameraDevice) {
          return;
        }

        // When the session is ready, we start displaying the preview.
        captureSession = cameraCaptureSession;
        try {
          // Auto focus should be continuous for camera preview.
          // previewRequestBuilder.set(
          //     CaptureRequest.CONTROL_AF_MODE,
          //     CaptureRequest.CONTROL_AF_MODE_CONTINUOUS_PICTURE);
          // Flash is automatically enabled when necessary.
          // previewRequestBuilder.set(
          //     CaptureRequest.CONTROL_AE_MODE, CaptureRequest.CONTROL_AE_MODE_ON_AUTO_FLASH);

          // Finally, we start displaying the camera preview.
          previewRequest = previewRequestBuilder.build();
          captureSession.setRepeatingRequest(
              previewRequest, captureCallback, backgroundHandler);
          // NOTE: this runs after previewRequest has already been built, and
          // previewReader's surface is not part of this session, so no frames
          // ever reach the ImageReader.
          previewRequestBuilder.addTarget(previewReader.getSurface());
        } catch (final CameraAccessException e) {
          LOGGER.e("exception hit while configuring camera!");
          LOGGER.e(e, "Exception!");
        }
      }

      @Override
      public void onConfigureFailed(final CameraCaptureSession cameraCaptureSession) {
        LOGGER.e("Configure failed!");
        showToast("Failed");
      }
    },
    null);
This gives me a live preview. However, the code never proceeds to send the preview image to the processImage() block.
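This follows from the single-surface session: processImage() is driven by the ImageReader's OnImageAvailableListener, and since previewReader's surface is no longer part of the capture session, imageListener never fires. One possible workaround, an untested sketch rather than anything from the TensorFlow demo, is to pull frames from the TextureView instead, at the cost of a bitmap copy per frame (the demo already installs its own SurfaceTextureListener for sizing, so this would have to be merged into it):

// Untested sketch: drive detection from the TextureView instead of the
// ImageReader. Assumes the demo's textureView and previewSize are in scope.
textureView.setSurfaceTextureListener(
    new TextureView.SurfaceTextureListener() {
      @Override
      public void onSurfaceTextureAvailable(
          final SurfaceTexture texture, final int width, final int height) {}

      @Override
      public void onSurfaceTextureSizeChanged(
          final SurfaceTexture texture, final int width, final int height) {}

      @Override
      public boolean onSurfaceTextureDestroyed(final SurfaceTexture texture) {
        return true;
      }

      @Override
      public void onSurfaceTextureUpdated(final SurfaceTexture texture) {
        // Called on the UI thread for every new preview frame. getBitmap()
        // copies the frame, so this is slower than the ImageReader path.
        final Bitmap frame =
            textureView.getBitmap(previewSize.getWidth(), previewSize.getHeight());
        // TODO: hand `frame` to the detector here, in place of the YUV planes
        // that processImage() normally reads from the ImageReader.
      }
    });

The alternative would be to build the session around previewReader.getSurface() alone and render the preview yourself from the YUV frames, but that needs a custom drawing path.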
Has anyone successfully ported the TensorFlow-Lite examples that involve live camera previews to Android Things?
Source: https://stackoverflow.com/questions/53264074/android-things-creating-a-camera-preview-session-fails-and-no-preview-is-show