Question
I am currently working on a project involving neural networks. For this, I want to build an Android application which should use TensorFlow [Lite] to solve some object detection / recognition problems.
As I want the code to be as portable as possible, I want to write most of it in C++, thus using the C++ API of TensorFlow Lite rather than the Java API / wrapper. So, I modified tensorflow/contrib/lite/BUILD and added the following to be able to create a shared TensorFlow Lite library.
cc_binary(
name = "libtensorflowLite.so",
linkopts=["-shared", "-Wl"],
linkshared=1,
copts = tflite_copts(),
deps = [
":framework",
"//tensorflow/contrib/lite/kernels:builtin_ops",
],
)
(Which is based on the answer to this issue: https://github.com/tensorflow/tensorflow/issues/17826)
Then I used
bazel build //tensorflow/contrib/lite:libtensorflowLite.so --crosstool_top=//external:android/crosstool --cpu=arm64-v8a --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cxxopt="-std=c++11"
to finally build it.
Afterwards I headed over to Android Studio and set up a basic project. For adding the shared library to the project, I referred to this example:
https://github.com/googlesamples/android-ndk/tree/840858984e1bb8a7fab37c1b7c571efbe7d6eb75/hello-libs
I also added the needed dependencies for flatbuffers.
The build / compilation process succeeds without any linker errors (well, at least after several hours of trial and error...).
The APK is then successfully installed on an Android device, but immediately crashes after it starts. Logcat gives the following output:
04-14 20:09:59.084 9623-9623/com.example.hellolibs E/AndroidRuntime: FATAL EXCEPTION: main
Process: com.example.hellolibs, PID: 9623
java.lang.UnsatisfiedLinkError: dlopen failed: library "/home/User/tensorflowtest/app/src/main/cpp/../../../../distribution/tensorflow/lib/x86/libtensorflowLite.so" not found
at java.lang.Runtime.loadLibrary0(Runtime.java:1016)
at java.lang.System.loadLibrary(System.java:1657)
at com.example.hellolibs.MainActivity.<clinit>(MainActivity.java:36)
at java.lang.Class.newInstance(Native Method)
at android.app.Instrumentation.newActivity(Instrumentation.java:1174)
at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2669)
at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2856)
at android.app.ActivityThread.-wrap11(Unknown Source:0)
at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1589)
at android.os.Handler.dispatchMessage(Handler.java:106)
at android.os.Looper.loop(Looper.java:164)
at android.app.ActivityThread.main(ActivityThread.java:6494)
at java.lang.reflect.Method.invoke(Native Method)
at com.android.internal.os.RuntimeInit$MethodAndArgsCaller.run(RuntimeInit.java:438)
at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:807)
I tried this on an Android x86 emulator and on a real arm64-v8a Android smartphone.
So to me it looks like the application tries to load the tensorflowLite shared library on startup but is unable to find it. Opening the APK with a zip archive manager, I can verify that the platform-dependent (arm, x86) .so files are packed into the APK as expected (by adding the following to build.gradle:
sourceSets {
main {
// let gradle pack the shared library into apk
jniLibs.srcDirs = ['../distribution/tensorflow/lib']
}
})
What I do not understand is why it looks for the library in the path where I placed it on my Ubuntu 17.10 PC. So, I thought I had made a mistake adapting the example about adding external libraries to an Android Studio project that I mentioned earlier. That's why I downloaded the whole example project, opened it in Android Studio and verified that it works as expected. Afterwards I replaced the example's libgperf.so with libtensorflowLite.so and left everything else, especially the CMakeLists.txt, untouched. But I get the exact same error again, so I suspect the problem lies with the libtensorflowLite library itself and not with the Android project (although that's just my guess).
I am working with Android Studio 3.1.1, NDK version 14 and API level 24 (Android 7.0). If anyone has an idea what could be wrong, any help would be highly appreciated. I am also open to any other method that allows me to use TensorFlow Lite with C++ in an Android application.
Thanks a lot,
Martin
Answer 1:
I just remembered I asked this question a few weeks ago. Meanwhile, I found a solution to the problem, and TensorFlow Lite is now nicely embedded into my Android project, where I do all the programming using the C++ API!
The problem was that the TensorFlow Lite shared library I built did not contain a soname. So, during the build process, the library name was stripped and, as no soname was found, the full path was recorded as the "name" instead. I noticed this while further investigating my native-lib.so (the NDK C++ library which is then loaded by the app) with the Linux "strings" tool: the load path "/home/User/tensorflowtest/app/src/main/cpp/../../../../distribution/tensorflow/lib/x86/libtensorflowLite.so" was indeed baked in. Adding "-Wl,-soname=libtensorflowLite.so" to the linker options in the BUILD file fixed this issue! You can find the whole rule I used below.
As it was a pain to get everything set up due to the lack of explanations (it seems TensorFlow Lite is mostly used via the Java API on Android?), I want to give short guidance on how to use the C++ API of TensorFlow Lite in Android Studio (from within an Android NDK project).
1. Build the library for your architecture
To use the C++ API, you first need to build the TensorflowLite library. For this, add the following rule to the BUILD file in tensorflow/contrib/lite:
cc_binary(
name = "libtensorflowLite.so",
linkopts=[
"-shared",
"-Wl,-soname=libtensorflowLite.so",
],
linkshared = 1,
copts = tflite_copts(),
deps = [
":framework",
"//tensorflow/contrib/lite/kernels:builtin_ops",
],
)
Note: With this, a shared library can be built! A static one might also work.
Now you can build the library using
bazel build //tensorflow/contrib/lite:libtensorflowLite.so --crosstool_top=//external:android/crosstool --cpu=arm64-v8a --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cxxopt="-std=c++11"
If you want to support multiple architectures, you will have to build the library several times, changing the --cpu flag accordingly.
NOTE: This works fine at least for arm64-v8a and armeabi-v7a (I haven't tested it with MIPS, so that might work as well). However, on an x86 device I get the "atomic_store_8" error already addressed in this topic: https://github.com/tensorflow/tensorflow/issues/16589
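The per-architecture builds can be scripted. A minimal sketch, assuming it is run from the TensorFlow source root and that the distribution directory layout from step 2 below is used (the output path under bazel-bin and the copy destination are assumptions you may need to adjust):

```shell
# Build libtensorflowLite.so once per ABI we ship and copy it into the
# distribution directory that CMake and Gradle will point at.
for CPU in arm64-v8a armeabi-v7a; do
    bazel build //tensorflow/contrib/lite:libtensorflowLite.so \
        --crosstool_top=//external:android/crosstool \
        --cpu="$CPU" \
        --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
        --cxxopt="-std=c++11"
    mkdir -p "../distribution/lib/$CPU"
    cp bazel-bin/tensorflow/contrib/lite/libtensorflowLite.so "../distribution/lib/$CPU/"
done
```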
2. Add the library and the needed headers to be included in your Android Studio project
Having built the library, you now need to make sure it is also linked into your application (more specifically, into your Android NDK library, which in my case is named "native-lib"). I will give a short overview of how to do this; if you need a more detailed explanation, refer to the GitHub link I provided in my initial question: https://github.com/googlesamples/android-ndk/tree/840858984e1bb8a7fab37c1b7c571efbe7d6eb75/hello-libs
2.1. In your Android Studio Project, open the CMakeLists.txt
2.2. Add the following:
# This will create a new "variable" holding the path to a directory
# where we will put our library and header files.
# Change this to your needs
set(distribution_DIR ${CMAKE_SOURCE_DIR}/distribution)
# This states that there exists a shared library called libtensorflowLite
# which will be imported (means it is not built with the rest of the project!)
add_library(libtensorflowLite SHARED IMPORTED)
# This indicates where the libtensorflowLite.so for each architecture is found relative to our distribution directory
set_target_properties(libtensorflowLite PROPERTIES IMPORTED_LOCATION
${distribution_DIR}/lib/${ANDROID_ABI}/libtensorflowLite.so)
# This indicates where the header files are found relative to our distribution dir
target_include_directories(native-lib PRIVATE
${distribution_DIR}/include)
# Finally, we make sure our libtensorflowLite.so is linked to our native-lib and loaded during runtime
target_link_libraries( # Specifies the target library.
native-lib
libtensorflowLite
# Links the target library to the log library
# included in the NDK.
${log-lib} )
2.3. Open the build.gradle for your Module: App (not the project one!)
2.4. Make sure our library will be packed into your APK
Add this inside the Android section:
sourceSets {
main {
// let gradle pack the shared library into apk
jni.srcDirs = []
jniLibs.srcDirs = ['distribution/lib']
}
}
You may have to edit the path according to your needs: the files here will be packed into your .apk inside the lib directory.
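Since the library above was only built for the ARM ABIs, it can also help to restrict the APK to exactly those ABIs, so that an x86 emulator fails cleanly at install time instead of crashing at startup. A possible addition to the defaultConfig section of the same build.gradle (the listed ABIs are just the ones built in step 1; adjust to your builds):

```groovy
ndk {
    // only package the ABIs we actually built libtensorflowLite.so for
    abiFilters 'arm64-v8a', 'armeabi-v7a'
}
```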
3. Include flatbuffers
TensorFlow Lite uses the flatbuffers serialization library. I guess this is added automatically if you build your project with bazel, but that is not the case when using Android Studio. You could of course add flatbuffers as a static or shared library instead. However, for me it was easiest to simply compile flatbuffers each time with the rest of my app (it is not that big): I copied all of the flatbuffers *.cpp source files into my project and added them to the CMakeLists.
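For reference, the corresponding CMakeLists.txt addition could look roughly like this (the directory name flatbuffers-src is an assumption; use wherever you copied the sources to):

```cmake
# Pick up the copied flatbuffers *.cpp files and add them to the
# existing native-lib target so they are compiled with the app.
file(GLOB FLATBUFFERS_SRCS ${CMAKE_SOURCE_DIR}/flatbuffers-src/*.cpp)
target_sources(native-lib PRIVATE ${FLATBUFFERS_SRCS})
```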
4. Copy the needed headers for TensorflowLite and flatbuffers
In step 3 I just copied the cpp files to my project. However, the header files need to be located in the directory we set in target_include_directories in step 2.2.
So go ahead and copy all of the flatbuffers *.h files (from the flatbuffers repository) into this directory. Next, from the TensorFlow repository, you need all header files inside the tensorflow/contrib/lite directory. You should keep the folder structure, though.
For me it looks like this:
- distribution
  - lib
    - arm64-v8a
      - libtensorflowLite.so
    - armeabi-v7a
      - libtensorflowLite.so
  - include
    - flatbuffers
    - tensorflow
      - contrib
        - lite
          - kernels
          - nnapi
          - schema
          - tools
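With the library linked and the headers in place, the C++ side can then use TensorFlow Lite directly. A minimal sketch of loading a model and running one inference (the model path parameter and the single-float-tensor assumption are hypothetical; the headers are the ones under tensorflow/contrib/lite as used above):

```cpp
#include <memory>

#include "tensorflow/contrib/lite/interpreter.h"
#include "tensorflow/contrib/lite/kernels/register.h"
#include "tensorflow/contrib/lite/model.h"

// Runs one inference on a .tflite model file; returns true on success.
bool RunInference(const char* model_path) {
  // Load the flatbuffer model from disk (the path is app-specific).
  auto model = tflite::FlatBufferModel::BuildFromFile(model_path);
  if (!model) return false;

  // Build an interpreter using the built-in op resolver we linked in
  // via //tensorflow/contrib/lite/kernels:builtin_ops.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter) return false;

  if (interpreter->AllocateTensors() != kTfLiteOk) return false;

  // Assuming a single float input tensor: fill it with your image data here.
  float* input = interpreter->typed_input_tensor<float>(0);
  (void)input;

  if (interpreter->Invoke() != kTfLiteOk) return false;

  // Read the (assumed float) output tensor, e.g. detection scores.
  float* output = interpreter->typed_output_tensor<float>(0);
  (void)output;
  return true;
}
```

This function would typically be called from your native-lib JNI entry point after resolving the model's path on the device.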
So, if I haven't forgotten anything, everything should be set up correctly by now! Hopefully this helped and it works for you as it did for me ;)
Best regards,
Martin
Source: https://stackoverflow.com/questions/49834875/problems-with-using-tensorflow-lite-c-api-in-android-studio-project