TensorFlow Lite C++ API example for inference

野趣味 · 2021-02-06 02:05

I am trying to get a TensorFlow Lite example to run on a machine with an ARM Cortex-A72 processor. Unfortunately, I wasn't able to deploy a test model due to the lack of examples.

2 Answers
  • 2021-02-06 02:32

    Here is the minimal set of includes:

    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"
    #include "tensorflow/lite/tools/gen_op_registration.h"
    

    These pull in further headers transitively, e.g. <memory>, which defines std::unique_ptr.

  • 2021-02-06 02:55

    I finally got it to run. Considering my directory structure looks like this:

    /(root)
        /tensorflow
            # whole tf repo
        /demo
            demo.cpp
            linear.tflite
            libtensorflow-lite.a
    

    I changed demo.cpp to

    #include <cstdio>
    #include <cstdlib>   // needed: exit() is declared here, not in <cstdio>
    #include "tensorflow/lite/interpreter.h"
    #include "tensorflow/lite/kernels/register.h"
    #include "tensorflow/lite/model.h"
    #include "tensorflow/lite/tools/gen_op_registration.h"
    
    int main(){
    
        std::unique_ptr<tflite::FlatBufferModel> model = tflite::FlatBufferModel::BuildFromFile("linear.tflite");
    
        if (!model) {
            printf("Failed to mmap model\n");
            exit(1);  // exit with a non-zero status on failure
        }
    
        tflite::ops::builtin::BuiltinOpResolver resolver;
        std::unique_ptr<tflite::Interpreter> interpreter;
        tflite::InterpreterBuilder(*model, resolver)(&interpreter);
    
        // Resize input tensors, if desired.
        interpreter->AllocateTensors();
    
        float* input = interpreter->typed_input_tensor<float>(0);
        // Dummy input for testing
        *input = 2.0;
    
        interpreter->Invoke();
    
        float* output = interpreter->typed_output_tensor<float>(0);
    
        printf("Result is: %f\n", *output);
    
        return 0;
    }
    

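    On the "Resize input tensors, if desired" comment above: a minimal sketch, assuming a single float input reshaped to {1, 4} (adjust the index and shape to your model). This fragment belongs between building the interpreter and using typed_input_tensor:

    ```cpp
    // Sketch only: resize input 0 to shape {1, 4} before using its buffer.
    // ResizeInputTensor invalidates tensor buffers, so AllocateTensors()
    // must be called afterwards, before typed_input_tensor() is touched.
    interpreter->ResizeInputTensor(interpreter->inputs()[0], {1, 4});
    if (interpreter->AllocateTensors() != kTfLiteOk) {
        printf("Failed to allocate tensors\n");
        exit(1);
    }
    ```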
    I also had to adapt my compile command (and install FlatBuffers manually to make it work). What worked for me was:

    g++ demo.cpp -I/tensorflow -L/demo -ltensorflow-lite -lrt -ldl -pthread -lflatbuffers -o demo
    
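    For reference, libtensorflow-lite.a can be built from the repo with the bundled make scripts. A sketch, assuming a TF 2.x checkout where these scripts still exist (later versions moved the build to CMake/Bazel, so check your tree):

    ```shell
    # Run inside the tensorflow repo checkout.
    # Fetch third-party dependencies (including FlatBuffers):
    ./tensorflow/lite/tools/make/download_dependencies.sh
    # Cross-compile the static library for aarch64 (Cortex-A72):
    ./tensorflow/lite/tools/make/build_aarch64_lib.sh
    # The resulting libtensorflow-lite.a lands under
    # tensorflow/lite/tools/make/gen/<target>/lib/
    ```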

    Thanks to @AlexCohn for getting me on the right track!
