Question
I am using the TensorFlow C++ API to load a SavedModel and run inference. The model loads fine, but when I run inference I get the following error:
$ ./bazel-bin/tensorflow/gan_loader/gan_loader
2020-06-21 19:29:18.669604: I tensorflow/cc/saved_model/reader.cc:31] Reading SavedModel from: /home/eduardo/Documents/GitHub/edualvarado/tensorflow/tensorflow/gan_loader/generator_model_final
2020-06-21 19:29:18.671368: I tensorflow/cc/saved_model/reader.cc:54] Reading meta graph with tags { serve }
2020-06-21 19:29:18.671385: I tensorflow/cc/saved_model/loader.cc:295] Reading SavedModel debug info (if present) from: /home/eduardo/Documents/GitHub/edualvarado/tensorflow/tensorflow/gan_loader/generator_model_final
2020-06-21 19:29:18.671474: I tensorflow/core/platform/cpu_feature_guard.cc:143] Your CPU supports instructions that this TensorFlow binary was not compiled to use: SSE3 SSE4.1 SSE4.2 AVX AVX2 FMA
2020-06-21 19:29:18.688557: I tensorflow/cc/saved_model/loader.cc:234] Restoring SavedModel bundle.
2020-06-21 19:29:18.707707: I tensorflow/cc/saved_model/loader.cc:183] Running initialization op on SavedModel bundle at path: /home/eduardo/Documents/GitHub/edualvarado/tensorflow/tensorflow/gan_loader/generator_model_final
2020-06-21 19:29:18.714949: I tensorflow/cc/saved_model/loader.cc:364] SavedModel load for tags { serve }; Status: success: OK. Took 45356 microseconds.
Segmentation fault (core dumped)
The complete infering.py code is below. At the beginning, in a comment, you can find the signature information for the SavedModel.
/* INFO ABOUT SAVEDMODEL
The given SavedModel SignatureDef contains the following input(s):
  inputs['dense_1_input'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 100)
      name: serving_default_dense_1_input:0
The given SavedModel SignatureDef contains the following output(s):
  outputs['conv2d_2'] tensor_info:
      dtype: DT_FLOAT
      shape: (-1, 28, 28, 1)
      name: StatefulPartitionedCall:0
Method name is: tensorflow/serving/predict
*/
#include <fstream>
#include <utility>
#include <vector>
#include "tensorflow/cc/ops/const_op.h"
#include "tensorflow/cc/ops/image_ops.h"
#include "tensorflow/cc/ops/standard_ops.h"
#include "tensorflow/core/framework/graph.pb.h"
#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/graph/default_device.h"
#include "tensorflow/core/graph/graph_def_builder.h"
#include "tensorflow/core/lib/core/errors.h"
#include "tensorflow/core/lib/core/stringpiece.h"
#include "tensorflow/core/lib/core/threadpool.h"
#include "tensorflow/core/lib/io/path.h"
#include "tensorflow/core/lib/strings/str_util.h"
#include "tensorflow/core/lib/strings/stringprintf.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/platform/init_main.h"
#include "tensorflow/core/platform/logging.h"
#include "tensorflow/core/platform/types.h"
#include "tensorflow/core/public/session.h"
#include "tensorflow/core/util/command_line_flags.h"
#include "tensorflow/cc/saved_model/loader.h"
#include "tensorflow/cc/saved_model/tag_constants.h"
// These are all common classes it's handy to reference with no namespace.
using tensorflow::Flag;
using tensorflow::int32;
using tensorflow::Status;
using tensorflow::string;
using tensorflow::Tensor;
using tensorflow::tstring;
/*
TODO: Functions
*/
Tensor CreateLatentSpace(const int latent_dim, const int num_samples) {
  // Fill a (num_samples x latent_dim) tensor with uniform noise in [-0.5, 0.5).
  Tensor tensor(tensorflow::DT_FLOAT,
                tensorflow::TensorShape({num_samples, latent_dim}));
  auto tensor_mapped = tensor.tensor<float, 2>();
  for (int idx = 0; idx < tensor.dim_size(0); ++idx) {
    for (int i = 0; i < tensor.dim_size(1); ++i) {
      tensor_mapped(idx, i) = drand48() - 0.5;
    }
  }
  return tensor;
}
int main(int argc, char* argv[]) {
  // These are the command-line flags the program can understand.
  // They define where the graph and input data is located, and what kind of
  // input the model expects.
  // To create latent space
  int32 latent_dim = 100;
  int32 samples_per_row = 5;
  int32 num_samples = 25;
  // Input/Output names
  string input_layer = "serving_default_dense_1_input";
  string output_layer = "StatefulPartitionedCall";
  // Arguments
  std::vector<Flag> flag_list = {
      Flag("latent_dim", &latent_dim, "latent dimensions"),
      Flag("samples_per_row", &samples_per_row, "samples per row"),
      Flag("num_samples", &num_samples, "number of samples"),
      Flag("input_layer", &input_layer, "name of input layer"),
      Flag("output_layer", &output_layer, "name of output layer"),
  };
  string usage = tensorflow::Flags::Usage(argv[0], flag_list);
  const bool parse_result = tensorflow::Flags::Parse(&argc, argv, flag_list);
  if (!parse_result) {
    LOG(ERROR) << usage;
    return -1;
  }
  // We need to call this to set up global state for TensorFlow.
  tensorflow::port::InitMain(argv[0], &argc, &argv);
  if (argc > 1) {
    LOG(ERROR) << "Unknown argument " << argv[1] << "\n" << usage;
    return -1;
  }
  // TODO: First we load and initialize the model.
  std::unique_ptr<tensorflow::Session> session;
  tensorflow::SavedModelBundle model;
  tensorflow::SessionOptions session_options;
  tensorflow::RunOptions run_options;
  const string export_dir = "/home/eduardo/Documents/GitHub/edualvarado/tensorflow/tensorflow/gan_loader/generator_model_final";
  const std::unordered_set<std::string> tags = {"serve"};
  auto load_graph_status = tensorflow::LoadSavedModel(session_options, run_options,
                                                      export_dir, tags, &model);
  if (!load_graph_status.ok()) {
    std::cerr << "Failed: " << load_graph_status;
    return -1;
  }
  // TODO: Create latent space
  auto latent_space_tensor = CreateLatentSpace(100, 1);
  // TODO: Run the latent space through the model
  std::vector<Tensor> outputs;
  Status run_status = session->Run({{input_layer, latent_space_tensor}},
                                   {output_layer}, {}, &outputs);
  if (!run_status.ok()) {
    LOG(ERROR) << "Running model failed: " << run_status;
    return -1;
  }
  // TODO: Save the figure
  return 0;
}
I think I have tried almost everything, but sadly there is not much documentation about the C++ API. Could you please give me some guidance on why this is happening?
Thank you very much.
OS Environment:
- Ubuntu 18.04
- TensorFlow 2.2.0
- Bazel 2.0.0
Answer 1:
In the code snippet, the session pointer is never initialized before Run(...) is called:
std::unique_ptr<tensorflow::Session> session;
...
Status run_status = session->Run({{input_layer, latent_space_tensor}},
                                 {output_layer}, {}, &outputs);
Calling Run(...) through this null pointer is what causes the segmentation fault.
Note that you cannot fix it by default-constructing a session yourself: tensorflow::Session is an abstract class, so make_unique<tensorflow::Session>() will not compile, and a brand-new empty session would not contain your graph anyway. You do not need a separate session at all. LoadSavedModel already creates one inside the SavedModelBundle and restores the graph into it, so run inference through the bundle's session member instead.
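A minimal sketch of the corrected flow, reusing the export_dir, tags, input_layer, output_layer, and CreateLatentSpace definitions from the question:
// LoadSavedModel fills model.session with a ready-to-use session that
// already contains the restored graph.
tensorflow::SavedModelBundle model;
tensorflow::SessionOptions session_options;
tensorflow::RunOptions run_options;
auto load_status = tensorflow::LoadSavedModel(session_options, run_options,
                                              export_dir, tags, &model);
if (!load_status.ok()) {
  std::cerr << "Failed: " << load_status;
  return -1;
}
// Run inference through the bundle's own session; the separate
// std::unique_ptr<tensorflow::Session> declaration can simply be deleted.
auto latent_space_tensor = CreateLatentSpace(100, 1);
std::vector<Tensor> outputs;
Status run_status = model.session->Run({{input_layer, latent_space_tensor}},
                                       {output_layer}, {}, &outputs);
if (!run_status.ok()) {
  LOG(ERROR) << "Running model failed: " << run_status;
  return -1;
}
The bundle owns the session, so it is released automatically when model goes out of scope.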
Source: https://stackoverflow.com/questions/62502217/segmentation-fault-core-dumped-infering-with-tensorflow-c-api-from-savedmo