TF model served using docker & C++ inference client on Windows 10


Question


I am trying to code up a C++ TensorFlow client that pushes images to a model served via TensorFlow Serving in Docker, on Windows 10.

docker run -p 8501:8501 --name tfserving_model_test --mount type=bind,source=D:/docker_test/model,target=/models/model -e MODEL_NAME=test_model -t tensorflow/serving

I am trying a simple piece of code adapted from the TF Serving example (resnet_client.cc), in which I pass an all-black image.

        // Preparing required variables to make a predict request.
        PredictRequest predictRequest;
        PredictResponse response;
        ClientContext context;

        // Describing model name and signature from remote server.
        predictRequest.mutable_model_spec()->set_name("test_model");

        google::protobuf::Map< std::string, tensorflow::TensorProto >& inputs =
            *predictRequest.mutable_inputs();

        // Setting dimensions of the input shape.
        tensorflow::TensorProto inputShape;
        inputShape.set_dtype(tensorflow::DataType::DT_FLOAT);
        inputShape.mutable_tensor_shape()->add_dim()->set_size(1);
        inputShape.mutable_tensor_shape()->add_dim()->set_size(224);
        inputShape.mutable_tensor_shape()->add_dim()->set_size(224);
        inputShape.mutable_tensor_shape()->add_dim()->set_size(3);

        // Loading an image for the request.
        for (auto x = 0; x < 224; ++x) //num cols
            for (auto y = 0; y < 224; ++y) { // num rows
                for (auto c = 0; c < 3; ++c) {
                    inputShape.add_float_val((float)0);
                }
            }

        //std::cout << inputShape.DebugString() << std::endl;
        inputs["input_1"] = inputShape;
        std::string channel = "http://127.0.0.1:8501"; 
        printf("%s \n", channel.c_str());
        std::unique_ptr<tensorflow::serving::PredictionService::Stub> stub =
            tensorflow::serving::PredictionService::NewStub(
                grpc::CreateChannel(channel, grpc::InsecureChannelCredentials()));
         
        // Firing predict request.
        grpc::Status status = stub->Predict(&context, predictRequest, &response);

        // Checking server response status.
        if (!status.ok()) {
            std::cerr << "Predict request has failed with code " << status.error_code()
                << " and message: " << status.error_message() << std::endl;
            return 1;
        }

When I run the executable, I get "DNS resolution failed".

I do have a simple Python client script that uses the 'requests' package, and it works fine. The server URL in that script is set to 'http://localhost:8501/v1/models/model:predict', followed by response = requests.post(SERVER_URL, data=json_request).

In the C++ code, I have tried "localhost:8501" in place of "http://127.0.0.1:8501"; this gives me 'Trying to connect an http1.x server'. Using "http://localhost:8501" again gives "DNS resolution failed".

When I run the docker command, the command line tells me "Running gRPC ModelServer at 0.0.0.0:8500". However, setting channel = "0.0.0.0:8500" throws "failed to connect to all addresses".

What would be the correct 'channel' string in C++?

Thanks! :)


Answer 1:


The correct way to do this is:

docker run -p 8500:8500 -p 8501:8501 --name tfserving_model_test --mount type=bind,source=D:/docker_test/model,target=/models/model -e MODEL_NAME=test_model -t tensorflow/serving

Adding -p 8500:8500 publishes the container's gRPC port: TensorFlow Serving listens for gRPC on 8500 and for REST on 8501, and the original docker run command only published 8501.

With that, channel = "localhost:8500" works!
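
For reference, here is a minimal sketch of how the tail end of the question's client code looks once the fix is applied. It reuses the predictRequest, response, and context variables from the snippet above; only the channel string changes (plain host:port, no URI scheme), and the loop over response.outputs() at the end is just an illustrative way to inspect what came back.

        // gRPC targets are plain "host:port" strings, with no "http://" scheme.
        std::string channel = "localhost:8500";

        std::unique_ptr<tensorflow::serving::PredictionService::Stub> stub =
            tensorflow::serving::PredictionService::NewStub(
                grpc::CreateChannel(channel, grpc::InsecureChannelCredentials()));

        // Firing the predict request against the gRPC port published by docker.
        grpc::Status status = stub->Predict(&context, predictRequest, &response);

        if (status.ok()) {
            // List the output tensor names returned by the model.
            for (const auto& output : response.outputs())
                std::cout << "output: " << output.first << std::endl;
        }

The REST endpoint on 8501 keeps working for the Python requests client; the gRPC stub just needs the separately published port 8500.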



Source: https://stackoverflow.com/questions/63475495/tf-model-served-using-docker-c-inference-client-on-windows-10
