Question
Does anybody know how to create a C# client for TensorFlow Serving?
My TensorFlow Serving installation:
I installed TensorFlow Serving using the TensorFlow Serving Dockerfile, then inside the container I did the following:
pip install tensorflow
pip install tensorflow-serving-api
echo "deb [arch=amd64] http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | tee /etc/apt/sources.list.d/tensorflow-serving.list
curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add -
apt-get update && apt-get install tensorflow-model-server
Then I ran the TensorFlow Serving server:
tensorflow_model_server --port=9000 --model_name=example_model --model_base_path=/serving/my_model_2 &> my_log &
where my_model_2 contains the exported TensorFlow model I want to serve.
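For reference, the directory I pass as --model_base_path contains a numbered version subdirectory, roughly like this (assuming a SavedModel export; the version number 1 is just an example):
/serving/my_model_2/
    1/
        saved_model.pb
        variables/
            variables.data-00000-of-00001
            variables.index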
Given this information, I have the following questions:
- Do I need to install TensorFlow Serving in a different way in order to build a C# client? If so, can you tell me how?
- Can you give me a general idea of what I have to do to accomplish this? I suspect I have to install TensorFlow Serving differently in order to get explicit .proto files. I'm a little lost here; if you can give me the general idea and an example, I would appreciate it.
Answer 1:
As far as I understand, you need the .proto files to generate a C# client for the TensorFlow Serving gRPC services.
https://github.com/Wertugo/TensorFlowServingCSharpClient is one example I am following. It's the same MNIST example, with a C# client.
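Roughly, the workflow is: take the .proto files from tensorflow_serving/apis (plus the tensorflow core protos they import, such as tensor.proto, types.proto and tensor_shape.proto), compile them to C# with protoc's C#/gRPC plugins or the Grpc.Tools NuGet package, and then call the generated PredictionService client over Grpc.Core. A minimal sketch of the Predict call is below; the generated namespaces, the signature name and the input/output keys ("images"/"scores") are assumptions that depend on your proto options and on how the model was exported.

using System;
using Grpc.Core;           // NuGet: Grpc.Core
using Tensorflow;          // assumed namespace of the generated tensorflow core protos
using Tensorflow.Serving;  // assumed namespace of the generated tensorflow_serving/apis protos

class Program
{
    static void Main()
    {
        // Connect to the server started above (--port=9000, no TLS).
        var channel = new Channel("localhost:9000", ChannelCredentials.Insecure);
        var client = new PredictionService.PredictionServiceClient(channel);

        // "example_model" matches --model_name; the signature name depends on the export.
        var request = new PredictRequest
        {
            ModelSpec = new ModelSpec { Name = "example_model", SignatureName = "serving_default" }
        };

        // Build a 1 x 784 float tensor (e.g. one flattened MNIST image).
        var input = new TensorProto { Dtype = DataType.DtFloat, TensorShape = new TensorShapeProto() };
        input.TensorShape.Dim.Add(new TensorShapeProto.Types.Dim { Size = 1 });
        input.TensorShape.Dim.Add(new TensorShapeProto.Types.Dim { Size = 784 });
        input.FloatVal.AddRange(new float[784]);        // your feature values go here
        request.Inputs.Add("images", input);            // "images" is an assumed input key

        var response = client.Predict(request);
        Console.WriteLine(response.Outputs["scores"]);  // "scores" is an assumed output key

        channel.ShutdownAsync().Wait();
    }
}

As far as I can tell, you do not need to install TensorFlow Serving itself any differently; the .proto files only have to be available on the client side when you generate the C# code.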
Hope this helps.
Please update here if you find any better options.
Source: https://stackoverflow.com/questions/49040715/tensorflow-serving