tensorflow-serving

Use string as input in Keras IMDB example

江枫思渺然 submitted on 2020-01-04 05:51:05
Question: I was looking at the Keras IMDB movie reviews sentiment classification example (and the corresponding model on GitHub), which learns to decide whether a review is positive or negative. The data has been preprocessed so that each review is encoded as a sequence of integers; e.g. the review "This movie is awesome!" would be [11, 17, 6, 1187], and for this input the model outputs 'positive'. The dataset also makes available the word index used for encoding the sequences, i.e. I know …
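
Given that word index, a natural first step is to map the raw string through it by hand. Below is a minimal sketch, assuming the standard Keras IMDB loading conventions (indices offset by 3, with 1 as the start token and 2 as out-of-vocabulary); whether the model was trained with that offset is an assumption to verify:

    from tensorflow.keras.datasets import imdb

    word_index = imdb.get_word_index()  # maps word -> integer id

    def encode_review(text, index_from=3, start_id=1, oov_id=2):
        # Lower-case and strip basic punctuation before looking words up.
        tokens = text.lower().replace("!", "").replace(".", "").split()
        return [start_id] + [
            word_index[w] + index_from if w in word_index else oov_id
            for w in tokens
        ]

    print(encode_review("This movie is awesome!"))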

Example for Deploying a Tensorflow Model via a RESTful API [closed]

谁都会走 submitted on 2019-12-31 08:14:00
Question: Closed. This question is off-topic and is not currently accepting answers. Closed 3 years ago. Is there any example code for deploying a TensorFlow model via a RESTful API? I see examples for a command-line program and for a mobile app. Is there a framework for this, or do people just load the model and expose the predict method via a web framework (like Flask) to take input (say via JSON) and return the …
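
TensorFlow Serving is the purpose-built answer, but the "web framework" approach the question describes also works. A minimal Flask sketch, where the model path, route, and JSON layout are all illustrative assumptions:

    import numpy as np
    from flask import Flask, jsonify, request
    from tensorflow import keras

    app = Flask(__name__)
    model = keras.models.load_model("model.h5")  # hypothetical model file

    @app.route("/predict", methods=["POST"])
    def predict():
        # Expects JSON like {"instances": [[...], [...]]}
        instances = np.array(request.get_json()["instances"])
        predictions = model.predict(instances)
        return jsonify({"predictions": predictions.tolist()})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=5000)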

Google Cloud ML FAILED_PRECONDITION

随声附和 submitted on 2019-12-30 22:56:39
Question: I am trying to use Google Cloud ML to host a TensorFlow model and get predictions. I have a pretrained model that I have uploaded to the cloud, and I have created a model and version in my Cloud ML console. I followed the instructions from here to prepare my data for requesting online predictions. For both the Python method and the gcloud method I get the same error. For simplicity, I'll post the gcloud method: I run gcloud ml-engine predict --model spell_correction --json-instances test.json …
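
FAILED_PRECONDITION from online prediction usually points at a mismatch between the request and the model's serving signature. A minimal sketch of preparing test.json, assuming the signature's input alias is "inputs" (check the real alias with saved_model_cli):

    import json

    # One JSON object per line; the "inputs" key must match the input name
    # shown for the serving signature by saved_model_cli (an assumption here).
    instances = [{"inputs": "speling errur example"}]
    with open("test.json", "w") as f:
        for instance in instances:
            f.write(json.dumps(instance) + "\n")

The file then feeds the same command from the question: gcloud ml-engine predict --model spell_correction --json-instances test.json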

Estimator with Coordinator as an input function for reading input data in a distributed fashion in TensorFlow

穿精又带淫゛_ submitted on 2019-12-25 12:23:11
Question: The CNN CIFAR-10 tutorial (TensorFlow tutorials) gives an example of low-level API use for reading data as an independent job to train a model (with multiple GPUs). Is it possible to use the high-level Estimator API with low-level threading support and multi/single-GPU training? I am looking for a way to combine both: the custom Estimator from the high-level API (details: https://www.tensorflow.org/extend/estimators) and an input_fn as a queue, which gives the same functionality as described in https:/ …
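
For what the question asks, the usual route is to let tf.data replace the queue-plus-Coordinator machinery: Estimators accept an input_fn, and tf.data runs its own background threads. A minimal TF 1.x-style sketch, with the file name and feature spec as assumptions:

    import tensorflow as tf

    def input_fn():
        dataset = tf.data.TFRecordDataset(["train.tfrecords"])  # hypothetical file

        def parse(record):
            features = tf.parse_single_example(
                record,
                {"image": tf.FixedLenFeature([], tf.string),
                 "label": tf.FixedLenFeature([], tf.int64)})
            image = tf.decode_raw(features["image"], tf.uint8)
            return {"image": image}, features["label"]

        return (dataset.map(parse, num_parallel_calls=4)
                       .shuffle(10000)
                       .batch(128)
                       .prefetch(1))

    # estimator.train(input_fn=input_fn)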

Implementing TensorFlow Attention OCR on iOS

家住魔仙堡 submitted on 2019-12-25 09:24:46
Question: I have successfully trained (using Inception V3 weights as initialization) the Attention OCR model described here: https://github.com/tensorflow/models/tree/master/attention_ocr and frozen the resulting checkpoint files into a graph. How can this network be implemented using the C++ API on iOS? Thank you in advance. Answer 1: As suggested by others, you can use some existing iOS demos (1, 2) as a starting point, but pay close attention to the following details: make sure you use the right tools to …
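
One graph-preparation step the answer likely alludes to is trimming the frozen graph for mobile. A minimal sketch using optimize_for_inference; the node names and file paths are assumptions (recover the real ones with summarize_graph or TensorBoard):

    import tensorflow as tf
    from tensorflow.python.tools import optimize_for_inference_lib

    graph_def = tf.GraphDef()
    with tf.gfile.GFile("frozen_attention_ocr.pb", "rb") as f:  # hypothetical path
        graph_def.ParseFromString(f.read())

    optimized = optimize_for_inference_lib.optimize_for_inference(
        graph_def,
        ["input_image"],       # assumed input node name
        ["predicted_chars"],   # assumed output node name
        tf.float32.as_datatype_enum)

    with tf.gfile.GFile("optimized_attention_ocr.pb", "wb") as f:
        f.write(optimized.SerializeToString())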

gRPC-only Tensorflow Serving client in C++

余生长醉 submitted on 2019-12-25 01:43:51
Question: There seems to be a bit of information out there for creating a gRPC-only client in Python (and even a few other languages), and I was able to successfully get a working client that uses only gRPC in Python for our implementation. What I can't seem to find is a case where someone has successfully written the client in C++. The constraints of the task are as follows: the build system cannot be bazel, because the final application already has its own build system. The client cannot …
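
For reference, the Python gRPC-only client the question mentions boils down to the sketch below; a C++ client mirrors the same PredictionService and PredictRequest protos. The model name and input alias are assumptions:

    import grpc
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    channel = grpc.insecure_channel("localhost:8500")  # TF Serving's gRPC port
    stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

    request = predict_pb2.PredictRequest()
    request.model_spec.name = "my_model"               # assumed model name
    request.model_spec.signature_name = "serving_default"
    request.inputs["inputs"].CopyFrom(                 # assumed input alias
        tf.make_tensor_proto([[1.0, 2.0]], dtype=tf.float32))

    response = stub.Predict(request, 10.0)  # 10-second timeout
    print(response.outputs)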

ML engine serving seems to not be working as intended

最后都变了- submitted on 2019-12-24 20:17:22
Question: While using the following code and running gcloud ml-engine local predict, I get: InvalidArgumentError (see above for traceback): You must feed a value for placeholder tensor 'Placeholder' with dtype string and shape [?] [[Node: Placeholder = Placeholder[dtype=DT_STRING, shape=[?], _device="/job:localhost/replica:0/task:0/device:CPU:0"]]] (Error code: 2). The code: tf_files_path = './tf' # os.makedirs(tf_files_path) # temp dir; estimator = tf.keras.estimator.model_to_estimator(keras_model_path="model_data …
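
That error typically means the exported serving signature expects a serialized-string input (e.g. tf.Example) that local predict is not supplying. One way out is to re-export with a raw-tensor serving input function; in the sketch below the feature name and shape are assumptions that must match the Keras model:

    import tensorflow as tf

    def serving_input_fn():
        # Raw-tensor receiver; "input" and the shape are assumptions.
        inputs = {"input": tf.placeholder(tf.float32, shape=[None, 224, 224, 3])}
        return tf.estimator.export.ServingInputReceiver(inputs, inputs)

    # estimator.export_savedmodel("./export", serving_input_fn)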

Nodejs Tensorflow Serving Client Error 3

廉价感情. submitted on 2019-12-24 19:17:56
Question: I'm serving a pre-trained Inception model, and I've followed the official tutorials to serve it up until now. I'm currently getting error code 3, as follows: { Error: contents must be scalar, got shape [305] [[Node: map/while/DecodeJpeg = DecodeJpeg[_output_shapes=[[?,?,3]], acceptable_fraction=1, channels=3, dct_method="", fancy_upscaling=true, ratio=1, try_recover_truncated=false, _device="/job:localhost/replica:0/task:0/device:CPU:0"](map/while/TensorArrayReadV3)]] at /server/node …
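
Shape [305] suggests the client sent the JPEG as an array of 305 individual bytes rather than one scalar string, which is what DecodeJpeg requires. A sketch of the distinction, shown in Python since the tensor proto is language-agnostic (the Node.js fix is the analogous change):

    import tensorflow as tf

    with open("image.jpg", "rb") as f:  # hypothetical image path
        jpeg_bytes = f.read()

    # Correct: the whole JPEG as one bytes object -> a single string element.
    ok = tf.make_tensor_proto([jpeg_bytes], dtype=tf.string, shape=[1])

    # Wrong: splitting the buffer into a list of one-byte values hands
    # DecodeJpeg a [305] vector and triggers "contents must be scalar".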

{ "error": "Serving signature name: \"serving_default\" not found in signature def" }

蓝咒 submitted on 2019-12-24 12:26:40
Question: I used GCP (Google Cloud Platform) to train my model, and I was able to export the trained model. I used the model with a local Docker image of TensorFlow Serving 1.8 (CPU), and I get the following result as the output of a REST POST call: { "error": "Serving signature name: \"serving_default\" not found in signature def" } Answer 1: View the SignatureDefs of your model using the saved_model_cli command as shown below: saved_model_cli show --dir /usr/local/google/home/abc/serving/tensorflow_serving/servables …
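
Once saved_model_cli reveals the actual signature name, you can either re-export the model under "serving_default" or pass the real name in the REST request body. A minimal sketch of the latter, with the model and signature names as assumptions:

    import json
    import requests

    body = {
        "signature_name": "predict",  # the name saved_model_cli actually reported
        "instances": [{"inputs": [1.0, 2.0, 3.0]}],
    }
    resp = requests.post(
        "http://localhost:8501/v1/models/my_model:predict",  # assumed model name
        data=json.dumps(body))
    print(resp.json())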