tensorflow-serving

How to input multiple features for TensorFlow model inference

◇◆丶佛笑我妖孽 submitted on 2021-01-29 12:26:39
Question: I'm trying a model-serving test, following the example at https://www.tensorflow.org/beta/guide/saved_model. That example works, but in my case the model has multiple input features:

    loaded = tf.saved_model.load(export_path)
    infer = loaded.signatures["serving_default"]
    print(infer.structured_input_signature)
    # => ((), {'input1': TensorSpec(shape=(None, 1), dtype=tf.int32, name='input1'),
    #          'input2': TensorSpec(shape=(None, 1), dtype=tf.int32, name='input2')})

In the example, for a single input …
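A minimal sketch of calling such a multi-input signature in TF 2.x, assuming the two int32 inputs shown above; each feature is passed as a keyword argument matching a signature key:

    import tensorflow as tf

    export_path = 'path/to/saved_model'  # hypothetical path; use your own export directory
    loaded = tf.saved_model.load(export_path)
    infer = loaded.signatures["serving_default"]
    # One tensor per named input; shape (1, 1) satisfies the (None, 1) specs above.
    result = infer(input1=tf.constant([[1]], dtype=tf.int32),
                   input2=tf.constant([[2]], dtype=tf.int32))
    print(result)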

TF model served using Docker & C++ inference client on Windows 10

你说的曾经没有我的故事 submitted on 2021-01-29 07:16:35
Question: I am trying to code up a C++ TensorFlow client to push images to a model served via the tensorflow/serving Docker image on Windows 10:

    docker run -p 8501:8501 --name tfserving_model_test --mount type=bind,source=D:/docker_test/model,target=/models/model -e MODEL_NAME=test_model -t tensorflow/serving

I am trying a simple piece of code that was part of the TF Serving example (resnet_client.cc), where I pass a black image:

    // Preparing required variables to make a predict request.
    PredictRequest predictRequest …
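Before debugging the C++ path, it can help to sanity-check the served model over TF Serving's REST API, which listens on the port 8501 mapped above. A Python sketch, assuming the model loads under the name test_model and accepts a ResNet-style (1, 224, 224, 3) float input (the input shape is an assumption, not taken from the question):

    import json
    import numpy as np
    import requests  # third-party: pip install requests

    # A single all-black image, shaped for a typical ResNet input.
    image = np.zeros((1, 224, 224, 3), dtype=np.float32).tolist()
    url = 'http://localhost:8501/v1/models/test_model:predict'
    response = requests.post(url, data=json.dumps({'instances': image}))
    print(response.json())

If this round trip succeeds, the problem is in the C++ client rather than in the served model.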

Run TensorFlow model in C++

烂漫一生 submitted on 2021-01-29 04:56:00
Question: I trained my model using tf.keras and convert it to '.pb' with:

    import os
    import tensorflow as tf
    from tensorflow.keras import backend as K
    K.set_learning_phase(0)
    from tensorflow.keras.models import load_model

    model = load_model('model_checkpoint.h5')
    model.save('model_tf2', save_format='tf')

This creates a folder 'model_tf2' with 'assets', 'variables', and saved_model.pb. I'm trying to load this model in C++. Referring to many other posts (mainly, Using Tensorflow checkpoint to restore …
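Before moving to C++, a quick sketch for verifying in Python which signatures and tensor names the exported SavedModel exposes, since the C++ loader needs those names; this assumes the 'model_tf2' folder produced above:

    import tensorflow as tf

    loaded = tf.saved_model.load('model_tf2')
    print(list(loaded.signatures.keys()))     # typically ['serving_default']
    infer = loaded.signatures['serving_default']
    print(infer.structured_input_signature)   # input names, shapes, dtypes
    print(infer.structured_outputs)           # output names, shapes, dtypes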

Alternative function for tf.contrib.layers.flatten(x) in TensorFlow

家住魔仙堡 submitted on 2021-01-29 00:12:50
Question: I am using TensorFlow 0.8.0 on a Jetson TK1 with CUDA 6.5 on a 32-bit ARM architecture, so I can't upgrade the TensorFlow version, and I am having trouble with the flatten function:

    x = tf.placeholder(dtype=tf.float32, shape=[None, 28, 28])
    y = tf.placeholder(dtype=tf.int32, shape=[None])
    images_flat = tf.contrib.layers.flatten(x)

The error I am getting at this point is:

    AttributeError: 'module' object has no attribute 'flatten'

Is there any alternative to this function that may be …
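A sketch of a drop-in alternative using tf.reshape, which already exists in TensorFlow 0.8: the -1 keeps the batch dimension dynamic, and 28 * 28 matches the placeholder shape above.

    import tensorflow as tf

    x = tf.placeholder(dtype=tf.float32, shape=[None, 28, 28])
    # Collapse everything but the batch dimension; equivalent to flatten here.
    images_flat = tf.reshape(x, [-1, 28 * 28])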

Convert TensorFlow model to pb

▼魔方 西西 submitted on 2021-01-27 19:02:02
Question: I have a pretrained model that I need to convert to pb. I have the following files in the folder:

    bert_config.json
    model.ckpt-1000.data
    model.ckpt-10000.index
    model.ckpt-1000.meta
    vocab.txt

How can I convert this to pb format? Thanks

Answer 1: You can freeze the model; see "TensorFlow: How to freeze a model and serve it with a python API":

    import os, argparse
    import tensorflow as tf
    # The original freeze_graph function
    # from tensorflow.python.tools.freeze_graph import freeze_graph
    dir = os.path.dirname(os …
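A hedged sketch of the freezing step in TF 1.x, assuming the checkpoint prefix model.ckpt-1000 from the file list above; the output node name is a placeholder you must replace with the real output op of your graph:

    import tensorflow as tf
    from tensorflow.python.framework import graph_util

    ckpt = 'model.ckpt-1000'             # checkpoint prefix from the question
    output_nodes = ['output_node_name']  # hypothetical; use your graph's real output ops

    saver = tf.train.import_meta_graph(ckpt + '.meta', clear_devices=True)
    with tf.Session() as sess:
        saver.restore(sess, ckpt)
        # Bake variable values into constants so the graph is self-contained.
        frozen = graph_util.convert_variables_to_constants(
            sess, sess.graph.as_graph_def(), output_nodes)
        with tf.gfile.GFile('frozen_model.pb', 'wb') as f:
            f.write(frozen.SerializeToString())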

TensorFlow v2: Replacement for tf.contrib.predictor.from_saved_model

∥☆過路亽.° submitted on 2020-12-31 04:59:02
Question: So far, I was using tf.contrib.predictor.from_saved_model to load a SavedModel (tf.estimator model class). However, this function has unfortunately been removed in TensorFlow v2. In TensorFlow v1, my code was the following:

    predict_fn = predictor.from_saved_model(model_dir + '/' + model, signature_def_key='predict')
    prediction_feed_dict = dict()
    for key in predict_fn._feed_tensors.keys():
        # forec_data is a DataFrame holding the data to be fed in
        for index in forec_data.index: …
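A sketch of the usual TF 2.x replacement, assuming the SavedModel exposes the same 'predict' signature used above; the public structured_input_signature stands in for the private _feed_tensors lookup:

    import tensorflow as tf

    saved_model_dir = 'path/to/saved_model'  # hypothetical; model_dir + '/' + model in the question
    loaded = tf.saved_model.load(saved_model_dir)
    predict_fn = loaded.signatures['predict']
    # The feed keys that used to come from predict_fn._feed_tensors:
    print(predict_fn.structured_input_signature)
    # Call with keyword arguments matching those keys, one tf.Tensor per key.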
