tensorflow-serving signature for an XOR

Submitted by 半城伤御伤魂 on 2019-12-08 03:59:46

Question


I am trying to export my first XOR NN using TensorFlow Serving, but I am not getting any result when I call it. Here is the code I use to build and export the XOR model:

import tensorflow as tf
sess = tf.Session()
from keras import backend as K
K.set_session(sess)
K.set_learning_phase(0)  # all new operations will be in test mode from now on

from tensorflow.python.saved_model import builder as saved_model_builder
from tensorflow.python.saved_model import tag_constants, signature_constants, signature_def_utils_impl

from keras.models import Sequential
from keras.layers.core import Dense, Dropout, Activation
from keras.optimizers import SGD
import numpy as np

model_version = "2"  # Change this to export different model versions, i.e. 2, ..., 7
epoch = 100  # the higher this number, the more accurate the prediction; 10000 is a good value, it just takes a while to train

#Exhaustion of Different Possibilities
X = np.array([
    [0,0],
    [0,1],
    [1,0],
    [1,1]
])

#Return values of the different inputs
Y = np.array([[0],[1],[1],[0]])

#Create Model
model = Sequential()
model.add(Dense(8, input_dim=2))
model.add(Activation('tanh'))
model.add(Dense(1))
model.add(Activation('sigmoid'))
sgd = SGD(lr=0.1)

model.compile(loss='binary_crossentropy', optimizer=sgd)
model.fit(X, Y, batch_size=1, epochs=epoch)  # nb_epoch was renamed to epochs in Keras 2

test = np.array([[0.0,0.0]])

#setting values for the sake of saving the model in the proper format
x = model.input
y = model.output

print('Results of Model', model.predict_proba(X))

prediction_signature = tf.saved_model.signature_def_utils.predict_signature_def({"inputs": x}, {"prediction": y})

valid_prediction_signature = tf.saved_model.signature_def_utils.is_valid_signature(prediction_signature)
if not valid_prediction_signature:
    raise ValueError("Error: Prediction signature not valid!")

builder = saved_model_builder.SavedModelBuilder('./'+model_version)
legacy_init_op = tf.group(tf.tables_initializer(), name='legacy_init_op')

# Add the meta_graph and the variables to the builder
builder.add_meta_graph_and_variables(
      sess, [tag_constants.SERVING],
      signature_def_map={
           signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:prediction_signature,
      },
      legacy_init_op=legacy_init_op)

# save the graph
builder.save()
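builder.save() writes the model under ./2/. TF Serving expects each version directory to contain a saved_model.pb file and a variables/ subdirectory; a small standard-library check (a hypothetical helper, not part of the original script) can confirm the layout before mounting the directory in Docker:

```python
import os

def looks_like_saved_model(export_dir):
    # TF Serving loads a version directory only if it holds a
    # saved_model.pb protobuf and a variables/ subdirectory.
    return (os.path.isfile(os.path.join(export_dir, "saved_model.pb"))
            and os.path.isdir(os.path.join(export_dir, "variables")))

print(looks_like_saved_model("./2"))
```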

Then I serve it with Docker:

docker run -p 8501:8501 --mount type=bind,source=/root/tensorflow3/projects/example/xor_keras_tensorflow_serving,target=/models/xor -e MODEL_NAME=xor -t tensorflow/serving &

and request a prediction with:

curl -d '{"inputs": [1,1]}' -X POST http://localhost:8501/v2/models/xor

The result is always:

<HTML><HEAD>
<TITLE>404 Not Found</TITLE>
</HEAD><BODY>
<H1>Not Found</H1>
</BODY></HTML>

Can you help me find where I went wrong? I have tried changing "inputs" to "instances" in the curl command, but nothing changed. Thanks, Manuel


Answer 1:


Can you first try

curl http://localhost:8501/v1/models/xor

to check if the model is running? This should return the status of your model.

From the RESTful API doc, the format is GET http://host:port/v1/models/${MODEL_NAME}[/versions/${MODEL_VERSION}]
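The GET format above can be sketched in Python; host, port, and model name are taken from the docker run command earlier (localhost:8501, model "xor"):

```python
# Build the TF Serving REST endpoints from their components.
host, port, model = "localhost", 8501, "xor"

# Status endpoint: returns the state of all loaded versions of the model.
status_url = "http://{}:{}/v1/models/{}".format(host, port, model)

# Optionally pin a specific model version via the path, not via "v1"/"v2".
versioned_url = "{}/versions/{}".format(status_url, 2)

print(status_url)
print(versioned_url)
```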




Answer 2:


Thanks! You got the point, and that solved it.

There were actually two errors in the curl command:

  1. localhost:8501/v1/models/xor — I had put v2, thinking it would select model version #2, but with v2 it does not work. The v1 in the URL is the API version, not the version of the saved model; to pin a model version, use /versions/${MODEL_VERSION} in the path instead.
  2. I also needed to append :predict, so the exact request is: http://localhost:8501/v1/models/xor:predict
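The corrected call has a Python equivalent using only the standard library. The "instances" key carries one input row per prediction; the POST itself is left commented out because it only works while the serving container is running:

```python
import json

# Corrected endpoint: API version v1, model name, then the :predict verb.
predict_url = "http://localhost:8501/v1/models/xor:predict"

# Row-format request body, equivalent to -d '{"instances": [[1.0, 1.0]]}'.
body = json.dumps({"instances": [[1.0, 1.0]]})

# import urllib.request
# req = urllib.request.Request(predict_url, data=body.encode(),
#                              headers={"Content-Type": "application/json"})
# print(urllib.request.urlopen(req).read())
```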


来源:https://stackoverflow.com/questions/53380386/tensorflow-serving-signature-for-an-xor
