Logging requests being served by a TensorFlow Serving model

你的背包 · 2021-01-02 07:53

I have built a model using TensorFlow Serving and also ran it on a server using this command:

    bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9009 --model_name=ETA_DNN_Regressor --model_base_path=//apps/node-apps/tensorflow-models-repository/ETA

After that the screen just stays stagnant: I cannot see any logs of the incoming requests/responses. How do I log/monitor the requests being served by the model?


        
2 Answers
  • 2021-01-02 08:09

    When you run the command below, you are starting a TensorFlow Model Server process that serves the model on a port (9009 here).

    bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9009 --model_name=ETA_DNN_Regressor --model_base_path=//apps/node-apps/tensorflow-models-repository/ETA
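
    As a quick sanity check (not part of the original answer; this assumes a Linux host with ss or lsof installed), you can confirm the server process is actually listening on that port:

    ss -ltnp | grep 9009      # or: lsof -i :9009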
    

    The server itself does not display logs here; it is simply running, which is why the screen looks stagnant. You need to add the flag -v=1 when you run the above command to display logs on your console:

    bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server -v=1 --port=9009 --model_name='model_name' --model_base_path=model_path

    Now, on to logging/monitoring of incoming requests/responses. You cannot monitor them when the VLOG level is set to 1 (VLOG stands for verbose logging). You need log level 3 to display all errors, warnings, and some informational messages related to processing times (INFO1 and STAT1). See this link for further details on verbose log levels: http://webhelp.esri.com/arcims/9.2/general/topics/log_verbose.htm
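
    For example, the same command with the verbosity raised to 3 (a sketch, assuming -v accepts higher levels; the model name and path are placeholders as before):

    bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server -v=3 --port=9009 --model_name='model_name' --model_base_path=model_path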

    Now, on to your second problem. Instead of setting flags, I would suggest using the environment variable provided by TensorFlow Serving:

    export TF_CPP_MIN_VLOG_LEVEL=3

    Set the environment variable before you start the server. Then enter the command below to start the server and store the logs in a logfile named my_log:

    bazel-bin/tensorflow_serving/model_servers/tensorflow_model_server --port=9009 --model_name='model_name' --model_base_path=model_path &> my_log &

    Even if you close your console, the logs keep being written to the file as long as the model server runs. Hope this helps.
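
    Once the server is running in the background, you can watch the log file as requests arrive with plain shell tooling (nothing TensorFlow-specific here):

    tail -f my_log    # follow new log lines as they are written
    jobs -l           # confirm the backgrounded server is still running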

  • 2021-01-02 08:21

    For rudimentary HTTP request logging, you can set TF_CPP_VMODULE=http_server=1 to raise the VLOG level just for the module http_server.cc. That gets you a very bare request log showing incoming requests and some basic error cases:

    2020-08-26 10:42:47.225542: I tensorflow_serving/model_servers/http_server.cc:156] Processing HTTP request: POST /v1/models/mymodel:predict body: 761 bytes.
    2020-08-26 10:44:32.472497: I tensorflow_serving/model_servers/http_server.cc:139] Ignoring HTTP request: GET /someboguspath
    2020-08-26 10:51:36.540963: I tensorflow_serving/model_servers/http_server.cc:156] Processing HTTP request: GET /v1/someboguspath body: 0 bytes.
    2020-08-26 10:51:36.541012: I tensorflow_serving/model_servers/http_server.cc:168] Error Processing HTTP/REST request: GET /v1/someboguspath Error: Invalid argument: Malformed request: GET /v1/someboguspath
    2020-08-26 10:53:17.039291: I tensorflow_serving/model_servers/http_server.cc:156] Processing HTTP request: GET /v1/models/someboguspath body: 0 bytes.
    2020-08-26 10:53:17.039456: I tensorflow_serving/model_servers/http_server.cc:168] Error Processing HTTP/REST request: GET /v1/models/someboguspath Error: Not found: Could not find any versions of model someboguspath
    2020-08-26 11:01:43.466636: I tensorflow_serving/model_servers/http_server.cc:156] Processing HTTP request: POST /v1/models/mymodel:predict body: 755 bytes.
    2020-08-26 11:01:43.473195: I tensorflow_serving/model_servers/http_server.cc:168] Error Processing HTTP/REST request: POST /v1/models/mymodel:predict Error: Invalid argument: Incompatible shapes: [1,38,768] vs. [1,40,768]
         [[{{node model/transformer/embeddings/add}}]]
    2020-08-26 11:02:56.435942: I tensorflow_serving/model_servers/http_server.cc:156] Processing HTTP request: POST /v1/models/mymodel:predict body: 754 bytes.
    2020-08-26 11:02:56.436762: I tensorflow_serving/model_servers/http_server.cc:168] Error Processing HTTP/REST request: POST /v1/models/mymodel:predict Error: Invalid argument: JSON Parse error: Missing a comma or ']' after an array element. at offset: 61
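
    To produce a request log like the excerpt above, the variable has to be set before the server starts. A minimal sketch, assuming the REST API is enabled through the standard --rest_api_port flag; the model name and paths are placeholders:

    export TF_CPP_VMODULE=http_server=1
    tensorflow_model_server --rest_api_port=8501 --model_name=mymodel --model_base_path=/models/mymodel &> request_log &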
    

    ... you can skim https://github.com/tensorflow/serving/blob/master/tensorflow_serving/model_servers/http_server.cc for occurrences of VLOG(1) << to see all logging statements in this module.
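
    A request that exercises the logged code path might look like this (the URL shape matches the /v1/models/mymodel:predict lines in the log above; the JSON payload is a placeholder for your model's actual input):

    curl -d '{"instances": [[1.0, 2.0, 3.0]]}' -X POST http://localhost:8501/v1/models/mymodel:predict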

    For gRPC there is probably a corresponding module that you can similarly enable VLOG for; I haven't gone looking for it.
