Question
I have created an Azure ML web service as an example and hit an unknown error when deploying it. The error comes with no explanation, so it is hard to trace.
When running the experiment within the studio, it completes without any issue. However, after deploying it as a web service, the test function fails with the same input that works in the studio.
I have also published a sample of the service to see if anyone can see what the issue is.
https://gallery.cortanaintelligence.com/Experiment/mywebservice-1
Some info about the service:
The service takes as input a string representing a sparse feature vector in svmlight format, and returns the predicted class for that vector. The test function fails when run against the deployed service, while the same experiment runs in the studio without any issue.
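For reference, the svmlight format encodes a sparse vector as a label followed by space-separated `index:value` pairs with 1-based indices and zero entries omitted. A minimal sketch (plain Python, no Azure dependency) of producing such a string from a dense vector:

```python
def to_svmlight(label, dense):
    """Encode a dense feature vector in svmlight sparse format:
    '<label> <index>:<value> ...' (1-based indices, zeros omitted)."""
    pairs = ["%d:%g" % (i + 1, v) for i, v in enumerate(dense) if v != 0]
    return " ".join([str(label)] + pairs)

print(to_svmlight(1, [0.0, 2.5, 0.0, 1.0]))  # -> "1 2:2.5 4:1"
```

Note that for a high-dimensional vector with many non-zero entries, this string can become very long, which matters for the timeout diagnosis in the answer below.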
Hope anyone has an idea how it went wrong.
Answer 1:
Using the test dialog means you are calling the request-response service, which is a real-time API with an HTTP timeout as the maximum time allowed to complete a request. Since the feature vector is too long, the request is timing out. Can you please try the batch execution service instead, as described below:
https://azure.microsoft.com/en-us/documentation/articles/machine-learning-consume-web-services/#batch-execution-service-bes
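To confirm the timeout hypothesis before switching, you can call the request-response endpoint with an explicit client-side timeout and see whether it trips on long feature strings. A hedged sketch using only the standard library; the endpoint URL, API key, and the `input1`/`features` names are placeholders, not taken from the question (the exact schema is on your service's API help page):

```python
import json
import urllib.request

def build_rrs_payload(feature_string):
    """Build the JSON body a classic Azure ML request-response service
    expects: a table of named inputs. 'input1' and 'features' are
    placeholder names; substitute the schema of your own service."""
    body = {
        "Inputs": {
            "input1": {
                "ColumnNames": ["features"],
                "Values": [[feature_string]],
            }
        },
        "GlobalParameters": {},
    }
    return json.dumps(body).encode("utf-8")

def call_rrs(url, api_key, feature_string, timeout_s=60):
    """POST to the endpoint; urlopen raises on timeout, which would
    support the diagnosis that long feature vectors exceed the limit."""
    req = urllib.request.Request(
        url,
        data=build_rrs_payload(feature_string),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,
        },
    )
    with urllib.request.urlopen(req, timeout=timeout_s) as resp:
        return json.loads(resp.read())
```

If this raises a timeout for your long vectors while short vectors succeed, the batch execution service is the right fix.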
Source: https://stackoverflow.com/questions/38135604/azure-machine-learning-web-service-input-data-issue