Question
I'm currently trying to write a demo for Actions on Google using a REST web service.
At the moment the user opens the action ("talk to testaction") and is presented with a welcome message (via the Main intent). This initial intent expects a user response and also sets the next expected intent via the possible_intents field in the JSON response.
According to the documentation, I should be able to specify a custom intent in possible_intents of my HTTP JSON response.
However, if I use any intent other than "assistant.intent.action.TEXT", then as soon as I respond to the initial prompt I get the following error:
Sorry, I did not understand.
And the response to the initial welcome intent is not properly routed to my service.
This does not work:
{
  "response": "...",
  "expectUserResponse": true,
  "conversationToken": "...",
  "audioResponse": "...",
  "debugInfo": {
    "agentToAssistantDebug": {
      "agentToAssistantJson": {
        "conversation_token": "...",
        "expect_user_response": true,
        "expected_inputs": [
          {
            "input_prompt": {
              [...]
            },
            "possible_intents": [
              {
                "intent": "testintent"
              }
            ]
          }
        ]
      }
    }
  }
}
This works:
{
  "response": "...",
  "expectUserResponse": true,
  "conversationToken": "...",
  "audioResponse": "...",
  "debugInfo": {
    "agentToAssistantDebug": {
      "agentToAssistantJson": {
        "conversation_token": "...",
        "expect_user_response": true,
        "expected_inputs": [
          {
            "input_prompt": {
              [...]
            },
            "possible_intents": [
              {
                "intent": "assistant.intent.action.TEXT"
              }
            ]
          }
        ]
      }
    }
  }
}
My testintent is properly defined in the actions package and works just fine if I call it directly.
Is it really only possible to use the generic TEXT intent, so that I then have to do all of the text matching and intent recognition myself in code?
Answer 1:
When using the Actions SDK, only the TEXT intent is supported. You have to use your own NLU to parse the raw text input provided by the user.
If you don't have your own NLU, then we recommend using API.AI.
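For illustration, here is a minimal sketch of that approach as a Python/Flask webhook. It always requests assistant.intent.action.TEXT for the next turn and routes the raw query itself with simple keyword matching. The raw_inputs[0].query request field and the initial_prompts prompt shape follow the v1 conversation protocol but are elided in the question's JSON, so treat them as assumptions:

from flask import Flask, request, jsonify

app = Flask(__name__)

def text_turn(speech):
    # Always ask for the generic TEXT intent on the next turn; custom
    # intents in possible_intents are rejected by the Assistant.
    return {
        "conversation_token": "",
        "expect_user_response": True,
        "expected_inputs": [{
            "input_prompt": {
                # Assumed v1 prompt shape; the question elides this part.
                "initial_prompts": [{"text_to_speech": speech}],
                "no_input_prompts": []
            },
            "possible_intents": [{"intent": "assistant.intent.action.TEXT"}]
        }]
    }

@app.route("/", methods=["POST"])
def webhook():
    body = request.get_json(force=True)
    inputs = body.get("inputs", [])
    # Assumed v1 request shape: the user's utterance arrives as raw text.
    query = ""
    if inputs and inputs[0].get("raw_inputs"):
        query = inputs[0]["raw_inputs"][0].get("query", "").lower()

    # Hand-rolled routing in place of "testintent": a crude keyword rule
    # stands in for a real NLU such as API.AI.
    if "test" in query:
        return jsonify(text_turn("Okay, running the test action. Anything else?"))
    return jsonify(text_turn("Sorry, I did not understand. Try saying test."))

In practice the keyword check would be replaced by a call to an NLU service; the point is only that every turn is captured via the TEXT intent and dispatched in your own code.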
Source: https://stackoverflow.com/questions/41427697/expectedinputs-possible-intents-only-works-with-assistant-intent-action-text