actions-on-google

How to use the Node.js client library with Express?

青春壹個敷衍的年華 submitted on 2019-12-11 13:33:54
Question: I have implemented a Dialogflow fulfillment without using the Node.js client library; I have been parsing requests and preparing responses manually. Now I want to start using the Node.js client library, as found here: https://actions-on-google.github.io/actions-on-google-nodejs/ I am using the Express framework as generated by the Express generator. Here is some of the contents of my app.js: var express = require('express'); var app = express(); var bodyParser = require('body-parser'); var …
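
The library's conversation app doubles as a standard Express request handler, so it can be mounted directly on a route. A minimal sketch, assuming a Dialogflow fulfillment and the default 'Default Welcome Intent' name; the '/fulfillment' path and port are arbitrary:

```javascript
const express = require('express');
const bodyParser = require('body-parser');
const {dialogflow} = require('actions-on-google');

const expressApp = express();
expressApp.use(bodyParser.json()); // the library expects a parsed JSON body

// The conversation app is itself an Express-compatible (req, res) handler.
const app = dialogflow();

app.intent('Default Welcome Intent', (conv) => {
  conv.ask('Welcome! What would you like to do?');
});

// Mount the conversation app as the POST handler for the webhook route.
expressApp.post('/fulfillment', app);

expressApp.listen(3000);
```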

Redirect the user to the Default Welcome Intent when they say "cancel" or "exit"

十年热恋 submitted on 2019-12-11 13:31:37
Question: I'm developing a Dialogflow application for the Google Assistant. If I say "Cancel", it directly calls the exit_conversation intent, where I've specified the actions_intent_CANCEL event. So it displays the output specified in that intent and the bot exits the conversation. Instead of exiting the bot, I need it to open the Default Welcome Intent. Is there any way to do that? P.S. I'm using a Python fulfillment as the backend for this bot.
Answer 1: In short - no, you can't do that. You're essentially asking that, …
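
Once actions_intent_CANCEL fires, the conversation is over; the handler may only send one final response, not route to another intent. A minimal Node.js sketch of that constraint (the question's backend is Python, but the behavior is the same, and the intent name is taken from the question):

```javascript
const {dialogflow} = require('actions-on-google');
const app = dialogflow();

// Handler for the intent tied to the actions_intent_CANCEL event.
// conv.ask() is not allowed here, since the mic is already closed;
// the best you can do is a farewell that tells the user how to return.
app.intent('exit_conversation', (conv) => {
  conv.close('Goodbye! Say "talk to my test app" any time to start again.');
});
```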

Simulator not “talking” in code lab: Build Actions for the Google Assistant (Level 1)

对着背影说爱祢 submitted on 2019-12-11 13:08:46
Question: If I follow the instructions for this code lab up until "Debug your Action" at the bottom of step 4, I get the bizarre behavior pictured below. Does anyone know why? Note that the display shows a question, but the audio never plays (look on the left to see its absence). I have reproduced this twice now, once yesterday and again today (re-creating the entire project from scratch following the code lab instructions each time). This appears to be a bug in the "phone" surface only. Switching to …

Create entities and training phrases for values in functions for a Google Action

匆匆过客 submitted on 2019-12-11 13:04:41
Question: I have created a trivia game using the SDK; it takes user input and then compares it to a value in my DB to see if it's correct. At the moment I am just passing a raw input variable through my conversation, which means it regularly fails when it mishears the user, since the exact string that was picked up is rarely == to the value in the DB. Specifically, I would like it to only pick up numbers and, for example, realise that it must extract '10' from a speech input of 'my answer is 10'. { …
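
Short of defining a proper number entity, the raw transcript can be normalized in the webhook itself. A minimal sketch, assuming an Actions SDK fulfillment with the Node.js client library (the question does not show its handler), that pulls the first integer out of the utterance:

```javascript
const {actionssdk} = require('actions-on-google');
const app = actionssdk();

// For actions.intent.TEXT, the second handler argument is the raw transcript.
app.intent('actions.intent.TEXT', (conv, input) => {
  const match = input.match(/-?\d+/); // first integer anywhere in the utterance
  if (!match) {
    return conv.ask('I need a number. What is your answer?');
  }
  const answer = parseInt(match[0], 10);
  // Compare the normalized number, not the raw string, against the DB value.
  conv.ask(`You answered ${answer}. Is that your final answer?`);
});
```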

Google Actions SDK Sign-In implicit flow

南笙酒味 submitted on 2019-12-11 12:03:17
Question: EDIT: On the phone Assistant it is working now; the problem only exists in the Google Actions simulator. I am trying to set up Google Actions SDK account linking with the implicit grant and to test it in the simulator. First question: is this even possible in the simulator? To do so, I added account linking of type implicit grant to my action in the Actions console. The URL I used is working. Now I added a sign-up request to my action. For testing, if I write signup in the simulator, the server responds with: { …
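
For reference, a minimal sketch of triggering account linking from an Actions SDK fulfillment with the Node.js client library; the context string and reply texts are invented, and the question's own server may be built differently:

```javascript
const {actionssdk, SignIn} = require('actions-on-google');
const app = actionssdk();

app.intent('actions.intent.TEXT', (conv, input) => {
  if (input === 'signup') {
    // Asks the Assistant to run the configured account linking flow.
    conv.ask(new SignIn('To save your progress'));
  }
});

// The Assistant reports the linking result via actions.intent.SIGN_IN.
app.intent('actions.intent.SIGN_IN', (conv, params, signin) => {
  if (signin.status === 'OK') {
    conv.ask('You are signed in. Thanks for linking your account!');
  } else {
    conv.ask('Sign-in failed or was cancelled. What next?');
  }
});
```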

Follow-up intent for NO_INPUT not firing with Dialogflow

久未见 submitted on 2019-12-11 11:03:46
Question: I have a "book reading" action, and I tried to add a follow-up intent to my read intent to reprompt the user if there was no response. Following the doc https://developers.google.com/actions/assistant/reprompts - my webhook never gets called. However, if I add the no-input handler as a main intent, I do get the event! Is this a bug, or did I miss something?
Answer 1: The no-input event is a little unusual, since it is handled differently internally compared to many other events. It would not …
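
As the answer suggests, attaching actions_intent_NO_INPUT to a top-level intent does work. A minimal sketch with the Node.js client library, assuming a Dialogflow intent (here named 'no.input') tied to that event; REPROMPT_COUNT and IS_FINAL_REPROMPT come from the reprompts doc linked above:

```javascript
const {dialogflow} = require('actions-on-google');
const app = dialogflow();

app.intent('no.input', (conv) => {
  // How many times in a row the user has said nothing.
  const repromptCount = parseInt(conv.arguments.get('REPROMPT_COUNT'), 10);
  if (conv.arguments.get('IS_FINAL_REPROMPT')) {
    // The platform will end the conversation after this response.
    conv.close('It seems you are gone. Goodbye!');
  } else if (repromptCount === 0) {
    conv.ask('Sorry, I did not hear you. Shall I keep reading?');
  } else {
    conv.ask('Are you still there?');
  }
});
```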

Can't get conv.data to save a parameter in Actions on Google

断了今生、忘了曾经 submitted on 2019-12-11 10:14:19
Question: As you can see in "Console Log from SaveLocation - AskLocationPermission", it doesn't save the location_type to conv.data. While user storage works great, I want to use conv.data and conv.user.storage correctly. You can see in the example conversation that the parameter is filled, but it doesn't get saved to conv.data. How it should work: when the user says they want to save this location as their home or work, it should look into conv.user.storage to check whether they have a home or work …
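
For comparison, a minimal sketch of the intended flow, reusing the intent and parameter names from the question ('SaveLocation', 'AskLocationPermission', 'location_type'); the question's actual handlers are not shown, so the details here are assumptions. conv.data lives only for the current conversation, while conv.user.storage persists across conversations:

```javascript
const {dialogflow, Permission} = require('actions-on-google');
const app = dialogflow();

app.intent('SaveLocation', (conv, {location_type}) => {
  // Stash the parameter so the next turn can read it back.
  conv.data.locationType = location_type;
  conv.ask(new Permission({
    context: `To save your ${location_type}`,
    permissions: 'DEVICE_PRECISE_LOCATION',
  }));
});

// Intent tied to the actions_intent_PERMISSION event.
app.intent('AskLocationPermission', (conv, params, granted) => {
  if (granted && conv.device.location) {
    // Read what the previous turn stored in conv.data, persist it long-term.
    conv.user.storage[conv.data.locationType] = conv.device.location.coordinates;
    conv.close(`Saved your ${conv.data.locationType}.`);
  } else {
    conv.close('I could not get your location.');
  }
});
```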

API.ai Actions on Google API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: “: Cannot find field.”

青春壹個敷衍的年華 submitted on 2019-12-11 09:52:56
Question: I am using Python to create a webhook for an Assistant app. I am able to ask the user for the location permission, but as soon as the user gives consent, I receive the following error: UnparseableJsonResponse API Version 2: Failed to parse JSON response string with 'INVALID_ARGUMENT' error: ": Cannot find field.". I have checked my webhook server and no request reaches it. This looks like an issue on the API.ai side. Below is the debug response from the Actions console when using the Python client: { "assistantToAgentDebug": …
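
One common cause of "Cannot find field" with "API Version 2" enabled is a webhook response still shaped like the old v1 format; this is a guess, not something the (truncated) entry confirms. A sketch of a v2-style permission request inside an API.ai (Dialogflow v1) webhook response, written as a JavaScript object for illustration; the context text is invented:

```javascript
// Field names follow the Actions on Google v2 migration docs: the permission
// request moves into data.google.systemIntent with a typed value spec.
const response = {
  speech: 'PLACEHOLDER_FOR_PERMISSION',
  data: {
    google: {
      expectUserResponse: true,
      systemIntent: {
        intent: 'actions.intent.PERMISSION',
        data: {
          '@type': 'type.googleapis.com/google.actions.v2.PermissionValueSpec',
          optContext: 'To find places near you',
          permissions: ['DEVICE_PRECISE_LOCATION'],
        },
      },
    },
  },
};
```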

What is meant by speech biasing and how to use speechBiasingHints in the Actions on Google AppResponse

瘦欲@ submitted on 2019-12-11 07:39:26
Question: I've built an app in Actions on Google using the Actions SDK. App request and app response work fine. Earlier I created the expectedInput section in AppResponse without using speechBiasingHints, but now I want to use it, and I can't find any information about speechBiasingHints. I need info on: What is meant by speech biasing? Can you provide an example of how to use speechBiasingHints?
Answer 1: Speech biasing influences the speech-to-text recognition. So you can, for example, add names and …
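
A minimal sketch of a raw Actions SDK AppResponse that sets speechBiasingHints alongside the expected input, sent back as the JSON body of the webhook response; the prompt and hint values are invented. The hints bias the recognizer toward words, such as unusual proper nouns, that it might otherwise mis-hear:

```javascript
const response = {
  expectUserResponse: true,
  expectedInputs: [{
    inputPrompt: {
      richInitialPrompt: {
        items: [{
          simpleResponse: {textToSpeech: 'Who is your favorite player?'},
        }],
      },
    },
    possibleIntents: [{intent: 'actions.intent.TEXT'}],
    // Up to a small list of phrases the speech-to-text engine should favor.
    speechBiasingHints: ['Zlatan', 'Mbappe', 'Haaland'],
  }],
};
```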

Actions on Google - What is the relationship between commands/devices/executions in the Google Home EXEC input and response?

≯℡__Kan透↙ submitted on 2019-12-11 07:30:17
Question: This question concerns the Actions on Google Smart Home documentation, Create a Smart Home App, specifically the action.devices.EXECUTE section. We are somewhat confused about the exact relationship between the list of 'Command' objects and their associated lists of devices and executions, especially regarding how these are translated into a response. Based on the documentation, we believe the intent is for commands to be processed in order: top to bottom. Per command, each execution is …
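
For orientation, a minimal sketch of the EXECUTE request and response shapes from the Smart Home docs; the device IDs and states are invented. Each command pairs a list of devices with a list of executions to apply to every device in that list, and the response groups device IDs by outcome:

```javascript
// Request: one command applying OnOff to two devices.
const request = {
  inputs: [{
    intent: 'action.devices.EXECUTE',
    payload: {
      commands: [{
        devices: [{id: '123'}, {id: '456'}],
        execution: [{
          command: 'action.devices.commands.OnOff',
          params: {on: true},
        }],
      }],
    },
  }],
};

// Response: device IDs that share a result are grouped under one entry,
// each with a status and the resulting device states.
const response = {
  requestId: 'ff36a3cc-ec34-11e6-b1a0-64510650abcf',
  payload: {
    commands: [{
      ids: ['123', '456'],
      status: 'SUCCESS',
      states: {on: true, online: true},
    }],
  },
};
```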