alexa-skill

Amazon Alexa dynamic variables for intent

Submitted on 2019-12-11 02:14:34
Question: I am trying to build an Alexa Skills Kit skill where a user can invoke an intent such as GetFriendLocation by saying something like "where is {Friend}", and for Alexa to recognize the variable Friend I have to define all the possible values in a LIST_OF_Friends file. But what if I do not know all the values for Friend and still would like to make a best match against the ones present in some service that my app has access to? Answer 1: Supposedly, if you stick a small dictionary into a slot (you can put up to 50,000 samples…
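The answer is cut off above, so the following is only a sketch of the general idea rather than the original answer: seed the custom slot with representative names, then treat whatever Alexa actually captured as free text and match it against your own service. It assumes the Node.js alexa-sdk v1 style that appears elsewhere on this page, and findFriendLocation is a hypothetical stand-in for that service:

```javascript
// Sketch: treat whatever Alexa captured in the Friend slot as free text and
// match it against your own data, since custom slot values are not a closed list.
const Alexa = require('alexa-sdk'); // assumes the v1 alexa-sdk package

// Hypothetical stand-in for the service the question mentions.
const friendLocations = { alice: 'the office', bob: 'the gym' };
function findFriendLocation(name) {
  return friendLocations[name.toLowerCase()] || null;
}

const handlers = {
  'GetFriendLocation': function () {
    const slot = this.event.request.intent.slots.Friend;
    const spokenName = slot && slot.value; // whatever Alexa heard, on the list or not

    if (!spokenName) {
      this.emit(':ask', 'Which friend do you mean?', "Please tell me a friend's name.");
      return;
    }

    const location = findFriendLocation(spokenName);
    this.emit(':tell', location
      ? `${spokenName} is at ${location}.`
      : `I could not find ${spokenName}.`);
  },
};

exports.handler = function (event, context) {
  const alexa = Alexa.handler(event, context);
  alexa.registerHandlers(handlers);
  alexa.execute();
};
```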

alexa skill user input for spelling out letters

Submitted on 2019-12-10 20:39:14
Question: I'd like Alexa to be able to accept a variable-length list of English letters in my custom skill, so that users can search based on a string. There are two steps to this: getting a representation for individual letters that Alexa can understand, and enumerating sample utterances with a variable number of letters. For the first, one way would be to define a custom slot whose enumerated values are the English alphabet: SLOT_LETTER ay bee see dee ee eff gee ... etc., but that feels hacky.
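The excerpt is cut off, so here is a hedged sketch rather than the original answer: defining the letter slot with entity-resolution synonyms lets users say "bee" while your code receives "b", and the variable length can be approximated with one sample utterance per supported length. The intent name and slot names below are made up for illustration; the fragment sits under interactionModel.languageModel in the skill's JSON model:

```json
{
  "types": [
    {
      "name": "SLOT_LETTER",
      "values": [
        { "name": { "value": "a", "synonyms": ["ay"] } },
        { "name": { "value": "b", "synonyms": ["bee"] } },
        { "name": { "value": "c", "synonyms": ["see", "cee"] } }
      ]
    }
  ],
  "intents": [
    {
      "name": "SpellSearchIntent",
      "slots": [
        { "name": "LetterOne", "type": "SLOT_LETTER" },
        { "name": "LetterTwo", "type": "SLOT_LETTER" },
        { "name": "LetterThree", "type": "SLOT_LETTER" }
      ],
      "samples": [
        "search for {LetterOne}",
        "search for {LetterOne} {LetterTwo}",
        "search for {LetterOne} {LetterTwo} {LetterThree}"
      ]
    }
  ]
}
```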

Alexa skill SSML max length

Submitted on 2019-12-10 12:37:06
Question: What is the maximum length, or what are the limits of, the SSML attribute in an Amazon Echo Alexa skill JSON response? "outputSpeech": { "type": "SSML", "ssml": "<speak>This output speech uses SSML.</speak>" } Answer 1: From the JSON interface reference, under "Response Format": This section documents the format of the response that your service returns. The service for an Alexa skill must send its response in JSON format. Note the following size limitations for the response: the outputSpeech response cannot exceed…
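The quoted limit is cut off above, so the number is not reproduced here; a defensive pattern (only a sketch, assuming your skill sometimes generates long text) is to trim the speech to a budget before building the response, and to look up the exact outputSpeech limit in the current JSON interface reference:

```javascript
// Assumed budget for illustration; check the Alexa JSON interface reference
// for the documented outputSpeech limit.
const MAX_SPEECH_CHARS = 7000;

function buildSsmlResponse(text) {
  // Naive truncation for illustration; production code should cut at a sentence
  // or tag boundary so the SSML stays well formed.
  const trimmed = text.length > MAX_SPEECH_CHARS ? text.slice(0, MAX_SPEECH_CHARS) : text;

  return {
    version: '1.0',
    response: {
      outputSpeech: {
        type: 'SSML',
        ssml: `<speak>${trimmed}</speak>`,
      },
      shouldEndSession: true,
    },
  };
}
```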

ASK CLI to deploy to different environments?

Submitted on 2019-12-10 09:36:40
Question: Is it possible to use the Alexa Skill Kit's ASK CLI deploy command to build, for example, a debug version of the app that deploys to a development environment and a release version that deploys to a test environment? My team and I are trying to deploy the same skill to two different environments, so our testing team can do their thing in the test environment and development can do theirs in the development environment. This will be a private skill, so using http://developer.amazon
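The excerpt stops before any answer, but one commonly used approach (a sketch, assuming you have already created one ASK CLI profile per environment) is to pass a profile to the deploy command; each profile then records its own skill ID in the project's .ask configuration, so the two environments stay separate:

```sh
# Sketch: one ASK CLI profile per environment (profile names are made up).
# Profiles are created with `ask init` (CLI v1) or `ask configure` (CLI v2).
ask deploy --profile dev    # deploys the skill associated with the "dev" profile
ask deploy --profile test   # deploys the skill associated with the "test" profile
```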

Alexa Skills Kit (ASK) and Utterances

Submitted on 2019-12-10 04:06:43
Question: I am developing a simple custom skill for Alexa. I have it up and running, hosting the handler on AWS Lambda. It works fine except for one thing. In the test UI, if I enter a valid utterance, e.g. help, cancel, swim, run (the last two are custom utterances), everything works well; however, if I enter a nonsense utterance, e.g. dsfhfdsjhf, the Alexa service always maps the nonsense to the first valid intent in the intent schema. In my Lambda code I have a handler for unknown intents; however, the…
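The excerpt stops before the accepted fix, so the following names a remedy plainly rather than quoting the original answer: add the built-in AMAZON.FallbackIntent to the interaction model so out-of-domain utterances like "dsfhfdsjhf" have somewhere to land, and handle it next to the catch-all Unhandled handler. A minimal sketch in the alexa-sdk v1 style:

```javascript
// Sketch: AMAZON.FallbackIntent must also be declared in the skill's intent schema.
// Register these handlers with Alexa.handler(event, context) as in a normal v1 skill.
const handlers = {
  'AMAZON.FallbackIntent': function () {
    this.emit(':ask',
      "Sorry, I didn't catch that. You can say swim, run, or help.",
      'You can say swim, run, or help.');
  },
  'Unhandled': function () {
    this.emit(':ask',
      "Sorry, I can't do that here. You can say swim, run, or help.",
      'You can say swim, run, or help.');
  },
};
```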

My custom slot type is taking on unexpected values

Submitted on 2019-12-08 21:59:40
Question: I noticed something strange when testing my interaction model with the Alexa Skills Kit. I defined a custom slot type like so: CAR_MAKERS Mercedes | BMW | Volkswagen and my intent schema was something like: { "intents": [ { "intent": "CountCarsIntent", "slots": [ { "name": "CarMaker", "type": "CAR_MAKERS" }, ... with sample utterances such as: CountCarsIntent Add {Amount} cars to {CarMaker} Now, when testing in the developer console, I noticed that I can write stuff like "Add three cars to…
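Custom slot types are not strict enumerations, so the usual defence (a sketch, assuming the slot values are defined in the interaction model so that entity-resolution data is included in the request) is to check the resolution status in the handler before trusting the value:

```javascript
// Sketch: only accept CarMaker values that actually matched the slot's value list.
function getResolvedCarMaker(intent) {
  const slot = intent.slots && intent.slots.CarMaker;
  if (!slot || !slot.value) {
    return null;
  }

  const authorities = slot.resolutions && slot.resolutions.resolutionsPerAuthority;
  const match = authorities && authorities.find(
    (authority) => authority.status && authority.status.code === 'ER_SUCCESS_MATCH'
  );

  // Returns null when Alexa captured something outside the defined list.
  return match ? match.values[0].value.name : null;
}
```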

Alexa Skill ARN - The remote endpoint could not be called, or the response it returned was invalid

Submitted on 2019-12-08 19:18:40
Question: I've created a simple Lambda function to call a webpage. This works fine when I test it from the Lambda functions page; however, when I try to create a skill that calls this function I end up with a "The remote endpoint could not be called, or the response it returned was invalid." error. Lambda function: var http = require('http'); exports.handler = function(event, context) { console.log('start request to ' + event.url) http.get(event.url, function(res) { console.log("Got response: " + res.statusCode);…
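The excerpt ends before the answer. This error usually just means Alexa did not get back a well-formed skill response. Below is a sketch of the minimal response shape a raw (non-SDK) Lambda handler has to return; note that the event an Alexa skill sends is a request envelope, not a custom { "url": ... } payload like the console test event, and any asynchronous work such as the http.get call has to finish before the response is handed back.

```javascript
// Sketch: the smallest well-formed response a custom skill endpoint can return.
// The incoming event is Alexa's request envelope (event.request, event.session, ...),
// not the custom { "url": ... } test payload from the Lambda console.
exports.handler = function (event, context) {
  const response = {
    version: '1.0',
    response: {
      outputSpeech: {
        type: 'PlainText',
        text: 'Hello from the skill endpoint.',
      },
      shouldEndSession: true,
    },
  };

  context.succeed(response);
};
```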

Amazon Alexa - How to create Generic Slot

Submitted on 2019-12-07 06:54:53
Question: How can I create a generic slot for an Alexa skill, so that I can create my own Todo app and have it recognise free-form text? Answer 1: The Alexa blog announced a List Skill API. As mentioned above, the literal slot type is no longer supported for new skills. If you create a custom slot with a number of values, depending on whether your expected response values are a single word or two or more words, Alexa will also catch spoken words that are not on the list and pass them to your skill. Transcription of these…
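Another option, not mentioned in the excerpt above, is the built-in AMAZON.SearchQuery phrase slot, which captures free-form text as long as every sample utterance wraps it in a carrier phrase. A hedged sketch of the languageModel fragment, with made-up intent and slot names:

```json
{
  "intents": [
    {
      "name": "AddTodoIntent",
      "slots": [
        { "name": "todoText", "type": "AMAZON.SearchQuery" }
      ],
      "samples": [
        "add {todoText} to my list",
        "put {todoText} on my list"
      ]
    }
  ]
}
```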

How to implement a next intent in alexa

Submitted on 2019-12-06 16:09:46
Question: How do you implement AMAZON.NextIntent in an Alexa skill? Suppose I have three audio streams (a1, a2, a3) enqueued and a1 is playing. If the user triggers AMAZON.NextIntent, what should the response be so that Alexa plays a2? Answer 1: The Alexa SDK gives you the ability to keep state in session attributes; for Node.js it's this.attributes. You can read more about this in the Skill State Management section. You can keep the current step in that attribute. Once your skill is started you can set the current step to "first" (or 1, or whatever). Once AMAZON.NextIntent is triggered, you check the state of…
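A sketch of that idea with the Node.js alexa-sdk v1 response builder; the playlist URLs and the PlayAudioIntent name are made up, and a real AudioPlayer skill would normally persist the index outside the session (for example with the SDK's DynamoDB support) because playback continues after the session ends:

```javascript
// Sketch: track the current track index and play the next stream on AMAZON.NextIntent.
const Alexa = require('alexa-sdk');

const playlist = [
  'https://example.com/audio/a1.mp3',
  'https://example.com/audio/a2.mp3',
  'https://example.com/audio/a3.mp3',
];

const handlers = {
  'PlayAudioIntent': function () {
    this.attributes.currentIndex = 0;
    playTrack.call(this, 0);
  },
  'AMAZON.NextIntent': function () {
    const next = (this.attributes.currentIndex || 0) + 1;
    if (next >= playlist.length) {
      this.emit(':tell', 'You have reached the end of the playlist.');
      return;
    }
    this.attributes.currentIndex = next;
    playTrack.call(this, next);
  },
};

function playTrack(index) {
  // REPLACE_ALL stops the current stream and starts the requested one.
  this.response.audioPlayerPlay('REPLACE_ALL', playlist[index], 'track-' + index, null, 0);
  this.emit(':responseReady');
}

exports.handler = function (event, context) {
  const alexa = Alexa.handler(event, context);
  alexa.registerHandlers(handlers);
  alexa.execute();
};
```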