azure-cognitive-services

The scale operation is not allowed for this subscription in this region. Try selecting different region or scale option

Submitted by 独自空忆成欢 on 2019-12-11 04:25:30

Question: I am on an Azure trial subscription, and when I try to create a QnA Maker service I get the error above. I have heard that trial subscriptions are not allowed in the West, West Central, or Brazil regions. However, I tried creating the service in the East and Central regions and it still fails. The detailed error is below: {"code":"DeploymentFailed","message":"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.","details":[{
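The error message itself suggests listing the deployment operations to find the inner failure. A minimal sketch of walking the nested `details` array of such an ARM error object to surface the innermost error codes; the sample object's inner entries are made up to resemble a regional capacity failure, since the real error is truncated above:

```javascript
// Walk the nested "details" array of an ARM DeploymentFailed error object
// and collect every error code, outermost first.
function collectErrorCodes(err, out = []) {
  if (!err) return out;
  if (err.code) out.push(err.code);
  for (const d of err.details || []) collectErrorCodes(d, out);
  return out;
}

// Hypothetical sample shaped like the truncated error in the question:
const sample = {
  code: "DeploymentFailed",
  message: "At least one resource deployment operation failed.",
  details: [
    { code: "ServiceError", details: [{ code: "InvalidTemplateDeployment" }] },
  ],
};

console.log(collectErrorCodes(sample));
// [ 'DeploymentFailed', 'ServiceError', 'InvalidTemplateDeployment' ]
```

The innermost code is usually the one that names the actual regional or SKU restriction.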

What audio formats are supported by Azure Cognitive Services' Speech Service (STT)?

Submitted by 风格不统一 on 2019-12-11 03:08:13

Question: Bearing in mind that the Microsoft/Azure Cognitive Services "Speech Service" is currently going through a rationalisation exercise, as far as I can tell from looking at https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/rest-apis#speech-to-text and https://docs.microsoft.com/en-us/azure/cognitive-services/speech/home, only .wav binaries are acceptable, with anything else giving the response: {"Message":"Unsupported audio format"}. Is there any other way to discover the
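At the time of the question, the speech-to-text REST API documentation described the expected input as 16 kHz, 16-bit, mono PCM WAV. A client-side sanity check of the header before uploading can save a round trip; this is a sketch that assumes the canonical 44-byte WAV header layout and will mis-read files with extra chunks before `fmt `:

```javascript
// Inspect a canonical WAV header for 16 kHz / 16-bit / mono PCM.
function checkWavHeader(buf) {
  if (buf.length < 44 ||
      buf.toString("ascii", 0, 4) !== "RIFF" ||
      buf.toString("ascii", 8, 12) !== "WAVE") {
    return { ok: false, reason: "not a RIFF/WAVE file" };
  }
  const audioFormat = buf.readUInt16LE(20);   // 1 = PCM
  const channels = buf.readUInt16LE(22);
  const sampleRate = buf.readUInt32LE(24);
  const bitsPerSample = buf.readUInt16LE(34);
  const ok = audioFormat === 1 && channels === 1 &&
             sampleRate === 16000 && bitsPerSample === 16;
  return { ok, audioFormat, channels, sampleRate, bitsPerSample };
}

// Build a matching header in memory to demonstrate:
const header = Buffer.alloc(44);
header.write("RIFF", 0, "ascii");
header.write("WAVE", 8, "ascii");
header.write("fmt ", 12, "ascii");
header.writeUInt16LE(1, 20);      // PCM
header.writeUInt16LE(1, 22);      // mono
header.writeUInt32LE(16000, 24);  // 16 kHz
header.writeUInt16LE(16, 34);     // 16-bit
console.log(checkWavHeader(header).ok); // true
```

Rejecting a file locally with a clear reason is friendlier than the service's bare "Unsupported audio format" message.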

CustomVision: Operation returned an invalid status code: 'NotFound'

Submitted by 与世无争的帅哥 on 2019-12-11 02:33:48

Question: I'm using the NuGet package Microsoft.Cognitive.CustomVision.Prediction version 1.2.0. I created one trial project and trained it with a few images. Now, when I try to call the API for a prediction using the PredictionEndpoint, the system throws a Microsoft.Rest.HttpOperationException. When I debug the code and inspect the exception, it says: {"Operation returned an invalid status code 'NotFound'"}. This is my code: var attachmentStream = await httpClient.GetStreamAsync(imageUrl);
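A 'NotFound' from the prediction endpoint usually points at the request target rather than the image: for example, a project ID that isn't the GUID shown in the Custom Vision portal, or an endpoint whose region doesn't match the project. A tiny pre-flight check along those lines (the function and its checks are illustrative, not part of the SDK):

```javascript
// Pre-flight validation before calling a prediction endpoint.
const GUID =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function validatePredictionCall(projectId, endpoint) {
  const problems = [];
  if (!GUID.test(projectId)) problems.push("projectId is not a GUID");
  if (!/^https:\/\//.test(endpoint)) problems.push("endpoint must be an https URL");
  return problems; // empty array means the basics look right
}

console.log(validatePredictionCall("not-a-guid", "http://example"));
// [ 'projectId is not a GUID', 'endpoint must be an https URL' ]
```

If both checks pass and the call still 404s, the usual remaining suspect is that no iteration has been published (or marked default) for prediction.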

Azure Speech to Text: Conversation Transcription UserId always returns $ref$

Submitted by 喜欢而已 on 2019-12-11 01:07:42

Question: I am using the sample code to transcribe a conversation, but on the recognized event I always get $ref$ when calling e.Result.UserId. I use 16-bit samples, a 16 kHz sample rate, and a single channel (mono) for voice signatures, and 32-bit samples, a 32 kHz sample rate, and a single channel (mono) for transcribing conversations. All code is from: https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-use-conversation-transcription-service. Are there any ideas? Or a .wav sample
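One thing worth ruling out (an assumption, not a confirmed fix) is a mismatch between the audio format used to build the voice signatures and the format the service expects, since a signature the service cannot use can leave every speaker unidentified. A trivial validator for the two formats exactly as stated in the question:

```javascript
// Required formats as stated in the question (not verified against
// current docs): signatures vs. the transcription stream.
const REQUIRED = {
  signature:     { bits: 16, rate: 16000, channels: 1 },
  transcription: { bits: 32, rate: 32000, channels: 1 },
};

function matchesRequirement(kind, fmt) {
  const want = REQUIRED[kind];
  return !!want && want.bits === fmt.bits &&
         want.rate === fmt.rate && want.channels === fmt.channels;
}

console.log(matchesRequirement("signature", { bits: 16, rate: 16000, channels: 1 })); // true
console.log(matchesRequirement("signature", { bits: 32, rate: 16000, channels: 1 })); // false
```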

Could not load file or assembly Bond.IO

Submitted by  ̄綄美尐妖づ on 2019-12-10 21:03:55
Question: Using the Microsoft.Bing.Speech NuGet package and .NET Framework 4.6.1, I'm getting this exception when calling RecognizeAsync(): Could not load file or assembly 'Bond.IO, Version=1.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040). My code: public static async Task SpeechToTextStreamPO(Stream audioStream, string textResult) { var subscriptionKey =
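The usual fix for this class of error ("the located assembly's manifest definition does not match the assembly reference") is an assembly binding redirect in app.config. A sketch, with the caveat that `newVersion` must match the Bond.IO version actually present in your packages folder; the 5.3.0.0 below is only an assumed example, and the name and publicKeyToken are taken from the exception text above:

```xml
<!-- app.config: redirect every older Bond.IO reference to the installed version. -->
<configuration>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="Bond.IO"
                          publicKeyToken="31bf3856ad364e35"
                          culture="neutral" />
        <!-- Replace 5.3.0.0 with the Bond.IO version you actually have installed. -->
        <bindingRedirect oldVersion="0.0.0.0-5.3.0.0" newVersion="5.3.0.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>
</configuration>
```

Enabling automatic binding redirects in the project file (`AutoGenerateBindingRedirects`) achieves the same thing without hand-editing the config.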

How does navigation work with LUIS subdialogs?

Submitted by 家住魔仙堡 on 2019-12-10 19:22:47

Question: I have a question... Unfortunately, all the samples on the web are too shallow and don't really cover this well. I have a RootDialog that extends LuisDialog. This RootDialog is responsible for figuring out what the user wants to do. It could be multiple things, but one of them would be initiating a new order. For this, the RootDialog would forward the call to the NewOrderDialog, and the responsibility of the NewOrderDialog would be to figure out some basic details (what does the user want
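A framework-agnostic sketch of the forward-and-resume pattern being described: the root dialog hands the message to a subdialog, and regains control, together with the subdialog's result, once the subdialog finishes. The object shapes below are invented for illustration; the real Bot Builder SDK implements this through its dialog stack (e.g. `context.Forward` in the C# SDK):

```javascript
// Minimal simulation of root-dialog -> subdialog navigation with resume.
const dialogs = {
  root: {
    handle(msg, resumeResult) {
      if (resumeResult) return { reply: `Order placed: ${resumeResult}` };
      if (msg.includes("order")) return { forwardTo: "newOrder", message: msg };
      return { reply: "How can I help?" };
    },
  },
  newOrder: {
    // The subdialog gathers details and "ends" with a result.
    handle(msg) {
      return { done: true, result: msg.replace("order", "").trim() };
    },
  },
};

function send(msg) {
  const action = dialogs.root.handle(msg);
  if (action.forwardTo) {
    const child = dialogs[action.forwardTo].handle(action.message);
    // When the child ends, the root resumes with the child's result.
    if (child.done) return dialogs.root.handle(null, child.result).reply;
  }
  return action.reply;
}

console.log(send("order pizza")); // "Order placed: pizza"
console.log(send("hi"));          // "How can I help?"
```

The key point the samples tend to gloss over is the resume step: the parent, not the child, decides what happens after the subdialog completes.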

LUIS - Can we use a phrase list for new values in a List entity type?

Submitted by 爷,独闯天下 on 2019-12-10 18:50:06

Question: I'm creating a LUIS chatbot app for extracting information about a company, for example "what is the filed_name1 for company Google". I'm currently extracting "filed_name1" using a list entity, as the number of fields for a company is limited. Similarly, I'm using a list entity for extracting the company name, as the company names are also limited for now. Now I want to handle the scenario where a new company name gets added to the existing list. I've tried using a "phrase list" to check if it can
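As a workaround sketch (not a LUIS feature): a closed-list entity only returns values and synonyms that are already in the list, so one option is to resolve the company name in your own code and flag unknown candidates for later addition to the list (for instance via the authoring API). The list, synonyms, and the word-after-"company" heuristic below are all made up for illustration:

```javascript
// Closed-list lookup with a fallback that captures unseen company names.
const companyList = {
  google: ["google", "alphabet"],
  microsoft: ["microsoft", "msft"],
};

function resolveCompany(text) {
  const words = text.toLowerCase().split(/\s+/);
  for (const [canonical, synonyms] of Object.entries(companyList)) {
    if (words.some((w) => synonyms.includes(w))) {
      return { company: canonical, known: true };
    }
  }
  // Hypothetical heuristic: treat the word after "company" as a new candidate.
  const i = words.indexOf("company");
  if (i >= 0 && words[i + 1]) return { company: words[i + 1], known: false };
  return null;
}

console.log(resolveCompany("what is the field1 for company google"));
// { company: 'google', known: true }
console.log(resolveCompany("what is the field1 for company acme"));
// { company: 'acme', known: false }
```

Unknown candidates (`known: false`) can be queued for review and then added to the list entity, after which LUIS will extract them directly.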

Microsoft Cognitive Services - Speech customization test processing seems frozen

Submitted by て烟熏妆下的殇ゞ on 2019-12-08 12:53:10

Question: I successfully uploaded data for speech customization (wav audio + txt transcription) for just one audio file in a zip file, according to the Microsoft docs: https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-custom-speech-test-data. When I click to add a test and choose the data, it takes an eternity to process the results and never stops processing. My audio is in the pt-BR model. Any ideas? I cannot interrupt or delete tests while they are processing. Answer 1: There is currently an issue in

Is the new MS Bot Builder Direct Line Speech a good fit for a call-center scenario?

Submitted by 試著忘記壹切 on 2019-12-08 10:04:16

Question: MS recently introduced the Direct Line Speech channel and some samples for a web front end that uses it. I was wondering: is it a good fit for a call-center scenario using SIP or a service like a Twilio phone number? If so, I would like to see some docs on how to use the Direct Line Speech API and wire it up to telephony. I've already created a GitHub issue, but it has received no attention: https://github.com/MicrosoftDocs/bot-docs/issues/1162. PS: I also have a related problem: I can't find any docs on how to exchange

How to integrate LUIS and QnA Maker services in single Node.js bot?

Submitted by 孤街浪徒 on 2019-12-07 05:32:00

Question: I'm developing a chatbot using the Microsoft Bot Framework with the Node.js SDK. I've integrated LUIS and QnA Maker, but I want to implement this scenario if possible. Taking as an example the following link, and in particular this section: There are a few ways that a bot may implement a hybrid of LUIS and QnA Maker: Call LUIS first, and if no intent meets a specific threshold score (i.e., the "None" intent is triggered), then call QnA Maker. Alternatively, create a LUIS intent for QnA Maker, feeding your
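The first option quoted above (call LUIS first, fall back to QnA Maker below a threshold) can be sketched as a small async router. The `luis` and `qna` functions here are stand-in stubs you would replace with real LUIS / QnA Maker REST calls, and the 0.5 threshold is an arbitrary example:

```javascript
// Route an utterance: LUIS first, QnA Maker as fallback.
async function route(utterance, { luis, qna, threshold = 0.5 }) {
  const { intent, score } = await luis(utterance);
  if (intent !== "None" && score >= threshold) {
    return { source: "luis", intent };
  }
  const answer = await qna(utterance);
  return { source: "qna", answer };
}

// Usage with stub clients:
const luisStub = async (u) =>
  u.includes("order") ? { intent: "NewOrder", score: 0.92 }
                      : { intent: "None", score: 0.1 };
const qnaStub = async () => "Our opening hours are 9-5.";

route("place an order", { luis: luisStub, qna: qnaStub }).then(console.log);
// { source: 'luis', intent: 'NewOrder' }
```

Keeping the two clients injectable like this also makes the routing logic easy to unit-test without hitting either service.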