Calling SpeechAPI for text to speech on Azure


Question


I have the following very basic TTS code running on my local server

using System.Speech.Synthesis;
...
SpeechSynthesizer reader = new SpeechSynthesizer();
reader.Speak("This is a test");

This code depends on System.Speech, for which I have added a reference in my VS 2015 project. It works fine locally, but from what I have read, and from trying it, I know it will not work when the code is hosted on Azure. I have read several posts on SO asking whether it is actually possible to do TTS on Azure; certainly two years ago it did not appear to be possible: How to get System.Speech on windows azure websites?

All roads seem to lead to the Microsoft Speech API: https://azure.microsoft.com/en-gb/marketplace/partners/speechapis/speechapis/. I have signed up and obtained my keys for calling into this API. My question is this: how do I actually call the Speech API? What do I have to change in the simple code example above so that it will work when running on Azure?


Answer 1:


The Speech API you referred to in the Azure Marketplace is part of a Microsoft AI project called Project Oxford, which offers an array of APIs for computer vision, speech, and language.

These are all RESTful APIs, meaning that you will be constructing HTTP requests to send to a hosted online service in the cloud, rather than relying on a local library such as System.Speech. The Speech API documentation is available here, and you can find sample code for various clients on GitHub. Specifically for C#, you can see some code in this sample project.
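
To make the "RESTful" part concrete, here is a rough sketch (mine, not taken from the sample project) of what one of these requests can look like with a plain HttpClient, assuming you already hold a valid access token. The endpoint URL, header names, SSML body, and voice name mirror the values used in the sample further down, but treat them as illustrative and verify them against the documentation:

    using System.Net.Http;
    using System.Text;
    using System.Threading.Tasks;

    class RawTtsRequest
    {
        static async Task<byte[]> SynthesizeAsync(string accessToken, string text)
        {
            // SSML payload: locale, gender and voice name are illustrative values;
            // see the documentation for the full list of supported voices.
            string ssml =
                "<speak version='1.0' xml:lang='en-US'>" +
                "<voice xml:lang='en-US' xml:gender='Female' " +
                "name='Microsoft Server Speech Text to Speech Voice (en-US, ZiraRUS)'>" +
                text +
                "</voice></speak>";

            using (var client = new HttpClient())
            {
                var request = new HttpRequestMessage(HttpMethod.Post,
                    "https://speech.platform.bing.com/synthesize")
                {
                    Content = new StringContent(ssml, Encoding.UTF8, "application/ssml+xml")
                };
                // Bearer token obtained from the authentication endpoint.
                request.Headers.Add("Authorization", "Bearer " + accessToken);
                // Requested audio format; same value the sample passes as AudioOutputFormat.
                request.Headers.Add("X-Microsoft-OutputFormat", "riff-16khz-16bit-mono-pcm");

                HttpResponseMessage response = await client.SendAsync(request);
                response.EnsureSuccessStatusCode();

                // The response body is the synthesized audio (RIFF/WAV in this case).
                return await response.Content.ReadAsByteArrayAsync();
            }
        }
    }

The sample project's Authentication and Synthesize classes wrap exactly this kind of plumbing for you, which is why the snippet below never deals with HTTP directly.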

Please note that Project Oxford is still in preview (beta). Additional support for using these APIs can be found on the Project Oxford MSDN forum.

But just to give you an idea of what your program will look like, here is a snippet taken from the code sample on GitHub mentioned above:

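        // AccessTokenInfo, Authentication, Synthesize, Gender and AudioOutputFormat
        // are helper types defined in the linked C# sample project, not part of the
        // .NET Framework itself.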
        AccessTokenInfo token;

        // Note: Sign up at http://www.projectoxford.ai for the client credentials.
        Authentication auth = new Authentication("Your ClientId goes here", "Your Client Secret goes here");

        ... 

        token = auth.GetAccessToken();

        ...

        string requestUri = "https://speech.platform.bing.com/synthesize";

        var cortana = new Synthesize(new Synthesize.InputOptions()
        {
            RequestUri = new Uri(requestUri),
            // Text to be spoken.
            Text = "Hi, how are you doing?",
            VoiceType = Gender.Female,
            // Refer to the documentation for complete list of supported locales.
            Locale = "en-US",
            // You can also customize the output voice. Refer to the documentation to view the different
            // voices that the TTS service can output.
            VoiceName = "Microsoft Server Speech Text to Speech Voice (en-US, ZiraRUS)",
            // The service can return audio in different output formats.
            OutputFormat = AudioOutputFormat.Riff16Khz16BitMonoPcm,
            AuthorizationToken = "Bearer " + token.access_token,
        });

        cortana.OnAudioAvailable += PlayAudio;
        cortana.OnError += ErrorHandler;
        cortana.Speak(CancellationToken.None).Wait();
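
The snippet wires up PlayAudio and ErrorHandler but does not show them. Here is a minimal sketch of what they might look like, assuming the event argument type GenericEventArgs<T> from the sample project; since a web app running on Azure has no sound device, this version writes the returned audio to a file instead of playing it:

    // Requires: using System; using System.IO;
    // GenericEventArgs<T> is assumed to be the event argument type raised by the
    // sample project's Synthesize class; adjust the signatures to match your copy.

    // Called when the service has returned the synthesized audio stream.
    private static void PlayAudio(object sender, GenericEventArgs<Stream> args)
    {
        // A web/worker role on Azure has no sound card, so instead of playing the
        // audio, persist it (or write it to the HTTP response as audio/wav).
        using (var file = File.Create("speech.wav"))
        {
            args.EventData.CopyTo(file);
        }
        args.EventData.Dispose();
    }

    // Called if the request fails (bad token, unsupported voice, network error, ...).
    private static void ErrorHandler(object sender, GenericEventArgs<Exception> e)
    {
        Console.WriteLine("Unable to complete the TTS request: [{0}]", e.EventData);
    }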


Source: https://stackoverflow.com/questions/35965060/calling-speechapi-for-text-to-speech-on-azure
