AVSpeechSynthesizer

Instruments reporting memory leak whenever AVSpeechSynthesizer is used to read text

送分小仙女 submitted on 2019-12-23 19:29:48
Question: Every time I use AVSpeechSynthesizer to speak text, Instruments reports a memory leak in the AXSpeechImplementation library. Here's the code I'm using to make the call:

    AVSpeechUtterance *speak = [AVSpeechUtterance speechUtteranceWithString:text];
    speak.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"];
    speak.rate = AVSpeechUtteranceMaximumSpeechRate * .2;
    [m_speechSynth speakUtterance:speak];

Here's the link to the Instruments screenshot: http://imageshack.com/a/img690/7993/b9w5.png
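A commonly suggested mitigation (not confirmed by the question itself, since the leak appears inside Apple's AXSpeechImplementation) is to keep one long-lived synthesizer instead of creating objects per call. A minimal Swift sketch of that pattern, assuming an app-lifetime `Speaker` helper:

```swift
import AVFoundation

final class Speaker {
    // Keep a single synthesizer alive for the app's lifetime;
    // recreating one per utterance increases object churn around
    // the speech frameworks and makes leak reports harder to read.
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        let utterance = AVSpeechUtterance(string: text)
        utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        utterance.rate = AVSpeechUtteranceMaximumSpeechRate * 0.2
        synthesizer.speak(utterance)
    }
}
```

If the leak is genuinely inside the system library, this only reduces the noise; it does not patch the framework.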

How to consistently stop AVAudioSession after AVSpeechUtterance

北城以北 submitted on 2019-12-23 09:40:46
Question: What I want to do is allow my app to speak an utterance using AVSpeechSynthesizer while background audio apps are playing audio. While my app is speaking, I'd like the background apps' audio to "dim" and then return to its original volume after my app has finished speaking. In my AudioFeedback class, I set up the AVAudioSession like so:

    self.session = [AVAudioSession sharedInstance];
    NSError *error;
    [self.session setCategory:AVAudioSessionCategoryPlayback withOptions
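The usual shape of the answer to this question is to duck other audio via the session category options, and deactivate the session only from the synthesizer's `didFinish` delegate callback, passing `.notifyOthersOnDeactivation` so background apps restore their volume. A hedged Swift sketch (the class name mirrors the question's `AudioFeedback`):

```swift
import AVFoundation

final class AudioFeedback: NSObject, AVSpeechSynthesizerDelegate {
    private let session = AVAudioSession.sharedInstance()
    private let synthesizer = AVSpeechSynthesizer()

    override init() {
        super.init()
        synthesizer.delegate = self
        // .duckOthers lowers other apps' audio while this session is active.
        try? session.setCategory(.playback, options: [.duckOthers])
    }

    func speak(_ text: String) {
        try? session.setActive(true)
        synthesizer.speak(AVSpeechUtterance(string: text))
    }

    // Deactivate only after speech actually finishes; notifying others
    // on deactivation is what lets their audio come back to full volume.
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        try? session.setActive(false, options: [.notifyOthersOnDeactivation])
    }
}
```

Deactivating the session anywhere earlier (for example right after calling `speak`) is unreliable because `speak` returns before the audio plays.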

Swift: iPhone's volume is low when trying to change speech to iPhone's voice in swift

谁说我不能喝 submitted on 2019-12-21 04:25:28
Question: I am trying a speech recognition sample. I start recognising my speech via the microphone, then have the iPhone speak the recognised text back. It works, but the voice is too low. Can you guide me on this? By contrast, if I run the same AVSpeechUtterance code from a simple button action, the volume is normal; after I call startRecognise(), the volume is too low. My code:

    func startRecognise() {
        let audioSession = AVAudioSession.sharedInstance() //2
        do {
            try audioSession
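A frequent cause of this symptom is that speech recognition puts the session in `.playAndRecord`, which routes playback to the quiet earpiece (receiver) instead of the loudspeaker. A sketch of the usual fix, assuming the question's `startRecognise()` sets up the session:

```swift
import AVFoundation

func startRecognise() throws {
    let audioSession = AVAudioSession.sharedInstance()
    // .playAndRecord defaults output to the earpiece, which sounds
    // very quiet; .defaultToSpeaker routes it to the loudspeaker.
    try audioSession.setCategory(.playAndRecord,
                                 mode: .default,
                                 options: [.defaultToSpeaker])
    try audioSession.setActive(true)
}
```

This is a sketch of the common workaround, not the asker's confirmed solution; the rest of the recognition setup is unchanged.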

AVSpeechSynthesizer with ios8

六眼飞鱼酱① submitted on 2019-12-20 10:55:15
Question: Hi, has anyone tried AVSpeechSynthesizer on iOS 8? I made a quick app in Xcode 6 but no audio comes out; I ran the same one in Xcode 5 and it works without a hitch. Sample code from http://nshipster.com/avspeechsynthesizer/:

    NSString *string = @"Hello, World!";
    AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:string];
    utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"];
    AVSpeechSynthesizer *speechSynthesizer = [[AVSpeechSynthesizer alloc] init];

Using AVSpeechSynthesizer to read a description of location on a Map

大城市里の小女人 submitted on 2019-12-20 04:31:09
Question: My map has 4 or 5 points close to each other, and right now, using AVSpeechSynthesizer, it says the name of the location (which is also displayed in a little bubble). I want it to still show that bubble, but when the bubble is tapped I want it to speak a description of that place that I would have specified. This is my code at the moment:

MapViewAnnotation.h

    @interface MapViewAnnotation : NSObject <MKAnnotation> {
        NSString *title;
        CLLocationCoordinate2D coordinate;
        NSString *desc;
    }
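One way to wire this up is to speak the annotation's `desc` from the map delegate's callout-tap callback. A Swift sketch under that assumption (the `MapViewAnnotation` class here is a minimal stand-in for the question's Objective-C one):

```swift
import MapKit
import UIKit
import AVFoundation

// Minimal Swift counterpart of the question's MapViewAnnotation,
// carrying the extra `desc` string to be spoken.
final class MapViewAnnotation: NSObject, MKAnnotation {
    let title: String?
    let coordinate: CLLocationCoordinate2D
    let desc: String

    init(title: String, coordinate: CLLocationCoordinate2D, desc: String) {
        self.title = title
        self.coordinate = coordinate
        self.desc = desc
    }
}

final class MapSpeechDelegate: NSObject, MKMapViewDelegate {
    private let synthesizer = AVSpeechSynthesizer()

    // Fired when the callout bubble's accessory control is tapped;
    // speak the stored description instead of the title.
    func mapView(_ mapView: MKMapView,
                 annotationView view: MKAnnotationView,
                 calloutAccessoryControlTapped control: UIControl) {
        guard let annotation = view.annotation as? MapViewAnnotation else { return }
        synthesizer.speak(AVSpeechUtterance(string: annotation.desc))
    }
}
```

For this callback to fire, the annotation view needs `canShowCallout = true` and a `rightCalloutAccessoryView` (e.g. a detail-disclosure button) set in `mapView(_:viewFor:)`.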

Why can't I control the Apple macOS Speech Synthesis audio unit with slider values?

与世无争的帅哥 submitted on 2019-12-19 17:34:24
Question: I'm working to incorporate Apple's speech synthesis audio unit (which works only on macOS, not iOS) into AudioKit. I've built an AKSpeechSynthesizer class (initially created by wangchou in this pull request) and a demo project, both available on the develop branch of AudioKit. My project is very similar to this Cocoa Speech Synthesis Example, but in that project the rate variable can be changed and varied smoothly between a low number of words per minute (40) and a high number (around 300).

Objective C: Wait for AVSpeechSynthesizer until it finishes speaking a word

一世执手 submitted on 2019-12-13 07:47:45
Question: I need to disable interaction with the user until the app finishes speaking. See the example code below:

    self.view.userInteractionEnabled = NO;
    [self speak:@"wait for me to speak"];
    self.view.userInteractionEnabled = YES;

    -(void)speak:(NSString*)word {
        AVSpeechUtterance *utterance = [[AVSpeechUtterance alloc] initWithString:word];
        utterance.rate = AVSpeechUtteranceMinimumSpeechRate;
        utterance.rate = 0.2f;
        utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:[AVSpeechSynthesisVoice
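The code above re-enables interaction immediately, because `speakUtterance:` returns before any audio plays. The standard answer is to re-enable interaction from the synthesizer's `didFinish` delegate callback. A hedged Swift sketch of that pattern:

```swift
import UIKit
import AVFoundation

final class SpeakingViewController: UIViewController, AVSpeechSynthesizerDelegate {
    private let synthesizer = AVSpeechSynthesizer()

    func speakBlockingInteraction(_ word: String) {
        synthesizer.delegate = self
        view.isUserInteractionEnabled = false    // disable before speaking
        let utterance = AVSpeechUtterance(string: word)
        utterance.rate = AVSpeechUtteranceMinimumSpeechRate
        synthesizer.speak(utterance)             // returns immediately, speaks async
    }

    // Called by the synthesizer only when the utterance has actually
    // finished playing; this is where interaction comes back.
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        view.isUserInteractionEnabled = true
    }
}
```

Blocking the thread with a loop instead of using the delegate would freeze the UI, so the callback approach is preferred.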

AVSpeechRecognizer: required condition is false: _recordingTap == nil error in Swift3

吃可爱长大的小学妹 submitted on 2019-12-12 17:50:56
Question: I have no idea why I get this error. The error is: Terminating app due to uncaught exception 'com.apple.coreaudio.avfaudio', reason: 'required condition is false: _recordingTap == nil'. UPDATE: Actually, this works, but after several runs the button suddenly becomes disabled and the mic no longer responds. Then the error occurs and the app crashes. Could you please help me with this?

    class ViewController: UIViewController, SFSpeechRecognizerDelegate, UITextViewDelegate,
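This exception is typically raised when `installTap(onBus:...)` is called on an input node that already has a tap installed. A common fix, sketched here under the assumption of a standard SFSpeechRecognizer setup, is to remove any existing tap before installing a new one:

```swift
import Speech
import AVFoundation

func startRecording(engine: AVAudioEngine,
                    request: SFSpeechAudioBufferRecognitionRequest) throws {
    let inputNode = engine.inputNode

    // Installing a second tap on the same bus raises
    // "required condition is false: _recordingTap == nil",
    // so clear any leftover tap from a previous run first.
    inputNode.removeTap(onBus: 0)

    let format = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
        request.append(buffer)
    }

    engine.prepare()
    try engine.start()
}
```

Symmetrically, calling `removeTap(onBus: 0)` and `engine.stop()` when recognition ends keeps repeated start/stop cycles from accumulating taps.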

Localizing the Reading of Emoji on iOS 10.0 or Higher

一曲冷凌霜 submitted on 2019-12-12 10:59:20
Question: I've noticed an issue where iOS does not seem to localize the reading (by AVSpeechSynthesizer) of emoji on iOS 10.0 or higher, but it does so properly on iOS 9.3 or lower. If you tell an AVSpeechSynthesizer that's set to English to speak an emoji by sending it the string "😀", it will say "Grinning face with normal eyes." When you change the voice language of the synth to anything other than English, such as French, for example, and send the same emoji, it should say "Visage

AVSpeechSynthesizer word stress

筅森魡賤 submitted on 2019-12-12 03:18:04
Question: I'm using AVSpeechSynthesizer for text-to-speech in 2 languages. Is there any way to specify stress on different parts of a word? I've tried placing ' before and after the desired vowel, and also using vowels with stress marks, e.g. ó, ý, which does not seem to have any impact.

Answer 1: In my case (language code ru-RU), the ` character sets the word stress.

Source: https://stackoverflow.com/questions/32736130/avspeechsynthesizer-word-stress
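The answer's backtick trick can be sketched in Swift as follows. Note this is undocumented behavior reported for ru-RU voices only; whether other languages honor it is not established by the thread:

```swift
import AVFoundation

// "замок" is stressed differently for "castle" vs "lock";
// per the answer, a backtick before the stressed vowel marks
// the stress for ru-RU voices (undocumented behavior).
let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "зам`ок")
utterance.voice = AVSpeechSynthesisVoice(language: "ru-RU")
synthesizer.speak(utterance)
```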