Problem:
I have a UITextField side by side with a UIButton that provides send functionality. When the user presses the send button, I perform this simple action:
- (IBAction)sendMessage:(id)sender {
    [self.chatService sendMessage:self.messageTextField.text];
    self.messageTextField.text = @""; // here I get the exception
}
Now, when the user starts dictation from the keyboard, then presses Done on the dictation view (keyboard) and immediately presses the send button, I get the exception "Range or index out of bounds".
Possible solution:
I've noticed that other applications disable this "send" button while the speech recognition server is processing data, i.e. exactly between two events: the user pressing "Done" and the results appearing in the text field. I wish to solve it in the same manner.
I'm having trouble finding in the documentation where such a notification can be received. I've found the UITextInput protocol, but this is not what I need.
Similar topics:
- Using Dictation - iOS 6 - DidStart - solution not acceptable (might be rejected by Apple)
- Disable Dictation button on the keyboard of iPhone 4S / new iPad - a similar approach to the above
What have I tried:
- Simply catching and ignoring the exception. The crash didn't occur, but the virtual keyboard became completely unresponsive (a sketch of this attempt follows the list).
- Disabling the send button when [UITextInputMode currentInputMode].primaryLanguage is equal to @"dictation" (also sketched below). The notification UITextInputCurrentInputModeDidChangeNotification, which reports the end of dictation mode, arrives before the dictation service commits the new value, so I'm still able to click the send button and cause the exception. I could add a delay when primaryLanguage loses its @"dictation" value, but I don't like this approach; most probably the required delay depends on how responsive the speech recognition service is.
- Adding a bunch of actions for different events (these events looked promising: UIControlEventEditingDidBegin, UIControlEventEditingChanged, UIControlEventEditingDidEnd, UIControlEventEditingDidEndOnExit). The good thing is that UIControlEventEditingChanged seems to fire at exactly the desired moments: when the user presses "Done" on the dictation view and when the service commits or ends dictation, so this is my best concept so far. The bad thing is that it fires in other cases too, and there is no information to distinguish which case triggered it, so I don't know whether I should disable the button, enable it, or do nothing.
I finally found the ultimate solution.
It is simple and elegant, it will pass Apple review, and it always works. Just react to UIControlEventEditingChanged and detect the existence of the replacement character, like this:
-(void)viewDidLoad {
    [super viewDidLoad];
    [self.textField addTarget:self
                       action:@selector(eventEditingChanged:)
             forControlEvents:UIControlEventEditingChanged];
}

-(IBAction)eventEditingChanged:(UITextField *)sender {
    // U+FFFC is the object replacement character that stands in for the
    // dictation result while it is still being processed.
    NSRange range = [sender.text rangeOfString:@"\uFFFC"];
    self.sendButton.enabled = range.location == NSNotFound;
}
Old approach
Finally I found a solution. This is improved concept no. 3 mixed with concept no. 2 (based on that answer).
-(void)viewDidLoad {
    [super viewDidLoad];
    [self.textField addTarget:self
                       action:@selector(eventEditingChanged:)
             forControlEvents:UIControlEventEditingChanged];
}

-(IBAction)eventEditingChanged:(UITextField *)sender {
    NSString *primaryLanguage = [UITextInputMode currentInputMode].primaryLanguage;
    if ([primaryLanguage isEqualToString:@"dictation"]) {
        self.sendButton.enabled = NO;
    } else {
        // restore normal text field state
        self.sendButton.enabled = self.textField.text.length > 0;
    }
}
- (IBAction)sendMessage:(id)sender {
    [self.chatService sendMessage:self.messageTextField.text];
    self.messageTextField.text = @"";
}

- (BOOL)textFieldShouldReturn:(UITextField *)textField {
    if (self.textField.text.length == 0 || !self.sendButton.enabled) {
        return NO;
    }
    [self sendMessage:textField];
    return YES;
}

// other UITextFieldDelegate methods ...
Now the problem no longer appears, since the user is blocked during the window when it could happen (exactly between the user pressing the "Done" button on the dictation view and the results arriving from the speech recognition service).
The good thing is that only public API is used (only the @"dictation" string could be a problem, but I think it should be accepted by Apple).
In iOS 7 Apple introduced TextKit, so there is new information for this question: NSAttachmentCharacter = 0xfffc is "used to denote an attachment", as the documentation says.
So, if your deployment target is 7.0 or later, a better approach is to check the attributed string for attachments.
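For example, the editing-changed handler could look something like this (a rough sketch on top of the same eventEditingChanged: target and sendButton outlet as above):

-(IBAction)eventEditingChanged:(UITextField *)sender {
    __block BOOL hasAttachment = NO;
    NSAttributedString *text = sender.attributedText;
    [text enumerateAttribute:NSAttachmentAttributeName
                     inRange:NSMakeRange(0, text.length)
                     options:0
                  usingBlock:^(id value, NSRange range, BOOL *stop) {
        if (value != nil) {
            // The dictation placeholder shows up as a text attachment.
            hasAttachment = YES;
            *stop = YES;
        }
    }];
    self.sendButton.enabled = !hasAttachment;
}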
Source: https://stackoverflow.com/questions/24041181/how-to-detect-that-speech-recogntion-is-in-progress