Drawing on iPhone

有刺的猬 2020-12-10 19:07

I am trying to develop an iPhone application for children that lets them draw characters on screen with touch, and I want to know how to match the drawn character against a predefined set of characters.

3 Answers
  • 2020-12-10 19:19

    Using GLGestureRecognizer you can create a catalogue of gestures; it computes a distance metric between an input point array and an "alphabet" of templates you predefine.

    GLGestureRecognizer is an Objective-C implementation of the $1 Unistroke Recognizer, a simple gesture recognition algorithm (see Credits below). It is made available here in the form of an iPhone application project. It was implemented over the course of a couple evenings in late April 2009 by Adam Preble.

    A demo iPhone project (Gestures.xcodeproj) is provided; a UIView subclass receives touch events and sends them to the GLGestureRecognizer class while drawing the touched path in white. Once the gesture is completed, the resampled gesture is shown in green, its center at the red dot, along with the name of the best match, score (lower is better), and gesture orientation. A sample size of 16 points is used in the example, which seems to be adequate for very basic shapes.
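
    To give a feel for how this works (this is not GLGestureRecognizer's actual API, just a minimal Swift sketch of the resample-and-compare idea behind the $1 recognizer): the stroke is resampled to 16 points, translated so its centroid sits at the origin, and scored against each predefined template by average point-to-point distance, lower being better. The real algorithm also normalizes scale and rotation, which this sketch skips; Template, resample and bestMatch are illustrative names.

        import CoreGraphics

        // A predefined "alphabet" entry; points are assumed to be already
        // resampled and centred the same way as the input stroke.
        struct Template {
            let name: String
            let points: [CGPoint]
        }

        func distance(_ a: CGPoint, _ b: CGPoint) -> CGFloat {
            let dx = a.x - b.x, dy = a.y - b.y
            return (dx * dx + dy * dy).squareRoot()
        }

        // Resample the stroke to a fixed number of points, evenly spaced along its path.
        func resample(_ points: [CGPoint], to count: Int = 16) -> [CGPoint] {
            guard points.count > 1, count > 1 else { return points }
            var cumulative: [CGFloat] = [0]
            for i in 1..<points.count {
                cumulative.append(cumulative[i - 1] + distance(points[i - 1], points[i]))
            }
            let total = cumulative.last!
            guard total > 0 else { return Array(repeating: points[0], count: count) }
            var result: [CGPoint] = []
            var j = 1
            for k in 0..<count {
                let target = total * CGFloat(k) / CGFloat(count - 1)
                while j < points.count - 1 && cumulative[j] < target { j += 1 }
                let segment = cumulative[j] - cumulative[j - 1]
                let t = segment > 0 ? (target - cumulative[j - 1]) / segment : 0
                result.append(CGPoint(x: points[j - 1].x + t * (points[j].x - points[j - 1].x),
                                      y: points[j - 1].y + t * (points[j].y - points[j - 1].y)))
            }
            return result
        }

        // Translate the points so their centroid is at the origin (position invariance).
        func centre(_ points: [CGPoint]) -> [CGPoint] {
            guard !points.isEmpty else { return points }
            let cx = points.map(\.x).reduce(0, +) / CGFloat(points.count)
            let cy = points.map(\.y).reduce(0, +) / CGFloat(points.count)
            return points.map { CGPoint(x: $0.x - cx, y: $0.y - cy) }
        }

        // Average point-to-point distance between the processed stroke and a template.
        func score(_ stroke: [CGPoint], against template: Template) -> CGFloat {
            let a = centre(resample(stroke))
            let n = min(a.count, template.points.count)
            guard n > 0 else { return .greatestFiniteMagnitude }
            var sum: CGFloat = 0
            for i in 0..<n { sum += distance(a[i], template.points[i]) }
            return sum / CGFloat(n)
        }

        // The best match is simply the template with the lowest score, mirroring the
        // name/score output described above.
        func bestMatch(for stroke: [CGPoint], in alphabet: [Template]) -> (name: String, score: CGFloat)? {
            alphabet
                .map { (name: $0.name, score: score(stroke, against: $0)) }
                .min { $0.score < $1.score }
        }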

  • 2020-12-10 19:32

    You could always use gesture recognition... but that's pretty difficult for a custom scenario like this.

    Otherwise you may find something in Quartz that will do at least part of this for you. I'm interested in seeing how you solve this; it sounds like a rather difficult but interesting road ahead.
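
    For the drawing side, here is a minimal UIKit/Core Graphics sketch (DrawingView and its points property are illustrative names, not an existing class) that collects the touch path and strokes it; the captured point array is also what you would later feed into whatever recognition step you choose. A production version would need touchesEnded handling and multi-stroke support.

        import UIKit

        // Minimal single-stroke drawing view: collects touch locations and strokes
        // them with Core Graphics.
        class DrawingView: UIView {
            private(set) var points: [CGPoint] = []

            override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
                points.removeAll()
                addPoint(from: touches)
            }

            override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
                addPoint(from: touches)
            }

            private func addPoint(from touches: Set<UITouch>) {
                guard let touch = touches.first else { return }
                points.append(touch.location(in: self))
                setNeedsDisplay()
            }

            override func draw(_ rect: CGRect) {
                guard points.count > 1, let ctx = UIGraphicsGetCurrentContext() else { return }
                ctx.setStrokeColor(UIColor.white.cgColor)
                ctx.setLineWidth(4)
                ctx.setLineCap(.round)
                ctx.addLines(between: points)
                ctx.strokePath()
            }
        }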

  • 2020-12-10 19:34

    Wow, this sounds like a tough task. One possibility that comes to my mind would be to use a support vector machine.

    1.) Generate an image of the drawing and "vectorize" it by fitting vectors to the path the user has drawn.

    2.) You need support vectors to compare against. What I would do is implement a "training application": let some kids draw (e.g. an A ten times, a B ten times, and so on), put the resulting vectors in a database, and use them as support vectors.

    3.) You need a rating algorithm that rates the user's drawing by comparing it to the stored support vectors (this might be the most interesting part). One option is to measure the distances between the start and end points of the stored vectors and those of the drawn vectors. The letter with the lowest distance is the one you take. You can also introduce a cutoff distance (a "border") and treat every drawing whose best match lies above it as unrecognized.
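
    As a rough illustration of the rating idea in step 3 (StoredLetter, rating and recognize are made-up names, and this is a plain nearest-template comparison with a cutoff rather than a real SVM):

        import CoreGraphics

        // One stored letter from the "training application": each stroke is reduced
        // to its start and end point.
        struct StoredLetter {
            let letter: Character
            let strokes: [(start: CGPoint, end: CGPoint)]
        }

        func distance(_ a: CGPoint, _ b: CGPoint) -> CGFloat {
            let dx = a.x - b.x, dy = a.y - b.y
            return (dx * dx + dy * dy).squareRoot()
        }

        // Sum of start/end point distances between the drawing and one stored letter.
        func rating(of drawn: [(start: CGPoint, end: CGPoint)], against stored: StoredLetter) -> CGFloat {
            guard drawn.count == stored.strokes.count else { return .greatestFiniteMagnitude }
            return zip(drawn, stored.strokes).reduce(0) {
                $0 + distance($1.0.start, $1.1.start) + distance($1.0.end, $1.1.end)
            }
        }

        // Pick the stored letter with the lowest rating; anything whose best rating
        // lies above the cutoff (the "border") is treated as unrecognized.
        func recognize(_ drawn: [(start: CGPoint, end: CGPoint)],
                       in database: [StoredLetter],
                       cutoff: CGFloat) -> Character? {
            let best = database
                .map { (letter: $0.letter, rating: rating(of: drawn, against: $0)) }
                .min { $0.rating < $1.rating }
            guard let match = best, match.rating <= cutoff else { return nil }
            return match.letter
        }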

    A second approach would be to generate reference images of the letters (e.g. a black, non-anti-aliased letter on a white background). You then render the user's drawing into an image, resize it to the same dimensions as the reference, and try to "overlap" the two exactly. Finally, count the black pixels that match in both images and take the letter with the most matches.
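
    A rough sketch of that pixel-overlap comparison, assuming the drawing and the reference letters have already been rasterized into equally sized black-and-white grids (the grid representation and function names are illustrative):

        // true = black pixel. Both grids are assumed to have the same dimensions.
        func overlapCount(_ drawing: [[Bool]], _ letter: [[Bool]]) -> Int {
            zip(drawing, letter).reduce(0) { total, rows in
                total + zip(rows.0, rows.1).filter { $0.0 && $0.1 }.count
            }
        }

        // Take the reference letter whose bitmap shares the most black pixels with the drawing.
        func bestLetter(for drawing: [[Bool]], in letters: [Character: [[Bool]]]) -> Character? {
            letters.max { overlapCount(drawing, $0.value) < overlapCount(drawing, $1.value) }?.key
        }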

    But, having implemented something similar, I can say the support-vector approach is more satisfying, since you can add support vectors whenever the results are not good enough. The crux is definitely your rating algorithm.

    Either way, it sounds like a couple of weeks of work.

    EDIT: Since this is an interesting field, I did some research and found a thesis about handwriting recognition. Have a look at this: http://risujin.org/cellwriter/. It basically describes the SVM approach I mentioned and gives some algorithms that might help you.
