Does Unicode store stroke count information about Chinese, Japanese, or other stroke-based characters?
UILocalizedIndexedCollation may be a complete solution on iOS; in Chinese locales it can section and sort strings by stroke count.
https://developer.apple.com/library/ios/documentation/iPhone/Reference/UILocalizedIndexedCollation_Class/UILocalizedIndexedCollation.html
If you want to do character recognition, google HanziDict.
Also take a look at the Unihan data site:
http://www.unicode.org/charts/unihanrsindex.html
You can look up a stroke count and then get character info, so you might be able to build your own lookup table.
A little googling turned up Unihan.zip, a file published by the Unicode Consortium that contains several text files, including Unihan_RadicalStrokeCounts.txt,
which may be what you want. There is also an online Unihan Database Lookup based on this data.
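To sketch what "build your own lookup" might look like: the Unihan text files are tab-separated lines of the form `U+65E5<TAB>kTotalStrokes<TAB>4` (the exact file holding the kTotalStrokes field varies between Unicode versions, so the parsing below is shown against an inline sample rather than a specific file):

```python
def parse_unihan_strokes(lines):
    """Build a {character: total stroke count} dict from Unihan-format lines."""
    strokes = {}
    for line in lines:
        if line.startswith('#') or not line.strip():
            continue  # skip comments and blank lines
        codepoint, field, value = line.rstrip('\n').split('\t')
        if field == 'kTotalStrokes':
            char = chr(int(codepoint[2:], 16))  # "U+65E5" -> '日'
            # Some entries carry multiple values (e.g. "7 8"); keep the first.
            strokes[char] = int(value.split()[0])
    return strokes

# A few lines in the Unihan format (real data would come from the
# extracted Unihan.zip files):
sample = [
    "# sample excerpt",
    "U+65E5\tkTotalStrokes\t4",
    "U+6C34\tkTotalStrokes\t4",
    "U+9F8D\tkTotalStrokes\t16",
]
table = parse_unihan_strokes(sample)
print(table['日'])  # 4
```

In practice you would read the relevant Unihan_*.txt file line by line (UTF-8) and feed it to the same parser.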
In Python there is a library for that (cjklib):
>>> from cjklib.characterlookup import CharacterLookup
>>> cjk = CharacterLookup('C')
>>> cjk.getStrokeCount(u'日')
4
Disclaimer: I wrote it
You mean, is it encoded somehow in the actual code point? No. There may well be a table somewhere on the net (or you could create one), but the code points themselves carry no stroke-count metadata; that sort of information lives in separate data files.