Question
I'm new to Natural Language Processing and I'm confused about the terms used.
What is tokenization? POS tagging? Entity identification?
Is tokenization just splitting the text into parts that can have a meaning, or does it also assign a meaning to those parts? And what is it called when I determine that something is a noun, a verb, or an adjective? And what if I want to split out dates, names, currency?
I need a simple explanation of the areas/terms used in NLP.
Answer 1:
To add to dmn's explanation:
In general, there are two themes you should care about in NLP:
Statistical vs Rule-Based Analysis
Lightweight vs Heavyweight Analysis
Statistical Analysis uses statistical machine learning techniques to classify text, and in general has good precision and good recall. Rule-Based Analysis techniques basically use hand-built rules, and have very good precision but terrible recall (they identify the cases covered by your rules, but nothing else).
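As a toy sketch of that trade-off (my example, not part of the original answer): a hand-built rule matches exactly the cases it encodes and nothing else, which is why its precision is high and its recall is poor.
import re

# A hand-built "rule" for dates written as 07/21/2011 (toy example).
# It is precise on the one pattern it encodes, but recall is terrible:
# it misses "July 21, 2011", "21.07.2011", "next Tuesday", and so on.
DATE_RULE = re.compile(r"\b\d{2}/\d{2}/\d{4}\b")

for text in ["Posted on 07/21/2011.", "Posted on July 21, 2011."]:
    match = DATE_RULE.search(text)
    print(text, "->", match.group() if match else "no date found")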
Lightweight vs Heavyweight Analysis describes the two approaches you'll see in the field. In general, academic work is heavyweight, featuring parsers, fancy classifiers, and lots of very high-tech NLP machinery. In industry, by and large the focus is on data: a lot of the academic stuff scales poorly, and going beyond standard statistical or machine learning techniques doesn't buy you much. For example, parsing is largely useless (and slow), while keyword and n-gram analysis is actually pretty useful, especially when you have a lot of data. Google Translate, for instance, apparently isn't that fancy behind the scenes: they just have so much data that they can crush everybody else, no matter how refined the competition's translation software is.
The upshot is that in industry there's a lot of machine learning and math, but the NLP machinery that gets used is not very sophisticated, because the sophisticated stuff really doesn't work well. Far preferred is using user data, like clicks on related subjects, and Mechanical Turk... and this works very well, since people are far better at understanding natural language than computers are.
Parsing is breaking a sentence down into phrases (verb phrase, noun phrase, prepositional phrase, etc.) to get a grammatical tree. You can use the online version of the Stanford Parser to play with examples and get a feel for what a parser does. For example, let's say we have the sentence
My cat's name is Pat.
Then we do POS tagging:
My/PRP$ cat/NN 's/POS name/NN is/VBZ Pat/NNP ./.
Using the POS tags and a trained statistical parser, we get a parse tree:
(ROOT
  (S
    (NP
      (NP (PRP$ My) (NN cat) (POS 's))
      (NN name))
    (VP (VBZ is)
      (NP (NNP Pat)))
    (. .)))
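If you want to manipulate such trees programmatically, NLTK (assuming it is installed) can read this bracketed format directly:
from nltk import Tree

# Read the Stanford-style bracketed output into a Tree object.
parse = Tree.fromstring("""
(ROOT
  (S
    (NP
      (NP (PRP$ My) (NN cat) (POS 's))
      (NN name))
    (VP (VBZ is)
      (NP (NNP Pat)))
    (. .)))""")

parse.pretty_print()  # draws the tree as ASCII art

# Pull out every noun phrase in the tree:
for np in parse.subtrees(lambda t: t.label() == "NP"):
    print(" ".join(np.leaves()))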
We can also do a slightly different type of parse called a dependency parse:
poss(cat-2, My-1)
poss(name-4, cat-2)
possessive(cat-2, 's-3)
nsubj(Pat-6, name-4)
cop(Pat-6, is-5)
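You can reproduce a dependency parse like this with an off-the-shelf library such as spaCy (a sketch; label names differ a little between parsers):
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("My cat's name is Pat.")

# Print relation(head-index, dependent-index) in the style shown above.
for token in doc:
    print(f"{token.dep_}({token.head.text}-{token.head.i + 1}, {token.text}-{token.i + 1})")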
N-grams are basically contiguous sequences of n adjacent words. You can look at n-grams in Google's publicly released n-gram data. You can also use character n-grams, which are used heavily for spelling correction.
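Extracting n-grams yourself is only a few lines of code; here is a minimal sketch:
def ngrams(tokens, n):
    """Return all contiguous runs of n adjacent items."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

words = "he likes to sit on the mat".split()
print(ngrams(words, 2))        # word bigrams: ('he', 'likes'), ('likes', 'to'), ...

# Character n-grams, the kind used for spelling correction:
print(ngrams(list("mat"), 2))  # [('m', 'a'), ('a', 't')]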
Sentiment Analysis is analyzing text to extract how people feel about something or in what light things (such as brands) are mentioned. This involves a lot of looking at words that denote emotion.
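A minimal, purely lexicon-based sketch of the idea (the word lists here are invented for illustration; real systems use large curated lexicons, usually with machine learning on top):
# Toy sentiment scorer: count emotion-bearing words.
POSITIVE = {"love", "great", "happy", "excellent"}
NEGATIVE = {"hate", "terrible", "sad", "awful"}

def sentiment_score(text):
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment_score("i love this brand, the service is excellent"))  # 2
print(sentiment_score("terrible product, i hate it"))                  # -2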
Semantic Analysis is analyzing the meaning of text. Often this takes the form of taxonomies and ontologies, where you group concepts together (dog and cat belong to animal and pet), but it is a very undeveloped field. Resources like WordNet and FrameNet are useful here.
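WordNet itself is easy to poke at through NLTK (a sketch, assuming nltk and its WordNet data are installed):
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

dog = wn.synset("dog.n.01")
cat = wn.synset("cat.n.01")

# Walk one step up the "is-a" hierarchy:
print(dog.hypernyms())

# Find where the two concepts meet in the taxonomy
# (both are grouped under a shared ancestor such as carnivore/animal):
print(dog.lowest_common_hypernyms(cat))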
Answer 2:
Let's use an example like
My cat's name is Pat. He likes to sit on the mat.
Tokenization is breaking these sentences into what we call tokens, which are basically the words. The tokens for these sentences are my, cat's, name, is, pat, he, likes, to, sit, on, the, mat. (Sometimes you may see cat's as two tokens; this depends on personal preference and intention.)
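For instance, NLTK's tokenizer (a sketch, assuming nltk and its punkt data are installed) happens to take the two-token view of cat's:
from nltk.tokenize import word_tokenize  # requires: nltk.download('punkt')

print(word_tokenize("My cat's name is Pat. He likes to sit on the mat."))
# ['My', 'cat', "'s", 'name', 'is', 'Pat', '.',
#  'He', 'likes', 'to', 'sit', 'on', 'the', 'mat', '.']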
POS stands for Part-Of-Speech, so to tag these sentences for parts of speech is to run them through a program called a POS tagger, which labels each token in the sentence with its part of speech. In this case, the output from the tagger written by a group at Stanford is:
My_PRP$ cat_NN 's_POS name_NN is_VBZ Pat_NNP ._.
He_PRP likes_VBZ to_TO sit_VB on_IN the_DT mat_NN ._.
(Here is a good example of cat's being treated as two tokens.)
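You can get similar output without the Stanford tagger by using NLTK's built-in tagger (a sketch; the exact tags can vary between taggers and versions):
from nltk import pos_tag, word_tokenize
# requires: nltk.download('punkt') and nltk.download('averaged_perceptron_tagger')

tokens = word_tokenize("My cat's name is Pat.")
print(pos_tag(tokens))
# [('My', 'PRP$'), ('cat', 'NN'), ("'s", 'POS'), ('name', 'NN'),
#  ('is', 'VBZ'), ('Pat', 'NNP'), ('.', '.')]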
Entity identification is more often called Named Entity Recognition (NER). It is the process of taking a text like ours and identifying things that are mostly proper nouns, but can also include dates or anything else you teach the recognizer to, well, recognize. For our example, a Named Entity Recognition system would insert a tag like
<NAME>Pat</NAME>
for our cat's name. If there were another sentence like
Pat is a part-time consultant for IBM in Yorktown Heights, New York.
then the recognizer would label three entities (four total, since Pat would be labeled twice):
<NAME>Pat</NAME>
<ORGANIZATION>IBM</ORGANIZATION>
<LOCATION>Yorktown Heights, New York</LOCATION>
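Off-the-shelf recognizers produce exactly this kind of labeling; here is a sketch with spaCy (entity labels and spans vary by model):
import spacy

# Requires: pip install spacy && python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")
doc = nlp("Pat is a part-time consultant for IBM in Yorktown Heights, New York.")

for ent in doc.ents:
    print(ent.text, "->", ent.label_)
# Typical output (model-dependent):
#   Pat -> PERSON
#   IBM -> ORG
#   Yorktown Heights -> GPE
#   New York -> GPE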
Now how all of these tools actually work is a whole other story. :)
Answer 3:
To answer the more specific part of your question: tokenization is breaking the text into parts (usually words), without caring too much about their meaning. POS tagging is disambiguating between possible parts of speech (noun, verb, etc.); it takes place after tokenization. Recognizing dates, names, etc. is named entity recognition (NER).
Source: https://stackoverflow.com/questions/6854455/someone-can-give-a-simple-explanation-about-the-elements-of-natural-language-pro