Separate word lists for nouns, verbs, adjectives, etc.?

眼角桃花 2021-01-30 02:54

Word lists are usually a single file that contains everything, but are there separately downloadable noun, verb, and adjective lists?

I need them specifically for English.

5 Answers
  • 2021-01-30 03:01

    If you download just the database files from wordnet.princeton.edu/download/current-version you can extract the words by running these commands:

    egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z_]*\s" data.adj | cut -d ' ' -f 5 > conv.data.adj
    egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z_]*\s" data.adv | cut -d ' ' -f 5 > conv.data.adv
    egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z_]*\s" data.noun | cut -d ' ' -f 5 > conv.data.noun
    egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z_]*\s" data.verb | cut -d ' ' -f 5 > conv.data.verb
    

    Or, if you only want single words (no underscores):

    egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z]*\s" data.adj | cut -d ' ' -f 5 > conv.data.adj
    egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z]*\s" data.adv | cut -d ' ' -f 5 > conv.data.adv
    egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z]*\s" data.noun | cut -d ' ' -f 5 > conv.data.noun
    egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z]*\s" data.verb | cut -d ' ' -f 5 > conv.data.verb
    
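
    As a sanity check, the same pipeline can be exercised against a fabricated two-line sample in the data.* layout (the lines below are made up for illustration; real files also carry a licence header, which the leading-digits pattern skips automatically). Note that \s in egrep is a GNU extension; substitute [[:space:]] on other systems.

    ```shell
    # Fabricated sample in the data.noun layout (fields: synset offset,
    # file number, POS, word count, first word, ...).
    printf '00001740 03 n 01 entity 0 003 ~ 00001930 n 0000 | that which exists\n'  > sample.noun
    printf '00001930 03 n 02 physical_entity 0 thing 0 001 @ 00001740 n 0000 | an entity\n' >> sample.noun

    # Same extraction as above: match the fixed-width header, keep field 5.
    egrep -o "^[0-9]{8}\s[0-9]{2}\s[a-z]\s[0-9]{2}\s[a-zA-Z_]*\s" sample.noun \
        | cut -d ' ' -f 5
    ```

    This prints entity and physical_entity; the second synset's extra synonym (thing) is dropped, which is exactly the limitation a later answer points out.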
  • 2021-01-30 03:05

    http://icon.shef.ac.uk/Moby/mpos.html

    Each part-of-speech vocabulary entry consists of a word or phrase field, followed by a field delimiter of "×" (ASCII 215), and then the part-of-speech field, which is coded using the following ASCII symbols (case is significant):

    Noun                            N
    Plural                          p
    Noun Phrase                     h
    Verb (usu participle)           V
    Verb (transitive)               t
    Verb (intransitive)             i
    Adjective                       A
    Adverb                          v
    Conjunction                     C
    Preposition                     P
    Interjection                    !
    Pronoun                         r
    Definite Article                D
    Indefinite Article              I
    Nominative                      o
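
    A minimal sketch of splitting such entries with awk, using made-up sample entries (the real file is one word×codes pair per line); LC_ALL=C keeps the lone 0xD7 byte from being misread in a UTF-8 locale.

    ```shell
    # Fabricated entries in the mpos layout: word, the ASCII 215 delimiter,
    # then one-letter part-of-speech codes.
    printf 'abandon\327Vti\nabandonment\327N\nabrupt\327A\n' > sample.mpos

    # Split on the 0xD7 byte and keep entries whose codes include "N" (noun).
    LC_ALL=C awk -F "$(printf '\327')" '$2 ~ /N/ { print $1 }' sample.mpos
    ```

    Repeating the filter for V, A, v, and so on yields the separate per-part-of-speech lists the question asks for.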
    
  • 2021-01-30 03:08

    As others have suggested, the WordNet database files are a great source for parts of speech. That said, the examples used to extract the words aren't entirely correct. Each line is actually a "synonym set" consisting of multiple synonyms and their definition. Around 30% of words appear only as synonyms, so extracting just the first word misses a large amount of data.

    The line format is pretty simple to parse (search.c, function parse_synset), but if all you're interested in is the words, the relevant part of the line is formatted as:

    NNNNNNNN NN a NN word N [word N ...]
    

    These correspond to:

    • Byte offset within file (8 character integer)
    • File number (2 character integer)
    • Part of speech (1 character)
    • Number of words (2 characters, hex encoded)
    • N occurrences of...
      • Word with spaces replaced with underscores, optional comment in parentheses
      • Word lexical ID (a unique occurrence ID)

    For example, from data.adj:

    00004614 00 s 02 cut 0 shortened 0 001 & 00004412 a 0000 | with parts removed; "the drastically cut film"
    
    • Byte offset within the file is 4614
    • File number is 0
    • Part of speech is s, corresponding to adjective (wnutil.c, function getpos)
    • Number of words is 2
      • First word is cut with lexical ID 0
      • Second word is shortened with lexical ID 0

    A short Perl script to simply dump the words from the data.* files:

    #!/usr/bin/perl
    
    while (my $line = <>) {
        # If no 8-digit byte offset is present, skip this line
        if ( $line !~ /^[0-9]{8}\s/ ) { next; }
        chomp($line);
    
        my @tokens = split(/ /, $line);
        shift(@tokens); # Byte offset
        shift(@tokens); # File number
        shift(@tokens); # Part of speech
    
        my $word_count = hex(shift(@tokens));
        foreach ( 1 .. $word_count ) {
            my $word = shift(@tokens);
            $word =~ tr/_/ /;
            $word =~ s/\(.*\)//;
            print $word, "\n";
    
            shift(@tokens); # Lexical ID
        }
    }
    

    A gist of the above script can be found here.
    A more robust parser which stays true to the original source can be found here.

    Both scripts are used in a similar fashion: ./wordnet_parser.pl DATA_FILE.

  • 2021-01-30 03:12

    See Kevin's word lists. Particularly the "Part Of Speech Database." You'll have to do some minimal text-processing on your own, in order to get the database into multiple files for yourself, but that can be done very easily with a few grep commands.

    The license terms are available on the "readme" page.

  • 2021-01-30 03:22

    This is a highly ranked Google result, so I'm digging up this two-year-old question to provide a far better answer than the existing one.

    The "Kevin's Word Lists" page provides old lists from the year 2000, based on WordNet 1.6.

    You are far better off going to https://wordnet.princeton.edu/download/current-version and downloading WordNet 3.0 (the Database-only version) or whatever the latest version is when you're reading this.

    Parsing it is very simple: just apply the regex "/^(\S+?)[\s%]/" to grab every word, then replace all underscores ("_") in the results with spaces. Finally, dump your results to whatever storage format you want. You'll get separate lists of adjectives, adverbs, nouns, and verbs, plus a special list called "senses" (very useless or useful, depending on what you're doing) which relates to our senses of smell, sight, hearing, etc., i.e. words such as "shirt" or "pungent".
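
    That regex recipe can be sketched in shell against a couple of fabricated lines in the index.* layout (only the first, lemma field matters here; underscores stand in for spaces). The character class [^ %] plays the role of the /^(\S+?)[\s%]/ capture.

    ```shell
    # Fabricated lines in the index.noun layout; real lines carry more
    # pointer fields, but the lemma is always the first field.
    printf 'physical_entity n 1 1 @ 1 0 00001930\n'  > sample.index
    printf 'entity n 1 1 ~ 1 0 00001740\n' >> sample.index

    # Grab everything up to the first space or '%', then restore spaces.
    grep -oE '^[^ %]+' sample.index | tr '_' ' '
    ```

    Running this over each of the index.* files produces one cleaned word list per part of speech.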

    Enjoy! Remember to include their copyright notice if you're using it in a project.
