Python Tokenization

Submitted by 自闭症网瘾萝莉.ら on 2019-12-20 03:52:24

Question


I am new to Python and I have a tokenization assignment. The input is a .txt file with sentences and the output is a .txt file with tokens. By "token" I mean: a plain word, ',' , '!' , '?' , '.' , or '"'.

I have this function. Input: Elemnt is a word with or without punctuation, e.g. Hi or said: or said". StrForCheck is an array of punctuation characters that I want to separate from the words. TokenFile is my output file.

def CheckIfSEmanExist(Elemnt, StrForCheck, TokenFile):

    FirstOrLastIsSeman = 0

    for seman in StrForCheck:
        WordSplitOnSeman = Elemnt.split(seman)
        if len(WordSplitOnSeman) > 1:
            if Elemnt[len(Elemnt)-1] == seman:
                FirstOrLastIsSeman = len(Elemnt)-1
            elif Elemnt[0] == seman:
                FirstOrLastIsSeman = 1

    if FirstOrLastIsSeman == 1:
        TokenFile.write(Elemnt[0])
        TokenFile.write('\n')
        TokenFile.write(Elemnt[1:-1])
        TokenFile.write('\n')

    elif FirstOrLastIsSeman == len(Elemnt)-1:
        TokenFile.write(Elemnt[0:-1])
        TokenFile.write('\n')
        TokenFile.write(Elemnt[len(Elemnt)-1])
        TokenFile.write('\n')

    elif FirstOrLastIsSeman == 0:
        TokenFile.write(Elemnt)
        TokenFile.write('\n')

The code loops over the punctuation array, and if it finds a match, I check whether the punctuation is the first or the last character of the word, and write the word and the punctuation to my output file, each on its own line.
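The first-or-last check described above can also be written directly with indexing, without split; a sketch, where the function name split_edge_punct and the PUNCT tuple are my own hypothetical names, not from the assignment:

```python
PUNCT = (',', '!', '?', '.', '"')

def split_edge_punct(word):
    """Return the word's tokens, peeling one punctuation mark
    off the front or the back if one is present."""
    if word and word[0] in PUNCT:
        return [word[0], word[1:]]
    if word and word[-1] in PUNCT:
        return [word[:-1], word[-1]]
    return [word]
```

Checking only word[0] and word[-1] avoids the intermediate split and the flag variable entirely.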

But my problem is that it works wonderfully on the whole text except for these words: Jobs" , created" , public" , police"
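Since the failures are exactly the words followed by a quote character, it is worth verifying what that character actually is in the input file: text pasted from a word processor often contains typographic quotes (U+201C/U+201D) that look like " but never match it in a split. A small diagnostic sketch (the sample words here are made up for illustration):

```python
# Print each character's code point so look-alike quotes
# ('"' is U+0022, '”' is U+201D) become visible.
for word in ['Jobs"', 'Jobs”']:
    print(word, [hex(ord(c)) for c in word])
```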


Answer 1:


Note that

for l in open('some_file.txt', 'r'):
    ...

iterates over each line, so you just need to consider what to do within a line.

Consider the following function:

def tokenizer(l):
    prev_i = 0
    for i, c in enumerate(l):
        if c in ',.?!- ':
            if prev_i != i:      # emit the word accumulated so far
                yield l[prev_i: i]
            yield c              # emit the delimiter itself as a token
            prev_i = i + 1
    if prev_i != len(l):         # emit a trailing word, if any
        yield l[prev_i: ]

It "spits out" tokens as it goes along. You can use it like this:

l = "hello, hello, what's all this shouting? We'll have no trouble here"
for tok in tokenizer(l):
    print(tok)

hello
,

hello
,

what's

all

this

shouting
?

We'll

have

no

trouble

here
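The answer's tokenizer splits only on ,.?!- and space; the assignment also needs the " character separated, and tokens written to a file one per line. A sketch extending it accordingly (the extended delimiter string, the tokenize_file name, and the skipping of bare space tokens are my additions, not part of the original answer):

```python
def tokenizer(l, delims=',.?!-" '):
    """Yield words and punctuation marks from a line, in order."""
    prev_i = 0
    for i, c in enumerate(l):
        if c in delims:
            if prev_i != i:      # emit the word accumulated so far
                yield l[prev_i:i]
            yield c              # emit the delimiter itself as a token
            prev_i = i + 1
    if prev_i != len(l):         # emit a trailing word, if any
        yield l[prev_i:]

def tokenize_file(in_path, out_path):
    """Write one token per line, dropping bare space tokens."""
    with open(in_path) as src, open(out_path, 'w') as token_file:
        for line in src:
            for tok in tokenizer(line.rstrip('\n')):
                if tok != ' ':
                    token_file.write(tok + '\n')
```

Because each space is yielded as its own token, filtering tok != ' ' at write time keeps the output to just words and punctuation.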


Source: https://stackoverflow.com/questions/36234699/python-tokenization
