Tokenizing large (>70MB) TXT file using Python NLTK. Concatenation & write data to stream errors


Question


First of all, I am new to Python/NLTK, so my apologies if the question is too basic. I have a large file that I am trying to tokenize, and I get memory errors.

One solution I've read about is to read the file one line at a time, which makes sense; however, when doing that, I get the error cannot concatenate 'str' and 'list' objects. I am not sure why that error is displayed, since after reading the file I check its type and it is in fact a string.

I have also tried splitting the 70MB file into 4 smaller ones, and when running the code on those I get: error: failed to write data to stream.

Finally, when I try a very small sample of the file (100KB or less) and run the modified code, I am able to tokenize it.

Any insights into what's happening? Thank you.

# tokenizing a large file one line at a time
import nltk
filename = open("X:\MyFile.txt", "r").read()
type(filename)  # str
tokens = ''
for line in filename:
    tokens += nltk.word_tokenize(filename)
# error: cannot concatenate 'str' and 'list' objects

The following works with a small file:

import nltk
filename = open("X:\MyFile.txt", "r").read()
type(filename)  # str
tokens = nltk.word_tokenize(filename)

Answer 1:


Problem n°1: You read the whole file into a single string with .read(), so your for loop iterates over it character by character. If you want to process the file line by line, simply open the file (don't call .read() on it) and iterate over file.readlines(), as in the code below.

Problem n°2: word_tokenize returns a list of tokens, so you were trying to concatenate a list onto a str. You first have to turn the list into a string, and then you can concatenate it onto another string. I use the join function to do that; replace the comma in my code with whatever character you want as the glue/separator.
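A quick way to see both points at once, using a made-up sentence rather than the question's file (the TypeError wording shown is Python 2's):

import nltk

pieces = nltk.word_tokenize("Hello world.")   # a list, e.g. ['Hello', 'world', '.']
as_text = ",".join(pieces)                    # the str "Hello,world,."
# '' + pieces would instead raise:
# TypeError: cannot concatenate 'str' and 'list' objects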

import nltk

f = open("X:\MyFile.txt", "r")               # open the file, but don't .read() it into one string
tokens = ''
for line in f.readlines():
    # word_tokenize returns a list; join it into a str before concatenating
    tokens += ",".join(nltk.word_tokenize(line))
f.close()

If instead you need the tokens in a list, simply do:

import nltk

f = open("X:\MyFile.txt", "r")
tokens = []
for line in f.readlines():
    tokens += nltk.word_tokenize(line)        # extend the list with this line's tokens
f.close()
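For a 70MB+ input, note that readlines() still loads every line and the tokens list keeps every token in memory, so memory errors may persist. A minimal sketch of a more incremental variant, assuming a hypothetical output path (X:\MyTokens.txt, not from the question) so each line's tokens are written out as soon as they are produced:

import nltk

f = open("X:\MyFile.txt", "r")
out = open("X:\MyTokens.txt", "w")            # hypothetical output path, for illustration only
for line in f:                                # iterate the file object: one line in memory at a time
    out.write(" ".join(nltk.word_tokenize(line)) + "\n")   # write tokens out instead of accumulating
out.close()
f.close()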

Hope that helps!




Answer 2:


In Python, file objects act as iterators, so you can simply iterate over the file without calling any methods on it; each iteration yields one line.

Problem 1: You created tokens as a string, while word_tokenize() returns a list.

Problem 2: Simply open the file for reading with open("filename", "r"), without calling .read() on it.

import nltk

f = open("X:\MyFile.txt", "r")
tokens = []
for line in f:                                # iterating the file object yields one line at a time
    tokens += nltk.word_tokenize(line)
print(tokens)
f.close()
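A small refinement on the same idea, not from the original answer: a with block closes the file automatically even if tokenization fails partway through, so the explicit f.close() isn't needed. The same loop, sketched that way:

import nltk

tokens = []
with open("X:\MyFile.txt", "r") as f:         # f is closed automatically when the block exits
    for line in f:
        tokens += nltk.word_tokenize(line)
print(tokens)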


Source: https://stackoverflow.com/questions/9853227/tokenizing-large-70mb-txt-file-using-python-nltk-concatenation-write-data
