NLP: Building (small) corpora, or “Where to get lots of not-too-specialized English-language text files?”

Submitted by 北城余情 on 2019-12-01 05:29:37
  • Use the Wikipedia dumps
    • needs lots of cleanup
  • See if anything in nltk-data helps you
    • the corpora are usually quite small
  • the WaCky people have some free corpora
    • tagged
    • you can spider your own corpus using their toolkit
  • Europarl is free and the basis of pretty much every academic MT system
    • spoken language, translated
  • The Reuters Corpora are free of charge, but only available on CD
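On the "needs lots of cleanup" point: here is a rough sketch of what that step looks like for a Wikipedia dump, using only the standard library. The sample XML below is simplified (real dumps use the MediaWiki export namespace), and the regexes only cover the most common markup, so treat this as a starting point rather than a complete cleaner.

```python
import re
import xml.etree.ElementTree as ET

# Simplified stand-in for a Wikipedia XML dump (real dumps are
# namespaced and far messier).
sample_dump = """<mediawiki>
  <page>
    <title>Corpus linguistics</title>
    <revision>
      <text>'''Corpus linguistics''' studies [[language]] via [[text corpus|corpora]].
{{Infobox|field=linguistics}}</text>
    </revision>
  </page>
</mediawiki>"""

def strip_wiki_markup(text):
    text = re.sub(r"\{\{[^{}]*\}\}", "", text)                   # drop non-nested templates
    text = re.sub(r"\[\[([^|\]]*\|)?([^\]]*)\]\]", r"\2", text)  # [[link|label]] -> label
    text = re.sub(r"'{2,}", "", text)                            # bold/italic quote marks
    return text.strip()

root = ET.fromstring(sample_dump)
for page in root.iter("page"):
    title = page.findtext("title")
    raw = page.findtext("./revision/text")
    print(title, "->", strip_wiki_markup(raw))
```

Nested templates, tables, and references need real handling on top of this; that is where most of the cleanup effort goes.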

You can always get your own, but be warned: HTML pages often need heavy cleanup, so restrict yourself to RSS feeds.
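For a sense of what "heavy cleanup" means for HTML, here is a minimal text extractor built on the standard library's `html.parser`. It keeps text nodes and drops tags, scripts, and styles; real pages need much more on top (boilerplate removal, encoding fixes, deduplication).

```python
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text, skipping <script> and <style> content."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.parts.append(data.strip())

def html_to_text(html):
    p = TextExtractor()
    p.feed(html)
    return " ".join(p.parts)

print(html_to_text("<p>Hello <b>world</b></p><script>var x=1;</script>"))
```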

If you do this commercially, the LDC might be a viable alternative.

Wikipedia sounds like the way to go. There is an experimental Wikipedia API that might be of use, but I have no clue how it works. So far I've only scraped Wikipedia with custom spiders or even wget.
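If the API route interests you, a request can be sketched like this. The parameter names below are from the MediaWiki `action=query` extracts API; the actual fetch is left out so the snippet stays offline, and you should check the current API docs before relying on these parameters.

```python
from urllib.parse import urlencode

def extract_url(title, lang="en"):
    """Build a MediaWiki API URL requesting a plain-text article extract."""
    params = {
        "action": "query",
        "prop": "extracts",
        "explaintext": 1,
        "format": "json",
        "titles": title,
    }
    return f"https://{lang}.wikipedia.org/w/api.php?" + urlencode(params)

print(extract_url("Text corpus"))
```

You would then fetch that URL (e.g. with `urllib.request`) and pull the extract out of the JSON response.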

Then you could search for pages that offer their full article text in RSS feeds. RSS, because no HTML tags get in your way.
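Pulling text out of an RSS 2.0 feed really is that simple. A sketch with the standard library (the feed below is a made-up sample; with a real feed you would fetch the XML over HTTP, and note that many feeds only carry summaries, not full article text):

```python
import xml.etree.ElementTree as ET

# Invented sample feed standing in for a fetched RSS 2.0 document.
feed_xml = """<rss version="2.0"><channel>
  <item><title>First post</title>
    <description>Plain text body of the first article.</description></item>
  <item><title>Second post</title>
    <description>Another full-text article body.</description></item>
</channel></rss>"""

def rss_texts(xml_string):
    """Return the description text of every item in the feed."""
    root = ET.fromstring(xml_string)
    return [item.findtext("description", "").strip()
            for item in root.iter("item")]

for text in rss_texts(feed_xml):
    print(text)
```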

Scraping mailing lists and/or Usenet has several disadvantages: you'll be getting AOLbonics and Techspeak, and that will tilt your corpus badly.

The classical corpora are the Penn Treebank and the British National Corpus, but they are paid for. You can read the Corpora list archives, or even ask them about it. Perhaps you will find useful data using the Web as Corpus tools.

I actually have a small project under construction that allows linguistic processing of arbitrary web pages. It should be ready for use within the next few weeks, but so far it's not really meant to be a scraper. I guess I could write a module for it, though; the functionality is already there.

If you're willing to pay money, you should check out the data available at the Linguistic Data Consortium, such as the Penn Treebank.

Wikipedia seems to be the best way. Yes, you'd have to parse the output. But thanks to Wikipedia's categories you could easily get different types of articles and words. E.g. by parsing all the science categories you could get lots of science words; details about places would be skewed towards geographic names, etc.
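The category trick can be sketched as sorting dump pages into topic buckets by their `[[Category:...]]` tags. The page texts below are invented; in practice you would feed in wikitext from the dump.

```python
import re
from collections import defaultdict

# Invented wikitext snippets standing in for dump articles.
pages = {
    "Photosynthesis": "... [[Category:Biology]] [[Category:Science]]",
    "Berlin": "... [[Category:Cities in Germany]]",
}

def categories(wikitext):
    """Extract the names of all [[Category:...]] tags in a page."""
    return re.findall(r"\[\[Category:([^\]|]+)", wikitext)

by_category = defaultdict(list)
for title, text in pages.items():
    for cat in categories(text):
        by_category[cat].append(title)

print(dict(by_category))
```

Collecting every page under, say, the science categories then gives you a topic-specific subcorpus.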

You've covered the obvious ones. The only other areas I can think of to supplement:

1) News articles / blogs.

2) Magazines are posting a lot of free material online, and you can get a good cross section of topics.

Looking into the Wikipedia data I noticed that they had done some analysis on bodies of TV and movie scripts. I thought that might be interesting text but not readily accessible -- it turns out it is everywhere, and it is structured and predictable enough that it should be possible to clean it up. This site, helpfully titled "A bunch of movie scripts and screenplays in one location on the 'net", would probably be useful to anyone who stumbles on this thread with a similar question.
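To illustrate how that predictable structure helps: in a plain-text screenplay, a character cue is typically an all-caps line followed by that character's dialogue until a blank line. A rough sketch (the sample script is invented, and real scripts vary in indentation, so this heuristic needs hardening):

```python
import re

# Invented sample in typical screenplay layout.
script = """INT. OFFICE - DAY

ALICE
We need a bigger corpus.

BOB
Then start scraping.
"""

def dialogue_lines(text):
    """Pair each all-caps cue line with the dialogue lines that follow it."""
    out = []
    speaker = None
    for line in text.splitlines():
        stripped = line.strip()
        if not stripped:
            speaker = None                     # blank line ends a speech
        elif speaker is None and re.fullmatch(r"[A-Z][A-Z .'-]+", stripped):
            speaker = stripped                 # all-caps line starts a cue
        elif speaker:
            out.append((speaker, stripped))    # dialogue under current cue
    return out

print(dialogue_lines(script))
```

Note the scene heading ("INT. OFFICE - DAY") is also all-caps; here it is harmless because no dialogue follows it, but a real cleaner would filter headings explicitly.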

You can get quotations content (in limited form) here: http://quotationsbook.com/services/

This content also happens to be on Freebase.
