I am a bit new here. I have a project where I have to download and use Wikipedia for NLP. The problem I am facing is that I only have 12 GB of RAM, but the English Wikipedia dump is far larger than that, so I cannot load the whole thing into memory.
If you want to process the XML dumps directly, you can download the multistream version. The multistream format comes with an index that lets you decompress individual streams as needed, without decompressing the entire dump. This means you can pull individual articles out of the compressed dump.
For documentation, see https://meta.wikimedia.org/wiki/Data_dumps/Dump_format#Multistream_dumps. Using the index, you can get any given article out of the dump without loading it into memory.
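As a rough sketch of what that looks like in Python (I am assuming the standard enwiki multistream file names below; adjust them to whatever dump you downloaded): look up the title in the index to get the byte offset of its stream, seek to that offset in the .bz2 file, decompress just that one stream, and parse the ~100 `<page>` elements it contains.

```python
import bz2
import xml.etree.ElementTree as ET

# Assumed file names -- replace with the dump/index files you actually downloaded.
DUMP = "enwiki-latest-pages-articles-multistream.xml.bz2"
INDEX = "enwiki-latest-pages-articles-multistream-index.txt.bz2"

def find_offset(title):
    """Scan the index (offset:page_id:title per line) for the stream offset of a title."""
    with bz2.open(INDEX, "rt", encoding="utf-8") as idx:
        for line in idx:
            offset, _, page_title = line.rstrip("\n").split(":", 2)
            if page_title == title:
                return int(offset)
    return None

def read_stream(dump_path, offset):
    """Decompress exactly one bz2 stream starting at the given byte offset."""
    decomp = bz2.BZ2Decompressor()
    parts = []
    with open(dump_path, "rb") as f:
        f.seek(offset)
        while not decomp.eof:                 # stop at the end of this stream
            chunk = f.read(65536)
            if not chunk:
                break
            parts.append(decomp.decompress(chunk))
    return b"".join(parts).decode("utf-8")

def extract_article(title):
    offset = find_offset(title)
    if offset is None:
        return None
    fragment = read_stream(DUMP, offset)      # ~100 <page> elements, no root element
    root = ET.fromstring("<root>" + fragment + "</root>")
    for page in root.iter("page"):
        if page.findtext("title") == title:
            return page.findtext("revision/text")
    return None

if __name__ == "__main__":
    text = extract_article("Alan Turing")     # any title present in the index
    print(text[:500] if text else "not found")
```

The key point is that each stream is a self-contained bz2 block, so you only ever hold one small decompressed chunk in memory at a time.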
If you want to parse all of Wikipedia, you can process one multistream block (~100 articles) at a time, which should fit comfortably within your 12 GB of RAM. An example of how to do this is shown at https://jamesthorne.co.uk/blog/processing-wikipedia-in-a-couple-of-hours/.
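A minimal sketch of that full pass, again assuming the file names from the snippet above (this is my own rough version, not the code from that blog post): walk the distinct offsets in the index, decompress one stream at a time, and yield the articles in it.

```python
import bz2
import xml.etree.ElementTree as ET

DUMP = "enwiki-latest-pages-articles-multistream.xml.bz2"   # assumed file names
INDEX = "enwiki-latest-pages-articles-multistream-index.txt.bz2"

def stream_offsets(index_path):
    """Yield the distinct byte offset of each ~100-article stream, in order."""
    last = None
    with bz2.open(index_path, "rt", encoding="utf-8") as idx:
        for line in idx:
            offset = int(line.split(":", 1)[0])
            if offset != last:
                yield offset
                last = offset

def iter_articles(dump_path, index_path):
    """Yield (title, wikitext) pairs, keeping only one decompressed stream in memory."""
    with open(dump_path, "rb") as dump:
        for offset in stream_offsets(index_path):
            dump.seek(offset)
            decomp = bz2.BZ2Decompressor()
            parts = []
            while not decomp.eof:
                chunk = dump.read(65536)
                if not chunk:
                    break
                parts.append(decomp.decompress(chunk))
            fragment = b"".join(parts).decode("utf-8")
            root = ET.fromstring("<root>" + fragment + "</root>")
            for page in root.iter("page"):
                if page.findtext("ns") == "0":               # main-namespace articles only
                    yield page.findtext("title"), page.findtext("revision/text")

for title, text in iter_articles(DUMP, INDEX):
    pass  # tokenise / clean / feed into your NLP pipeline here
```

Because it is a generator, memory usage stays bounded by the size of a single stream no matter how much of the dump you walk through.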