How to get unique lines from a very large file in Linux?

Submitted by 我的梦境 on 2019-12-24 00:36:49

Question


I have a very large data file (255G; 3,192,563,934 lines). Unfortunately I only have 204G of free space on the device (and no other devices I can use). I took a random sample and found that in any given 100K lines there are only about 10K unique lines... but the file isn't sorted.

Normally I would use, say:

pv myfile.data | sort | uniq > myfile.data.uniq

and just let it run for a day or so. That won't work in this case because I don't have enough space left on the device for the temporary files.

I was thinking I could use split, perhaps, and do a streaming uniq on maybe 500K lines at a time into a new file. Is there a way to do something like that?
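A rough sketch of that split-based idea, assuming GNU split (for --filter) and GNU sort (for the -m merge option), with the chunk directory and prefix names chosen only for illustration:

mkdir chunks
# each 500K-line chunk is deduplicated on its own before being written out
split -l 500000 --filter='sort -u > chunks/$FILE' myfile.data chunk.
# merge the already-sorted chunks, dropping duplicates that span chunks
sort -m -u chunks/chunk.* > myfile.data.uniq

Each chunk would hold only its own unique lines (roughly 10% of the input, if the sample holds), and the final merge pass removes the duplicates that remain across chunks.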

I thought I might be able to do something like

tail -100000 myfile.data | sort | uniq >> myfile.uniq && trunc --magicstuff myfile.data

but I couldn't figure out a way to truncate the file properly.


Answer 1:


Use sort -u instead of sort | uniq

This allows sort to discard duplicates earlier, and GNU coreutils is smart enough to take advantage of this.
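For example, keeping the pv pipeline from the question (a sketch assuming GNU sort; the optional --compress-program flag compresses sort's temporary spill files, which also helps when space for temporaries is tight):

# -u collapses equal lines during the sort itself; gzip shrinks the temp files
pv myfile.data | sort -u --compress-program=gzip > myfile.data.uniq

Because duplicates are discarded as the sort runs, the temporary files and the final output stay far smaller than the 255G input when only about 10% of the lines are unique.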



Source: https://stackoverflow.com/questions/45357399/how-get-unique-lines-from-a-very-large-file-in-linux
