How to remove duplicates from a file and write to the same file?

花落未央 2021-02-20 16:58

I know my title is not very self-explanatory, but let me try to explain it here.

I have a file named test.txt which has some duplicate lines. Now I want to remove those duplicate lines and write the result back to the same file.

5 Answers
  • 2021-02-20 17:01

    This might work for you:

    sort -u -o test.txt test.txt
    
  • Redirection in the shell will not work, because you would be reading from and writing to the same file at the same time. The file is actually opened for writing (> file.txt) before sort is even executed.

    @potong's answer works because the sort program itself probably stores all lines in memory first, but I would not rely on that: the manpage does not explicitly state that the -o output file CAN be the same as the input file (though it will likely work). Unless a tool is documented to work "in place", I would not do it (@perreal's answer works, or you can store the intermediate result in shell memory).
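    The truncation problem above, and the "shell memory" workaround, can be sketched as follows (demo.txt and its contents are made up for illustration):

```shell
# Hypothetical sample file (demo.txt is an assumed name, not from the question).
printf 'BBBB\nAAAA\nBBBB\n' > demo.txt

# Naive redirection: the shell truncates demo.txt to zero bytes *before*
# sort runs, so sort reads an empty file and the data is lost.
sort -u demo.txt > demo.txt
wc -c < demo.txt          # file is now empty

# Recreate the data, then dedupe via command substitution: the shell expands
# "$(...)" (reading the whole file) before it performs the redirection,
# so writing back to the same file is safe here.
printf 'BBBB\nAAAA\nBBBB\n' > demo.txt
printf '%s\n' "$(sort -u demo.txt)" > demo.txt
cat demo.txt
```

    Note that command substitution strips trailing newlines, which is why the printf '%s\n' adds one back.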

  • 2021-02-20 17:14

    This is not as inefficient as it looks:

    sort -u test.txt > test.txt.tmp && mv test.txt.tmp test.txt 
    
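    A variation on the same temp-file idea (a sketch, not from the original answer): mktemp gives a collision-safe temporary name, and creating it in the same directory keeps the final mv a simple rename.

```shell
# Sample input (contents assumed for illustration).
printf 'CCCC\nAAAA\nCCCC\n' > test.txt

# mktemp creates a uniquely named temp file in the current directory,
# so the mv at the end stays within one filesystem.
tmp=$(mktemp test.txt.XXXXXX) &&
  sort -u test.txt > "$tmp" &&
  mv "$tmp" test.txt

cat test.txt
```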
  • 2021-02-20 17:22

    You can use vim's ex mode to edit the file in place:

    $ ex -s +'%!sort -u' -cxa test.txt
    

    Multiple files:

    $ ex -s +'bufdo!%!sort -u' -cxa *.*
    
  • 2021-02-20 17:24

    Use sponge for Reading/Writing to the Same File

    You can use the sponge utility from moreutils to soak up standard output before writing the file. This saves you from shuffling temporary files around and approximates an in-place edit. For example:

    sort -u test.txt | sponge test.txt
    

    Sample Output

    Using your corpus, this results in the expected output.

    $ cat test.txt 
    AAAA
    BBBB
    CCCC
    