I have a text file that contains a long list of entries (one on each line). Some of these are duplicates, and I would like to know if it is possible (and if so, how) to remove any duplicates.
This worked for me for both .csv and .txt files:
awk '!seen[$0]++' <filename> > <newFileName>
Explanation: The first part of the command, awk '!seen[$0]++', prints only the first occurrence of each row (dropping duplicates), and the second part, i.e. everything after the middle arrow (>), redirects that output into the new file.
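As a quick sketch of how it behaves (the file names list.txt and deduped.txt here are just placeholders for illustration):

$ cat list.txt
apple
banana
apple
cherry
banana

$ awk '!seen[$0]++' list.txt > deduped.txt
$ cat deduped.txt
apple
banana
cherry

It works because seen[$0]++ counts how many times the current line ($0) has been seen; the expression is 0 (false) only the first time, so !seen[$0]++ is true exactly once per distinct line, and awk's default action prints that line. Unlike sort -u, this keeps the lines in their original order.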