How to count number of unique values of a field in a tab-delimited text file?

Asked by 滥情空心 on 2020-12-23 14:21

I have a text file with a large amount of tab-delimited data. I want to look at the data in a way that shows me the unique values in a given column.

7 Answers

有刺的猬 · 2020-12-23 14:43

    You can use the cut, sort, and uniq commands as follows:

    cat input_file | cut -f 1 | sort | uniq
    

    This gets the unique values in field 1; replacing 1 with 2 gives you the unique values in field 2.

    Or, avoiding the UUOC (useless use of cat) :)

    cut -f 1 input_file | sort | uniq
    

    EDIT:

    To count the number of unique occurrences, add the wc command to the end of the chain:

    cut -f 1 input_file | sort | uniq | wc -l
    
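A minimal end-to-end sketch of the pipeline above; the file name sample.tsv and its contents are made up for illustration:

```shell
# Build a small tab-delimited file with duplicate values in column 1.
printf 'apple\t1\nbanana\t2\napple\t3\ncherry\t4\nbanana\t5\n' > sample.tsv

# List the distinct values in column 1.
cut -f 1 sample.tsv | sort | uniq
# apple
# banana
# cherry

# Count them.
cut -f 1 sample.tsv | sort | uniq | wc -l
# 3  (possibly padded with leading spaces, depending on the wc implementation)
```

Note that sort -u collapses duplicate lines itself, so "cut -f 1 sample.tsv | sort -u" is equivalent to the sort | uniq pair.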
