Getting the count of unique values in a column in bash

一整个雨季 2021-01-30 08:14

I have tab-delimited files with several columns. I want to count the frequency of occurrence of the different values in a column for all the files in a folder and sort them in decreasing order of count.

5 Answers
  • 2021-01-30 08:23

    Here is a way to do it in the shell:

    FIELD=2
    cut -f "$FIELD" * | sort | uniq -c | sort -nr
    

    This is the sort of thing bash is great at.
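
    If you do this often, the pipeline can be wrapped in a tiny script. This is only a sketch of that idea; the script name colfreq.sh and its argument handling are illustrative, not part of the original answer:

    #!/usr/bin/env bash
    # colfreq.sh (hypothetical): frequency of the values in one tab-delimited column,
    # most frequent first.
    # usage: ./colfreq.sh FIELD FILE...    e.g. ./colfreq.sh 2 *.txt
    field="$1"
    shift
    cut -f "$field" "$@" | sort | uniq -c | sort -nr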

  • 2021-01-30 08:24

    To see a frequency count for column two (for example):

    awk -F '\t' '{print $2}' * | sort | uniq -c | sort -nr
    

    fileA.txt

    z    z    a
    a    b    c
    w    d    e
    

    fileB.txt

    t    r    e
    z    d    a
    a    g    c
    

    fileC.txt

    z    r    a
    v    d    c
    a    m    c
    

    Result:

      3 d
      2 r
      1 z
      1 m
      1 g
      1 b
    
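    The column number can also be passed to awk as a variable instead of being edited into the program text. A minimal sketch of that variant (the names col and c are arbitrary choices, not part of the original answer):

    col=2
    awk -F '\t' -v c="$col" '{print $c}' * | sort | uniq -c | sort -nr
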
  • 2021-01-30 08:25

    The GNU site suggests this nice awk script, which prints both the words and their frequencies.

    Possible changes:

    • You can pipe through sort -nr (and reverse word and freq[word]) to see the result in descending order.
    • If you want a specific column, you can omit the for loop and simply write freq[$3]++, replacing 3 with the column number (a combined version is sketched after the script below).

    Here goes:

     # wordfreq.awk --- print list of word frequencies
    
     {
         $0 = tolower($0)    # remove case distinctions
         # remove punctuation
         gsub(/[^[:alnum:]_[:blank:]]/, "", $0)
         for (i = 1; i <= NF; i++)
             freq[$i]++
     }
    
     END {
         for (word in freq)
             printf "%s\t%d\n", word, freq[word]
     }
    
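    Applying both suggested changes, a single-column, descending-order variant might look like the sketch below. It assumes tab-separated input and column 2, and it drops the case folding and punctuation stripping because column values are counted verbatim:

    awk -F '\t' '{ freq[$2]++ } END { for (v in freq) printf "%d\t%s\n", freq[v], v }' * | sort -nr
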
  • 2021-01-30 08:32

    Perl

    This code computes the occurrences of all columns, and prints a sorted report for each of them:

    # columnvalues.pl
    while (<>) {
        @Fields = split /\s+/;
        for $i ( 0 .. $#Fields ) {
            $result[$i]{$Fields[$i]}++
        };
    }
    for $j ( 0 .. $#result ) {
        print "column $j:\n";
        @values = keys %{$result[$j]};
        @sorted = sort { $result[$j]{$b} <=> $result[$j]{$a}  ||  $a cmp $b } @values;
        for $k ( @sorted ) {
            print " $k $result[$j]{$k}\n"
        }
    }
    

    Save the text as columnvalues.pl
    Run it as: perl columnvalues.pl files*

    Explanation

    In the top-level while loop:
    * Loop over each line of the combined input files
    * Split the line into the @Fields array
    * For every column, increment the result array-of-hashes data structure

    In the top-level for loop:
    * Loop over the result array
    * Print the column number
    * Get the values used in that column
    * Sort the values by the number of occurrences
    * Secondary sort based on the value (for example b vs g vs m vs z)
    * Iterate through the result hash, using the sorted list
    * Print the value and number of each occurrence

    Results based on the sample input files provided by @Dennis

    column 0:
     a 3
     z 3
     t 1
     v 1
     w 1
    column 1:
     d 3
     r 2
     b 1
     g 1
     m 1
     z 1
    column 2:
     c 4
     a 3
     e 2
    

    .csv input

    If your input files are .csv, change /\s+/ to /,/

    Obfuscation

    In an ugly contest, Perl is particularly well equipped.
    This one-liner does the same:

    perl -lane 'for $i (0..$#F){$g[$i]{$F[$i]}++};END{for $j (0..$#g){print "$j:";for $k (sort{$g[$j]{$b}<=>$g[$j]{$a}||$a cmp $b} keys %{$g[$j]}){print " $k $g[$j]{$k}"}}}' files*
    
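    For comma-separated input, the one-liner can use perl's -F switch to change the autosplit pattern instead of editing the code. A sketch, assuming simple CSV with no quoted fields containing commas:

    perl -F',' -lane 'for $i (0..$#F){$g[$i]{$F[$i]}++};END{for $j (0..$#g){print "$j:";for $k (sort{$g[$j]{$b}<=>$g[$j]{$a}||$a cmp $b} keys %{$g[$j]}){print " $k $g[$j]{$k}"}}}' files*
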
  • 2021-01-30 08:34

    Ruby (1.9+)

    #!/usr/bin/env ruby
    # For each file in the current directory, count how often every
    # tab-separated value occurs and print the counts, most frequent first.
    Dir["*"].each do |file|
      h = Hash.new(0)                 # counts default to 0
      open(file).each do |row|
        row.chomp.split("\t").each do |w|
          h[w] += 1
        end
      end
      h.sort { |a, b| b[1] <=> a[1] }.each { |x, y| print "#{x}:#{y}\n" }
    end
    