grep -f maximum number of patterns?

悲&欢浪女 2020-12-18 04:53

I'd like to use grep on a text file with -f to match a long list (10,000) of patterns. Turns out that grep doesn't like this (who knew?). After a day, it still hadn't produced any output.

5 answers
  • 2020-12-18 05:19

    Here is a bash script you can run on your files (or, if you would like, on a subset of your files). It will split the key file into increasingly large blocks, and for each block attempt the grep operation. The operations are timed - right now I'm timing each grep operation, as well as the total time to process all the sub-expressions. Output is in seconds - with some effort you can get ms, but with the problem you are having it's unlikely you need that granularity.

    Run the script in a terminal window with a command of the form

    ./timeScript keyFile textFile 100 > outputFile

    This will run the script, using keyFile as the file where the search keys are stored, and textFile as the file where you are looking for keys, and 100 as the initial block size. On each loop the block size will be doubled.

    In a second terminal, run the command

    tail -f outputFile

    which will follow the output that your other process is writing into outputFile

    I recommend that you open a third terminal window, and that you run top in that window. You will be able to see how much memory and CPU your process is taking - again, if you see vast amounts of memory consumed it will give you a hint that things are not going well.

    This should allow you to find out when things start to slow down - which is the answer to your question. I don't think there's a "magic number" - it probably depends on your machine, and in particular on the file size and the amount of memory you have.

    You could take the output of the script and put it through a grep:

    grep entire outputFile

    You will end up with just the summaries - block size, and time taken, e.g.

    Time for processing entire file with blocksize 800: 4 seconds

    If you plot these numbers against each other (or simply inspect the numbers), you will see when the algorithm is optimal, and when it slows down.
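
    For example, here is a small sketch (assuming the summary lines keep the exact wording echoed by the script) that pulls out blocksize/seconds pairs ready for plotting or eyeballing:

    # field 8 is the blocksize (with a trailing colon), field 9 the elapsed seconds
    grep 'entire' outputFile | awk '{gsub(":", "", $8); print $8, $9}'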

    Here is the code: I did not do extensive error checking, but it seemed to work for me. Obviously, in your ultimate solution you will need to do something with the output of grep (instead of piping it to wc -l, which I did just to see how many lines were matched)...

    #!/bin/bash
    # script to look at difference in timing
    # when grepping a file with a large number of expressions
    # assume first argument = name of file with list of expressions
    # second argument = name of file to check
    # optional third argument = initial block size (default 100)
    #
    # split f1 into chunks of 1, 2, 4, 8... expressions at a time
    # and print out how long it took to process all the lines in f2
    
    if (($# < 2)); then
      echo "Warning: need at least two parameters."
      echo "Usage: timeScript keyFile searchFile [initial blocksize]"
      exit 1
    fi
    
    f1_linecount=$(wc -l < "$1")
    echo linecount of file1 is $f1_linecount
    
    f2_linecount=$(wc -l < "$2")
    echo linecount of file2 is $f2_linecount
    echo
    
    if (($# < 3 )); then
      blockLength=100
    else
      blockLength=$3
    fi
    
    while (($blockLength < f1_linecount))
    do
      echo Using blocks of $blockLength
      # split is a standard utility that splits a file into pieces
      # -l tells it to break after $blockLength lines
      # and block$blockLength is the prefix for the output files
      split -l $blockLength $1 block$blockLength
      Tstart="$(date +%s)"
      Tbefore=$Tstart
    
      for fn in block*
        do
          echo "grep -f $fn $2 | wc -l"
          echo number of lines matched: `grep -f $fn $2 | wc -l`
          Tnow="$(($(date +%s)))"
          echo Time taken: $(($Tnow - $Tbefore)) s
          Tbefore=$Tnow
        done
      echo Time for processing entire file with blocksize $blockLength: $(($Tnow - $Tstart)) seconds
      blockLength=$((2*$blockLength))
      # remove the split files - no longer needed
      rm block*
      echo block length is now $blockLength and f1 linecount is $f1_linecount
    done
    
    exit 0
    
  • 2020-12-18 05:22

    I got the same problem with approx. 4 million patterns to search for in a file with 9 million lines. It seems to be a RAM problem, so I came up with this neat little workaround, which might be slower than splitting and joining but only needs this one line:

     while read -r line; do grep -- "$line" fileToSearchIn; done < patternFile
    

    I needed to use the workaround since the -F flag is no solution for files that large...
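
    For reference, a minimal sketch of the split-and-join approach mentioned above (the file names are placeholders, and it assumes the patterns are fixed strings so -F applies):

     # break the pattern file into chunks grep can cope with, search with
     # each chunk, then concatenate and de-duplicate the matches
     split -l 10000 patternFile chunk_
     for c in chunk_*; do
         grep -F -f "$c" fileToSearchIn
     done | sort -u > allMatches
     rm chunk_*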

    EDIT: This seems to be really slow for large files. After some more research I found 'faSomeRecords' and other really awesome tools from Kent NGS-editing-Tools.

    I tried it myself by extracting 2 million FASTA records from a 5.5 million record file. It took approx. 30 sec.

    cheers

    EDIT: direct download link

  • 2020-12-18 05:25

    From comments, it appears that the patterns you are matching are fixed strings. If that is the case, you should definitely use -F. That will increase the speed of the matching considerably. (Using 479,000 strings to match on an input file with 3 lines using -F takes under 1.5 seconds on a moderately powered machine. Not using -F, that same machine is not yet finished after several minutes.)
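
    In practice that means something along these lines (the file names are placeholders; setting LC_ALL=C can often speed up byte-oriented matching even further):

    # -F treats every line of patterns.txt as a literal string rather than a regex
    LC_ALL=C grep -F -f patterns.txt textFile > matches.txt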

  • 2020-12-18 05:25

    Here is a perl script "match_many.pl" which addresses a very common subset of the "large number of keys vs. large number of records" problem. Keys are accepted one per line from stdin. The two command line parameters are the name of the file to search and the field (white space delimited) which must match a key; a sample invocation is shown after the script.

    This subset of the original problem can be solved quickly, since the location of the match (if any) in the record is known ahead of time and the key always corresponds to an entire field in the record. In one typical case it searched 9400265 records with 42899 keys, matching 42401 of the keys and emitting 1831944 records in 41s.

    The more general case, where the key may appear as a substring in any part of a record, is a more difficult problem that this script does not address. (If keys never include white space and always correspond to an entire word, the script could be modified to handle that case by iterating over all fields per record instead of just testing the one, at the cost of running M times slower, where M is the average field number where the matches are found.)

    #!/usr/bin/perl -w
    use strict;
    use warnings;
    my $kcount;
    my ($infile,$test_field) = @ARGV;
    if(!defined($infile) || "$infile" eq "" || !defined($test_field) || ($test_field <= 0)){
      die "syntax: match_many.pl infile field" 
    }
    my %keys;       # hash of keys
    $test_field--;  # external range (1,N) to internal range (0,N-1)
    
    $kcount=0;
    while(<STDIN>) {
       my $line = $_;
       chomp($line);
       $keys{$line} = 1;
       $kcount++;
    }
    print STDERR "keys read: $kcount\n";
    
    my $records = 0;
    my $emitted = 0;
    open(INFILE, $infile )  or die "Could not open $infile";
    while(<INFILE>) {
       if(substr($_,0,1) =~ /#/){ #skip comment lines
         next;
       }
       my $line = $_;
       chomp($line);
       $line =~ s/^\s+//;
       my @fields = split(/\s+/, $line);
       if(exists($keys{$fields[$test_field]})){
          print STDOUT "$line\n";
          $emitted++;
          $keys{$fields[$test_field]}++;
       }
       $records++;
    }
    
    $kcount=0;
    while( my( $key, $value ) = each %keys ){
       if($value > 1){ 
          $kcount++; 
       }
    }
    
    close(INFILE);
    print STDERR "records read: $records, emitted: $emitted; keys matched: $kcount\n";
    
    exit;
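
    A sample invocation, based on the description above (the file names are placeholders): keys arrive on stdin, and the two arguments are the records file and the 1-based field to test.

    ./match_many.pl records.txt 1 < keys.txt > matched_records.txt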
    
  • 2020-12-18 05:36

    You could certainly give sed a try to see whether you get a better result, but either way it is a lot of work on a file of any size. You didn't provide many details about your problem, but if you have 10k patterns, I would try to figure out whether there is some way to generalize them into a smaller number of regular expressions.
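
    For instance (a made-up illustration, not drawn from your data), if many of the 10k patterns differ only in a numeric suffix, one extended regex can stand in for thousands of fixed strings:

    # sample_00001 ... sample_09999 all collapse into a single pattern
    grep -E 'sample_[0-9]{5}' textFile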
