I am currently reading a file and storing the data in an array named @lines. Then, I loop through this array using a for loop, and inside the loop I...
You could also use the File::Slurp module, which is convenient.
use strict;
use warnings;
use File::Slurp 'read_file';
my $fname = shift or die "Usage: $0 filename\n";
my @lines = grep /fever/, read_file $fname; # grep with regular expression
print @lines;
If you're new to Perl, take a look at the map and grep operators, which are handy for processing lists.
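For instance, grep filters a list down to the matching elements, and map transforms each element; the two compose naturally (a minimal sketch with made-up sample data):

```perl
use strict;
use warnings;

my @lines = ("abc\n", "fever one\n", "fever two\n");

# grep keeps only the elements matching the pattern
my @matches = grep { /fever/ } @lines;

# map transforms each element; here, uppercase every match
my @upper = map { uc } @matches;

print @upper;  # prints "FEVER ONE" and "FEVER TWO"
```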
Also, take a look at the ack utility, which is a superior replacement for find/grep.
If you have Perl 5.10 or later, you can use smart matching (~~):
my @patterns = (qr/foo/, qr/bar/);
if ($line ~~ @patterns) {
print "matched\n";
}
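Be aware that smart match was deprecated in Perl 5.18 and removed in 5.38, so on a modern Perl the same "does this line match any of these patterns" check is better written with grep (a minimal sketch with sample data):

```perl
use strict;
use warnings;

my @patterns = (qr/foo/, qr/bar/);
my $line = "foobar";

# True if $line matches at least one of the compiled patterns
if (grep { $line =~ $_ } @patterns) {
    print "matched\n";
}
```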
Use Tie::File. It loads the file into an array, which you can manipulate using array operations. When you untie the file, its components will be saved back in the file.
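A minimal sketch of that approach (the filename is a placeholder):

```perl
use strict;
use warnings;
use Tie::File;

my $fname = 'data.txt';  # placeholder filename
tie my @lines, 'Tie::File', $fname
    or die "Cannot tie $fname: $!";

# Array operations act directly on the file's lines
my $count = grep { /fever/ } @lines;
print "Lines containing 'fever': $count\n";

untie @lines;  # writes any changes back to the file
```

Note that Tie::File rereads the file on demand, so it can handle files too large to slurp into memory, at the cost of slower element access.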
If you read a filehandle in list context, it takes everything at once:
@array = <$fh>; # Reads all lines into array
Contrast this with reading in scalar context, which reads just one line at a time:
$singleLine = <$fh>; # Reads just one line
Reading the whole file at once can be a problem with large files, but you get the idea.
Then you can use grep to filter your array:
@filteredArray = grep /fever/, @array;
Then you can count the filtered lines using scalar, which forces scalar (that is, single-value) context on the array, in this case returning the element count:
print scalar @filteredArray;
Putting it all together...
C:\temp>cat test.pl
use strict; use warnings; # always
my @a=<DATA>; # Read all lines from __DATA__
my @f = grep /fever/, @a; # Get just the fevered lines
print "Filtered lines = ", scalar @f; # Print how many filtered lines we got
__DATA__
abc
fevered
frier
forever
111fever111
abc
C:\temp>test.pl
Filtered lines = 2
C:\temp>