I was writing a file parser in Perl, so I had to loop through the file. The file consists of fixed-length records, and I wanted to make a separate function that parses a given record and cal
The issue you are raising does not have anything to do with loops. Both your A and B examples are the same in that regard. Rather, the issue is the difference between direct, in-line coding vs. calling the same code via a function.
Function calls do involve an unavoidable overhead. I can't speak to whether or why this overhead is costlier in Perl than in other languages, but I can illustrate a better way to measure this sort of thing:
use strict;
use warnings;
use Benchmark qw(cmpthese);

sub just_return { return }
sub get_string  { my $s = sprintf "%s\n", 'abc' }

my %methods = (
    direct      => sub { my $s = sprintf "%s\n", 'abc' },
    function    => sub { my $s = get_string() },
    just_return => sub { my $s = just_return() },
);

cmpthese(-2, \%methods);
Here's what I get on Perl v5.10.0 (MSWin32-x86-multi-thread); the -2 tells cmpthese to run each sub for at least two CPU seconds. Very roughly, simply calling a function that does nothing is about as costly as directly running our sprintf code.
                 Rate    function just_return      direct
function    1062833/s          --        -70%        -71%
just_return 3566639/s        236%          --         -2%
direct      3629492/s        241%          2%          --
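To connect this back to your record-parsing case, you can adapt the same benchmark to your own code. Here's a rough sketch; the A10 A4 A6 layout and parse_record are just made-up stand-ins for whatever your fixed-length records and parsing function actually look like:

use strict;
use warnings;
use Benchmark qw(cmpthese);

# Hypothetical fixed-length record: 10-char name, 4-digit id, 6-digit amount.
my $record = sprintf "%-10s%04d%06d", 'widget', 42, 1999;

# Stand-in for whatever your per-record parsing function does.
sub parse_record {
    my ($rec) = @_;
    return unpack 'A10 A4 A6', $rec;
}

cmpthese(-2, {
    inline   => sub { my @fields = unpack 'A10 A4 A6', $record },
    function => sub { my @fields = parse_record($record) },
});

Whichever way the numbers come out, remember they measure only the parsing step, not the file I/O around it.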
In general, if you need to optimize some Perl code for speed and you're trying to squeeze out every last drop of efficiency, direct coding is the way to go, but it often comes at the price of reduced maintainability and readability. Before you get into the business of such micro-optimizing, however, make sure that your underlying algorithm is solid and that you have a firm grasp on where the slow parts of your code actually reside. It's easy to waste a lot of effort working on the wrong thing.
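One rough-and-ready way to find those slow parts, short of reaching for a full profiler like Devel::NYTProf, is to time individual sections with Time::HiRes. A minimal sketch, with the sections as placeholders for your own reading and parsing code:

use strict;
use warnings;
use Time::HiRes qw(gettimeofday tv_interval);

my $t0 = [gettimeofday];
# ... read the file here (placeholder) ...
printf "reading: %.3fs\n", tv_interval($t0);

$t0 = [gettimeofday];
# ... parse the records here (placeholder) ...
printf "parsing: %.3fs\n", tv_interval($t0);

If the timings show that parsing dominates, then benchmarking inline vs. function-call parsing is worth your while; if I/O dominates, it isn't.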