I was writing a file parser in Perl, so I had to loop through the file. The file consists of fixed-length records, and I wanted to make a separate function that parses a given record and call it for each record in the loop.
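For concreteness, here is a minimal sketch of that kind of loop, with a hypothetical parse_record sub; the field names, widths, and file name are made up for illustration and are not from the question:

use strict;
use warnings;

# Hypothetical record layout: 10-char name, 8-char date, 6-char amount.
sub parse_record {
    my ($record) = @_;
    my ($name, $date, $amount) = unpack 'A10 A8 A6', $record;
    return { name => $name, date => $date, amount => $amount };
}

open my $fh, '<', 'records.dat' or die "Can't open records.dat: $!";
while (my $line = <$fh>) {
    chomp $line;
    my $rec = parse_record($line);    # one sub call per record
    # ... use $rec ...
}
close $fh;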
Perl function calls are slow. It sucks because the very thing you want to be doing, decomposing your code into maintainable functions, is the very thing that will slow your program down. Why are they slow? Perl does a lot of things when it enters a subroutine, a result of it being extremely dynamic (i.e. you can mess with a lot of things at run time). It has to get the code reference for that name, check that it is a code ref, set up a new lexical scratchpad (to store my variables), set up a new dynamic scope (to store local variables), set up @_, and check what context it was called in and pass along the return value, to name a few. Attempts have been made to optimize this process, but they haven't paid off. See pp_entersub in pp_hot.c for the gory details.
Also, there was a bug in 5.10.0 that slowed down functions. If you're using 5.10.0, upgrade.
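To see the overhead in isolation, you can benchmark the same work done through a named sub versus inlined. This is a minimal sketch using the core Benchmark module; the record layout and parse_record are made-up examples, not anything from the question:

use strict;
use warnings;
use Benchmark qw(cmpthese);

# Hypothetical 30-byte record with three 10-character fields.
my $record = sprintf '%-10s%-10s%-10s', 'alice', '2024-01-01', '42.50';

sub parse_record {
    my ($rec) = @_;
    return [ unpack 'A10 A10 A10', $rec ];
}

# Run each variant for at least 3 CPU seconds and print a comparison table.
cmpthese(-3, {
    'sub call' => sub { my $r = parse_record($record) },
    'inlined'  => sub { my $r = [ unpack 'A10 A10 A10', $record ] },
});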
As a result, avoid calling functions over and over again in a long loop, especially if it's nested. Can you cache the results, perhaps using Memoize (see the sketch at the end of this answer)? Does the work have to be done inside the loop? Does it have to be done inside the inner-most loop? For example:
for my $thing (@things) {
    for my $person (@persons) {
        print header($thing);
        print message_for($person);
    }
}
The call to header could be moved out of the @persons loop, reducing the number of calls from @things * @persons to just @things.
for my $thing (@things) {
    my $header = header($thing);
    for my $person (@persons) {
        print $header;
        print message_for($person);
    }
}
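And if the function being called is a pure function of its arguments, like header above, Memoize can cache its results across calls. A minimal sketch, reusing the names from the example above and assuming header is deterministic with no side effects:

use Memoize;

# Cache header() results keyed on its arguments; repeated calls with the
# same $thing return the stored value instead of re-running the body.
memoize('header');

for my $thing (@things) {
    for my $person (@persons) {
        print header($thing);           # computed once per distinct $thing
        print message_for($person);
    }
}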