Question
I have the code below, which reads a CSV file and converts it into a hash. The keys are built from however many key columns the user needs.
use warnings;
use strict;

my %hash;
my $KeyCols = 2;

while (<DATA>) {
    chomp;
    my @cols = split /,/, $_, $KeyCols + 1;
    next unless @cols > $KeyCols;
    my $v = pop @cols;
    my $k = join '', @cols;
    $hash{$k} = $v;
}
I need help achieving the same logic using the Text::CSV_XS module, for efficiency. Please help.
Answer 1:
The real reason to use Text::CSV_XS is correctness. It's not going to be faster than what you have, but it will work where yours will fail.
use Text::CSV_XS qw( );

my $csv = Text::CSV_XS->new({
    auto_diag => 2,
    binary    => 1,
});

my %hash;
while ( my $row = $csv->getline(\*DATA) ) {
    $hash{ $row->[0] . $row->[1] } = $row;
}
Concatenating the fields together directly (without a separator) seems really odd.
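For example, joining the key columns with a separator that cannot appear in the data avoids collisions between rows like ("a","bc") and ("ab","c"). A minimal sketch, not from the original answer, assuming Perl's default subscript separator $; ("\x1C") does not occur in the fields and reusing the $csv object above:

# Sketch: build the key with a separator so distinct key columns
# cannot collide after concatenation.
my %hash;
while ( my $row = $csv->getline(\*DATA) ) {
    my $key = join $;, @{$row}[0, 1];   # $; defaults to "\x1C"
    $hash{$key} = $row;
}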
The above stores the value as an array of fields rather than as a CSV string. If you want a CSV string as in the original, you will need to re-encode the remaining fields.
my %hash;
while ( my $row = $csv->getline(\*DATA) ) {
    my ($k1, $k2) = splice(@$row, 0, 2);
    $csv->combine(@$row);
    $hash{ $k1 . $k2 } = $csv->string();
}
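To mirror the question's configurable number of key columns, the same idea can be generalized. A rough sketch, not part of the original answer, assuming the $csv object created above and keeping the question's separator-less join for the key:

# Sketch: take the first $KeyCols columns as the key and re-encode the
# rest of the row as one CSV string, as in the question's original logic.
my $KeyCols = 2;
my %hash;
while ( my $row = $csv->getline(\*DATA) ) {
    next unless @$row > $KeyCols;                 # skip rows with too few columns
    my @key_cols = splice(@$row, 0, $KeyCols);    # first N columns form the key
    $csv->combine(@$row);                         # re-encode the remaining fields
    $hash{ join '', @key_cols } = $csv->string;
}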
Source: https://stackoverflow.com/questions/62160871/to-convert-csv-file-to-hash-structure-using-textcsv-xs-module-in-perl