I've got data in a large file (280 columns wide, 7 million lines long!) and I need to swap the first two columns. I think I could do this with some kind of awk for loop that prints the columns in a different order, but I'm not sure how.
You can do this by swapping the values of the first two fields:
awk ' { t = $1; $1 = $2; $2 = t; print; } ' input_file
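A quick way to sanity-check the command on a tiny sample before letting it loose on 7 million lines (hypothetical data, just to illustrate the effect):

printf 'a b c\nd e f\n' | awk '{ t = $1; $1 = $2; $2 = t; print }'
b a c
e d f

One thing to be aware of: assigning to $1 makes awk rebuild the record using the output field separator (a single space by default), so any runs of whitespace in the input are collapsed.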
You could do this in Perl:
perl -F\\t -nlae 'print join("\t", @F[1,0,2..$#F])' inputfile
The -F switch specifies the input delimiter. In most shells you need to precede the backslash with another backslash to escape it, hence -F\\t. In recent versions of Perl (5.20 and later), -F implies -a and -a implies -n, so those two switches can be dropped.
For your problem you wouldn't strictly need -l, because the last column also appears last in the output. But in a different situation, where the last column needs to appear between other columns, the trailing newline must first be removed from it. The -l switch takes care of this (it chomps the newline on input and adds it back on print).
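To illustrate why -l matters in that case (a made-up three-column example, not from the question): without -l the last field keeps its trailing newline, so moving it forward splits the line:

printf 'a\tb\tc\n' | perl -F'\t' -ane 'print join("\t", @F[2,0,1]), "\n"'

With -l the newline is chomped on input and appended again on print, so the same reordering yields one clean line:

printf 'a\tb\tc\n' | perl -F'\t' -lane 'print join("\t", @F[2,0,1])'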
The "\t" passed to join can be changed to anything else to produce a different delimiter in the output.
2..$#F specifies the range from index 2 through the last column ($#F is the last index of the zero-based @F array). As you might have guessed, inside the square brackets you can put any single column or range of columns in the desired order.
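As a hypothetical variation (not part of the answer above), this would move the last column to the front and emit comma-separated output:

perl -F'\t' -lane 'print join(",", @F[$#F, 0 .. $#F-1])' inputfile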
I tried perreal's answer with Cygwin on a Windows system on a tab-separated file. It didn't work as expected, because awk's standard separator is the space: the input is split on whitespace, and the output is rejoined with single spaces, so the tabs are lost.
If you encounter the same problem, try this instead:
awk -F $'\t' ' { t = $1; $1 = $2; $2 = t; print; } ' OFS=$'\t' input_file
The input separator is defined by -F $'\t' and the output separator by OFS=$'\t'. To write the result to a new file:
awk -F $'\t' ' { t = $1; $1 = $2; $2 = t; print; } ' OFS=$'\t' input_file > output_file
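As an equivalent stylistic variant (assuming a POSIX-style awk), both separators can also be set up front with -F and -v, which some find easier to read:

awk -F'\t' -v OFS='\t' '{ t = $1; $1 = $2; $2 = t; print }' input_file > output_file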