An efficient way to transpose a file in Bash

时光说笑 2020-11-22 03:30

I have a huge tab-separated file formatted like this:

X column1 column2 column3
row1 0 1 2
row2 3 4 5
row3 6 7 8
row4 9 10 11

I would like to transpose it so that the rows become columns, i.e. the output should look like this:

X row1 row2 row3 row4
column1 0 3 6 9
column2 1 4 7 10
column3 2 5 8 11

Is there an efficient way to do this in Bash?

29 Answers
  • 2020-11-22 03:56

    Pure Bash, no additional processes. A nice exercise:

    declare -a array=( )                      # we build a 1-D array

    read -a line < "$1"                       # read the header line
    COLS=${#line[@]}                          # save the number of columns

    index=0
    while read -a line; do                    # read the whole file, header included
        for (( COUNTER=0; COUNTER<${#line[@]}; COUNTER++ )); do
            array[$index]=${line[$COUNTER]}
            ((index++))
        done
    done < "$1"

    for (( ROW = 0; ROW < COLS; ROW++ )); do
        for (( COUNTER = ROW; COUNTER < ${#array[@]}; COUNTER += COLS )); do
            printf "%s\t" "${array[$COUNTER]}"
        done
        printf "\n"
    done
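
    A minimal usage sketch for the script above (assuming it is saved under the
    hypothetical name transpose.sh and the data sits in file.txt):

    bash transpose.sh file.txt > transposed.txt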
    
  • 2020-11-22 03:56

    Another version, using set and eval:

    #!/bin/bash

    aline="$(head -n 1 file.txt)"            # grab the header line
    set -- $aline                            # split it into positional parameters
    colNum=$#                                # number of columns

    #set -x
    while read line; do
      set -- $line                           # split the current line into fields
      for i in $(seq $colNum); do
        eval col$i="\"\$col$i \$$i\""        # append field $i to the variable col$i
      done
    done < file.txt

    for i in $(seq $colNum); do
      eval echo \${col$i}                    # print each accumulated column as one row
    done
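
    A note on this version: the unquoted set -- $line and eval echo \${col$i}
    perform word splitting and globbing, so empty cells are dropped and fields
    containing characters such as * may be expanded to file names.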
    


  • 2020-11-22 03:59

    I was looking for a solution to transpose any kind of matrix (n×n or m×n) with any kind of data (numbers or text) and came up with the following:

    Row2Trans=number1                # number of rows in the input file
    Col2Trans=number2                # number of columns in the input file

    for (( i=1; i<=Row2Trans; i++ )); do
        for (( j=1; j<=Col2Trans; j++ )); do
            awk -v var1="$i" -v var2="$j" 'BEGIN { FS = "," } NR==var1 { print $var2 }' "$ARCHIVO" >> Column_$i
        done
    done

    paste -d',' `ls -mv Column_* | sed 's/,//g'` >> "$ARCHIVO"
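
    A usage sketch with hedged values, assuming the question's 5-row, 4-column
    sample had been saved comma-separated in the file named by ARCHIVO:

    ARCHIVO=data.csv
    Row2Trans=5     # rows in the input
    Col2Trans=4     # columns in the input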
    
  • 2020-11-22 04:00

    If you have sc (the curses spreadsheet calculator) installed, you can do:

    psc -r < inputfile | sc -W% - > outputfile
    
  • 2020-11-22 04:00

    If you only want to grab a single (comma-delimited) line $N out of a file and turn it into a column:

    head -$N file | tail -1 | tr ',' '\n'
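
    For the tab-separated sample in the question (assuming it is stored in
    file.txt), the same idea with N=2 prints the row1 line as a column:

    head -2 file.txt | tail -1 | tr '\t' '\n'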
    
  • 2020-11-22 04:01

    An awk solution that stores the whole array in memory:

        awk '$0!~/^$/{    i++;
                      split($0,arr,FS);
                      for (j in arr) {
                          out[i,j]=arr[j];
                          if (maxr<j){ maxr=j}     # max number of output rows.
                      }
                }
        END {
            maxc=i                 # max number of output columns.
            for     (j=1; j<=maxr; j++) {
                for (i=1; i<=maxc; i++) {
                    printf( "%s:", out[i,j])
                }
                printf( "%s\n","" )
            }
        }' infile
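
    On the sample from the question, this prints each output row with a trailing
    ":" separator, for example:

        X:row1:row2:row3:row4:
        column1:0:3:6:9:

    Changing the printf format from "%s:" to "%s\t" gives tab-separated output.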
    

    But we can also "walk" the file as many times as there are output rows:

    #!/bin/bash
    maxf="$(awk '{ if (mf<NF) mf=NF } END{ print mf }' infile)"   # widest line = number of output rows
    rowcount="$maxf"
    for (( i=1; i<=rowcount; i++ )); do
        awk -v i="$i" -F " " '{printf("%s\t ", $i)}' infile       # print field i of every line on one row
        echo
    done
    

    For a low number of output rows, this is faster than the previous code.
