I read the help read
page, but it still doesn't quite make sense to me, and I don't know which option to use.
How can I read N lines at a time using Bash?
Just use a for
loop:
for i in $(seq 1 "$N") ; do read -r line ; lines+=$line$'\n' ; done
In bash version 4, you can also use the mapfile
command.
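Expanded into a runnable sketch (the chunk size N, the variable names, and the sample input are only illustrative):

```shell
#!/usr/bin/env bash
# Collect N lines from stdin into one variable, as the loop above does.
N=3
lines=
for i in $(seq 1 "$N"); do
    IFS= read -r line || break   # stop early if the input runs out
    lines+=$line$'\n'
done
printf '%s' "$lines"
```

Run it as `printf 'a\nb\nc\nd\n' | ./chunk.sh` and it prints the first three lines.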
Depending on what you're trying to do, you can just store the previous lines.
LINE_COUNT=0
PREVLINE1=""
PREVLINE2=""
while read -r LINE
do
    LINE_COUNT=$((LINE_COUNT+1))
    if [[ $LINE_COUNT == 3 ]]; then
        LINE_COUNT=0
        # do whatever you want to do with the 3 lines
    fi
    PREVLINE2="$PREVLINE1"
    PREVLINE1="$LINE"
done < "$FILE_IN"
I know you asked about bash, but I am amazed that this works with zsh
#!/usr/bin/env zsh
cat 3-lines.txt | read -d\4 my_var my_other_var my_third_var
Unfortunately, this doesn't work with bash, at least not in the versions I tried: in zsh the last command of a pipeline runs in the current shell, so the variables survive, while bash runs it in a subshell.
The "magic" here is the -d\4
(this doesn't work in bash), which sets the line delimiter to the EOT
character, which will be found at the end of your cat
output, or that of any command that produces output.
If you want to read an array of N
items, bash has readarray
and mapfile
, which can read N
lines from a file and save each line into one position of an array.
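A minimal sketch of that array form (the file path and array name are arbitrary):

```shell
#!/usr/bin/env bash
# Read up to 3 lines into the array "chunk"; -t strips the trailing newlines.
printf 'one\ntwo\nthree\nfour\n' > /tmp/demo.txt   # sample input for illustration
readarray -t -n 3 chunk < /tmp/demo.txt
echo "${#chunk[@]}"   # number of lines read: 3
echo "${chunk[1]}"    # second line: two
```

mapfile is a synonym for readarray, so either name works here.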
EDIT
After some tries, I just found out that this works with bash:
$ read -d# a b
Hello
World
#
$ echo $a $b
Hello World
$
However, I could not make { cat /tmp/file ; echo '#'; } | read -d# a b
work :(
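The reason that pipeline form fails in bash is that each element of a pipeline runs in a subshell, so the variables set by read vanish when it exits. A sketch of two possible workarounds, assuming a scratch file at /tmp/file (hypothetical path):

```shell
#!/usr/bin/env bash
printf 'Hello\nWorld\n' > /tmp/file      # sample input for illustration

# Workaround 1: process substitution keeps read in the current shell.
read -d# a b < <(cat /tmp/file; echo '#')
echo "$a $b"                             # Hello World

# Workaround 2 (bash >= 4.2): lastpipe runs the last pipeline element in
# the current shell, but only when job control is off (i.e. in scripts).
shopt -s lastpipe
{ cat /tmp/file; echo '#'; } | read -d# c d
echo "$c $d"                             # Hello World
```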
After looking at all the answers, I think the following is the simplest, i.e. the one most scripters would understand, but it only suits a small number of items:
while read -r var1 && read -r var2; do
echo "$var1" "$var2"
done < yourfile.txt
The multi-command approach is also excellent, but it uses lesser-known syntax, although it is still intuitive:
while read -r var1; read -r var2; do
echo "$var1" "$var2"
done < yourfile.txt
It has the advantage that you don't need line continuations for a larger number of items:
while
read -r var1
read -r var2
...
read -r varN
do
echo "$var1" "$var2"
done < yourfile.txt
The xargs answer posted is also nice in theory, but in practice processing the combined lines is not so obvious. For example one solution I came up with using this technique is:
while read -r var1 var2; do
echo "$var1" "$var2"
done <<< "$(cat yourfile.txt | xargs -L 2)"
but again this uses the lesser-known <<<
operator. However, this approach has the advantage that if your script was initially
while read -r var1; do
echo "$var1"
done < yourfile.txt
then extending it to multiple lines is somewhat natural:
while read -r var1 var2; do
echo "$var1" "$var2"
done <<< "$(cat yourfile.txt | xargs -L 2)"
The straightforward solution
while read -r var1; do
read -r var2
echo "$var1" "$var2"
done < yourfile.txt
is the only other one that I would consider among the many given, for its simplicity; but syntactically it is not as expressive, and compared to the &&
version or the multi-command version it does not feel quite as right.
With Bash ≥ 4 you can use mapfile
like so:
while mapfile -t -n 10 ary && ((${#ary[@]})); do
printf '%s\n' "${ary[@]}"
printf -- '--- SNIP ---\n'
done < file
That reads 10 lines at a time.
I came up with something very similar to @albarji's answer, but more concise.
read_n() { for i in $(seq "$1"); do read -r || return; echo "$REPLY"; done; }
while lines="$(read_n 5)"; do
echo "========= 5 lines below ============"
echo "$lines"
done < input-file.txt
The read_n
function will read $1
lines from stdin
(use redirection to make it read from a file, just like the built-in read
command). Because the exit code from read
is propagated, you can use read_n
in a loop, as the example above demonstrates.