I want to delete consecutive duplicate lines from a file. For example:
**test.txt**
car
speed is good
bike
slower than car
plane
super fast
super fast
bullet train
I also wanted to keep track of how many duplicates were suppressed, while still skipping only consecutive duplicates.
While this is not exactly what the OP asked, it is a variant which others may find useful:
perl -ne 'if (defined($pr) && ($_ eq $pr)) {$cnt++;} else {print "... (+$cnt)\n" if ($cnt); print; $cnt=0; $pr=$_;} END {print "... (+$cnt)\n" if ($cnt);}'
(The END block is needed so that a run of duplicates at the very end of the file still gets its count printed.)
It produced something like this with my data (a database restore log):
COPY 9
COPY 0
... (+2)
COPY 5
COPY 0
... (+1)
COPY 24
ALTER TABLE
... (+23)
CREATE INDEX
... (+73)
A shorter Perl version of the same idea:
perl -ne 'print $_ unless $_ eq $prev; $prev = $_'
Why don't you just use uniq?
uniq file.txt
Results:
car
speed is good
bike
slower than car
plane
super fast
bullet train
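Since the question also asks to keep track of how many duplicates were suppressed, note that `uniq -c` (a POSIX option) prints each output line prefixed with the number of consecutive occurrences:

```shell
# Recreate the sample input, then count consecutive occurrences.
printf '%s\n' car 'speed is good' bike 'slower than car' \
    plane 'super fast' 'super fast' 'bullet train' > file.txt
uniq -c file.txt
```

Every line gets a count prefix (e.g. `2 super fast`), not just the duplicated ones; the exact width of the count column varies between implementations.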
You can also do this with awk:
awk 'line != $0; { line = $0 }' file.txt
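The awk version works because `line != $0` is a bare pattern whose default action is to print the current line, while `{ line = $0 }` runs on every line and saves it for the next comparison. One caveat: awk initializes `line` to the empty string, so if the file starts with an empty line, that first empty line is suppressed. A minimal check with a small hypothetical input file:

```shell
# Bare pattern prints when the line differs from the previous one;
# the action block saves the current line for the next comparison.
printf '%s\n' a a b b a > in.txt
awk 'line != $0; { line = $0 }' in.txt
```

This prints `a`, `b`, `a` on separate lines, matching what uniq would produce for the same input.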
Try:
perl -ne 'print unless (defined($prev) && ($_ eq $prev)); $prev=$_'