How to delete partial duplicate lines with AWK?

Submitted by 此生再无相见时 on 2019-12-10 11:07:52

Question


I have files with this kind of duplicate line, where only the last field differs:

OST,0202000070,01-AUG-09,002735,6,0,0202000068,4520688,-1,0,0,0,0,0,55
ONE,0208076826,01-AUG-09,002332,316,3481.055935,0204330827,29150,200,0,0,0,0,0,5
ONE,0208076826,01-AUG-09,002332,316,3481.055935,0204330827,29150,200,0,0,0,0,0,55
OST,0202000068,01-AUG-09,003019,6,0,0202000071,4520690,-1,0,0,0,0,0,55

I need to remove the first occurrence of the line and leave the second one.

I've tried:

awk '!x[$0]++ {getline; print $0}' file.csv

but it's not working as intended: it also removes non-duplicate lines.


Answer 1:


If your near-duplicates are always adjacent, you can just compare to the previous entry and avoid creating a potentially huge associative array.

#!/bin/awk -f
{
    # Key: everything before the last comma-separated field
    s = substr($0, 1, match($0, /,[^,]*$/) - 1)
    if (NR > 1 && s != prev) {
        print prev0
    }
    prev = s
    prev0 = $0
}
END {
    if (NR) print prev0
}

Edit: Changed the script so it prints the last one in a group of near-duplicates (no tac needed).
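The same compare-to-previous idea can also be written inline. A sketch using the sample rows from the question (filenames omitted; the data is fed in via printf):

```shell
# Keep only the last line in each run of lines that share all fields
# except the final one. Near-duplicates must be adjacent, as in the sample.
printf '%s\n' \
  'OST,0202000070,01-AUG-09,002735,6,0,0202000068,4520688,-1,0,0,0,0,0,55' \
  'ONE,0208076826,01-AUG-09,002332,316,3481.055935,0204330827,29150,200,0,0,0,0,0,5' \
  'ONE,0208076826,01-AUG-09,002332,316,3481.055935,0204330827,29150,200,0,0,0,0,0,55' \
  'OST,0202000068,01-AUG-09,003019,6,0,0202000071,4520690,-1,0,0,0,0,0,55' |
awk '{
    k = $0; sub(/,[^,]*$/, "", k)        # key = line minus last field
    if (NR > 1 && k != prev) print line  # key changed: emit the previous line
    prev = k; line = $0
}
END { if (NR) print line }'
```

This prints the three distinct records, keeping the `,55` version of the duplicated `ONE` line.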




Answer 2:


#!/bin/awk -f
{
    # Key: everything before the last comma-separated field.
    # Note: this keeps the FIRST line seen for each key.
    s = substr($0, 1, match($0, /,[^,]+$/) - 1)
    if (!seen[s]) {
        print $0
        seen[s] = 1
    }
}
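The snippet above keeps the first line for each key, while the question asks to keep the second. A sketch of a keep-last variant (not from the original answer) that remembers the most recent line per key and preserves first-seen order:

```shell
awk '{
    k = $0; sub(/,[^,]*$/, "", k)      # key = all fields except the last
    if (!(k in last)) order[++n] = k   # record first-seen order of keys
    last[k] = $0                       # later lines overwrite earlier ones
}
END {
    for (i = 1; i <= n; i++) print last[order[i]]
}' file.csv
```

Unlike the compare-to-previous approach, this also works when near-duplicates are not adjacent, at the cost of holding every key in memory.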



Answer 3:


As a general strategy (I'm not much of an AWK pro despite taking classes with Aho) you might try:

  1. Concatenate all the fields except the last.
  2. Use this string as the key into a hash.
  3. Store the entire line as the value in the hash.
  4. When you have processed all lines, loop through the hash and print out the values.

This isn't AWK specific and I can't easily provide any sample code, but this is what I would first try.
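The four steps above might look roughly like this in awk (a sketch; plain awk's `for (key in hash)` loop does not guarantee order, so this version records insertion order explicitly):

```shell
awk -F, '{
    key = ""
    for (i = 1; i < NF; i++)           # 1. concatenate all fields but the last
        key = key $i SUBSEP
    if (!(key in lines)) order[++n] = key
    lines[key] = $0                    # 2-3. key the hash, store the whole line
}
END {
    for (i = 1; i <= n; i++)           # 4. print the stored values
        print lines[order[i]]
}' file.csv
```

Since later lines overwrite earlier ones under the same key, this keeps the last occurrence in each group, which is what the question asks for.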



Source: https://stackoverflow.com/questions/1589756/how-to-delete-partial-duplicate-lines-with-awk
