Question
Let's say I have two text files that I need to extract data out of. The text of the two files is as follows:
File 1:
1name - randomemail@email.com
2Name - superrandomemail@email.com
3Name - 123random@email.com
4Name - random123@email.com
File 2:
email.com
email.com
email.com
anotherwebsite.com
File 2 is the list of domain names extracted from File 1's email addresses. (In reality the domain names are not all the same; they are quite random.)
How can I get the lines of File 1 whose domain name matches an entry in File 2?
Thank you in advance!
Answer 1:
Assuming that order does not matter,
grep -F -f FILE2 FILE1
should do the trick. (This works because of a little-known fact: the -F option to grep doesn't just mean "match this fixed string"; it means "match any of these newline-separated fixed strings.")
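A quick sanity check of this with the sample data from the question (the file names file1 and file2 are assumed here):

```shell
# Recreate the sample files from the question.
cat > file1 <<'EOF'
1name - randomemail@email.com
2Name - superrandomemail@email.com
3Name - 123random@email.com
4Name - random123@email.com
EOF

cat > file2 <<'EOF'
email.com
email.com
email.com
anotherwebsite.com
EOF

# -f file2: read one pattern per line of file2;
# -F: treat each pattern as a literal string, not a regex.
grep -F -f file2 file1
# Every line of file1 contains "email.com", so all four lines print.
```

Note that grep matches substrings anywhere on the line, so duplicates in file2 are harmless, but a domain that is a substring of another could over-match.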
Answer 2:
The recipe:
join <(sed 's/^.*@//' file1 | sort -u) <(sort -u file2)
It will output the intersection of the domain names in file1 and file2.
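A sketch of how this behaves with the question's sample data (file names assumed; process substitution requires bash or a similar shell):

```shell
# Recreate the sample files from the question.
cat > file1 <<'EOF'
1name - randomemail@email.com
2Name - superrandomemail@email.com
3Name - 123random@email.com
4Name - random123@email.com
EOF

cat > file2 <<'EOF'
email.com
email.com
email.com
anotherwebsite.com
EOF

# sed strips everything up to and including '@', leaving only domains;
# sort -u deduplicates both sides, as join requires sorted input.
join <(sed 's/^.*@//' file1 | sort -u) <(sort -u file2)
# prints: email.com
```

Unlike the grep approach, this prints each matching domain once rather than the matching lines of file1.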
Answer 3:
See BashFAQ/036 for the list of usual solutions to this type of problem.
Answer 4:
Use the vimdiff command; it gives a nice side-by-side presentation of the differences.
Answer 5:
If I understood you right, you want to filter for all addresses whose host is mentioned in File 2. You could then just loop over File 2 and grep for @<line>, accumulating the results in a new file or something similar.
Example:
sort -u file2 | while read host; do grep -F "@$host" file1; done > filtered
(The -F makes grep treat the domain as a literal string, so the dots are not interpreted as regex metacharacters.)
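Checking this loop against the question's sample data (file names assumed; -F is used here so the dots in the domains are matched literally):

```shell
# Recreate the sample files from the question.
cat > file1 <<'EOF'
1name - randomemail@email.com
2Name - superrandomemail@email.com
3Name - 123random@email.com
4Name - random123@email.com
EOF

cat > file2 <<'EOF'
email.com
email.com
email.com
anotherwebsite.com
EOF

# Deduplicate the domain list, then grep for each domain anchored at '@'.
sort -u file2 | while read host; do grep -F "@$host" file1; done > filtered
cat filtered
# All four file1 lines end in "@email.com", so all four appear in filtered.
```

Anchoring the pattern at @ avoids matching a domain that happens to appear in the local part of an address.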
Source: https://stackoverflow.com/questions/12869354/how-to-compare-two-text-files-for-the-same-exact-text-using-bash