I'm using the following shell script to find the contents of one file in another:
#!/bin/ksh
file="/home/nimish/contents.txt"

while read -r line; do
    # Look up each line of the contents file in the target CSV
    grep "$line" /home/nimish/another_file.csv
done < "$file"
Another solution: use awk and build your own hash (e.g. ahash), all controlled by yourself. You can key it on $0 (the whole line) or on any individual field ($1, $2, …), so you can match whichever fields you want:
awk -F"," '
{
    # Remember the name of the first file we read
    if (nowfile == "") { nowfile = FILENAME }
    if (FILENAME == nowfile) {
        # First file (xx): store each whole line as a hash key
        hash[$0] = $0
    }
    else if ($0 in hash) {
        # Second file (yy): print lines that also appeared in the first
        print $0
    }
}' xx yy
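The same idea condenses to the common two-file awk idiom. As a sketch of the field-based matching mentioned above (assuming, hypothetically, that the key you care about is the first comma-separated column of both files):
# Hypothetical variant: compare only field $1 of xx against field $1 of yy
awk -F"," 'FNR == NR { hash[$1]; next } $1 in hash { print }' xx yy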
I don't think you really need a script for what you're trying to do; one command is enough. In my case, I needed an identification number found in column 11 of a CSV file (with ";" as the separator):
grep -f <(awk -F";" '{print $11}' FILE_TO_EXTRACT_PATTERNS_FROM.csv) TARGET_FILE.csv
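If those identification numbers can contain regex metacharacters (a dot, for instance), adding -F makes grep take each pattern literally. This is a variant of mine, not part of the original one-liner:
grep -F -f <(awk -F";" '{print $11}' FILE_TO_EXTRACT_PATTERNS_FROM.csv) TARGET_FILE.csv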
grep itself is able to do so. Simply use the flag -f:
grep -f <patterns> <file>
<patterns> is a file containing one pattern per line, and <file> is the file in which you want to search.
Note that, to force grep to treat each pattern line as a fixed string, even if it looks like a regular expression, you should use the flag -F (--fixed-strings):
grep -F -f <patterns> <file>
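To see the difference, consider a hypothetical pattern file p.txt containing the single line 08.1234. As a regex, the dot matches any character; as a fixed string, it does not:
printf '08.1234\n' > p.txt
printf '0891234\n' > t.txt
grep -f p.txt t.txt      # prints 0891234 (. acts as a regex wildcard)
grep -F -f p.txt t.txt   # prints nothing (the pattern is taken literally)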
If your file is a CSV, as you said, you may do:
grep -f <(tr ',' '\n' < data.csv) <file>
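For instance, if data.csv (a hypothetical file) contained the single line alpha,0891234,beta, the process substitution would feed grep those three values as separate patterns:
tr ',' '\n' < data.csv
alpha
0891234
beta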
As an example, consider the file "a.txt", with the following lines:
alpha
0891234
beta
Now, the file "b.txt", with the lines:
Alpha
0808080
0891234
bEtA
The output of the following command is:
grep -f "a.txt" "b.txt"
0891234
Only 0891234 matches: grep is case-sensitive by default, so Alpha and bEtA do not match the patterns alpha and beta (add -i if you want a case-insensitive search).
You don't need a for-loop here at all; grep itself offers this feature.
Now using your file names:
#!/bin/bash
patterns="/home/nimish/contents.txt"
search="/home/nimish/another_file.csv"
grep -f <(tr ',' '\n' < "${patterns}") "${search}"
You may change ',' to the separator you have in your file.
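If the patterns are literal IDs rather than regular expressions, a stricter variant (my suggestion, not part of the original script) combines -F with -x so each pattern must match a whole line exactly; both are standard grep flags:
grep -F -x -f <(tr ',' '\n' < "${patterns}") "${search}"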