Whether you should use a search engine depends on how many times you want to run the search: if you will search the same data many times, build an index with a search engine; if you only search once or twice, don't bother. I'm going to describe how to implement both scenarios here.
When using a search engine: it sounds like you're looking for substrings, which means you should index your files accordingly using your favorite search engine, preferably one you can customize (Lucene, Terrier, etc.). The technique you need here is trigram indexing, that is: every 3-character substring is indexed. For example, 'foobar' generates 'foo', 'oob', 'oba' and 'bar'. When searching, you do the same with your query and issue a search engine query with the AND of all these trigrams. (That runs a merge-join on the posting lists of the documents, which returns their IDs or whatever else you put in the posting lists.)
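A minimal sketch of the idea, without a real search engine (with Lucene or Terrier you would map this onto their analyzers and boolean queries instead; the class and method names below are made up for illustration). Note that the trigram AND only gives you candidate documents; you still verify the actual substring in each candidate.

```java
import java.util.*;

// Toy trigram index: trigram -> posting list of document IDs, AND-query by
// intersecting the posting lists of the query's trigrams.
public class TrigramIndex {
    private final Map<String, TreeSet<Integer>> postings = new HashMap<>();

    static List<String> trigrams(String s) {
        List<String> grams = new ArrayList<>();
        for (int i = 0; i + 3 <= s.length(); i++) {
            grams.add(s.substring(i, i + 3));   // "foobar" -> foo, oob, oba, bar
        }
        return grams;
    }

    void add(int docId, String content) {
        for (String g : trigrams(content)) {
            postings.computeIfAbsent(g, k -> new TreeSet<>()).add(docId);
        }
    }

    // AND of all query trigrams: intersect their posting lists.
    Set<Integer> candidates(String query) {
        Set<Integer> result = null;
        for (String g : trigrams(query)) {
            Set<Integer> list = postings.getOrDefault(g, new TreeSet<>());
            if (result == null) result = new TreeSet<>(list);
            else result.retainAll(list);
        }
        return result == null ? Collections.emptySet() : result;
    }
}
```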
Alternatively, you can implement suffix arrays and index your files once. This gives a little more flexibility if you want to search for short (1-2 character) substrings, but the index is harder to maintain. (There is some research at CWI/Amsterdam on fast indexing with suffix arrays.)
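A rough sketch of the suffix-array idea, built the naive way (sorting suffix start positions); a production index would use a linear-time construction and keep the array on disk, but the lookup logic is the same:

```java
import java.util.*;

// Suffix array over one file's text: sorted suffix start positions, plus a
// binary search that checks whether a query occurs as a substring.
public class SuffixArraySketch {
    private final String text;
    private final Integer[] sa;

    SuffixArraySketch(String text) {
        this.text = text;
        sa = new Integer[text.length()];
        for (int i = 0; i < sa.length; i++) sa[i] = i;
        Arrays.sort(sa, (a, b) -> text.substring(a).compareTo(text.substring(b)));
    }

    boolean contains(String query) {
        int lo = 0, hi = sa.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) >>> 1;
            int end = Math.min(sa[mid] + query.length(), text.length());
            int cmp = text.substring(sa[mid], end).compareTo(query);
            if (cmp == 0) return true;          // prefix of this suffix == query
            if (cmp < 0) lo = mid + 1; else hi = mid - 1;
        }
        return false;
    }
}
```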
When you only want to search a few times, the algorithm to use is either Boyer-Moore (I usually use the Boyer-Moore-Sunday variant as described in [Graham A. Stephen, String Searching Algorithms]) or a compiled DFA (you can construct one from an NFA, which is easier to build). However, that will only give you a small speed increase, for the simple reason that disk IO is probably your bottleneck, and comparing a bunch of bytes that you need to decode anyway is quite fast.
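For reference, a sketch of the Boyer-Moore-Sunday ("Quick Search") variant: on a mismatch, look at the text byte just *after* the current window and shift the pattern past it.

```java
import java.util.Arrays;

// Sunday / Quick Search over raw bytes; returns the index of the first match
// of pattern in text, or -1 if there is none.
public class SundaySearch {
    static int indexOf(byte[] text, byte[] pattern) {
        int n = text.length, m = pattern.length;
        if (m == 0) return 0;

        // shift[c] = how far to slide when c is the byte right after the window
        int[] shift = new int[256];
        Arrays.fill(shift, m + 1);
        for (int i = 0; i < m; i++) shift[pattern[i] & 0xFF] = m - i;

        int pos = 0;
        while (pos + m <= n) {
            int j = 0;
            while (j < m && text[pos + j] == pattern[j]) j++;
            if (j == m) return pos;                 // full match
            if (pos + m >= n) break;                // no byte after the window
            pos += shift[text[pos + m] & 0xFF];     // jump using that next byte
        }
        return -1;
    }
}
```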
The biggest improvement you can make is to read your file not line by line, but in blocks. Configure NTFS to use a 64 KB cluster size if you can, and read the files in multiples of 64 KB - think 4 MB or more in a single read. I'd even suggest using asynchronous IO so that you can read and process previously read data at the same time. Done correctly, that should already give you a split-second implementation for 10 MB on most modern hardware.
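One way to set that up in Java, as a sketch: double-buffered reads with `AsynchronousFileChannel`, so the next 4 MB block is being read while the current one is being searched (the `process` call stands in for whatever matching you run over the block):

```java
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousFileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.concurrent.Future;

public class BlockReader {
    static final int BLOCK = 4 * 1024 * 1024;   // multiple of the 64 KB cluster size

    static void scan(Path file) throws Exception {
        try (AsynchronousFileChannel ch =
                 AsynchronousFileChannel.open(file, StandardOpenOption.READ)) {
            ByteBuffer current = ByteBuffer.allocateDirect(BLOCK);
            ByteBuffer next = ByteBuffer.allocateDirect(BLOCK);
            long offset = 0;
            Future<Integer> pending = ch.read(current, offset);

            while (true) {
                int bytesRead = pending.get();      // wait for the current block
                if (bytesRead <= 0) break;
                offset += bytesRead;

                next.clear();
                pending = ch.read(next, offset);    // start reading the next block

                current.flip();
                process(current);                   // search this block meanwhile

                ByteBuffer tmp = current; current = next; next = tmp;
            }
        }
    }

    static void process(ByteBuffer block) { /* run your substring search here */ }
}
```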
Last but not least, a neat trick used throughout information retrieval is to compress your data with a fast compression algorithm. Since disk IO is slower than memory/CPU operations, this will probably help as well. Google's Snappy compressor is a good example of a fast compression algorithm.
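A minimal sketch, assuming the xerial snappy-java binding is on your classpath: store the files compressed, then decompress each block in memory before searching it; the decompression is usually cheaper than the disk reads it saves.

```java
import java.nio.charset.StandardCharsets;
import org.xerial.snappy.Snappy;

public class SnappyExample {
    public static void main(String[] args) throws Exception {
        byte[] original = "some file contents to store compressed".getBytes(StandardCharsets.UTF_8);

        byte[] compressed = Snappy.compress(original);     // write this to disk
        byte[] restored = Snappy.uncompress(compressed);   // read back and search in memory

        System.out.println(new String(restored, StandardCharsets.UTF_8));
    }
}
```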