I had an interview last week and got stuck on one of the questions in the algorithm round. I answered it, but the interviewer did not seem convinced. That's why I am asking it here.
You can create some kind of index (for example, a trie) that summarizes one input file, then check how many indexed entries the other documents match.
E.g., build a trie of every overlapping length-10 substring of one file; then, for each overlapping length-10 substring of the other file, check whether it is in the trie and count the hits. A sketch of this is below.
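Here is a minimal C++ sketch of that idea (my own illustration, not the poster's exact design; the substring length of 10 and the map-based trie are just choices for the example): index every overlapping 10-byte substring of one file in a trie, then count how many of the other file's 10-byte substrings are found in it.

```cpp
#include <fstream>
#include <iostream>
#include <memory>
#include <sstream>
#include <string>
#include <unordered_map>

// Simple trie over bytes, used to index fixed-length substrings.
struct TrieNode {
    std::unordered_map<unsigned char, std::unique_ptr<TrieNode>> child;
};

void insert(TrieNode& root, const std::string& s) {
    TrieNode* cur = &root;
    for (unsigned char c : s) {
        auto& next = cur->child[c];
        if (!next) next = std::make_unique<TrieNode>();
        cur = next.get();
    }
}

bool contains(const TrieNode& root, const std::string& s) {
    const TrieNode* cur = &root;
    for (unsigned char c : s) {
        auto it = cur->child.find(c);
        if (it == cur->child.end()) return false;
        cur = it->second.get();
    }
    return true;
}

std::string readFile(const std::string& path) {
    std::ifstream in(path);
    std::ostringstream ss;
    ss << in.rdbuf();
    return ss.str();
}

int main(int argc, char** argv) {
    if (argc != 3) { std::cerr << "usage: similarity fileA fileB\n"; return 1; }
    const std::size_t K = 10;  // substring length, as in the answer above
    std::string a = readFile(argv[1]);
    std::string b = readFile(argv[2]);
    if (a.size() < K || b.size() < K) { std::cerr << "files too short\n"; return 1; }

    // Index every overlapping K-byte substring of file A.
    TrieNode root;
    for (std::size_t i = 0; i + K <= a.size(); ++i)
        insert(root, a.substr(i, K));

    // Count how many K-byte substrings of file B appear in the index.
    std::size_t total = 0, matched = 0;
    for (std::size_t i = 0; i + K <= b.size(); ++i, ++total)
        if (contains(root, b.substr(i, K))) ++matched;

    std::cout << "matched " << matched << " of " << total
              << " substrings (" << 100.0 * matched / total << "%)\n";
}
```

The matched/total ratio gives a crude similarity score; for large files you would probably hash the substrings into a set instead of keeping a full trie in memory.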
As a suggestion for designing really capable, scalable systems for document similarity, I'd recommend reading Chapter 3 of Mining Massive Datasets, which is freely available online. One approach presented there is to 'shingle' each document into a set of k-grams, compress those sets into MinHash signatures, and compare the signatures (using locality-sensitive hashing to avoid checking every pair) to estimate the Jaccard similarity between documents. Done right, this can work on petabytes of files with high precision. Explicit details with good diagrams are in Stanford's CS246 slides on Locality-Sensitive Hashing. Simpler approaches, like word-frequency counting, are described in the book as well.
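To make that concrete, here is a small self-contained sketch (my own illustration, not code from the book; the 5-character shingles and 128 hash functions are arbitrary choices): each document becomes a set of hashed k-shingles, which can be compared exactly with Jaccard similarity or approximately via MinHash signatures, where the fraction of matching signature positions estimates the Jaccard score. LSH banding of those signatures is what then makes the all-pairs search scale.

```cpp
#include <algorithm>
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

// Hashed k-shingles of a string: every overlapping substring of length k,
// reduced to a 64-bit hash so the sets stay small.
std::unordered_set<std::uint64_t> shingles(const std::string& text, std::size_t k) {
    std::unordered_set<std::uint64_t> out;
    std::hash<std::string> h;
    for (std::size_t i = 0; i + k <= text.size(); ++i)
        out.insert(h(text.substr(i, k)));
    return out;
}

// Exact Jaccard similarity |A ∩ B| / |A ∪ B| of two shingle sets.
double jaccard(const std::unordered_set<std::uint64_t>& a,
               const std::unordered_set<std::uint64_t>& b) {
    std::size_t inter = 0;
    for (std::uint64_t x : a) if (b.count(x)) ++inter;
    std::size_t uni = a.size() + b.size() - inter;
    return uni ? static_cast<double>(inter) / uni : 1.0;
}

// MinHash signature: for each of n seeded hash functions keep the minimum
// hashed shingle. The fraction of positions where two signatures agree
// estimates the Jaccard similarity; LSH buckets documents on bands of it.
std::vector<std::uint64_t> minhash(const std::unordered_set<std::uint64_t>& s, int n) {
    std::vector<std::uint64_t> sig(n, UINT64_MAX);
    for (std::uint64_t x : s)
        for (int i = 0; i < n; ++i) {
            // cheap seeded mix (splitmix64-style finalizer); fine for a sketch
            std::uint64_t v = x + 0x9e3779b97f4a7c15ULL * (i + 1);
            v ^= v >> 30; v *= 0xbf58476d1ce4e5b9ULL;
            v ^= v >> 27; v *= 0x94d049bb133111ebULL;
            v ^= v >> 31;
            sig[i] = std::min(sig[i], v);
        }
    return sig;
}

int main() {
    std::string d1 = "the quick brown fox jumps over the lazy dog";
    std::string d2 = "the quick brown fox jumped over a lazy dog";

    auto s1 = shingles(d1, 5), s2 = shingles(d2, 5);
    std::cout << "exact Jaccard: " << jaccard(s1, s2) << "\n";

    auto m1 = minhash(s1, 128), m2 = minhash(s2, 128);
    int same = 0;
    for (int i = 0; i < 128; ++i) if (m1[i] == m2[i]) ++same;
    std::cout << "MinHash estimate: " << same / 128.0 << "\n";
}
```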
diff them and pass the output through wc -l, or implement Levenshtein distance in C++ treating each line as a single character (or any more appropriate unit considering the subject domain).
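A minimal sketch of the second suggestion (my own code; the two-row DP and the file-reading helper are just one way to do it): read each file as a vector of lines and run the classic Levenshtein DP over the line vectors.

```cpp
#include <algorithm>
#include <fstream>
#include <iostream>
#include <string>
#include <vector>

// Read a file as a vector of lines; each line acts as one "character".
std::vector<std::string> readLines(const std::string& path) {
    std::vector<std::string> lines;
    std::ifstream in(path);
    for (std::string line; std::getline(in, line); )
        lines.push_back(line);
    return lines;
}

// Classic Levenshtein DP, kept to two rows of memory.
std::size_t editDistance(const std::vector<std::string>& a,
                         const std::vector<std::string>& b) {
    std::vector<std::size_t> prev(b.size() + 1), cur(b.size() + 1);
    for (std::size_t j = 0; j <= b.size(); ++j) prev[j] = j;
    for (std::size_t i = 1; i <= a.size(); ++i) {
        cur[0] = i;
        for (std::size_t j = 1; j <= b.size(); ++j) {
            std::size_t sub = prev[j - 1] + (a[i - 1] == b[j - 1] ? 0 : 1);
            cur[j] = std::min({prev[j] + 1, cur[j - 1] + 1, sub});
        }
        std::swap(prev, cur);
    }
    return prev[b.size()];
}

int main(int argc, char** argv) {
    if (argc != 3) { std::cerr << "usage: linedist fileA fileB\n"; return 1; }
    auto a = readLines(argv[1]), b = readLines(argv[2]);
    std::cout << "line-level edit distance: " << editDistance(a, b) << "\n";
}
```

Dividing the resulting distance by the line count of the longer file gives a normalized dissimilarity score between 0 and 1.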