I'm writing a program in C that processes a text file and keeps track of each unique word (by using a struct that has a char array for the word and a count for its number of occurrences).
First try reading one line at a time. Scan the line buffer for word boundaries and fine-tune the word counting part. Using a hash table to store the words and counts seems like a good approach. Make the output optional, so you can measure read/parse/lookup performance.
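For illustration, here is a minimal sketch of that first version, assuming ASCII-only words and a fixed-size chained hash table; the names (`intern_word`, `WORD_MAX`, `TABLE_SIZE`) are illustrative, and output and most error handling are omitted so the loop only exercises read/parse/lookup:

```c
/* Minimal sketch: fgets() into a line buffer, scan for word boundaries,
   count each word in a small chained hash table.  Illustrative names,
   not taken from your code. */
#include <ctype.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define WORD_MAX    64
#define TABLE_SIZE  65536

struct word_entry {
    struct word_entry *next;
    long count;
    char word[WORD_MAX];
};

static struct word_entry *table[TABLE_SIZE];

static unsigned hash_word(const char *s) {
    unsigned h = 5381;                  /* djb2 string hash */
    while (*s)
        h = h * 33 + (unsigned char)*s++;
    return h % TABLE_SIZE;
}

/* Find or create the entry for `word` and bump its count. */
static void intern_word(const char *word) {
    unsigned h = hash_word(word);
    struct word_entry *e;
    for (e = table[h]; e; e = e->next) {
        if (!strcmp(e->word, word)) {
            e->count++;
            return;
        }
    }
    e = calloc(1, sizeof *e);
    if (!e) { perror("calloc"); exit(1); }
    snprintf(e->word, sizeof e->word, "%s", word);
    e->count = 1;
    e->next = table[h];
    table[h] = e;
}

int main(int argc, char *argv[]) {
    FILE *fp = argc > 1 ? fopen(argv[1], "r") : stdin;
    char line[4096], word[WORD_MAX];

    if (!fp) { perror(argv[1]); return 1; }
    while (fgets(line, sizeof line, fp)) {
        /* Scan the line buffer: a word is a maximal run of letters. */
        for (char *p = line; *p; ) {
            while (*p && !isalpha((unsigned char)*p))
                p++;                    /* skip non-word bytes */
            size_t len = 0;
            while (isalpha((unsigned char)*p)) {
                if (len < sizeof word - 1)
                    word[len++] = (char)tolower((unsigned char)*p);
                p++;
            }
            if (len) {
                word[len] = '\0';
                intern_word(word);
            }
        }
    }
    if (fp != stdin)
        fclose(fp);
    return 0;
}
```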
Then make another program that uses the same algorithm for the core part but uses `mmap` to read sizeable parts of the file and scan the block of memory. The tricky part is handling the block boundaries.
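A rough sketch of how the `mmap` version could be structured, assuming a POSIX system; `CHUNK` and `handle_word` are illustrative, and `handle_word` only counts words here, where the real program would call the same hash-table code as the line-based version. A word that straddles two chunks is accumulated in a small carry buffer and completed in the next chunk:

```c
/* Sketch of the mmap-based reader: map the file in large chunks and scan
   each block of memory.  Word bytes are accumulated in a small carry
   buffer, so a word cut by a chunk boundary is simply continued when the
   next chunk is scanned. */
#include <ctype.h>
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

#define CHUNK    (4 << 20)   /* 4 MiB, a multiple of the page size */
#define WORD_MAX 64

static long total_words;

static void handle_word(const char *word, size_t len) {
    (void)word; (void)len;
    total_words++;            /* plug the hash-table lookup in here */
}

int main(int argc, char *argv[]) {
    if (argc != 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }
    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror(argv[1]); return 1; }
    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    char carry[WORD_MAX];
    size_t carry_len = 0;

    for (off_t off = 0; off < st.st_size; off += CHUNK) {
        size_t len = (size_t)(st.st_size - off < CHUNK ? st.st_size - off : CHUNK);
        const char *map = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, off);
        if (map == MAP_FAILED) { perror("mmap"); return 1; }

        const char *p = map, *end = map + len;
        while (p < end) {
            if (isalpha((unsigned char)*p)) {
                if (carry_len < sizeof carry - 1)
                    carry[carry_len++] = (char)tolower((unsigned char)*p);
            } else if (carry_len) {       /* a word just ended */
                carry[carry_len] = '\0';
                handle_word(carry, carry_len);
                carry_len = 0;
            }
            p++;
        }
        /* carry_len > 0 here means the chunk ended mid-word:
           keep the partial word and finish it in the next chunk. */
        munmap((void *)map, len);
    }
    if (carry_len) {                      /* word at end of file */
        carry[carry_len] = '\0';
        handle_word(carry, carry_len);
    }
    close(fd);
    printf("%ld words\n", total_words);
    return 0;
}
```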
Compare the output of both programs on a set of huge files and ensure the counts are identical. You can create huge files by concatenating the same file many times.
Compare timings too, using the `time` command line utility. Disable output for this benchmark to focus on the read/parse/analysis part. Compare the timings with other programs such as `wc` or `cat - > /dev/null`. Once you get similar performance, the bottleneck is the speed of reading from mass storage, and there is not much room left for improvement.
EDIT: looking at your code, I have these remarks:
`fscanf` is probably not the right tool: at the very least you should protect against buffer overflow. And how should you handle `foo,bar`: as 1 word or 2 words?
I would suggest using `fgets()` or `fread()` and moving a pointer along the buffer, skipping the non-word bytes and converting the word bytes to lower case with an indirection through a 256-byte array, avoiding copies.
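A minimal sketch of that technique, assuming only ASCII letters count as word bytes; `count_word()` is a placeholder for the hash-table update and receives a pointer and length, so the word is never copied into a separate buffer:

```c
/* Classify and lower-case each byte with a single lookup in a 256-byte
   array, and walk a pointer along the buffer so each word is identified
   as a (pointer, length) pair without copying. */
#include <ctype.h>
#include <stdio.h>

static unsigned char lower[256];  /* 0 for non-word bytes, lower-case letter otherwise */
static long nwords;

static void init_lower(void) {
    for (int c = 0; c < 256; c++)
        lower[c] = isalpha(c) ? (unsigned char)tolower(c) : 0;
}

/* Placeholder: the real program would hash (word, len) and bump its count. */
static void count_word(const char *word, size_t len) {
    (void)word; (void)len;
    nwords++;
}

int main(void) {
    char line[4096];

    init_lower();
    while (fgets(line, sizeof line, stdin)) {
        char *p = line;
        while (*p) {
            while (*p && !lower[(unsigned char)*p])
                p++;                              /* skip non-word bytes */
            char *start = p;
            while (lower[(unsigned char)*p]) {
                *p = (char)lower[(unsigned char)*p];  /* lower-case in place */
                p++;
            }
            if (p > start)
                count_word(start, (size_t)(p - start));
        }
    }
    printf("%ld words\n", nwords);
    return 0;
}
```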
Make the locking stuff optional via a preprocessor variable. It is not needed if the `words` structure is only accessed by a single thread.
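For example, a small fragment along these lines (the names `USE_LOCKING` and `words_lock` are illustrative, assuming the multi-threaded build uses a pthread mutex):

```c
/* With USE_LOCKING defined, the words structure is protected by a mutex;
   otherwise the lock macros compile to nothing. */
#ifdef USE_LOCKING
#include <pthread.h>
static pthread_mutex_t words_lock = PTHREAD_MUTEX_INITIALIZER;
#define WORDS_LOCK()    pthread_mutex_lock(&words_lock)
#define WORDS_UNLOCK()  pthread_mutex_unlock(&words_lock)
#else
#define WORDS_LOCK()    ((void)0)
#define WORDS_UNLOCK()  ((void)0)
#endif

/* Usage in the update path:
       WORDS_LOCK();
       ...look up or insert the word in the words structure...
       WORDS_UNLOCK();
   Build with -DUSE_LOCKING -pthread for the multi-threaded version. */
```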
How did you implement `add`? What is `q`?