Here's a bit of code that turned out to be a considerable bottleneck after some measuring:
//-----------------------------------------------------------------------
My system: 3.2.0-52-generic kernel, g++-4.7 (Ubuntu/Linaro 4.7.3-2ubuntu1~12.04) 4.7.3, compiled with -O2 unless specified otherwise, CPU: i3-2125.
In my test cases I used a 295,068-word dictionary (about 100k more words than in yours): http://dl.dropboxusercontent.com/u/4076606/words.txt
From a time-complexity point of view: a hash table gives O(1) average-case lookup, while binary search on a sorted array is O(log n).
Practical tips:
Notice: I didn't flush the OS cache or the HDD cache. The latter I can't control, but the former can be flushed with:
sync; sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
Also, I didn't discard measurements that included a lot of context switches and so on, so there is room for better measurements.
14-16 ms to read from the file & insert the data into a 2D char array (read & insert), n times
65-75 ms to search all the words with binary search (search n times):
Total=79-91 ms
61-78 ms to read from the file & insert the data into an unordered_set (of char arrays) (read & insert), n times
7-9 ms to search by hash n times
Total=68-87 ms
If you search more times than you insert, choose a hash table (unordered_set); otherwise choose binary search (with a simple array).
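A minimal sketch of the two lookup strategies, assuming C++11 and std::string keys (the sample words are mine, not from the measured code):

#include <algorithm>
#include <iostream>
#include <string>
#include <unordered_set>
#include <vector>

int main()
{
    // Binary search needs a sorted container; build it once up front.
    std::vector<std::string> sorted_words = { "apple", "banana", "cherry" };
    std::unordered_set<std::string> hashed_words(sorted_words.begin(),
                                                 sorted_words.end());

    // O(log n) per lookup on the sorted vector.
    bool found_bs = std::binary_search(sorted_words.begin(),
                                       sorted_words.end(),
                                       std::string("banana"));

    // O(1) average per lookup in the hash table.
    bool found_hash = hashed_words.count("banana") > 0;

    std::cout << found_bs << ' ' << found_hash << '\n';
}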
Your original code compiled with -O2: 157-182 ms
Your original code compiled with -O0 (if you omit the -O flag, the default level is also 0): 223-248 ms
So, compiler options also matter; in this case they mean a ~66 ms speed boost. You didn't specify any of them, so my best guess is that you didn't use one. That is my attempt to answer your main question.
Compiled with -O2: 142-170 ms, a ~12-15 ms speed boost compared with your original code.
Compiled with -O0 (if you omit the -O flag, the default level is also 0): 213-235 ms, a ~10-13 ms speed boost compared with your original code.
Compiled with -O2: 99-121-[137] ms, a ~33-43-[49] ms speed boost compared with your original code.
Implement your own hash function for your specific data input, and use a char array instead of an STL string. Only after you have done that should you write code with direct OS I/O. As you noticed (and as my measurements also show), the data structure is the bottleneck. If the medium is very slow but the CPU is very fast, compress the file and decompress it in your program.
My code is not perfect, but it is still better than anything that can be seen above: http://pastebin.com/gQdkmqt8 (the hash function is from the web and could also be done better)
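Since the pastebin's hash is just "from the web", here is a minimal sketch of one common choice, FNV-1a, computed directly over a char array (the function name and the 32-bit variant are my assumptions, not necessarily what the pastebin uses):

#include <cstdint>

// FNV-1a: a simple, fast hash computed directly over a C string,
// so lookups never have to construct a std::string.
inline std::uint32_t fnv1a_hash(const char* s)
{
    std::uint32_t h = 2166136261u;   // FNV offset basis
    while (*s) {
        h ^= static_cast<unsigned char>(*s++);
        h *= 16777619u;              // FNV prime
    }
    return h;
}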
Could you provide more details about which system (a single one or a range) you plan to optimize for?
Info on time complexities: these should be links, but I don't have enough reputation, as I'm a beginner on Stack Overflow.
Is my answer still relevant to anything? Please add a comment or vote, since there is no PM system as far as I can see.
The C++ and C libraries read stuff off the disk equally fast and are already buffered to compensate for the disk I/O lag. You are not going to make it faster by adding more buffering.
The biggest difference is that C++ streams do a load of manipulations based on the locale: character conversions, punctuation, etc.
As a result the C libraries will be faster.
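One common mitigation, which is my addition rather than part of the answer above: detach the C++ streams from C stdio and from each other, which removes much of the per-operation overhead:

#include <iostream>

int main()
{
    // Stop iostreams from synchronizing with C stdio on every operation...
    std::ios_base::sync_with_stdio(false);
    // ...and stop cin from flushing cout before every read.
    std::cin.tie(0);

    int x;
    int sum = 0;
    while (std::cin >> x)
        sum += x;  // placeholder workload
    std::cout << sum << '\n';
}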
For some reason the linked question was deleted. So I am moving the relevant information here. The linked question was about hidden features in C++.
Though not technically part of the STL, the streams library is part of the standard C++ libraries.
For streams:
Locales.
Very few people actually bother to learn how to correctly set and/or manipulate the locale of a stream.
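For instance, a minimal sketch of imbuing a stream with a named locale (the locale name "en_US.UTF-8" is a common one on Linux and is my assumption; it may be absent on other systems):

#include <iostream>
#include <locale>

int main()
{
    try {
        // With a named locale imbued, numeric output gets
        // locale-appropriate digit grouping, e.g. "1,000,000".
        std::cout.imbue(std::locale("en_US.UTF-8"));
    } catch (const std::runtime_error&) {
        // The named locale may be unavailable; keep the default.
    }
    std::cout << 1000000 << '\n';
}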
The second coolest thing is the iterator templates.
Most useful for me are the stream iterators, which basically turn the streams into very basic containers that can then be used in conjunction with the standard algorithms.
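For example, a minimal sketch of a stream iterator feeding a standard algorithm (the specific task, summing whitespace-separated integers from stdin, is my choice):

#include <iostream>
#include <iterator>
#include <numeric>

int main()
{
    // istream_iterator turns std::cin into an input range; the
    // default-constructed iterator is the end-of-stream sentinel.
    std::istream_iterator<int> begin(std::cin), end;
    std::cout << std::accumulate(begin, end, 0) << '\n';
}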
Believe it or not, the performance of the standard-library streams in reading data is far below that of the C library routines. If you need top I/O read performance, don't use C++ streams. I discovered this the hard way on algorithm competition sites: my code would hit the test timeout using C++ streams to read stdin, but would finish in plenty of time using plain C FILE operations.
Edit: Just try out these two programs on some sample data. I ran them on Mac OS X 10.6.6 using i686-apple-darwin10-g++-4.2.1 (GCC) 4.2.1 (Apple Inc. build 5664) on a file with 1 million lines of "howdythere", and the scanf version runs consistently 5 times faster than the cin version:
#include <stdio.h>

int main()
{
    int count = 0;
    char buf[1024];

    /* The field width keeps scanf from overflowing buf. */
    while ( scanf( "%1023s", buf ) == 1 )
        ++count;

    printf( "%d lines\n", count );
}
and
#include <iostream>

int main()
{
    char buf[1024];
    int count = 0;

    // getline returns the stream, which tests false at EOF or on error,
    // so this counts exactly the lines that were successfully read.
    while ( std::cin.getline( buf, sizeof buf ) )
        ++count;

    std::cout << count << " lines" << std::endl;
}
Edit: changed the data file to "howdythere" to eliminate the difference between the two cases. The timing results did not change.
Edit: I think the amount of interest (and the downvotes) this answer has drawn shows how contrary to popular opinion the reality is. People just can't believe that the simple case of reading input with C and with streams can be so different. Before you downvote: go measure it yourself. The point is not to set tons of state (that nobody typically sets), but just the code that people most frequently write. Opinion means nothing in performance: measure, measure, measure is all that matters.
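A sketch of one way to reproduce the measurement (the file and program names, and the use of -O2, are my assumptions):

# Generate 1 million lines of "howdythere", then time both versions.
yes howdythere | head -n 1000000 > data.txt
gcc -O2 -o scanf_version scanf_version.c
g++ -O2 -o cin_version cin_version.cpp
time ./scanf_version < data.txt
time ./cin_version < data.txt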