In our application we use std::map
to store (key, value) data and use serialization to store that data on disk. With this approach we are finding that disk I/O is the performance bottleneck. Is there a better way to get high-performance persistence for this kind of data?
LevelDB simply does something different from what std::map does.
Are you really saying you want (high performance) persistence for std::map?
look at std::map with a custom allocator. Allocate the entries from a memory-mapped region and use fsync to ensure the information hits the disk at strategic moments in time (see the allocator sketch below).
perhaps combine that with EASTL (which boasts a faster std::map and thrives with custom allocators; in fact it has no default allocator)
look at tuning a hash map (std::unordered_map) instead; if the hash map turns out slower, look into (a) the load factor and (b) the hash function (see the unordered_map tuning example below)
last but not least: evaluate Boost Serialization for binary serialization of your map (whichever implementation you picked); a minimal example is below. In my experience, Boost Serialization's performance is top of the bill.
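For the memory-mapped allocator suggestion, here is a minimal sketch (not production code): a bump allocator over an mmap'ed file, plugged into std::map as its allocator type. The MmapArena/MmapAllocator names, the file name and the fixed 64 MB region size are all made up for illustration. One important caveat: the map's nodes contain raw pointers, so the file is only directly reusable on restart if it is mapped at the same address again; otherwise treat this mainly as a fast arena whose pages you can flush with msync/fsync at strategic moments.

```cpp
#include <sys/mman.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstddef>
#include <map>
#include <new>
#include <stdexcept>

// Backing storage: one memory-mapped file used as a simple arena.
struct MmapArena {
    char*       base = nullptr;
    std::size_t capacity = 0;
    std::size_t offset = 0;
    int         fd = -1;

    void open_file(const char* path, std::size_t size) {
        fd = ::open(path, O_RDWR | O_CREAT, 0644);
        if (fd < 0 || ::ftruncate(fd, static_cast<off_t>(size)) != 0)
            throw std::runtime_error("cannot open/resize backing file");
        void* p = ::mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED) throw std::runtime_error("mmap failed");
        base = static_cast<char*>(p);
        capacity = size;
    }

    // Flush dirty pages to disk at a "strategic moment in time".
    void sync() { ::msync(base, capacity, MS_SYNC); ::fsync(fd); }
};

inline MmapArena g_arena;  // one global arena, enough for this sketch

// Minimal C++17 allocator: bumps a pointer inside the mapped region,
// never reuses freed memory (a real one needs a free list).
template <typename T>
struct MmapAllocator {
    using value_type = T;

    MmapAllocator() = default;
    template <typename U> MmapAllocator(const MmapAllocator<U>&) noexcept {}

    T* allocate(std::size_t n) {
        std::size_t align = alignof(T);
        std::size_t off   = (g_arena.offset + align - 1) & ~(align - 1);
        std::size_t bytes = n * sizeof(T);
        if (off + bytes > g_arena.capacity) throw std::bad_alloc();
        g_arena.offset = off + bytes;
        return reinterpret_cast<T*>(g_arena.base + off);
    }
    void deallocate(T*, std::size_t) noexcept {}  // bump allocator: no-op

    template <typename U> bool operator==(const MmapAllocator<U>&) const { return true; }
    template <typename U> bool operator!=(const MmapAllocator<U>&) const { return false; }
};

int main() {
    g_arena.open_file("map_arena.bin", 64 * 1024 * 1024);

    using Alloc = MmapAllocator<std::pair<const int, double>>;
    std::map<int, double, std::less<int>, Alloc> m;

    m[1] = 3.14;
    m[2] = 2.71;
    g_arena.sync();  // persist the pages backing the map's nodes
}
```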
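For the hash map tuning point, a minimal sketch of the two knobs mentioned: lowering the load factor / reserving buckets up front, and supplying a custom hash. The FastHash functor (a simple FNV-1a) is illustrative only, not a recommendation.

```cpp
#include <cstdint>
#include <string>
#include <unordered_map>

// Example custom hash: 64-bit FNV-1a over the string's bytes.
struct FastHash {
    std::size_t operator()(const std::string& s) const noexcept {
        std::uint64_t h = 0xcbf29ce484222325ull;   // FNV-1a offset basis
        for (unsigned char c : s) {
            h ^= c;
            h *= 0x100000001b3ull;                 // FNV-1a prime
        }
        return static_cast<std::size_t>(h);
    }
};

int main() {
    std::unordered_map<std::string, int, FastHash> m;

    m.max_load_factor(0.5f);  // fewer collisions at the cost of more memory
    m.reserve(1000000);       // pre-allocate buckets, avoids rehashing while loading

    m["hello"] = 1;
    m["world"] = 2;
}
```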
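And for the Boost Serialization point, a minimal sketch of binary (de)serialization of a std::map; the file name and the string/string types are placeholders for whatever your real key and value types are.

```cpp
#include <fstream>
#include <map>
#include <string>

#include <boost/archive/binary_iarchive.hpp>
#include <boost/archive/binary_oarchive.hpp>
#include <boost/serialization/map.hpp>
#include <boost/serialization/string.hpp>

int main() {
    std::map<std::string, std::string> data{{"key", "value"}};

    {   // save the whole map in a binary archive
        std::ofstream ofs("data.bin", std::ios::binary);
        boost::archive::binary_oarchive oa(ofs);
        oa << data;
    }

    std::map<std::string, std::string> loaded;
    {   // load it back
        std::ifstream ifs("data.bin", std::ios::binary);
        boost::archive::binary_iarchive ia(ifs);
        ia >> loaded;
    }
}
```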
What you're doing now is this:
Say you have 1,000,000 records in a file. You read the whole file into a std::map, which takes roughly 1,000,000 operations. You use find/insert to locate and/or insert an element, which takes logarithmic time (about 20 comparisons). And then you save the whole file again, transferring all 1,000,000 records back to disk.
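In code, that pattern looks roughly like this (hypothetical helper names and a plain-text format, just for illustration): every lookup pays the full O(n) load and O(n) save around a single O(log n) find.

```cpp
#include <fstream>
#include <map>
#include <string>

// O(n): read every record from disk into the map.
std::map<std::string, std::string> load_map(const std::string& path) {
    std::map<std::string, std::string> m;
    std::ifstream in(path);
    std::string key, value;
    while (in >> key >> value) m[key] = value;
    return m;
}

// O(n): write every record back to disk.
void save_map(const std::string& path,
              const std::map<std::string, std::string>& m) {
    std::ofstream out(path);
    for (const auto& kv : m) out << kv.first << ' ' << kv.second << '\n';
}

std::string lookup_and_update(const std::string& key, const std::string& value) {
    auto m = load_map("data.txt");            // ~1,000,000 reads
    auto it = m.find(key);                    // ~20 comparisons
    std::string old = (it != m.end()) ? it->second : "";
    m[key] = value;
    save_map("data.txt", m);                  // ~1,000,000 writes
    return old;
}

int main() { lookup_and_update("some_key", "new_value"); }
```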
The problem is that you benefit absolutely nothing from using std::map. std::map gives you fast (logarithmic) search times, but loading and serializing the whole map for each lookup nullifies its benefits.
What you need to do is either redesign your program so that you load the map once at startup and serialize it once at termination, or, if you need database semantics, go for a real database implementation. I suggest SQLite, although LevelDB might be just as good for you.
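As a rough illustration of the database route, here is a minimal sketch using SQLite's C API as a persistent key/value store; the table name, file name and keys are placeholders. The point is that a single lookup or update touches only one row on disk, not the whole data set.

```cpp
#include <sqlite3.h>
#include <cstdio>

int main() {
    sqlite3* db = nullptr;
    if (sqlite3_open("store.db", &db) != SQLITE_OK) return 1;

    // One table acting as the (key, value) store.
    sqlite3_exec(db,
        "CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT);",
        nullptr, nullptr, nullptr);

    // Upsert a single record: only this row is written, not the whole map.
    sqlite3_stmt* stmt = nullptr;
    sqlite3_prepare_v2(db,
        "INSERT OR REPLACE INTO kv (key, value) VALUES (?1, ?2);",
        -1, &stmt, nullptr);
    sqlite3_bind_text(stmt, 1, "some_key", -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 2, "some_value", -1, SQLITE_TRANSIENT);
    sqlite3_step(stmt);
    sqlite3_finalize(stmt);

    // Point lookup by key.
    sqlite3_prepare_v2(db, "SELECT value FROM kv WHERE key = ?1;",
                       -1, &stmt, nullptr);
    sqlite3_bind_text(stmt, 1, "some_key", -1, SQLITE_TRANSIENT);
    if (sqlite3_step(stmt) == SQLITE_ROW)
        std::printf("%s\n",
            reinterpret_cast<const char*>(sqlite3_column_text(stmt, 0)));
    sqlite3_finalize(stmt);

    sqlite3_close(db);
}
```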