What is the overhead of using a Java ORM for MongoDB, or is it better to go down to the basic driver level for reads and writes?
We will be adding MongoDB for one of our requirements.
There are quite a few things to mention here in general. Coming up with benchmarks for this is quite hard, as you cannot really test the performance of the mapping layer without testing your MongoDB setup as well. Thus one can pretty much tweak and tune one's environment to deliver the results wanted.
Beyond that, you have to distinguish between read and write performance. Writes especially are heavily influenced by the `WriteConcern` used. Thus, what might be an overhead of 50% in a `WriteConcern.NONE` scenario can easily drop to less than 5% with `WriteConcern.SAFE`.
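To make the write-concern effect concrete, here is a minimal sketch using the current MongoDB Java driver. Note that `WriteConcern.NONE` and `WriteConcern.SAFE` are legacy names; in recent driver versions the closest equivalents are `UNACKNOWLEDGED` and `ACKNOWLEDGED`, which is what this sketch uses (the connection string and collection names are made up for illustration, and a running `mongod` is assumed):

```java
import com.mongodb.WriteConcern;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import org.bson.Document;

public class WriteConcernDemo {

    public static void main(String[] args) {
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoCollection<Document> people =
                    client.getDatabase("test").getCollection("people");

            // Fire-and-forget: the driver does not wait for server acknowledgement,
            // so the fixed cost of the object mapping dominates each write.
            people.withWriteConcern(WriteConcern.UNACKNOWLEDGED)
                  .insertOne(new Document("name", "fast"));

            // Acknowledged: each write waits for the server round trip, so the
            // relative overhead of the mapping layer shrinks considerably.
            people.withWriteConcern(WriteConcern.ACKNOWLEDGED)
                  .insertOne(new Document("name", "slower-but-safe"));
        }
    }
}
```

The point is that the server round trip in the acknowledged case swamps the mapping cost, which is why the relative overhead numbers differ so much between the two concerns.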
Yes, there definitely is an overhead in any ODM implementation, as the `Object <-> DBObject` mapping has to inspect the object and get and set values, usually via reflection. Thus a crucial point IMHO is the ability to plug in custom, manually coded converters for the performance-critical objects. For Spring Data, simply registering a custom `EntityInstantiator` that does `new Person(…)` instead of letting the default one do its reflection magic gives a huge boost in performance.
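A rough sketch of such a hand-coded instantiator is shown below. The `Person` class is a made-up example, and the exact SPI names (`EntityInstantiator`, `getPersistenceConstructor`, `EntityInstantiators`) vary across Spring Data versions, so treat this as an outline of the idea rather than copy-paste code:

```java
import java.util.Collections;
import org.springframework.data.convert.EntityInstantiator;
import org.springframework.data.convert.EntityInstantiators;
import org.springframework.data.mapping.PersistentEntity;
import org.springframework.data.mapping.PersistentProperty;
import org.springframework.data.mapping.PreferredConstructor;
import org.springframework.data.mapping.model.ParameterValueProvider;

// Hypothetical domain class used for illustration.
class Person {
    final String name;
    Person(String name) { this.name = name; }
}

// Hand-coded instantiator: invokes new Person(…) directly instead of
// letting the default implementation resolve the constructor reflectively.
class PersonInstantiator implements EntityInstantiator {

    @Override
    @SuppressWarnings("unchecked")
    public <T, E extends PersistentEntity<? extends T, P>, P extends PersistentProperty<P>>
            T createInstance(E entity, ParameterValueProvider<P> provider) {
        PreferredConstructor<? extends T, P> constructor = entity.getPersistenceConstructor();
        // Resolve the single constructor argument, then call the constructor by hand.
        Object name = provider.getParameterValue(constructor.getParameters().iterator().next());
        return (T) new Person((String) name);
    }
}
```

Registration would then look something like `converter.setInstantiators(new EntityInstantiators(Collections.singletonMap(Person.class, new PersonInstantiator())))` on the `MappingMongoConverter`, again subject to the version you are on.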
The Spring Data team has set up a build that weighs the write performance of an off-the-shelf MongoDB instance against different `WriteConcern`s, and compares reading through the plain driver, the `MongoTemplate`, and the repositories abstraction. The numbers are to be taken with a grain of salt, as they sometimes show the repositories reading data faster than the template, which must be influenced by the infrastructure in some way, since the repositories abstraction is pretty much a layer on top of the template and doesn't really add any caching.