How much faster is Redis than MongoDB?

既然无缘 2020-12-07 06:34

It's widely mentioned that Redis is "Blazing Fast" and MongoDB is fast too. But I'm having trouble finding actual numbers comparing the results of the two. Given similar…

7 Answers
  • 2020-12-07 07:15

    Here is an excellent article (about a year old) on session performance in the Tornado framework. It compares a few different implementations, including Redis and MongoDB. The graph in the article shows that Redis is behind MongoDB by about 10% in this specific use case.

    Redis comes with a built-in benchmark that will analyze the performance of the machine you are on. There is a ton of raw data from it at the Benchmark wiki for Redis. For Mongo you might have to look around a bit more: like here, here, and some random Polish numbers (but they give you a starting point for running some MongoDB benchmarks yourself).

    I believe the best solution to this problem is to perform the tests yourself in the types of situations you expect.
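
    A minimal sketch of such a do-it-yourself test, using the redis-py and pymongo clients against local default instances (the key names, document shape, and operation count are illustrative assumptions, not taken from any linked benchmark):

    import time
    import redis
    from pymongo import MongoClient

    N = 100000  # operations per test (arbitrary)

    r = redis.Redis(host='localhost', port=6379)
    col = MongoClient('localhost', 27017)['bench']['kv']  # hypothetical db/collection

    def bench(name, op):
        # Time N calls of op(i) and report throughput in ops/sec.
        start = time.time()
        for i in range(N):
            op(i)
        elapsed = time.time() - start
        print('%s: %d ops in %.2f seconds : %.1f ops/sec'
              % (name, N, elapsed, N / elapsed))

    bench('redis_set', lambda i: r.set('key:%d' % i, 'value'))
    bench('redis_get', lambda i: r.get('key:%d' % i))
    bench('mongo_set', lambda i: col.insert_one({'_id': i, 'v': 'value'}))
    bench('mongo_get', lambda i: col.find_one({'_id': i}))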

  • 2020-12-07 07:18

    Numbers are going to be hard to find, as the two are not quite in the same space. The general answer is that Redis is 10-30% faster when the data set fits within the working memory of a single machine. Once that amount of data is exceeded, Redis fails. Mongo will slow down by an amount that depends on the type of load. For an insert-only type of load, one user recently reported a slowdown of 4 to 5 orders of magnitude (10,000 to 100,000 times), but that report also admitted that there were configuration issues and that this was a very atypical working load. Normal read-heavy loads anecdotally slow down by about 10x when some of the data must be read from disk.

    Conclusion: Redis will be faster but not by a whole lot.
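
    To illustrate what "fails" means here: with Redis's default noeviction policy, writes are rejected once maxmemory is reached. A rough sketch using the redis-py client (the 64mb cap and key size are deliberately artificial so the limit is easy to hit):

    import redis

    r = redis.Redis(host='localhost', port=6379)
    r.config_set('maxmemory', '64mb')               # artificially low cap for the demo
    r.config_set('maxmemory-policy', 'noeviction')  # reject writes instead of evicting

    i = 0
    try:
        while True:
            r.set('key:%d' % i, 'x' * 1024)         # ~1 KB per key
            i += 1
    except redis.exceptions.ResponseError as e:
        # Redis answers "OOM command not allowed when used memory > 'maxmemory'"
        print('Write rejected after %d keys: %s' % (i, e))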

  • 2020-12-07 07:19

    In my case, the determining factor in the performance comparison has been the MongoDB WriteConcern that is used. Most Mongo drivers nowadays set the default WriteConcern to ACKNOWLEDGED, which means 'written to RAM' (Mongo2.6.3-WriteConcern); in that regard, it was very comparable to Redis for most write operations.

    But the reality is that, depending on your application's needs and production environment setup, you may want to change this concern to WriteConcern.JOURNALED (written to the on-disk journal) or WriteConcern.FSYNCED (written to disk), or even require acknowledgement from replica set members (back-ups) if needed.

    Then you may start seeing some performance decrease. Other important factors include how optimized your data access patterns are, the index miss percentage (see mongostat), and indexes in general.
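
    For reference, a minimal sketch of dialing the write concern up with the pymongo driver (the database and collection names are assumptions for illustration; the constants named above come from the Java driver, and the rough Python equivalents are the w and j parameters):

    from pymongo import MongoClient
    from pymongo.write_concern import WriteConcern

    db = MongoClient('localhost', 27017)['bench']   # hypothetical database

    # Default: acknowledged (w=1) - the primary confirms receipt in RAM only.
    fast = db.get_collection('docs')

    # Journaled: the write must reach the on-disk journal before acknowledgement.
    journaled = db.get_collection('docs', write_concern=WriteConcern(w=1, j=True))

    # Replica-acknowledged: a majority of replica set members must confirm.
    replicated = db.get_collection('docs', write_concern=WriteConcern(w='majority'))

    fast.insert_one({'x': 1})       # closest to a Redis SET in write latency
    journaled.insert_one({'x': 1})  # noticeably slower per write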

  • 2020-12-07 07:19

    I think that the 2-3x shown in that benchmark is misleading, since it also depends on the hardware you run it on. From my experience, the 'stronger' the machine is, the bigger the gap (in favor of Redis) will be, probably because the benchmark hits the memory-bound limit pretty fast.

    As for the memory capacity: this is only partially a limitation, since there are ways to work around it. There are (commercial) products that write Redis data back to disk, and also clustered (multi-shard) solutions that overcome the memory-size limitation.

  • 2020-12-07 07:24

    Good and simple benchmark

    I tried to recalculate the results using the current versions of Redis (2.6.16) and MongoDB (2.4.8), and here is the result:

    Completed mongo_set: 100000 ops in 5.23 seconds : 19134.6 ops/sec
    Completed mongo_get: 100000 ops in 36.98 seconds : 2703.9 ops/sec
    Completed redis_set: 100000 ops in 6.50 seconds : 15389.4 ops/sec
    Completed redis_get: 100000 ops in 5.59 seconds : 17896.3 ops/sec
    

    Also, this blog post compares both of them, but using node.js. It shows how the timings change as the number of entries in the database increases.

  • 2020-12-07 07:25

    Please check this post about Redis and MongoDB insertion performance analysis:

    Up to 5,000 entries, MongoDB's $push is faster even when compared to Redis's RPUSH; then it becomes incredibly slow, probably because the MongoDB array type has linear insertion time and so it gets slower and slower. MongoDB might gain a bit of performance by exposing a constant-time insertion list type, but even with the linear-time array type (which can guarantee constant-time look-up) it has its applications for small sets of data.
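
    For context, here is a minimal sketch of the two append operations being compared, using redis-py and pymongo (the key, database, and collection names are illustrative assumptions):

    import redis
    from pymongo import MongoClient

    r = redis.Redis(host='localhost', port=6379)
    col = MongoClient('localhost', 27017)['bench']['lists']  # hypothetical db/collection

    for i in range(10000):
        # Redis list append: O(1) per element.
        r.rpush('mylist', i)

        # MongoDB array append: $push grows a single BSON document, which is
        # why very large embedded arrays get progressively more expensive.
        col.update_one({'_id': 'mylist'}, {'$push': {'items': i}}, upsert=True)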
