If redis is already a part of the stack, why is Memcached still used alongside Redis?

情深已故 2021-01-29 17:49

Redis can do everything that Memcached provides (LRU cache, item expiry, and now clustering in version 3.x+, currently in beta, or through tools like twemproxy). The performance is similar too.

2 Answers
  •  伪装坚强ぢ
    2021-01-29 18:28

    The main reason I see today as a use case for memcached over Redis is the superior memory efficiency you should be able to get with plain HTML fragment caching (or similar applications). If you need to store different fields of your objects in different memcached keys, then Redis hashes are going to be more memory efficient, but when you have a large number of key -> simple_string pairs, memcached should be able to give you more items per megabyte.

    Other good points about memcached:

    • It is a very simple piece of code, so if you just need the functionality it provides, it is a reasonable alternative, I guess, but I have never used it in production.
    • It is multi-threaded, so if you need to scale in a single-box setup, that is a good thing, and you only need to talk to one instance.

    I believe that Redis as a cache makes more and more sense as people move towards intelligent caching, or when they try to preserve the structure of the cached data via Redis data structures.

    Comparison between Redis LRU and memcached LRU.

    Neither memcached nor Redis performs real LRU eviction; both only approximate it.

    Memcached eviction is per size class and depends on the implementation details of its slab allocator. For example, if you want to add an item that fits in a given size class, memcached will try to remove expired / not-recently-used items in that class, instead of making a global attempt to find the object, regardless of its size, that is the best eviction candidate.

    Redis instead tries to pick a good candidate for eviction when the maxmemory limit is reached, looking at all the objects regardless of size class, but it is only able to provide an approximately good candidate, not necessarily the object with the greatest idle time.

    The way Redis does this is by sampling a few objects and picking the one that has been idle (not accessed) for the longest time. Since Redis 3.0 (currently in beta) the algorithm has been improved to also keep a pool of good candidates across evictions, so the approximation is better. In the Redis documentation you can find a description and graphs with details about how it works.
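    As a rough sketch of how that approximation is tuned from a client, the following assumes a local Redis server and the redis-py library; the 100mb limit and the sample count are arbitrary values chosen for the example, not recommendations.

        import redis

        # Assumes a Redis server on localhost:6379 and the redis-py client.
        r = redis.Redis(host="localhost", port=6379, decode_responses=True)

        # Cap memory usage and evict approximately least-recently-used keys.
        r.config_set("maxmemory", "100mb")
        r.config_set("maxmemory-policy", "allkeys-lru")

        # Number of keys Redis samples per eviction: a larger value gives a
        # closer approximation of true LRU at some extra CPU cost.
        r.config_set("maxmemory-samples", 5)

    The same settings would normally live in redis.conf; CONFIG SET just makes them easy to experiment with at runtime.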

    Why memcached has a better memory footprint than Redis for simple string -> string maps.

    Redis is a more complex piece of software, so values in Redis are stored in a way more similar to objects in a high-level programming language: they have an associated type, an encoding, and reference counting for memory management. This keeps Redis's internal structure clean and manageable, but it adds overhead compared to memcached, which only deals with strings.

    When Redis starts to be more memory efficient.

    Redis is able to store small aggregate data types in a special memory-saving way. For example, a small Redis hash representing an object is stored internally not as a hash table but as a single binary blob. So setting multiple fields per object into a hash is more efficient than storing N separate keys in memcached.

    You can, of course, store an object in memcached as a single JSON (or binary-encoded) blob, but unlike Redis, this will not allow you to fetch or update individual fields.
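    As a small illustration of the two approaches, here is a sketch using the redis-py client; the user:1000 key and its fields are made up for the example, and the "blob" variant mimics what you would do with memcached.

        import json
        import redis

        r = redis.Redis(decode_responses=True)

        # Object as a small hash: each field is addressable on its own, and a
        # small hash is kept in a compact internal encoding by Redis.
        r.hset("user:1000", mapping={"name": "alice", "karma": "42"})
        r.hset("user:1000", "karma", "43")        # update a single field in place
        print(r.hget("user:1000", "name"))        # fetch a single field
        print(r.object("encoding", "user:1000"))  # compact encoding for small hashes

        # Memcached-style alternative: one opaque blob per object. Changing a
        # single field means fetch, decode, modify, re-encode, store again.
        r.set("user:1000:blob", json.dumps({"name": "alice", "karma": 42}))
        obj = json.loads(r.get("user:1000:blob"))
        obj["karma"] += 1
        r.set("user:1000:blob", json.dumps(obj))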

    The advantage of Redis in the context of intelligent caching.

    Because of its data structures, the usual memcached pattern of destroying objects when the cache is invalidated and recreating them from the DB later is a primitive way of using Redis.

    For example, imagine you need to cache the latest N news items posted to Hacker News in order to populate the "Newest" section of the site. With Redis you keep a list (capped to M items) into which the newest items are inserted. If you use another store for your data and Redis as a cache, you populate both views (Redis and the DB) when a new item is posted. There is no cache invalidation.

    However, the application can always have logic so that if the Redis list is found to be empty, for example after a startup, the initial view can be recreated from the DB.
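    A minimal sketch of that pattern with redis-py follows; the newest_news key, the cap of 500 items, and the db helper calls are assumptions made up for the illustration.

        import redis

        r = redis.Redis(decode_responses=True)
        NEWEST_KEY = "newest_news"   # hypothetical key holding the "Newest" view
        CAP = 500                    # hypothetical cap (the M above)

        def publish_news(db, item_id):
            """Write to the DB and to the Redis view together: no invalidation."""
            db.insert_news(item_id)                 # hypothetical DB call
            with r.pipeline() as pipe:
                pipe.lpush(NEWEST_KEY, item_id)     # newest item goes to the front
                pipe.ltrim(NEWEST_KEY, 0, CAP - 1)  # keep the list capped at CAP items
                pipe.execute()

        def newest(db, n=30):
            """Read the view from Redis, rebuilding it from the DB if it is empty."""
            ids = r.lrange(NEWEST_KEY, 0, n - 1)
            if not ids:
                ids = db.load_latest_ids(CAP)       # hypothetical DB call, newest first
                if ids:
                    r.rpush(NEWEST_KEY, *ids)       # repopulate, preserving order
            return ids[:n]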

    By using intelligent caching it is possible to cache with Redis in a more efficient way than with memcached, but not all problems are suited to this pattern. For example, HTML fragment caching may not benefit from this technique.
