Memcached provides a cache expiration time option, which specifies how long objects are retained in the cache.
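To make the expiration semantics concrete, here is a minimal in-memory sketch of memcached-style TTL behavior. `TTLCache` is a hypothetical stand-in, not the memcached client API; it mirrors memcached's convention that an expiration of 0 means "never expire" and that expired entries are dropped lazily on read.

```python
import time

class TTLCache:
    """Hypothetical in-memory sketch of memcached-style expiration."""

    def __init__(self, clock=time.monotonic):
        self._store = {}      # key -> (value, expires_at or None)
        self._clock = clock   # injectable clock, handy for testing

    def set(self, key, value, expire=0):
        # expire=0 means "never expire", mirroring memcached's convention
        expires_at = self._clock() + expire if expire else None
        self._store[key] = (value, expires_at)

    def get(self, key):
        item = self._store.get(key)
        if item is None:
            return None
        value, expires_at = item
        if expires_at is not None and self._clock() >= expires_at:
            del self._store[key]  # lazily drop expired entries on read
            return None
        return value
```

A real client (e.g. setting an `expire` argument on a `set` call) delegates all of this bookkeeping to the memcached server; the sketch only illustrates the retention contract.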
If your design calls for a write-through cache, you still have to contend with the memory limit allocated to memcached, which is where LRU eviction comes into play.
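The write-through pattern mentioned above can be sketched as follows. This is an assumed shape, not a prescribed implementation: `WriteThroughCache`, and the dict-backed `cache` and `db` stores, are hypothetical names used for illustration.

```python
class WriteThroughCache:
    """Hypothetical write-through wrapper: every write updates both the
    backing store and the cache, so reads rarely see stale data."""

    def __init__(self, cache, db):
        self.cache = cache  # fast, size-limited store (stands in for memcached)
        self.db = db        # durable backing store

    def write(self, key, value):
        self.db[key] = value     # persist first
        self.cache[key] = value  # then update the cache in the same operation

    def read(self, key):
        if key in self.cache:
            return self.cache[key]
        value = self.db.get(key)
        if value is not None:
            self.cache[key] = value  # repopulate the cache on a miss
        return value
```

Even with every write flowing through the cache like this, the cache's memory cap still forces evictions, which is exactly why the LRU policy matters.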
LRU applies two rules when determining what to evict, and does so in the following order:

1. First, reclaim an item that has already passed its expiration time.
2. Failing that, evict the least recently used item.
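This eviction order, expired items first, then the least recently used item, can be sketched with a small in-memory class. `ExpiringLRU` is a hypothetical illustration of the policy, not memcached's actual slab-based implementation.

```python
import time
from collections import OrderedDict

class ExpiringLRU:
    """Hypothetical sketch: evict expired items first, then fall back to LRU."""

    def __init__(self, capacity, clock=time.monotonic):
        self.capacity = capacity
        self.clock = clock
        self._items = OrderedDict()  # key -> (value, expires_at); order = recency

    def set(self, key, value, expire=0):
        if key in self._items:
            del self._items[key]
        if len(self._items) >= self.capacity:
            self._evict()
        expires_at = self.clock() + expire if expire else None
        self._items[key] = (value, expires_at)

    def _evict(self):
        now = self.clock()
        # Rule 1: prefer an item that has already expired
        for key, (_, expires_at) in self._items.items():
            if expires_at is not None and expires_at <= now:
                del self._items[key]
                return
        # Rule 2: otherwise drop the least recently used item
        self._items.popitem(last=False)

    def get(self, key):
        item = self._items.get(key)
        if item is None:
            return None
        value, expires_at = item
        if expires_at is not None and expires_at <= self.clock():
            del self._items[key]
            return None
        self._items.move_to_end(key)  # mark as recently used
        return value
```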
Assigning different expiration times to different groups of objects can help keep less frequently accessed data that is expensive to recreate in memory, while allowing more frequently used but easily recreated objects to expire if they find their way to the end of the queue.
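One simple way to express such a policy is a per-group TTL table. The group names and durations below are illustrative assumptions, not values from the text: the point is only that expensive-to-recreate data gets a longer TTL than cheap-to-recompute data.

```python
# Hypothetical per-group TTLs (in seconds): expensive-to-recreate data
# lives longer, cheap-to-recompute data is allowed to expire sooner.
TTL_BY_GROUP = {
    "user_profile": 24 * 3600,   # costly joins; keep for a day
    "search_results": 300,       # cheap to recompute; five minutes
}

def ttl_for(group):
    """Return the expiration time to pass to the cache for this object group."""
    return TTL_BY_GROUP.get(group, 3600)  # assumed default: one hour
```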
Many cache keys also wind up being aggregates of other objects. Unless you maintain a lookup hash for those underlying objects, it is much easier to let the aggregates expire after a few hours than to proactively update all the associated keys. Letting them expire also preserves the hit/miss ratio you are effectively vying for by using memcached in the first place.
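The expire-instead-of-invalidate approach for aggregate keys can be sketched like this. `SimpleCache`, `get_dashboard`, and the `dashboard:` key format are all hypothetical names for illustration; the essential idea is that the aggregate is cached with a few-hour TTL and never proactively invalidated when a component object changes.

```python
class SimpleCache:
    """Stand-in for a memcached client with an assumed get/set(expire=) shape."""

    def __init__(self):
        self._d = {}

    def get(self, key):
        return self._d.get(key)

    def set(self, key, value, expire=0):
        self._d[key] = value  # this stub ignores TTL; a real client honors it

def get_dashboard(cache, user_id, load_parts):
    """Hypothetical aggregate: cache the combined result under one key with a
    short TTL instead of tracking and invalidating every component key."""
    key = f"dashboard:{user_id}"
    value = cache.get(key)
    if value is None:
        value = load_parts(user_id)             # rebuild from component objects
        cache.set(key, value, expire=4 * 3600)  # let it age out after a few hours
    return value
```

The trade-off is bounded staleness (up to the TTL) in exchange for much simpler invalidation logic and a healthier hit rate.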