LRU cache design

借酒劲吻你 2020-11-27 09:26

A Least Recently Used (LRU) cache discards the least recently used items first. How do you design and implement such a cache class? The design requirements are as follows:

11 answers
  • 2020-11-27 10:11

    How an LRU cache works

    Discards the least recently used items first. This algorithm requires keeping track of what was used and when, which is expensive if one wants to make sure the algorithm always discards the least recently used item. General implementations of this technique keep "age bits" for cache lines and track the least recently used cache line based on those age bits. In such an implementation, every time a cache line is used, the age of all other cache lines changes.
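
    A rough sketch of that age-tracking idea (illustrative only; the AgeBitsLRU class and its members are made up here, and the answer's own code further below uses a doubly linked list instead): each entry stores the logical time of its last access, and eviction scans for the smallest timestamp.

    #include <cstdint>
    #include <string>
    #include <unordered_map>

    // Illustrative sketch: an LRU cache that keeps a per-entry "age"
    // (last-access time). Every access bumps a global counter; eviction
    // scans for the entry with the smallest counter, which makes it O(n).
    class AgeBitsLRU {
    public:
        explicit AgeBitsLRU(std::size_t capacity) : capacity_(capacity) {}

        bool get(const std::string &key, std::string &value) {
            auto it = entries_.find(key);
            if (it == entries_.end()) return false;
            it->second.last_used = ++clock_;   // refresh the entry's age
            value = it->second.value;
            return true;
        }

        void put(const std::string &key, const std::string &value) {
            auto it = entries_.find(key);
            if (it != entries_.end()) {
                it->second = Entry{value, ++clock_};
                return;
            }
            if (entries_.size() == capacity_) {
                // Linear scan for the oldest entry -- exactly the cost the
                // hash-map + linked-list designs in the other answers avoid.
                auto oldest = entries_.begin();
                for (auto cur = entries_.begin(); cur != entries_.end(); ++cur)
                    if (cur->second.last_used < oldest->second.last_used) oldest = cur;
                entries_.erase(oldest);
            }
            entries_[key] = Entry{value, ++clock_};
        }

    private:
        struct Entry {
            std::string value;
            std::uint64_t last_used;
        };
        std::size_t capacity_;
        std::uint64_t clock_ = 0;
        std::unordered_map<std::string, Entry> entries_;
    };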

    The access sequence for the worked example below is A B C D E C D B.
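
    As a concrete illustration (assuming a cache capacity of 4 and writing the most recently used entry first), the cache evolves like this:

    A        -> [A]
    B        -> [B, A]
    C        -> [C, B, A]
    D        -> [D, C, B, A]
    E (miss) -> [E, D, C, B]    A is the least recently used, so it is evicted
    C (hit)  -> [C, E, D, B]
    D (hit)  -> [D, C, E, B]
    B (hit)  -> [B, D, C, E]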

    class Node:
        def __init__(self, k, v):
            self.key = k
            self.value = v
            self.next = None
            self.prev = None

    class LRU_cache:
        def __init__(self, capacity):
            self.capacity = capacity
            self.dic = dict()
            # Dummy head/tail nodes so _add/_remove never deal with empty ends.
            self.head = Node(0, 0)
            self.tail = Node(0, 0)
            self.head.next = self.tail
            self.tail.prev = self.head

        def _add(self, node):
            # Append the node just before the tail (most recently used end).
            p = self.tail.prev
            p.next = node
            self.tail.prev = node
            node.next = self.tail
            node.prev = p

        def _remove(self, node):
            # Unlink the node from the list.
            p = node.prev
            n = node.next
            p.next = n
            n.prev = p

        def get(self, key):
            if key in self.dic:
                n = self.dic[key]
                # Move the node to the most recently used end.
                self._remove(n)
                self._add(n)
                return n.value
            return -1

        def set(self, key, value):
            # If the key already exists, unlink its old node first.
            if key in self.dic:
                self._remove(self.dic[key])
            n = Node(key, value)
            self._add(n)
            self.dic[key] = n
            if len(self.dic) > self.capacity:
                # Evict the least recently used node (right after the head).
                n = self.head.next
                self._remove(n)
                del self.dic[n.key]

    cache = LRU_cache(3)
    cache.set('a', 'apple')
    cache.set('b', 'ball')
    cache.set('c', 'cat')
    cache.set('d', 'dog')
    print(cache.get('a'))   # -1: 'a' was evicted when 'd' was added (capacity 3)
    print(cache.get('c'))   # cat

  • 2020-11-27 10:12

    I implemented a thread-safe LRU cache two years back.

    LRU is typically implemented with a HashMap and a LinkedList. You can google the implementation details; there are a lot of resources about it (Wikipedia has a good explanation too).

    In order to be thread-safe, you need to acquire a lock whenever you modify the state of the LRU.
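
    A minimal sketch of that locking pattern, using std::mutex and std::lock_guard from the standard library rather than Boost (the ThreadSafeLRU class below is illustrative only and is not the implementation pasted further down):

    #include <list>
    #include <mutex>
    #include <string>
    #include <unordered_map>
    #include <utility>

    // Illustrative sketch: every public operation acquires the mutex before
    // touching the list/map state, so concurrent get/put calls are serialized.
    class ThreadSafeLRU {
    public:
        explicit ThreadSafeLRU(std::size_t capacity) : capacity_(capacity) {}

        bool get(const std::string &key, std::string &value) {
            std::lock_guard<std::mutex> lock(mutex_);
            auto it = map_.find(key);
            if (it == map_.end()) return false;
            // Move the entry to the front (most recently used position).
            list_.splice(list_.begin(), list_, it->second);
            value = it->second->second;
            return true;
        }

        void put(const std::string &key, const std::string &value) {
            std::lock_guard<std::mutex> lock(mutex_);
            auto it = map_.find(key);
            if (it != map_.end()) {
                it->second->second = value;
                list_.splice(list_.begin(), list_, it->second);
                return;
            }
            if (map_.size() == capacity_) {
                map_.erase(list_.back().first);   // evict the least recently used
                list_.pop_back();
            }
            list_.emplace_front(key, value);
            map_[key] = list_.begin();
        }

    private:
        using Item = std::pair<std::string, std::string>;
        std::size_t capacity_;
        std::mutex mutex_;
        std::list<Item> list_;                                   // MRU at the front
        std::unordered_map<std::string, std::list<Item>::iterator> map_;
    };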

    I will paste my C++ code here for your reference.

    Here is the implementation.

    /***
        A template thread-safe LRU container.
    
        Typically an LRU cache is implemented using a doubly linked list and a
        hash map. The doubly linked list stores the pages with the most recently
        used page at the start of the list; as pages are added and accessed,
        less recently used pages move toward the end of the list, with the page
        at the tail being the least recently used page in the list.
    
        Additionally, this LRU provides a time-to-live feature. Each entry has an expiration
        datetime.
    ***/
    #ifndef LRU_CACHE_H
    #define LRU_CACHE_H
    
    #include <iostream>
    #include <list>
    
    #include <boost/unordered_map.hpp>
    #include <boost/shared_ptr.hpp>
    #include <boost/make_shared.hpp>
    #include <boost/date_time/posix_time/posix_time.hpp>
    #include <boost/thread/mutex.hpp>
    
    template <typename KeyType, typename ValueType>
      class LRUCache {
     private:
      typedef boost::posix_time::ptime DateTime;
    
      // Cache-entry
      struct ListItem {
      ListItem(const KeyType &key,
               const ValueType &value,
               const DateTime &expiration_datetime)
      : m_key(key), m_value(value), m_expiration_datetime(expiration_datetime){}
        KeyType m_key;
        ValueType m_value;
        DateTime m_expiration_datetime;
      };
    
      typedef boost::shared_ptr<ListItem> ListItemPtr;
      typedef std::list<ListItemPtr> LruList;
      typedef typename std::list<ListItemPtr>::iterator LruListPos;
      typedef boost::unordered_map<KeyType, LruListPos> LruMapper;
    
      // A mutex to ensure thread-safety.
      boost::mutex m_cache_mutex;
    
      // Maximum number of entries.
      std::size_t m_capacity;
    
      // Stores cache-entries from latest to oldest.
      LruList m_list;
    
      // Mapper for key to list-position.
      LruMapper m_mapper;
    
      // Default time-to-live added to an entry every time we touch it.
      unsigned long m_ttl_in_seconds;
    
      /***
          Note : This is a helper function whose call must be wrapped
          within a lock. It returns whether the key exists and has not
          expired, and deletes the expired entry if necessary.
      ***/
      bool containsKeyHelper(const KeyType &key) {
        bool has_key(m_mapper.count(key) != 0);
        if (has_key) {
          LruListPos pos = m_mapper[key];
          ListItemPtr & cur_item_ptr = *pos;
    
          // Remove the entry if the key has expired.
          if (isDateTimeExpired(cur_item_ptr->m_expiration_datetime)) {
            has_key = false;
            m_list.erase(pos);
            m_mapper.erase(key);
          }
        }
        return has_key;
      }
    
      /***
          Locate an item in the list by key and move it to the front of the
          list, i.e. make it the latest item.
          Note : This is a helper function whose call must be wrapped
          within a lock.
      ***/
      void makeEntryTheLatest(const KeyType &key) {
        if (m_mapper.count(key)) {
          // Add original item at the front of the list,
          // and update <Key, ListPosition> mapper.
          LruListPos original_list_position = m_mapper[key];
          const ListItemPtr & cur_item_ptr = *original_list_position;
          m_list.push_front(cur_item_ptr);
          m_mapper[key] = m_list.begin();
    
          // Don't forget to update its expiration datetime.
          m_list.front()->m_expiration_datetime = getExpirationDatetime(m_list.front()->m_expiration_datetime);
    
          // Erase the item at original position.
          m_list.erase(original_list_position);
        }
      }
    
     public:
    
      /***
          Cache should have capacity to limit its memory usage.
          We also add time-to-live for each cache entry to expire
          the stale information. By default, ttl is one hour.
      ***/
     LRUCache(std::size_t capacity, unsigned long ttl_in_seconds = 3600)
       : m_capacity(capacity), m_ttl_in_seconds(ttl_in_seconds) {}
    
      /***
          Return now + time-to-live
      ***/
      DateTime getExpirationDatetime(const DateTime &now) {
        const boost::posix_time::seconds ttl(m_ttl_in_seconds);
        return now + ttl;
      }
    
      /***
          If input datetime is older than current datetime,
          then it is expired.
      ***/
      bool isDateTimeExpired(const DateTime &date_time) {
        return date_time < boost::posix_time::second_clock::local_time();
      }
    
      /***
          Return the number of entries in this cache.
       ***/
      std::size_t size() {
        boost::mutex::scoped_lock lock(m_cache_mutex);
        return m_mapper.size();
      }
    
      /***
          Get value by key.
          Return true/false whether key exists.
          If key exists, the input parameter value will get updated.
      ***/
      bool get(const KeyType &key, ValueType &value) {
        boost::mutex::scoped_lock lock(m_cache_mutex);
        if (!containsKeyHelper(key)) {
          return false;
        } else {
          // Make the entry the latest and update its TTL.
          makeEntryTheLatest(key);
    
          // Then get its value.
          value = m_list.front()->m_value;
          return true;
        }
      }
    
      /***
          Add <key, value> pair if no such key exists.
          Otherwise, just update the value of old key.
      ***/
      void put(const KeyType &key, const ValueType &value) {
        boost::mutex::scoped_lock lock(m_cache_mutex);
        if (containsKeyHelper(key)) {
          // Make the entry the latest and update its TTL.
          makeEntryTheLatest(key);
    
          // Now we only need to update its value.
          m_list.front()->m_value = value;
        } else { // Key does not exist (or its entry has expired).
          if (m_list.size() == m_capacity) {
            KeyType delete_key = m_list.back()->m_key;
            m_list.pop_back();
            m_mapper.erase(delete_key);
          }
    
          DateTime now = boost::posix_time::second_clock::local_time();
          m_list.push_front(boost::make_shared<ListItem>(key, value,
                                                         getExpirationDatetime(now)));
          m_mapper[key] = m_list.begin();
        }
      }
    };
    #endif
    

    Here are the unit tests.

    #include "cxx_unit.h"
    #include "lru_cache.h"
    
    struct LruCacheTest
      : public FDS::CxxUnit::TestFixture<LruCacheTest>{
      CXXUNIT_TEST_SUITE();
      CXXUNIT_TEST(LruCacheTest, testContainsKey);
      CXXUNIT_TEST(LruCacheTest, testGet);
      CXXUNIT_TEST(LruCacheTest, testPut);
      CXXUNIT_TEST_SUITE_END();
    
      void testContainsKey();
      void testGet();
      void testPut();
    };
    
    
    void LruCacheTest::testContainsKey() {
      LRUCache<int,std::string> cache(3);
      cache.put(1,"1"); // 1
      cache.put(2,"2"); // 2,1
      cache.put(3,"3"); // 3,2,1
      cache.put(4,"4"); // 4,3,2
    
      std::string value_holder("");
      CXXUNIT_ASSERT(cache.get(1, value_holder) == false); // 4,3,2
      CXXUNIT_ASSERT(value_holder == "");
    
      CXXUNIT_ASSERT(cache.get(2, value_holder) == true); // 2,4,3
      CXXUNIT_ASSERT(value_holder == "2");
    
      cache.put(5,"5"); // 5, 2, 4
    
      CXXUNIT_ASSERT(cache.get(3, value_holder) == false); // 5, 2, 4
      CXXUNIT_ASSERT(value_holder == "2"); // value_holder is still "2"
    
      CXXUNIT_ASSERT(cache.get(4, value_holder) == true); // 4, 5, 2
      CXXUNIT_ASSERT(value_holder == "4");
    
      cache.put(2,"II"); // {2, "II"}, 4, 5
    
      CXXUNIT_ASSERT(cache.get(2, value_holder) == true); // 2, 4, 5
      CXXUNIT_ASSERT(value_holder == "II");
    
      // Cache-entries : {2, "II"}, {4, "4"}, {5, "5"}
      CXXUNIT_ASSERT(cache.size() == 3);
      CXXUNIT_ASSERT(cache.get(2, value_holder) == true);
      CXXUNIT_ASSERT(cache.get(4, value_holder) == true);
      CXXUNIT_ASSERT(cache.get(5, value_holder) == true);
    }
    
    void LruCacheTest::testGet() {
      LRUCache<int,std::string> cache(3);
      cache.put(1,"1"); // 1
      cache.put(2,"2"); // 2,1
      cache.put(3,"3"); // 3,2,1
      cache.put(4,"4"); // 4,3,2
    
      std::string value_holder("");
      CXXUNIT_ASSERT(cache.get(1, value_holder) == false); // 4,3,2
      CXXUNIT_ASSERT(value_holder == "");
    
      CXXUNIT_ASSERT(cache.get(2, value_holder) == true); // 2,4,3
      CXXUNIT_ASSERT(value_holder == "2");
    
      cache.put(5,"5"); // 5,2,4
      CXXUNIT_ASSERT(cache.get(5, value_holder) == true); // 5,2,4
      CXXUNIT_ASSERT(value_holder == "5");
    
      CXXUNIT_ASSERT(cache.get(4, value_holder) == true); // 4, 5, 2
      CXXUNIT_ASSERT(value_holder == "4");
    
    
      cache.put(2,"II");
      CXXUNIT_ASSERT(cache.get(2, value_holder) == true); // {2 : "II"}, 4, 5
      CXXUNIT_ASSERT(value_holder == "II");
    
      // Cache-entries : {2, "II"}, {4, "4"}, {5, "5"}
      CXXUNIT_ASSERT(cache.size() == 3);
      CXXUNIT_ASSERT(cache.get(2, value_holder) == true);
      CXXUNIT_ASSERT(cache.get(4, value_holder) == true);
      CXXUNIT_ASSERT(cache.get(5, value_holder) == true);
    }
    
    void LruCacheTest::testPut() {
      LRUCache<int,std::string> cache(3);
      cache.put(1,"1"); // 1
      cache.put(2,"2"); // 2,1
      cache.put(3,"3"); // 3,2,1
      cache.put(4,"4"); // 4,3,2
      cache.put(5,"5"); // 5,4,3
    
      std::string value_holder("");
      CXXUNIT_ASSERT(cache.get(2, value_holder) == false); // 5,4,3
      CXXUNIT_ASSERT(value_holder == "");
    
      CXXUNIT_ASSERT(cache.get(4, value_holder) == true); // 4,5,3
      CXXUNIT_ASSERT(value_holder == "4");
    
      cache.put(2,"II");
      CXXUNIT_ASSERT(cache.get(2, value_holder) == true); // II,4,5
      CXXUNIT_ASSERT(value_holder == "II");
    
      // Cache-entries : {2, "II"}, {4, "4"}, {5, "5"}
      CXXUNIT_ASSERT(cache.size() == 3);
      CXXUNIT_ASSERT(cache.get(2, value_holder) == true);
      CXXUNIT_ASSERT(cache.get(4, value_holder) == true);
      CXXUNIT_ASSERT(cache.get(5, value_holder) == true);
    }
    
    CXXUNIT_REGISTER_TEST(LruCacheTest);
    
  • 2020-11-27 10:13

    This is my simple Java implementation (note that eviction scans all entries to find the one with the smallest access count, so lookups are O(1) but eviction is O(n)).


    package com.chase.digital.mystack;
    
    import java.util.HashMap;
    import java.util.Map;
    
    public class LRUCache {
    
      private int size;
      // Maps each key to a single-entry map of {value -> access count}.
      private Map<String, Map<String, Integer>> cache = new HashMap<>();
    
      public LRUCache(int size) {
        this.size = size;
      }
    
      public void addToCache(String key, String value) {
        if (cache.size() < size) {
          Map<String, Integer> valueMap = new HashMap<>();
          valueMap.put(value, 0);
          cache.put(key, valueMap);
        } else {
          findLRUAndAdd(key, value);
        }
      }
    
    
      // Returns the cached value and increments its access counter.
      public String getFromCache(String key) {
        String returnValue = null;
        if (cache.get(key) == null) {
          return null;
        } else {
          Map<String, Integer> value = cache.get(key);
          for (String s : value.keySet()) {
            value.put(s, value.get(s) + 1);
            returnValue = s;
          }
        }
        return returnValue;
      }
    
      // Evict the entry with the smallest access count, then insert the new pair.
      private void findLRUAndAdd(String key, String value) {
        String leastRecentUsedKey = null;
        int lastUsedValue = Integer.MAX_VALUE;
        for (String s : cache.keySet()) {
          final Map<String, Integer> stringIntegerMap = cache.get(s);
          for (String s1 : stringIntegerMap.keySet()) {
            final Integer integer = stringIntegerMap.get(s1);
            if (integer < lastUsedValue) {
              lastUsedValue = integer;
              leastRecentUsedKey = s;
            }
          }
        }
        cache.remove(leastRecentUsedKey);
        Map<String, Integer> valueMap = new HashMap<>();
        valueMap.put(value, 0);
        cache.put(key, valueMap);
      }
    
    
    }
    
  • 2020-11-27 10:14

    I have an LRU implementation here. The interface follows std::map, so it should not be that hard to use. Additionally, you can provide a custom backup handler that is used if data is invalidated in the cache.

    sweet::Cache<std::string,std::vector<int>, 48> c1;
    c1.insert("key1", std::vector<int>());
    c1.insert("key2", std::vector<int>());
    assert(c1.contains("key1"));
    
  • 2020-11-27 10:15

    This is my simple sample C++ implementation of an LRU cache, combining a hash map (unordered_map) and a list. Items in the list hold the key needed to access the map, and items in the map hold a list iterator used to access the list.

    #include <list>
    #include <unordered_map>
    #include <assert.h>
    
    using namespace std;
    
    template <class KEY_T, class VAL_T> class LRUCache{
    private:
            list< pair<KEY_T,VAL_T> > item_list;
            unordered_map<KEY_T, decltype(item_list.begin()) > item_map;
            size_t cache_size;
    private:
            void clean(void){
                    while(item_map.size()>cache_size){
                            auto last_it = item_list.end(); last_it --;
                            item_map.erase(last_it->first);
                            item_list.pop_back();
                    }
            };
    public:
            LRUCache(int cache_size_) : cache_size(cache_size_) {}
    
            void put(const KEY_T &key, const VAL_T &val){
                    auto it = item_map.find(key);
                    if(it != item_map.end()){
                            item_list.erase(it->second);
                            item_map.erase(it);
                    }
                    item_list.push_front(make_pair(key,val));
                    item_map.insert(make_pair(key, item_list.begin()));
                    clean();
            };
            bool exist(const KEY_T &key){
                    return (item_map.count(key)>0);
            };
            VAL_T get(const KEY_T &key){
                    assert(exist(key));
                    auto it = item_map.find(key);
                    item_list.splice(item_list.begin(), item_list, it->second);
                    return it->second->second;
            };
    
    };
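
    For example, usage might look like the following (a hypothetical snippet, assuming the template above is included):

    #include <iostream>
    #include <string>

    int main() {
        LRUCache<std::string, std::string> cache(2);
        cache.put("a", "apple");
        cache.put("b", "ball");
        cache.get("a");           // touch "a" so it becomes the most recent entry
        cache.put("c", "cat");    // capacity is 2, so "b" (least recent) is evicted
        std::cout << cache.exist("b") << std::endl;  // prints 0
        std::cout << cache.get("a") << std::endl;    // prints "apple"
        return 0;
    }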
    