Generic cache of objects

日久生厌 2021-01-11 14:16

Does anyone know any implementation of a templated cache of objects?

  • You use a key to find object (the same as in std::map<>)
  • You specify a maximum number of objects the cache can hold at once
3 Answers
  • 2021-01-11 14:26

    In an application I can hardly imagine that caching objects which can apparently be re-created (hint: they can be discarded automatically when the cache fills up) would boost performance. A software cache has to look entries up through associative code, which is likely slower than a memory allocation plus a constructor run (mostly memory initialization).

    Unless the user manually configures things to avoid the paging mechanism (precisely to boost performance, by the way), most operating systems already "cache" memory for you on disk: it's called paging, a form of high-cost caching in which nothing gets thrown away, and it's done by dedicated hardware, a sub-processing unit called the Memory Management Unit.

    In the big picture, caching code would be redundant and would only slow processes down.

  • 2021-01-11 14:32

    I've put together a relatively simple LRU cache built from a map and a linked list:

    #include <cstddef>
    #include <functional>
    #include <list>
    #include <unordered_map>
    #include <utility>
    
    template<typename K, typename V,
             typename Map = std::unordered_map<K, typename std::list<std::pair<K, V>>::iterator>>
    class LRUCache
    {
        // The list holds the key/value pairs in usage order (front = most
        // recently used); the map gives O(1) lookup from key to list node.
        std::size_t maxSize;
        std::list<std::pair<K, V>> usageOrder;
        Map data;
        std::function<void(std::pair<K, V>)> onEject = [](std::pair<K, V>){};
    
        void moveToFront(typename std::list<std::pair<K, V>>::iterator itr)
        {
            if(itr != usageOrder.begin())
                usageOrder.splice(usageOrder.begin(), usageOrder, itr);
        }
    
        void trimToSize()
        {
            while(data.size() > maxSize)
            {
                // The back of the list is the least recently used entry.
                onEject(usageOrder.back());
                data.erase(usageOrder.back().first);
                usageOrder.pop_back();
            }
        }
    
    public:
        typedef std::pair<const K, V> value_type;
        typedef K key_type;
        typedef V mapped_type;
    
        explicit LRUCache(std::size_t maxEntries) : maxSize(maxEntries)
        {
            data.reserve(maxEntries);
        }
    
        std::size_t size() const
        {
            return data.size();
        }
    
        void insert(const value_type& v)
        {
            auto existing = data.find(v.first);
            if(existing != data.end())            // key already cached:
                usageOrder.erase(existing->second); // drop the stale node
    
            usageOrder.emplace_front(v.first, v.second);
            data[v.first] = usageOrder.begin();
    
            trimToSize();
        }
    
        bool contains(const K& k) const
        {
            return data.count(k) != 0;
        }
    
        V& at(const K& k)
        {
            auto itr = data.at(k);   // throws std::out_of_range if absent
            moveToFront(itr);        // splice keeps itr valid
            return itr->second;
        }
    
        void setMaxEntries(std::size_t maxEntries)
        {
            maxSize = maxEntries;
            trimToSize();
        }
    
        void touch(const K& k)
        {
            at(k);
        }
    
        template<typename Compute>
        V& getOrCompute(const K& k, Compute compute)
        {
            if(data.count(k) == 0) insert(value_type(k, compute(k)));
            return at(k);
        }
    
        void setOnEject(std::function<void(std::pair<K, V>)> f)
        {
            onEject = std::move(f);
        }
    };
    

    I believe this meets your criteria. Does anything need to be added or changed?

  • 2021-01-11 14:34

    You can use the Boost.MultiIndex library; it makes it easy to implement an MRU (most-recently-used) cache.
