Limiting the max size of a HashMap in Java

时光说笑 2020-11-29 01:50

I want to limit the maximum size of a HashMap to take metrics on a variety of hashing algorithms that I'm implementing. I looked at the loadFactor in one of HashMap's overloaded constructors, HashMap(int initialCapacity, float loadFactor), and tried setting it to 0.0f (meaning that I don't want the HashMap to grow in size EVER), but javac calls this invalid.

6 Answers
  • 2020-11-29 01:58

    I tried setting the loadFactor to 0.0f in the constructor (meaning that I don't want the HashMap to grow in size EVER) but javac calls this invalid

    A loadFactor of 1 means "don't grow until the HashMap is 100% full". A loadFactor of 0, if it were accepted, would set the resize threshold to 0, meaning "grow on every insert".

    From the HashMap docs:

    The capacity is the number of buckets in the hash table, and the initial capacity is simply the capacity at the time the hash table is created. The load factor is a measure of how full the hash table is allowed to get before its capacity is automatically increased. When the number of entries in the hash table exceeds the product of the load factor and the current capacity, the hash table is rehashed (that is, internal data structures are rebuilt) so that the hash table has approximately twice the number of buckets.

    Example: A HashMap initialized with default settings has a capacity of 16 and a load factor of 0.75. Capacity * load factor = 16 * 0.75 = 12. So adding the 13th item to the HashMap will cause it to grow to (approximately) 32 buckets.

    Invalid example: A HashMap initialized with a capacity of 16 and a load factor of 0. Capacity * load factor = 16 * 0 = 0. So every attempt to add an item would trigger a rehash and doubling of size, until you ran out of memory.

    What you originally wanted:

    If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.

    If you create a HashMap with a capacity M > N, a load factor of 1, and add N items, it will not grow.
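
    For illustration, a minimal sketch of that sizing rule (the entry count n is an assumed, known-in-advance value):

    import java.util.HashMap;
    import java.util.Map;

    public class NoRehashDemo {
        public static void main(String[] args) {
            int n = 1000; // number of entries we plan to add, known up front

            // initial capacity > n and loadFactor = 1.0f, so per the docs
            // the map never rehashes while holding up to n entries
            Map<Integer, String> map = new HashMap<Integer, String>(n + 1, 1.0f);

            for (int i = 0; i < n; i++) {
                map.put(i, "value" + i);
            }
            System.out.println(map.size()); // 1000, with no resize along the way
        }
    }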

  • 2020-11-29 01:59

    You could create a new class like this to limit the size of a HashMap:

    import java.util.LinkedHashMap;
    import java.util.Map;

    public class MaxSizeHashMap<K, V> extends LinkedHashMap<K, V> {
        private final int maxSize;

        public MaxSizeHashMap(int maxSize) {
            this.maxSize = maxSize;
        }

        // Called by LinkedHashMap after each put/putAll; returning true
        // evicts the eldest entry, keeping the map at maxSize entries.
        @Override
        protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
            return size() > maxSize;
        }
    }
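
    Note that this relies on LinkedHashMap.removeEldestEntry, so reaching the cap evicts the oldest entry rather than rejecting the new one. A quick usage sketch:

    MaxSizeHashMap<String, Integer> map = new MaxSizeHashMap<String, Integer>(2);
    map.put("a", 1);
    map.put("b", 2);
    map.put("c", 3);         // cap reached: "a", the eldest entry, is evicted
    System.out.println(map); // {b=2, c=3}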
    
  • 2020-11-29 02:03

    The simple solution is usually the best, so use an unmodifiable or immutable map.

    If you cannot change the number of elements, then the size is fixed: problem solved.
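
    For example, a minimal sketch using the standard Collections wrapper (keys and values here are illustrative):

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;

    public class FixedMapDemo {
        public static void main(String[] args) {
            Map<String, Integer> source = new HashMap<String, Integer>();
            source.put("a", 1);
            source.put("b", 2);

            // read-only view: mutating calls throw UnsupportedOperationException
            Map<String, Integer> fixed = Collections.unmodifiableMap(source);
            System.out.println(fixed.size()); // 2, and puts through fixed are impossible
        }
    }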

  • 2020-11-29 02:08

    The put method in the HashMap class (as implemented up to Java 7) is the one in charge of adding elements to the HashMap, and it does so by calling a method named addEntry, whose code is as follows:

       void addEntry(int hash, K key, V value, int bucketIndex) {
            Entry<K,V> e = table[bucketIndex];
            table[bucketIndex] = new Entry<K,V>(hash, key, value, e);
            if (size++ >= threshold)
                resize(2 * table.length);
        } 
    

    As you can see, this method is where the HashMap is resized once the threshold is exceeded, so I would try extending the HashMap class and writing my own put and addEntry methods in order to remove the resizing. Something like:

    // Declared inside java.util so the subclass can reach HashMap's
    // package-private internals (table, size, modCount, hash, indexFor).
    // Caveat: modern JVMs refuse to load user classes into java.* packages.
    package java.util;

    public class MyHashMap<K, V> extends HashMap<K, V> {

        private V myPutForNullKey(V value) {
            for (Entry<K, V> e = table[0]; e != null; e = e.next) {
                if (e.key == null) {
                    V oldValue = e.value;
                    e.value = value;
                    e.recordAccess(this);
                    return oldValue;
                }
            }
            modCount++;
            myAddEntry(0, null, value, 0);
            return null;
        }

        public V myPut(K key, V value) {
            if (key == null)
                return myPutForNullKey(value);
            if (size < table.length) { // refuse brand-new entries once the table is full
                int hash = hash(key.hashCode());
                int i = indexFor(hash, table.length);
                for (Entry<K, V> e = table[i]; e != null; e = e.next) {
                    Object k;
                    if (e.hash == hash && ((k = e.key) == key || key.equals(k))) {
                        V oldValue = e.value;
                        e.value = value;
                        e.recordAccess(this);
                        return oldValue;
                    }
                }

                modCount++;
                myAddEntry(hash, key, value, i);
            }
            return null;
        }

        // Same as HashMap.addEntry, minus the resize call
        void myAddEntry(int hash, K key, V value, int bucketIndex) {
            Entry<K, V> e = table[bucketIndex];
            table[bucketIndex] = new Entry<K, V>(hash, key, value, e);
            size++;
        }
    }
    

    You need to write your own myPut and myAddEntry methods because addEntry is package-private and putForNullKey is private, so neither can be overridden from a normal subclass, and put depends on both. The size check in myPut ensures we do not try to add a new entry once the table is full.
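
    A hypothetical usage sketch of myPut's behavior (assuming the class could be loaded at all, per the caveat in the code comment):

    MyHashMap<String, Integer> map = new MyHashMap<String, Integer>(); // default capacity 16
    for (int i = 0; i < 20; i++) {
        map.myPut("key" + i, i); // inserts beyond the 16-slot table are silently dropped
    }
    System.out.println(map.size()); // 16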

  • 2020-11-29 02:09

    Sometimes simpler is better.

    import java.util.HashMap;
    import java.util.Map;

    public class InstrumentedHashMap<K, V> implements Map<K, V> {

        private static final int MAX = 100; // illustrative size cap

        private final Map<K, V> map;

        public InstrumentedHashMap() {
            map = new HashMap<K, V>();
        }

        // Map.put must return V, so a rejected insert returns null here
        @Override
        public V put(K key, V value) {
            if (map.size() >= MAX && !map.containsKey(key)) {
                return null; // at capacity: refuse new keys, still allow updates
            }
            return map.put(key, value);
        }

        // ... remaining Map methods delegate to map
    }
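
    A quick usage sketch (MAX is the illustrative cap of 100 added above):

    Map<String, Integer> map = new InstrumentedHashMap<String, Integer>();
    for (int i = 0; i < 150; i++) {
        map.put("key" + i, i); // puts past the 100-entry cap return null
    }
    System.out.println(map.size()); // 100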
    
  • 2020-11-29 02:11
    A size-bounded cache built on LinkedHashMap that evicts the eldest entry when full, guarded by a read/write lock:

    import java.util.LinkedHashMap;
    import java.util.concurrent.locks.Lock;
    import java.util.concurrent.locks.ReadWriteLock;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class Cache {
        private LinkedHashMap<String, String> cache;
        private final int cacheSize;
        private final ReadWriteLock readWriteLock;

        public Cache(LinkedHashMap<String, String> psCacheMap, int size) {
            this.cache = psCacheMap;
            this.cacheSize = size;
            this.readWriteLock = new ReentrantReadWriteLock();
        }

        public void put(String sql, String pstmt) {
            Lock writeLock = readWriteLock.writeLock();
            try {
                writeLock.lock();
                if (cache.size() >= cacheSize && cacheSize > 0) {
                    // evict the eldest entry (first key in iteration order);
                    // a real statement cache would also close the evicted statement
                    String oldSql = cache.keySet().iterator().next();
                    cache.remove(oldSql);
                }
                cache.put(sql, pstmt);
            } finally {
                writeLock.unlock();
            }
        }

        public String get(String sql) {
            Lock readLock = readWriteLock.readLock();
            try {
                readLock.lock();
                return cache.get(sql);
            } finally {
                readLock.unlock();
            }
        }

        public boolean containsKey(String sql) {
            Lock readLock = readWriteLock.readLock();
            try {
                readLock.lock();
                return cache.containsKey(sql);
            } finally {
                readLock.unlock();
            }
        }

        public String remove(String key) {
            Lock writeLock = readWriteLock.writeLock();
            try {
                writeLock.lock();
                return cache.remove(key);
            } finally {
                writeLock.unlock();
            }
        }

        public LinkedHashMap<String, String> getCache() {
            return cache;
        }

        public void setCache(LinkedHashMap<String, String> cache) {
            this.cache = cache;
        }
    }
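
    A brief usage sketch (the keys and the cap of 2 are illustrative):

    Cache cache = new Cache(new LinkedHashMap<String, String>(), 2);
    cache.put("SELECT 1", "stmt1");
    cache.put("SELECT 2", "stmt2");
    cache.put("SELECT 3", "stmt3");                    // evicts "SELECT 1", the eldest entry
    System.out.println(cache.containsKey("SELECT 1")); // false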
    