Background:
I maintain several WinForms apps and class libraries that either could or already do benefit from caching. I'm also aware of the Caching Application Block.
I implemented a simple library named MemoryCacheT. It's on GitHub and NuGet. It basically stores items in a ConcurrentDictionary, and you can specify an expiration strategy when adding items. Any feedback, review, or suggestion is welcome.
It looks like the .NET 4.0 concurrent collections utilize new synchronization primitives that spin before switching context, in case a resource is freed quickly. So they're still locking, just in a more opportunistic way. If you think your data retrieval logic is shorter than the timeslice, then it seems like this would be highly beneficial. But you mentioned the network, which makes me think this doesn't apply.
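For reference, here is a minimal sketch of a cache built on ConcurrentDictionary (the SimpleCache name and its members are illustrative, not MemoryCacheT's actual API). One caveat: ConcurrentDictionary.GetOrAdd may invoke the value factory more than once when threads race on the same key, so wrapping values in Lazy<T> is a common way to guarantee a single load per key:
using System;
using System.Collections.Concurrent;

public class SimpleCache<TKey, TValue>
{
    private readonly ConcurrentDictionary<TKey, Lazy<TValue>> _items =
        new ConcurrentDictionary<TKey, Lazy<TValue>>();

    public TValue GetOrAdd(TKey key, Func<TValue> load)
    {
        // Lazy<T> is thread-safe by default, so the slow load runs only once per key,
        // even though GetOrAdd may call the outer factory more than once under contention.
        return _items.GetOrAdd(key, _ => new Lazy<TValue>(load)).Value;
    }
}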
I would wait till you have a simple, synchronized solution in place, and measure the performance and behavior before assuming you will have performance issues related to concurrency.
If you're really concerned about cache contention, you can utilize an existing cache infrastructure and logically partition it into regions. Then synchronize access to each region independently.
An example strategy: if your data set consists of items keyed on numeric IDs, and you want to partition your cache into 10 regions, you can take the ID (mod 10) to determine which region each item is in. You'd keep an array of 10 objects to lock on. All of the code can be written for a variable number of regions, which can be set via configuration or determined at app start depending on the total number of items you predict/intend to cache.
If your cache hits are keyed in an abnormal way, you'll have to come up with some custom heuristic to partition the cache.
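A minimal sketch of that region idea, assuming numeric IDs and one lock object per region (the RegionedCache name and its members are illustrative, not from any particular library):
using System;
using System.Collections.Generic;

public class RegionedCache<TValue>
{
    private readonly int _regionCount;
    private readonly object[] _regionLocks;
    private readonly Dictionary<int, TValue>[] _regions;

    public RegionedCache(int regionCount)
    {
        _regionCount = regionCount;   // e.g. read from configuration
        _regionLocks = new object[regionCount];
        _regions = new Dictionary<int, TValue>[regionCount];
        for (int i = 0; i < regionCount; i++)
        {
            _regionLocks[i] = new object();
            _regions[i] = new Dictionary<int, TValue>();
        }
    }

    public TValue GetOrAdd(int id, Func<int, TValue> load)
    {
        int region = id % _regionCount;   // numeric key determines the region
        lock (_regionLocks[region])       // contention is limited to this region
        {
            TValue value;
            if (!_regions[region].TryGetValue(id, out value))
            {
                value = load(id);
                _regions[region].Add(id, value);
            }
            return value;
        }
    }
}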
Update (per comment):
Well this has been fun. I think the following is about as fine-grained a locking scheme as you can hope for without going totally insane (or maintaining/synchronizing a dictionary of locks for each cache key). I haven't tested it, so there are probably bugs, but the idea should be illustrated: track a list of requested IDs, and then use that to decide whether you need to get the item yourself or merely need to wait for a previous request to finish. Waiting (and cache insertion) is synchronized with tightly-scoped thread blocking and signaling using Monitor.Wait and Monitor.PulseAll. Access to the requested ID list is synchronized with a tightly-scoped ReaderWriterLockSlim.
This is a read-only cache. If you're doing creates/updates/deletes, you'll have to make sure you remove IDs from _requestedIds once they're received (before the call to Monitor.PulseAll(_cache) you'll want to add another try..finally and acquire the _requestedIdsLock write lock). Also, with creates/updates/deletes, the easiest way to manage the cache would be to simply remove the existing item from _cache if/when the underlying create/update/delete operation succeeds.
(Oops, see update 2 below.)
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;
using System.Threading;
public class Item
{
public int ID { get; set; }
}
public class AsyncCache
{
protected static readonly Dictionary<int, Item> _externalDataStoreProxy = new Dictionary<int, Item>();
protected static readonly Dictionary<int, Item> _cache = new Dictionary<int, Item>();
protected static readonly HashSet<int> _requestedIds = new HashSet<int>();
protected static readonly ReaderWriterLockSlim _requestedIdsLock = new ReaderWriterLockSlim();
public Item Get(int id)
{
// if item does not exist in cache
if (!_cache.ContainsKey(id))
{
_requestedIdsLock.EnterUpgradeableReadLock();
try
{
// if item was already requested by another thread
if (_requestedIds.Contains(id))
{
_requestedIdsLock.ExitUpgradeableReadLock();
lock (_cache)
{
while (!_cache.ContainsKey(id))
Monitor.Wait(_cache);
// once we get here, _cache has our item
}
}
// else, item has not yet been requested by a thread
else
{
_requestedIdsLock.EnterWriteLock();
try
{
// record the current request
_requestedIds.Add(id);
_requestedIdsLock.ExitWriteLock();
_requestedIdsLock.ExitUpgradeableReadLock();
// get the data from the external resource
#region fake implementation - replace with real code
var item = _externalDataStoreProxy[id];
Thread.Sleep(10000);
#endregion
lock (_cache)
{
_cache.Add(id, item);
Monitor.PulseAll(_cache);
}
}
finally
{
// let go of any held locks
if (_requestedIdsLock.IsWriteLockHeld)
_requestedIdsLock.ExitWriteLock();
}
}
}
finally
{
// let go of any held locks
if (_requestedIdsLock.IsUpgradeableReadLockHeld)
_requestedIdsLock.ExitUpgradeableReadLock();
}
}
return _cache[id];
}
public Collection<Item> Get(Collection<int> ids)
{
var notInCache = ids.Except(_cache.Keys).ToList();
// if some items don't exist in cache
if (notInCache.Count() > 0)
{
_requestedIdsLock.EnterUpgradeableReadLock();
try
{
var needToGet = notInCache.Except(_requestedIds).ToList();
// if any items have not yet been requested by other threads
if (needToGet.Count() > 0)
{
_requestedIdsLock.EnterWriteLock();
try
{
// record the current request
foreach (var id in ids)
_requestedIds.Add(id);
_requestedIdsLock.ExitWriteLock();
_requestedIdsLock.ExitUpgradeableReadLock();
// get the data from the external resource
#region fake implementation - replace with real code
var data = new Collection<Item>();
foreach (var id in needToGet)
{
var item = _externalDataStoreProxy[id];
data.Add(item);
}
Thread.Sleep(10000);
#endregion
lock (_cache)
{
foreach (var item in data)
_cache.Add(item.ID, item);
Monitor.PulseAll(_cache);
}
}
finally
{
// let go of any held locks
if (_requestedIdsLock.IsWriteLockHeld)
_requestedIdsLock.ExitWriteLock();
}
}
if (_requestedIdsLock.IsUpgradeableReadLockHeld)
_requestedIdsLock.ExitUpgradeableReadLock();
var waitingFor = notInCache.Except(needToGet);
// if any remaining items were already requested by other threads
if (waitingFor.Count() > 0)
{
lock (_cache)
{
while (waitingFor.Count() > 0)
{
Monitor.Wait(_cache);
waitingFor = waitingFor.Except(_cache.Keys);
}
// once we get here, _cache has all our items
}
}
}
finally
{
// let go of any held locks
if (_requestedIdsLock.IsUpgradeableReadLockHeld)
_requestedIdsLock.ExitUpgradeableReadLock();
}
}
return new Collection<Item>(ids.Select(id => _cache[id]).ToList());
}
}
Update 2:
I misunderstood the behavior of UpgradeableReadLock... only one thread at a time can hold an UpgradeableReadLock. So the above should be refactored to only grab read locks initially, and to completely relinquish them and acquire a full-fledged write lock when adding items to _requestedIds.
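A minimal sketch of that refactor, assuming the same _requestedIds / _requestedIdsLock fields as above and a local id that is not yet in the cache (the surrounding wait/pulse logic is unchanged):
// Take a plain read lock just long enough to check whether another thread
// has already requested this id.
_requestedIdsLock.EnterReadLock();
bool alreadyRequested;
try
{
    alreadyRequested = _requestedIds.Contains(id);
}
finally
{
    _requestedIdsLock.ExitReadLock();
}
if (!alreadyRequested)
{
    // Relinquish the read lock entirely, then take a full write lock to record the request.
    _requestedIdsLock.EnterWriteLock();
    try
    {
        // Re-check under the write lock; another thread may have won the race in the gap.
        if (!_requestedIds.Add(id))
            alreadyRequested = true;
    }
    finally
    {
        _requestedIdsLock.ExitWriteLock();
    }
}
// If alreadyRequested is true, wait on _cache as before; otherwise perform the load.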
Finally came up with a workable solution to this, thanks to some dialogue in the comments. What I did was create a wrapper: a partially-implemented abstract base class that uses any standard cache library as the backing cache (it just needs to implement the Contains, Get, Put, and Remove methods). At the moment I'm using the EntLib Caching Application Block for that, and it took a while to get this up and running because some aspects of that library are... well... not that well thought out.
Anyway, the total code is now close to 1k lines so I'm not going to post the entire thing here, but the basic idea is:
Intercept all calls to the Get, Put/Add, and Remove methods.
Instead of adding the original item, add an "entry" item which contains a ManualResetEvent in addition to a Value property. As per some advice given to me on an earlier question today, the entry implements a countdown latch, which is incremented whenever the entry is acquired and decremented whenever it is released. Both the loader and all future lookups participate in the countdown latch, so when the counter hits zero the data is guaranteed to be available, and the ManualResetEvent is destroyed in order to conserve resources.
When an entry has to be lazy-loaded, the entry is created and added to the backing cache right away, with the event in an unsignaled state. Subsequent calls to either the new GetOrAdd method or the intercepted Get methods will find this entry, and either wait on the event (if the event exists) or return the associated value immediately (if the event does not exist).
The Put method adds an entry with no event; these look the same as entries for which lazy-loading has already been completed.
Because the GetOrAdd still implements a Get followed by an optional Put, this method is synchronized (serialized) against the Put and Remove methods, but only to add the incomplete entry, not for the entire duration of the lazy load. The Get methods are not serialized; effectively the entire interface works like an automatic reader-writer lock (see the condensed sketch after this list).
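A heavily condensed sketch of that entry mechanism (this is not the actual ~1k-line implementation: it omits the countdown latch, event disposal, and Remove handling, and uses a plain Dictionary in place of the backing cache library; LazyCacheSketch and its members are illustrative names):
using System;
using System.Collections.Generic;
using System.Threading;

public class LazyCacheSketch<TKey, TValue>
{
    private sealed class Entry
    {
        public TValue Value;
        public ManualResetEvent Loading; // non-null only while a lazy load is in flight
    }

    private readonly Dictionary<TKey, Entry> _backing = new Dictionary<TKey, Entry>();
    private readonly object _sync = new object();

    public TValue GetOrAdd(TKey key, Func<TValue> load)
    {
        Entry entry;
        bool isLoader = false;
        lock (_sync) // serialized only long enough to publish the incomplete entry
        {
            if (!_backing.TryGetValue(key, out entry))
            {
                entry = new Entry { Loading = new ManualResetEvent(false) };
                _backing[key] = entry;
                isLoader = true;
            }
        }
        if (isLoader)
        {
            entry.Value = load();   // the slow part runs outside any lock
            entry.Loading.Set();    // wake every thread waiting on this key
            entry.Loading = null;   // a finished entry looks like one added via Put
            return entry.Value;
        }
        ManualResetEvent evt = entry.Loading;
        if (evt != null)
            evt.WaitOne();          // block only callers asking for this particular key
        return entry.Value;
    }

    public void Put(TKey key, TValue value)
    {
        lock (_sync)
        {
            _backing[key] = new Entry { Value = value }; // no event: already complete
        }
    }
}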
It's still a work in progress, but I've run it through a dozen unit tests and it seems to be holding up. It behaves correctly for both the scenarios described in the question. In other words:
A call to a long-running lazy load (GetOrAdd) for key X (simulated by Thread.Sleep) which takes 10 seconds, followed by another GetOrAdd for the same key X on a different thread exactly 9 seconds later, results in both threads receiving the correct data at the same time (10 seconds from T0). Loads are not duplicated.
Immediately loading a value for key X, then starting a long-running lazy-load for key Y, then requesting key X on another thread (before Y is finished), immediately gives back the value for X. Blocking calls are isolated to the relevant key.
It also gives what I think is the most intuitive result for when you begin a lazy load and then immediately remove the key from the cache: the thread that originally requested the value will get the real value, but any other threads that request the same key at any time after the removal will get nothing back (null) and return immediately.
All in all I'm pretty happy with it. I still wish there was a library that did this for me, but I suppose, if you want something done right... well, you know.
I know your pain, as I am one of the architects of Dedoose. I have messed around with a lot of caching libraries and ended up building this one after much tribulation. The one assumption for this cache manager is that all collections stored by this class implement an interface exposing a Guid as an "Id" property on each object. Since this is for an RIA, it includes a lot of methods for adding/updating/removing items from these collections.
Here's my CollectionCacheManager
using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
public class CollectionCacheManager
{
private static readonly object _objLockPeek = new object();
private static readonly Dictionary<String, object> _htLocksByKey = new Dictionary<string, object>();
private static readonly Dictionary<String, CollectionCacheEntry> _htCollectionCache = new Dictionary<string, CollectionCacheEntry>();
private static DateTime? _dtLastPurgeCheck; //nullable so the "never purged yet" check below actually works
public static List<T> FetchAndCache<T>(string sKey, Func<List<T>> fGetCollectionDelegate) where T : IUniqueIdActiveRecord
{
List<T> colItems = new List<T>();
lock (GetKeyLock(sKey))
{
if (_htCollectionCache.Keys.Contains(sKey) == true)
{
CollectionCacheEntry objCacheEntry = _htCollectionCache[sKey];
colItems = (List<T>) objCacheEntry.Collection;
objCacheEntry.LastAccess = DateTime.Now;
}
else
{
colItems = fGetCollectionDelegate();
SaveCollection<T>(sKey, colItems);
}
}
List<T> objReturnCollection = CloneCollection<T>(colItems);
return objReturnCollection;
}
public static List<Guid> FetchAndCache(string sKey, Func<List<Guid>> fGetCollectionDelegate)
{
List<Guid> colIds = new List<Guid>();
lock (GetKeyLock(sKey))
{
if (_htCollectionCache.Keys.Contains(sKey) == true)
{
CollectionCacheEntry objCacheEntry = _htCollectionCache[sKey];
colIds = (List<Guid>)objCacheEntry.Collection;
objCacheEntry.LastAccess = DateTime.Now;
}
else
{
colIds = fGetCollectionDelegate();
SaveCollection(sKey, colIds);
}
}
List<Guid> colReturnIds = CloneCollection(colIds);
return colReturnIds;
}
private static List<T> GetCollection<T>(string sKey) where T : IUniqueIdActiveRecord
{
List<T> objReturnCollection = null;
if (_htCollectionCache.Keys.Contains(sKey) == true)
{
CollectionCacheEntry objCacheEntry = null;
lock (GetKeyLock(sKey))
{
objCacheEntry = _htCollectionCache[sKey];
objCacheEntry.LastAccess = DateTime.Now;
}
if (objCacheEntry.Collection != null && objCacheEntry.Collection is List<T>)
{
objReturnCollection = CloneCollection<T>((List<T>)objCacheEntry.Collection);
}
}
return objReturnCollection;
}
public static void SaveCollection<T>(string sKey, List<T> colItems) where T : IUniqueIdActiveRecord
{
CollectionCacheEntry objCacheEntry = new CollectionCacheEntry();
objCacheEntry.Key = sKey;
objCacheEntry.CacheEntry = DateTime.Now;
objCacheEntry.LastAccess = DateTime.Now;
objCacheEntry.LastUpdate = DateTime.Now;
objCacheEntry.Collection = CloneCollection(colItems);
lock (GetKeyLock(sKey))
{
_htCollectionCache[sKey] = objCacheEntry;
}
}
public static void SaveCollection(string sKey, List<Guid> colIDs)
{
CollectionCacheEntry objCacheEntry = new CollectionCacheEntry();
objCacheEntry.Key = sKey;
objCacheEntry.CacheEntry = DateTime.Now;
objCacheEntry.LastAccess = DateTime.Now;
objCacheEntry.LastUpdate = DateTime.Now;
objCacheEntry.Collection = CloneCollection(colIDs);
lock (GetKeyLock(sKey))
{
_htCollectionCache[sKey] = objCacheEntry;
}
}
public static void UpdateCollection<T>(string sKey, List<T> colItems) where T : IUniqueIdActiveRecord
{
lock (GetKeyLock(sKey))
{
if (_htCollectionCache.ContainsKey(sKey) == true)
{
CollectionCacheEntry objCacheEntry = _htCollectionCache[sKey];
objCacheEntry.LastAccess = DateTime.Now;
objCacheEntry.LastUpdate = DateTime.Now;
objCacheEntry.Collection = new List<T>();
//Clone the collection before insertion to ensure it can't be touched
foreach (T objItem in colItems)
{
objCacheEntry.Collection.Add(objItem);
}
_htCollectionCache[sKey] = objCacheEntry;
}
else
{
SaveCollection<T>(sKey, colItems);
}
}
}
public static void UpdateItem<T>(string sKey, T objItem) where T : IUniqueIdActiveRecord
{
lock (GetKeyLock(sKey))
{
if (_htCollectionCache.ContainsKey(sKey) == true)
{
CollectionCacheEntry objCacheEntry = _htCollectionCache[sKey];
List<T> colItems = (List<T>)objCacheEntry.Collection;
colItems.RemoveAll(o => o.Id == objItem.Id);
colItems.Add(objItem);
objCacheEntry.Collection = colItems;
objCacheEntry.LastAccess = DateTime.Now;
objCacheEntry.LastUpdate = DateTime.Now;
}
}
}
public static void UpdateItems<T>(string sKey, List<T> colItemsToUpdate) where T : IUniqueIdActiveRecord
{
lock (GetKeyLock(sKey))
{
if (_htCollectionCache.ContainsKey(sKey) == true)
{
CollectionCacheEntry objCacheEntry = _htCollectionCache[sKey];
List<T> colCachedItems = (List<T>)objCacheEntry.Collection;
foreach (T objItem in colItemsToUpdate)
{
colCachedItems.RemoveAll(o => o.Id == objItem.Id);
colCachedItems.Add(objItem);
}
objCacheEntry.Collection = colCachedItems;
objCacheEntry.LastAccess = DateTime.Now;
objCacheEntry.LastUpdate = DateTime.Now;
}
}
}
public static void RemoveItemFromCollection<T>(string sKey, T objItem) where T : IUniqueIdActiveRecord
{
lock (GetKeyLock(sKey))
{
List<T> objCollection = GetCollection<T>(sKey);
if (objCollection != null && objCollection.Count(o => o.Id == objItem.Id) > 0)
{
objCollection.RemoveAll(o => o.Id == objItem.Id);
UpdateCollection<T>(sKey, objCollection);
}
}
}
public static void RemoveItemsFromCollection<T>(string sKey, List<T> colItemsToRemove) where T : IUniqueIdActiveRecord
{
lock (GetKeyLock(sKey))
{
Boolean bCollectionChanged = false;
List<T> objCollection = GetCollection<T>(sKey);
foreach (T objItem in colItemsToRemove)
{
if (objCollection != null && objCollection.Count(o => o.Id == objItem.Id) > 0)
{
objCollection.RemoveAll(o => o.Id == objItem.Id);
bCollectionChanged = true;
}
}
if (bCollectionChanged == true)
{
UpdateCollection<T>(sKey, objCollection);
}
}
}
public static void AddItemToCollection<T>(string sKey, T objItem) where T : IUniqueIdActiveRecord
{
lock (GetKeyLock(sKey))
{
List<T> objCollection = GetCollection<T>(sKey);
if (objCollection != null && objCollection.Count(o => o.Id == objItem.Id) == 0)
{
objCollection.Add(objItem);
UpdateCollection<T>(sKey, objCollection);
}
}
}
public static void AddItemsToCollection<T>(string sKey, List<T> colItemsToAdd) where T : IUniqueIdActiveRecord
{
lock (GetKeyLock(sKey))
{
List<T> objCollection = GetCollection<T>(sKey);
Boolean bCollectionChanged = false;
foreach (T objItem in colItemsToAdd)
{
if (objCollection != null && objCollection.Count(o => o.Id == objItem.Id) == 0)
{
objCollection.Add(objItem);
bCollectionChanged = true;
}
}
if (bCollectionChanged == true)
{
UpdateCollection<T>(sKey, objCollection);
}
}
}
public static void PurgeCollectionByMaxLastAccessInMinutes(int iMinutesSinceLastAccess)
{
DateTime dtThreshHold = DateTime.Now.AddMinutes(iMinutesSinceLastAccess * -1);
if (_dtLastPurgeCheck == null || dtThreshHold > _dtLastPurgeCheck)
{
lock (_objLockPeek)
{
CollectionCacheEntry objCacheEntry;
List<String> colKeysToRemove = new List<string>();
foreach (string sCollectionKey in _htCollectionCache.Keys)
{
objCacheEntry = _htCollectionCache[sCollectionKey];
if (objCacheEntry.LastAccess < dtThreshHold)
{
colKeysToRemove.Add(sCollectionKey);
}
}
foreach (String sKeyToRemove in colKeysToRemove)
{
_htCollectionCache.Remove(sKeyToRemove);
}
}
_dtLastPurgeCheck = DateTime.Now;
}
}
public static void ClearCollection(String sKey)
{
lock (GetKeyLock(sKey))
{
lock (_objLockPeek)
{
if (_htCollectionCache.ContainsKey(sKey) == true)
{
_htCollectionCache.Remove(sKey);
}
}
}
}
#region Helper Methods
private static object GetKeyLock(String sKey)
{
//Ensure even if hell freezes over this lock exists
//(Dictionary isn't safe for concurrent read/write, so both the check and the lookup happen under the peek lock)
lock (_objLockPeek)
{
if (_htLocksByKey.ContainsKey(sKey) == false)
{
_htLocksByKey[sKey] = new object();
}
return _htLocksByKey[sKey];
}
}
private static List<T> CloneCollection<T>(List<T> colItems) where T : IUniqueIdActiveRecord
{
List<T> objReturnCollection = new List<T>();
//Clone the list - NEVER return the internal cache list
if (colItems != null && colItems.Count > 0)
{
List<T> colCachedItems = (List<T>)colItems;
foreach (T objItem in colCachedItems)
{
objReturnCollection.Add(objItem);
}
}
return objReturnCollection;
}
private static List<Guid> CloneCollection(List<Guid> colIds)
{
List<Guid> colReturnIds = new List<Guid>();
//Clone the list - NEVER return the internal cache list
if (colIds != null && colIds.Count > 0)
{
List<Guid> colCachedItems = (List<Guid>)colIds;
foreach (Guid gId in colCachedItems)
{
colReturnIds.Add(gId);
}
}
return colReturnIds;
}
#endregion
#region Admin Functions
public static List<CollectionCacheEntry> GetAllCacheEntries()
{
return _htCollectionCache.Values.ToList();
}
public static void ClearEntireCache()
{
_htCollectionCache.Clear();
}
#endregion
}
public sealed class CollectionCacheEntry
{
public String Key;
public DateTime CacheEntry;
public DateTime LastUpdate;
public DateTime LastAccess;
public IList Collection;
}
Here is an example of how I use it:
public static class ResourceCacheController
{
#region Cached Methods
public static List<Resource> GetResourcesByProject(Guid gProjectId)
{
String sKey = GetCacheKeyProjectResources(gProjectId);
List<Resource> colItems = CollectionCacheManager.FetchAndCache<Resource>(sKey, delegate() { return ResourceAccess.GetResourcesByProject(gProjectId); });
return colItems;
}
#endregion
#region Cache Dependant Methods
public static int GetResourceCountByProject(Guid gProjectId)
{
return GetResourcesByProject(gProjectId).Count;
}
public static List<Resource> GetResourcesByIds(Guid gProjectId, List<Guid> colResourceIds)
{
if (colResourceIds == null || colResourceIds.Count == 0)
{
return null;
}
return GetResourcesByProject(gProjectId).FindAll(objRes => colResourceIds.Any(gId => objRes.Id == gId)).ToList();
}
public static Resource GetResourceById(Guid gProjectId, Guid gResourceId)
{
return GetResourcesByProject(gProjectId).SingleOrDefault(o => o.Id == gResourceId);
}
#endregion
#region Cache Keys and Clear
public static void ClearCacheProjectResources(Guid gProjectId)
{
CollectionCacheManager.ClearCollection(GetCacheKeyProjectResources(gProjectId));
}
public static string GetCacheKeyProjectResources(Guid gProjectId)
{
return string.Concat("ResourceCacheController.ProjectResources.", gProjectId.ToString());
}
#endregion
internal static void ProcessDeleteResource(Guid gProjectId, Guid gResourceId)
{
Resource objRes = GetResourceById(gProjectId, gResourceId);
if (objRes != null)
{
CollectionCacheManager.RemoveItemFromCollection(GetCacheKeyProjectResources(gProjectId), objRes);
}
}
internal static void ProcessUpdateResource(Resource objResource)
{
CollectionCacheManager.UpdateItem(GetCacheKeyProjectResources(objResource.Id), objResource);
}
internal static void ProcessAddResource(Guid gProjectId, Resource objResource)
{
CollectionCacheManager.AddItemToCollection(GetCacheKeyProjectResources(gProjectId), objResource);
}
}
Here's the Interface in question:
public interface IUniqueIdActiveRecord
{
Guid Id { get; set; }
}
Hope this helps. I've been through hell and back a few times to finally arrive at this solution, and for us it's been a godsend, but I cannot guarantee that it's perfect, only that we haven't found an issue yet.