So I've seen a lot of articles now claiming that double-checked locking in C++, commonly used to prevent multiple threads from trying to initialize a lazily created singleton, is broken.
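For context, the pattern in question usually looks something like the sketch below (the class and member names here are mine, purely for illustration). The unsynchronized first check of the pointer is exactly where the trouble described below comes in:

```cpp
#include <mutex>

// Naive double-checked locking for a lazily created singleton.
// The first check reads s_instance with no synchronization at all,
// so nothing stops the CPU (or compiler) from reordering the
// construction and the publication of the pointer -- that is the "broken" part.
class Singleton {
public:
    static Singleton* instance() {
        if (s_instance == nullptr) {                    // first check, no lock
            std::lock_guard<std::mutex> lock(s_mutex);
            if (s_instance == nullptr) {                // second check, under the lock
                s_instance = new Singleton();           // construction and pointer write may be reordered
            }
        }
        return s_instance;
    }

private:
    Singleton() = default;
    static Singleton* s_instance;
    static std::mutex s_mutex;
};

Singleton* Singleton::s_instance = nullptr;
std::mutex Singleton::s_mutex;
```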
There's some great reading about this (although it's .NET/C# oriented) here: http://msdn.microsoft.com/en-us/magazine/cc163715.aspx
What it boils down to is that you need to be able to tell the CPU that it cannot reorder your reads/writes for this variable (ever since the original Pentium, the CPU can reorder certain instructions if it thinks the logic would be unaffected), and that it needs to ensure the caches stay consistent. Don't forget about those: we devs get to pretend that all memory is just one flat resource, but in reality each CPU core has its own cache, some of it unshared (L1), some of it sometimes shared (L2). Your initialization might write to main RAM, but another core might still have the uninitialized value in its cache. If you don't have any concurrency semantics, the CPU may not know that its cached copy is stale.
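To make that concrete for C++ readers: since C++11 the standard exposes exactly these knobs as atomics and memory fences. This is only a minimal sketch of the idea under that assumption (the variable names are made up), not a drop-in fix for any particular singleton:

```cpp
#include <atomic>

int payload = 0;                 // the data one thread initializes
std::atomic<bool> ready{false};  // the flag the other thread checks

void producer() {
    payload = 42;                                         // ordinary write
    std::atomic_thread_fence(std::memory_order_release);  // earlier writes may not be reordered past this
    ready.store(true, std::memory_order_relaxed);         // publish the flag
}

void consumer() {
    while (!ready.load(std::memory_order_relaxed)) {}     // wait for the flag
    std::atomic_thread_fence(std::memory_order_acquire);  // later reads may not be reordered before this
    int value = payload;  // guaranteed to see 42, even if it started out in another core's cache
    (void)value;
}
```

The pair of fences is what tells the CPU (and the compiler) that the write to payload must be visible before the flag is, and that the read of payload must not be hoisted above the flag check.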
I don't know the C++ side, but in .NET you would designate the variable as volatile in order to protect access to it (or you would use the memory read/write barrier methods in System.Threading).
As an aside, I've read that in .NET 2.0, double-checked locking is guaranteed to work without "volatile" variables (for any .NET readers out there) -- that doesn't help you with your C++ code.
If you want to be safe, you will need to do the C++ equivalent of marking a variable as volatile in C#.
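If it helps, here is a hedged sketch of what I believe that C++ equivalent looks like in C++11 and later: the pointer becomes a std::atomic, loaded with acquire ordering and stored with release ordering, so the object is fully constructed before any other thread can see the pointer (again, the names are illustrative):

```cpp
#include <atomic>
#include <mutex>

class Singleton {
public:
    static Singleton* instance() {
        Singleton* p = s_instance.load(std::memory_order_acquire);  // first check, synchronized
        if (p == nullptr) {
            std::lock_guard<std::mutex> lock(s_mutex);
            p = s_instance.load(std::memory_order_relaxed);         // second check, under the lock
            if (p == nullptr) {
                p = new Singleton();
                s_instance.store(p, std::memory_order_release);     // publish only after construction completes
            }
        }
        return p;
    }

private:
    Singleton() = default;
    static std::atomic<Singleton*> s_instance;
    static std::mutex s_mutex;
};

std::atomic<Singleton*> Singleton::s_instance{nullptr};
std::mutex Singleton::s_mutex;
```

Also worth knowing: since C++11, initialization of a function-local static is itself guaranteed to be thread-safe, which often lets you drop the hand-rolled double-checked locking entirely.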