The question: "I am running into a huge performance bottleneck when using Azure table storage. My desire is to use tables as a sort of cache, so a long process may result in anywhere from hund…"
Ok, 3rd answer's a charm?
http://blogs.msdn.com/b/windowsazurestorage/archive/2010/11/06/how-to-get-most-out-of-windows-azure-tables.aspx
A couple of things. First, on the storage emulator, from a friend who did some serious digging into it:
"Everything is hitting a single table in a single database (more partitions doesn't affect anything). Each table insert operation is at least 3 sql operations. Every batch is inside a transaction. Depending on transaction isolation level, those batches will have limited ability to execute in parallel.
Serial batches should be faster than individual inserts due to sql server behavior. (Individual inserts are essentially little transactions that each flush to disk, while a real transaction flushes to disk as a group)."
I.e., using multiple partitions doesn't affect performance on the emulator, while it does against real Azure storage.
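To illustrate why partitioning matters against real storage, here is a rough sketch (not code from the original answer) that groups entities by PartitionKey, chunks each group into batches of at most 100, and runs the batches in parallel. It assumes the classic Microsoft.WindowsAzure.Storage client; the class and method names are made up for the example.

    using System.Collections.Generic;
    using System.Linq;
    using System.Threading.Tasks;
    using Microsoft.WindowsAzure.Storage.Table;

    static class ParallelBatchInsert
    {
        // A batch must share a single PartitionKey and hold at most 100 operations.
        // Spreading inserts across several partitions and executing the batches in
        // parallel only pays off against real storage, not the emulator.
        public static void Run(CloudTable table, IEnumerable<ITableEntity> entities)
        {
            var batches = entities
                .GroupBy(e => e.PartitionKey)
                .SelectMany(group => group
                    .Select((entity, index) => new { entity, index })
                    .GroupBy(x => x.index / 100, x => x.entity))   // cap each batch at 100
                .Select(chunk =>
                {
                    var batch = new TableBatchOperation();
                    foreach (var entity in chunk)
                        batch.Insert(entity);
                    return batch;
                })
                .ToList();

            Parallel.ForEach(batches, batch => table.ExecuteBatch(batch));
        }
    }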
Also enable logging and look through your logs (c:\users\username\appdata\local\developmentstorage).
A batch size of 100 seems to offer the best real performance. Turn off Nagle, turn off Expect: 100-Continue, and beef up the connection limit.
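For the Nagle / Expect-100 / connection-limit tweaks, a minimal sketch using System.Net.ServicePointManager (my wording, not code from the answer); the value 100 for the connection limit is just a plausible number, not something the answer specifies.

    using System.Net;

    static class HttpTuning
    {
        // Call once at startup, before the first storage request goes out;
        // ServicePoint settings are captured per endpoint when it is first used.
        public static void Apply()
        {
            ServicePointManager.UseNagleAlgorithm = false;    // Nagle buffers small payloads and adds latency
            ServicePointManager.Expect100Continue = false;    // skip the extra 100-Continue round trip per request
            ServicePointManager.DefaultConnectionLimit = 100; // assumed value; the default of 2 throttles parallel batches
        }
    }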
Also make damn sure you are not accidentally inserting duplicates; a key conflict throws an error and slows everything way, way down.
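If duplicates can sneak in, one option (a sketch under the same classic-library assumption, not the answer's own code) is to de-duplicate on (PartitionKey, RowKey) before batching, or to use InsertOrReplace so a repeated key upserts instead of failing the whole batch.

    using System.Collections.Generic;
    using System.Linq;
    using Microsoft.WindowsAzure.Storage.Table;

    static class DedupedBatch
    {
        // Keeps the first entity seen for each (PartitionKey, RowKey) pair and
        // uses InsertOrReplace, so a stray duplicate upserts rather than throwing.
        public static TableBatchOperation Build(IEnumerable<ITableEntity> samePartitionEntities)
        {
            var unique = samePartitionEntities
                .GroupBy(e => new { e.PartitionKey, e.RowKey })
                .Select(g => g.First())
                .Take(100);                      // a batch holds at most 100 operations

            var batch = new TableBatchOperation();
            foreach (var entity in unique)
                batch.InsertOrReplace(entity);
            return batch;
        }
    }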
And test against real storage. There's a pretty decent library out there that handles most of this for you (http://www.nuget.org/packages/WindowsAzure.StorageExtensions/); just make sure you actually call ToList on the adds and such, as nothing executes until it is enumerated. That library uses DynamicTableEntity, so there's a small perf hit from the serialization, but it does let you use pure POCO objects with no TableEntity plumbing.
~ JT