I want to create a web service hosted in Windows Azure. The clients will upload files for processing, the cloud will process those files and produce resulting files, and the clients will then download the results.
I believe this problem is not technology-specific.
Since your processing jobs are long-running, I suggest these jobs report their progress during execution. That way, a job which has not reported progress for a substantial duration becomes a clear candidate for cleanup and can then be restarted on another worker role.
How you record progress and do the job swapping is up to you. One approach is to use a database as the recording mechanism and create an agent worker process that polls the job-progress table. If the agent detects a problem, it can take corrective action.
Another approach would be to associate the worker role's identity with the long-running job. The worker roles can then communicate their health status using some sort of heartbeat.
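As a minimal sketch of the progress-table idea, combined with the heartbeat/role-identity approach: the `jobs` table, its columns, and the ten-minute threshold are all assumptions for illustration, and SQLite is used purely as a stand-in for whatever database you choose.

```python
import sqlite3
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(minutes=10)  # assumed threshold for "no progress reported"

def report_progress(db, job_id, role_instance_id):
    """Called periodically by the worker while it processes a job."""
    db.execute(
        "UPDATE jobs SET role_instance_id = ?, last_heartbeat_utc = ? "
        "WHERE job_id = ? AND status = 'running'",
        (role_instance_id, datetime.now(timezone.utc).isoformat(), job_id),
    )
    db.commit()

def reclaim_stale_jobs(db):
    """Run by the agent/watchdog: hand back jobs whose heartbeat is too old."""
    cutoff = (datetime.now(timezone.utc) - STALE_AFTER).isoformat()
    db.execute(
        "UPDATE jobs SET status = 'pending', role_instance_id = NULL "
        "WHERE status = 'running' AND last_heartbeat_utc < ?",
        (cutoff,),
    )
    db.commit()
```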
Had the jobs not been long-running, you could have recorded each job's start time instead of a status flag and used a simple timeout to determine whether processing had failed.
The problem you describe is best handled with Azure Queues, as Azure Table Storage won't give you any type of management mechanism.
Using Azure Queues, you set a timeout when you get an item off the queue (default: 30 seconds). Once you read a queue item (e.g. "process file x waiting for you in blob at url y"), that queue item becomes invisible for the period you specified, which means other worker role instances won't try to grab it at the same time. Once you complete processing, you simply delete the queue item.
Now: let's say you're almost done but haven't deleted the queue item yet, and all of a sudden your role instance unexpectedly crashes (or the hardware fails, or you're rebooted for some reason). The queue-item processing code has now stopped. Eventually, once the timeout you set has elapsed since the item was originally read, the queue item becomes visible again. One of your worker role instances will read it once more and can process it.
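A minimal sketch of that read/process/delete cycle, using the current Python Storage SDK (`azure-storage-queue`) rather than the SDK of the time; the queue name and `process_file()` are placeholders:

```python
import os
from azure.storage.queue import QueueClient

conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
queue = QueueClient.from_connection_string(conn_str, "file-jobs")  # assumed queue name

# Hide each message from other workers for 30 seconds while we work on it.
for msg in queue.receive_messages(messages_per_page=1, visibility_timeout=30):
    process_file(msg.content)  # hypothetical processing step
    # Delete only after processing succeeds; if we crash before this line,
    # the message reappears on the queue once the visibility timeout expires.
    queue.delete_message(msg)
```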
A few things to keep in mind:
EDIT: Per Ryan's answer - Azure queue messages max out at a 2-hour timeout. Service Bus queue messages have a far greater timeout. That feature just went CTP a few days ago.
The main issue you have is that queues cannot set a visibility timeout larger than 2 hours today. So, you need another mechanism to indicate that active work is in progress. I would suggest a blob lease. For every file you process, you either lease the blob itself or a 0-byte marker blob. Your workers scan the available blobs and attempt to lease them. If a worker gets the lease, the file is not being processed, so it goes ahead and processes it. If it fails to get the lease, another worker must be actively working on it.
Once the worker has completed processing the file, it simply copies the file into another container in blob storage (or deletes it if you wish) so that it is not scanned again.
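A rough sketch of that scan-and-lease pattern, again assuming the current Python SDK (`azure-storage-blob`); the container names and `process_blob()` are placeholders, not anything from the original answer:

```python
import os
from azure.core.exceptions import HttpResponseError
from azure.storage.blob import ContainerClient

conn_str = os.environ["AZURE_STORAGE_CONNECTION_STRING"]
inbox = ContainerClient.from_connection_string(conn_str, "incoming-files")  # assumed
done = ContainerClient.from_connection_string(conn_str, "processed-files")  # assumed

for props in inbox.list_blobs():
    blob = inbox.get_blob_client(props.name)
    try:
        # A finite lease runs 15-60 seconds and must be renewed to stay alive.
        lease = blob.acquire_lease(lease_duration=30)
    except HttpResponseError:
        continue  # someone else holds the lease, i.e. is already working on this file
    process_blob(blob)  # hypothetical processing step (keep renewing the lease meanwhile)
    # Copy into the "done" container so it is never scanned again, then remove the original.
    done.get_blob_client(props.name).start_copy_from_url(blob.url)
    blob.delete_blob(lease=lease)
```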
Leases are really your only answer here until queue messages can be renewed.
edit: I should clarify that the reason leases work here is that a lease must be actively maintained (renewed roughly every 30 seconds), so there is only a very small window before you know whether a worker has died or is still working on the file.
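For completeness, a sketch of keeping the lease alive while the work runs, using a background thread as the heartbeat; `lease`, `blob`, and `process_blob()` refer to the previous sketch, and the 20-second renewal interval is an assumption:

```python
import threading

def keep_lease_alive(lease, stop_event, interval=20):
    """Renew the lease every `interval` seconds until processing signals completion."""
    while not stop_event.wait(interval):
        lease.renew()  # if this worker dies, renewal stops and the lease soon expires

stop = threading.Event()
heartbeat = threading.Thread(target=keep_lease_alive, args=(lease, stop), daemon=True)
heartbeat.start()
try:
    process_blob(blob)  # hypothetical long-running work
finally:
    stop.set()
    heartbeat.join()
```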
Your role's OnStop() could be part of the solution, but there are some circumstances (hardware failure) where it won't get called. To cover that case, have your OnStart() mark everything tagged with the same RoleInstanceID as abandoned; OnStart() wouldn't be running if that instance were still working on anything. (Luckily, you can observe that Azure reuses its role instance IDs.)
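A sketch of that cleanup step, reusing the hypothetical `jobs` table from the first example; OnStart() and RoleInstanceID are the .NET worker role APIs, so here the instance ID is simply passed in as a string:

```python
def mark_abandoned_on_start(db, role_instance_id):
    """Run once at instance start-up: anything this instance ID left 'running'
    must have been interrupted, so hand it back for reprocessing."""
    db.execute(
        "UPDATE jobs SET status = 'pending', role_instance_id = NULL "
        "WHERE status = 'running' AND role_instance_id = ?",
        (role_instance_id,),
    )
    db.commit()
```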