I'll go first.
I'm 100% in the set-operations camp. But what happens when the set logic over the entire desired input domain leads to such a large retrieval that it overwhelms the server? In those cases, along with what David B said, I, too, prefer the loop/table approach.
With that out of the way, one use case for cursors and the loop/table approach involves extremely large updates. Let's say you have to update 1 billion rows. In many instances, this doesn't need to be transactional. For example, it might be a data warehouse aggregation where you can reconstruct from source files if things go south.
In this case, it may be best to do the update in "chunks" of, say, 1 million or 10 million rows at a time. This keeps resource usage to a minimum and lets concurrent workloads keep running while you grind through that billion rows. A looped/chunked approach might be best here; billion-row updates on less-than-stellar hardware tend to cause problems.
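For illustration, here's a minimal T-SQL sketch of that pattern. The table (`dbo.FactSales`), column (`AggregatedFlag`), and batch size are all hypothetical stand-ins; the point is that the WHERE predicate shrinks as batches commit, so each pass is its own small transaction:

```sql
-- Chunked-update sketch: table and column names are hypothetical.
DECLARE @BatchSize int = 1000000;  -- tune to your hardware and log capacity

WHILE 1 = 1
BEGIN
    -- Each UPDATE is its own implicit transaction, so lock footprint
    -- and log growth stay bounded to one batch at a time.
    UPDATE TOP (@BatchSize) dbo.FactSales
    SET    AggregatedFlag = 1
    WHERE  AggregatedFlag = 0;  -- predicate excludes already-updated rows

    IF @@ROWCOUNT = 0 BREAK;    -- nothing left to update

    CHECKPOINT;  -- under SIMPLE recovery, lets log space be reused
END
```

Since each batch commits independently, a failure partway through leaves the table half-updated, which is exactly why this fits the non-transactional, rebuild-from-source scenario described above.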