Are there any algorithms that are commonly used for achieving eventual consistency in distributed systems?
Algorithms have been developed for ACID transactions in distributed systems, Paxos in particular, but is there a similar body of theory for BASE scenarios, with weaker consistency guarantees?
Edit: This appears to be an area of academic research that is only beginning to be developed. Mcdowella's answer shows that there has been at least some work in this area.
If "Anti-entropy protocols for repairing replicated data, which operate by comparing replicas and reconciling differences." fits your definition look at http://en.wikipedia.org/wiki/Gossip_protocol
BASE and weaker consistency boil down to the convergence of copies in a replication scenario. There is a large literature on replication in distributed systems, with either eager or lazy replication, with group or master copy, etc.
Consensus is a problem that can be formulated precisely, and several solutions/algorithms have been proposed for it. Lazy replication with convergence of copies cannot be pinned down in the same way; I feel it's more of an architectural issue. But as I just said, there is a large body of work on replication and distributed storage, which might be what you are looking for. One way convergence is made precise in that literature is to design the replica state so that merging copies always yields the same result, as in the sketch below.
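Here is a minimal sketch of that idea using a grow-only counter, a standard state-based CRDT example (my choice of illustration, not something named in the question): because merge is commutative, associative, and idempotent, copies converge no matter how often or in what order replicas exchange state.

```python
class GCounter:
    """State-based grow-only counter: one slot per replica, merge takes per-slot maxima."""
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}

    def increment(self, amount=1):
        # Each replica only ever bumps its own slot.
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + amount

    def merge(self, other):
        # Per-replica maximum: commutative, associative, idempotent,
        # so lazy, repeated, out-of-order exchanges all converge to the same state.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)

    def value(self):
        return sum(self.counts.values())

# Two replicas accept writes independently, then lazily exchange state.
r1, r2 = GCounter("r1"), GCounter("r2")
r1.increment(3)
r2.increment(5)
r1.merge(r2)
r2.merge(r1)
assert r1.value() == r2.value() == 8
```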
Nevertheless, here are a few links which I found interesting:
Source: https://stackoverflow.com/questions/2038282/are-there-any-general-algorithms-for-achieving-eventual-consistency-in-distribut