consensus

How to make sense of Phase 2 in Paxos distributed consensus algorithm?

狂风中的少年 submitted on 2019-12-13 20:10:39
Question: I have pasted pseudocode for a Paxos algorithm here: What is a "view" in the Paxos consensus algorithm? and was wondering if someone could point me in the right direction. The algorithm says that each node has a "state" which contains a bunch of information the node should keep track of. Suppose we have two nodes: Node #1 and Node #2. In the simplest case, Node #2 joins Node #1 and they both play Paxos. What exactly happens to the states of Node #1 and Node #2 after 2 joins 1? When does the…
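The per-node state the question refers to can be sketched as a small record. The field names (num_h, num_a, val_a, views) come from the pseudocode linked in the question; the rest is an illustrative assumption, not a definitive implementation:

```python
from dataclasses import dataclass, field

@dataclass
class PaxosNodeState:
    num_h: int = 0        # highest proposal number seen in a prepare
    num_a: int = 0        # proposal number of the highest accepted proposal
    val_a: object = None  # value of that highest accepted proposal
    # map of past view numbers to the values chosen in those views
    views: dict = field(default_factory=dict)

# When Node #2 joins Node #1, each node simply holds an independent copy
# of this state; nothing changes in either copy until a Paxos round runs
# and the prepare/accept messages update num_h, num_a, and val_a.
node1, node2 = PaxosNodeState(), PaxosNodeState()
```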

What is a “view” in the Paxos consensus algorithm?

一个人想着一个人 submitted on 2019-12-11 08:23:32
Question: I have pasted pseudocode for a Paxos algorithm below and was wondering if someone could point me in the right direction. I am trying to implement the algorithm below, but I'm confused about what exactly "views" represents. I know the comment says it is a "map of past view numbers to values", but could someone explain what exactly these "values" are and what "view numbers" are? state: num_h: highest proposal # seen in a prepare num_a, val_a: highest value and proposal # which node…
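One plausible reading of the "map of past view numbers to values" is a plain dictionary keyed by completed view (round) number, whose entries record the value the group agreed on in each view. The concrete values below are invented purely for illustration:

```python
# "views" as a dict: view number -> value decided in that view.
views = {}

def record_decision(views, view_num, value):
    """Record the value chosen once a view completes."""
    views[view_num] = value

record_decision(views, 0, "initial-config")
record_decision(views, 1, "membership-after-join")

# A node that missed view 1 can later catch up by reading views[1]
# instead of re-running that round of the protocol.
```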

Does Paxos "ignore" the request for updating the value if it is not in sync with the highest proposal number sent by an acceptor?

主宰稳场 submitted on 2019-12-11 02:53:46
Question: The title here could be misleading. I will try my best to explain my doubt through an example. I am reading about the Paxos algorithm from the wiki and other sources. 1) Imagine a situation where a client's request to update a value (X in the example below) is processed. After one round of Paxos, a value Vb is chosen because the Acceptors' replies to the Proposers contain their previously accepted proposal number and the corresponding value. In the case below, the three acceptors send (8,Va), (9,Vb), (7,Vc) to…
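The rule the question describes can be sketched directly: in phase 2, the proposer must adopt the value attached to the highest previously accepted proposal number among the phase-1 replies, and may only propose the client's value if no acceptor reports a prior acceptance. A minimal sketch, assuming replies arrive as (accepted_num, accepted_val) pairs:

```python
def choose_value(replies, fallback):
    """replies: list of (accepted_num, accepted_val), where a reply of
    (None, None) means that acceptor has accepted nothing yet."""
    accepted = [r for r in replies if r[0] is not None]
    if not accepted:
        return fallback       # free to propose the client's value
    # bound to the value of the highest-numbered accepted proposal
    return max(accepted)[1]

# The question's example: replies (8,"Va"), (9,"Vb"), (7,"Vc") force the
# proposer to re-propose "Vb"; the client's update X is effectively ignored
# for this round and must be retried in a later instance.
chosen = choose_value([(8, "Va"), (9, "Vb"), (7, "Vc")], fallback="X")
```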

Does Corda really require a notary to achieve uniqueness consensus?

微笑、不失礼 submitted on 2019-12-10 23:06:01
Question: The Corda introduction to consensus says "uniqueness consensus is provided by notaries." Are we saying that without a notary it would be possible for A to convince B to commit a transaction to its ledger involving a state X as an input and, at the same time or later, convince C to commit a different transaction involving X to its ledger? In this situation the ledger of A would be inconsistent with that of C (or B, or both, depending on which transaction, if any, it chooses to commit), and A…

In RAFT is it possible to have a majority consensus on a log entry but the entry is not committed?

妖精的绣舞 submitted on 2019-12-10 09:52:25
Question: Consider this simulation on the official Raft webpage. Why is term 2 index 1 not committed despite S2 (the leader), S3, and S4 agreeing on the log? I ran this for multiple minutes to make sure all communication had taken place. Strangely, if I add one more log entry (term 6 index 2), then term 2 index 1 does get committed. Does anyone know what rule is preventing term 2 index 1 from being committed? Answer 1: Your leader is in term 6, but none of the log entries are from term 6; this invokes a…
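The rule the answer invokes is Raft's commitment restriction (Section 5.4.2 of the Raft paper): a leader only advances its commit index for entries from its own current term; entries from earlier terms are committed indirectly once a current-term entry above them is replicated to a majority. A minimal sketch of that check:

```python
def can_commit(entry_term, current_term, replicas_with_entry, cluster_size):
    """A leader may mark an entry committed only if a majority stores it
    AND the entry is from the leader's current term."""
    has_majority = replicas_with_entry > cluster_size // 2
    return has_majority and entry_term == current_term

# The simulation's situation: term-2 entry, leader now in term 6.
# A majority (3 of 5) stores it, yet it cannot be committed directly.
assert can_commit(2, 6, 3, 5) is False
# Appending a term-6 entry and replicating it to a majority commits the
# new entry, and everything below it, including the term-2 entry.
assert can_commit(6, 6, 3, 5) is True
```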

PBFT: Why can't the replicas perform the request after 2/3 have prepared? Why do we need the commit phase?

孤者浪人 submitted on 2019-12-09 19:39:58
Question: I know there are some questions on this website that ask the same thing, but the answer is never clear: in PBFT, why can't the replicas execute the requests after 2/3 have prepared? Why is the commit phase needed? If 2/3 + 1 replicas have agreed to prepare, then I would think they can execute the request without broadcasting again. Answer 1: (Edited) In addition to the previous (incomplete) answer, a quote from Practical Byzantine Fault Tolerance and Proactive Recovery might help. Note that…

Existence of a 0- and 1-valent configurations in the proof of FLP impossibility result

怎甘沉沦 submitted on 2019-12-09 05:21:50
Question: In the well-known paper Impossibility of Distributed Consensus with One Faulty Process (JACM 1985), FLP (Fischer, Lynch, and Paterson) proved the surprising result that no completely asynchronous consensus protocol can tolerate even a single unannounced process death. In Lemma 3, after showing that D contains both 0-valent and 1-valent configurations, it says: Call two configurations neighbors if one results from the other in a single step. By an easy induction, there exist neighbors C₀, C₁ ∈ C such…

How does Kafka handle network partitions?

情到浓时终转凉″ submitted on 2019-12-08 17:14:41
Question: Kafka has the concept of an in-sync replica set (ISR), which is the set of nodes that aren't too far behind the leader. What happens if the network cleanly partitions so that a minority containing the leader is on one side, and a majority containing the other in-sync nodes is on the other side? The minority/leader side presumably thinks that it lost a bunch of nodes, reduces the ISR size accordingly, and happily carries on. The other side probably thinks that it lost the leader, so it elects a new one…
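Part of Kafka's defense against the "happily carries on" scenario is the interaction of the producer's acks setting with min.insync.replicas: a leader must reject acks=all produce requests once the ISR falls below that threshold. The parameter names are real Kafka settings, but the logic below is a deliberate simplification of the broker's actual check, sketched for illustration:

```python
def can_ack_produce(isr_size, min_insync_replicas, acks):
    """Simplified broker-side check for acknowledging a produce request."""
    if acks != "all":
        return True  # acks=0/1 don't wait on the full ISR
    # acks=all requires the ISR to still satisfy min.insync.replicas
    return isr_size >= min_insync_replicas

# After the partition, if the minority leader's ISR shrinks to just itself
# and min.insync.replicas=2, durable (acks=all) writes are rejected rather
# than silently accepted on the losing side of the split.
```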

Who is a validating peer?

ⅰ亾dé卋堺 submitted on 2019-12-08 13:17:10
Question: I don't see a definition of the terms Validating Peer and Non-Validating Peer in the Glossary. It is important to have this definition, as a good deal of literature seems to depend on these types of peers. Coming to my main question: looking at the blockchain as a data store, it is clear that this data store will expose functions to change and read the state of its store. Therefore, is the validating peer an entity that will verify that X was the prior state, T was the transaction…

PBFT: Why can't the replicas perform the request after 2/3 have prepared? Why do we need the commit phase?

时间秒杀一切 submitted on 2019-12-04 17:38:30
Question: I know there are some questions on this website that ask the same thing, but the answer is never clear: in PBFT, why can't the replicas execute the requests after 2/3 have prepared? Why is the commit phase needed? If 2/3 + 1 replicas have agreed to prepare, then I would think they can execute the request without broadcasting again.

Answer: (Edited) In addition to the previous (incomplete) answer, a quote from Practical Byzantine Fault Tolerance and Proactive Recovery might help. Note that the author claims the prepare phase is enough for ordering requests within the same view, but it is not enough for…
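The quorum arithmetic behind the question can be made concrete. With n = 3f + 1 replicas, a replica is "prepared" once it has the pre-prepare plus 2f matching prepare messages, but execution waits until it has collected 2f + 1 matching commits: prepare fixes the order within a view, while commit is what lets that order survive a view change. A minimal sketch of those two predicates:

```python
def prepared(f, matching_prepares, have_preprepare):
    """Prepared: pre-prepare from the primary plus 2f matching prepares
    from distinct backups (so 2f+1 replicas agree on the order)."""
    return have_preprepare and matching_prepares >= 2 * f

def committed_local(f, matching_commits):
    """Committed-local: 2f+1 matching commits, after which the replica
    may safely execute the request."""
    return matching_commits >= 2 * f + 1

# f = 1 (n = 4): 2 prepares plus the pre-prepare reach "prepared",
# but the replica still needs 3 commit messages before executing.
```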