I know there are some questions on this website that ask the same thing, but the answers are never clear:
In PBFT, why can't the replicas execute the requests after 2/3 have prepared? Why is the commit phase needed? If 2f + 1 replicas have agreed to prepare, then I would think they could execute the request without broadcasting again?
(Edited) In addition to the previous (incomplete) answer, a quote from *Practical Byzantine Fault Tolerance and Proactive Recovery* might help. Note that the author states that the prepare phase is enough to order requests within the same view, but it is not enough to order requests across view changes, and that is why the commit phase is needed:
> This ensures that replicas agree on a total order for requests in the same view but it is not sufficient to ensure a total order for requests across view changes. Replicas may collect prepared certificates in different views with the same sequence number and different requests. The commit phase solves this problem as follows.
Clients' requests must be totally ordered and executed in exactly the same order on every replica. Replicas reach consensus on the order of requests in the prepare phase by collecting prepare messages up to the quorum size you mentioned, but they do not execute right away in that phase, because every replica has to execute the same request in the same order. (In a State Machine Replication system, all the state machines have to deterministically execute the same requests in the same order to satisfy the safety condition; execution order affects each state machine's state.)
So in the commit phase, the replicas reach consensus on the execution timing, so that they all execute the same request at the same logical point and the safety condition holds.
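To make the two quorum checks concrete, here is a minimal sketch of a replica's bookkeeping (not the paper's full protocol; all names, message shapes, and the `Replica` class are illustrative, not from any real PBFT implementation):

```python
from collections import defaultdict

F = 1                  # tolerated Byzantine faults (illustrative)
QUORUM = 2 * F + 1     # 2f + 1 matching messages

class Replica:
    def __init__(self):
        # (view, seq, digest) -> set of senders
        self.prepares = defaultdict(set)
        self.commits = defaultdict(set)
        self.executed = []

    def on_prepare(self, view, seq, digest, sender):
        self.prepares[(view, seq, digest)].add(sender)
        if len(self.prepares[(view, seq, digest)]) == QUORUM:
            # "prepared": the order is fixed within this view only,
            # so broadcast COMMIT instead of executing right away
            return ("COMMIT", view, seq, digest)
        return None

    def on_commit(self, view, seq, digest, sender):
        self.commits[(view, seq, digest)].add(sender)
        if len(self.commits[(view, seq, digest)]) == QUORUM:
            # "committed-local": the order now survives view changes,
            # so it is finally safe to execute the request
            self.executed.append((seq, digest))
```

The point of the sketch is the asymmetry: reaching the prepare quorum only triggers another broadcast, and execution happens only after the commit quorum.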
Following your comment "Once the replicas received 2/3 prepared, they can commit": if they did, the internal states of the state machines (PBFT nodes) could diverge, violating the safety condition. That is why the commit phase is needed.
Answer to your comment:
The situation above is possible when the replicas execute a request as soon as they collect a quorum of prepare messages. The important point is that PBFT assumes partial synchrony: messages can be arbitrarily delayed by an unstable network or an adversary, but are eventually delivered. So each replica can execute the request at a different point in time, and one such example situation is illustrated.
Answer to your second comment:
I think I need to elaborate by illustrating a coordinated attack by malicious replicas, including the primary. Say there are n replicas, where n = 3f + 1 = 100 and f = 33, in a Byzantine fault tolerant system; the system can tolerate f Byzantine faulty replicas. Now I give a counter-example to answer your question. Consider the following setting, where the n replicas are partitioned into three groups:
- G1 = {b1, b2, ..., b33}: Byzantine faulty replicas, including the Byzantine primary (b1), |G1| = 33
- G2 = {r1, r2, ..., r33}: correct replicas, |G2| = 33
- G3 = {r34, r35, ..., r67}: correct replicas, |G3| = 34
Because n = |G1| + |G2| + |G3| = 33 + 33 + 34 = 100, this partition is valid. G1 is entirely controlled, in a coordinated way, by an attacker who is intent on breaking the protocol.
Now I will demonstrate how this setting violates the safety condition if the commit phase is removed from the protocol. (The safety condition here means that the states of G2 and G3 must remain the same.) For simplicity, the consensus value is a single binary value rather than a request with a sequence number.
- [Pre-Prepare phase]: The primary (b1) sends value 0 to G2 and value 1 to G3. This is possible because we assume a Byzantine primary.
- [Prepare phase]: Replicas in G2 and G3 now exchange the message they received from the primary to check that they both hold the same one. In this phase, the replicas in G1 send value 0 to G2 and value 1 to G3. After the exchange, the situation is as follows:
a. Replicas in G2 receive 33 + 33 = 66 votes for value 0 (from G1 and G2) and 34 votes for value 1 (from G3).
b. Replicas in G3 receive 33 votes for value 0 (from G2) and 33 + 34 = 67 votes for value 1 (from G1 and G3).
Because the quorum size is 2f + 1 = 67, the replicas in G3 accept the value proposed by the Byzantine primary (who coordinates with the Byzantine replicas), while the replicas in G2 do not.
So even though the system can tolerate up to 33 Byzantine faulty replicas, including the primary, it immediately fails under your assumption.
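The tallies above can be checked with a few lines of arithmetic (the variable names are mine, and this assumes each replica counts its own value along with the received ones):

```python
N, F = 100, 33
QUORUM = 2 * F + 1   # 67

g1 = 33   # Byzantine replicas: send 0 to G2, 1 to G3
g2 = 33   # correct replicas that received 0 from the primary
g3 = 34   # correct replicas that received 1 from the primary

# Votes observed by a replica in G2: G1's 0s + G2's own 0s vs G3's 1s
g2_votes_for_0 = g1 + g2          # 66
g2_votes_for_1 = g3               # 34

# Votes observed by a replica in G3: G2's 0s vs G1's 1s + G3's own 1s
g3_votes_for_0 = g2               # 33
g3_votes_for_1 = g1 + g3          # 67

print(g2_votes_for_0 >= QUORUM)   # False: G2 never reaches a quorum
print(g3_votes_for_1 >= QUORUM)   # True:  G3 prepares value 1
```

Only G3 reaches the 2f + 1 quorum, so skipping the commit phase would let G3 execute a value that the rest of the correct replicas never agreed to.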
Source: https://stackoverflow.com/questions/51125238/pbft-why-cant-the-replicas-perform-the-request-after-2-3-have-prepared-why-do