
The pBFT algorithm demonstrated that it could provide both liveness and safety as long as no more than (n - 1) / 3 services in the distributed system were faulty; for example, a network of four services can tolerate one faulty service. pBFT cycles through a succession of “views,” with each view having one primary service acting as the leader and the remaining services acting as backups. At a conceptual level, the pBFT algorithm works as follows:
1. The client sends a request to the primary (leader) service.
2. The primary (leader) service broadcasts the request to all of the backup services.
3. The primary and the backup services perform the work requested and then send back a reply to the client.
4. The request is served successfully when the client receives m + 1 responses with the same result from different services spanning the distributed system, where m is the maximum number of faulty services allowed. (A client-side sketch of this quorum check appears just after this list.)
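To make the final step concrete, here’s a minimal sketch of the client-side quorum check: the client tallies replies until m + 1 distinct services report the same result. The Reply type and TryGetAgreedResult method are hypothetical names introduced for illustration; they aren’t part of any pBFT reference implementation.

using System.Collections.Generic;
using System.Linq;

// Hypothetical reply from a single service in the distributed system.
public class Reply
{
  public int ServiceId;
  public string Result;
}

public static class PbftClient
{
  // Returns the agreed result once m + 1 distinct services report the
  // same value (m = maximum number of faulty services); null until then.
  public static string TryGetAgreedResult(IEnumerable<Reply> replies, int m) =>
    replies.GroupBy(r => r.Result)
           .Where(g => g.Select(r => r.ServiceId).Distinct().Count() >= m + 1)
           .Select(g => g.Key)
           .FirstOrDefault();
}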
The primary (leader) service changes with every view (round of consensus) and may be replaced if a predefined amount of time passes without the leader broadcasting a request to the backups. As long as the leader is non-faulty, pBFT works reasonably well; however, the process of replacing a faulty leader is highly inefficient. A sketch of this timeout rule appears below.
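Assume each backup runs a timer that’s reset whenever the primary broadcasts; if the timer fires, the backup asks to move to the next view. All names here are hypothetical and the mechanics are simplified; a real pBFT view change involves an exchange of signed view-change messages among the backups.

using System;
using System.Threading;

public class ViewChangeMonitor
{
  private int view;                                            // current view number v
  private Timer timer;
  private readonly TimeSpan limit = TimeSpan.FromSeconds(15);  // assumed timeout

  public void Start() =>
    timer = new Timer(_ => RequestViewChange(), null, limit, Timeout.InfiniteTimeSpan);

  // Called whenever the primary broadcasts a request: the leader is
  // alive, so push the deadline out again.
  public void OnPrimaryRequest() => timer.Change(limit, Timeout.InfiniteTimeSpan);

  // The leader stayed silent too long: vote to move to view v + 1,
  // which rotates the primary role to another service.
  private void RequestViewChange() =>
    Console.WriteLine($"request view change to v={++view}");
}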
pBFT improved on the existing theory, but in practice it’s not suitable for real-world scenarios due to its inherent scalability challenges and its inability to distinguish malicious behavior from transient communication faults.
Delegated Byzantine Fault Tolerance
Enter Delegated Byzantine Fault Tolerance (dBFT), proposed in 2014 by Erik Zhang, founder of the NEO blockchain. dBFT extends pBFT concepts to the state machine replication scenario and provides the first practical, public access to fast single-block data finality (about 15 seconds). dBFT is now in use by the NEO blockchain, the Binance Exchange and other major platforms globally.
The key innovation in Zhang’s proposal was to distinguish consensus nodes (services that can participate in the consensus algorithm to propose new state changes and vote) from ordinary nodes (services that can execute the atomic transactions and transition state, but don’t take part in the consensus algorithm and can’t propose new state changes). In doing so, dBFT became the first practical BFT to function at scale, addressing the challenges inherent in pBFT.
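This role split can be expressed in a few lines of C#. The sketch below is illustrative only, with hypothetical type names; the actual NEO node implementation is structured quite differently.

// Illustrative only; type names are hypothetical.
public enum NodeRole
{
  Ordinary,   // executes transactions and transitions state
  Consensus   // additionally proposes new state changes and votes
}

public class Node
{
  public NodeRole Role;

  public bool CanVote         => Role == NodeRole.Consensus;
  public bool CanProposeBlock => Role == NodeRole.Consensus;
}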
The C# implementation of dBFT is available in the public domain (MIT License) on GitHub at bit.ly/2Zl1Sem.
The dBFT algorithm comprises three distinct phases: Pre-Prepare, Prepare and Persist; a sketch of how these phases might map onto the messages the consensus nodes exchange follows below. Before we explore each of these phases in detail, let’s take a moment to clarify terminology and the algorithmic steps involved.
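As an illustrative approximation only (the NEO source defines its own consensus message types), the three phases can be pictured as an exchange of these messages:

// Illustrative approximation; not the exact type from the NEO source.
public enum ConsensusMessageType : byte
{
  PrepareRequest,   // Pre-Prepare: the speaker proposes a new block
  PrepareResponse,  // Prepare: delegates validate the proposal and sign agreement
  Commit,           // Persist: nodes with enough signatures commit the block
  ChangeView        // fallback when the speaker is faulty or too slow
}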
N: The number of active consensus nodes.
f: The number of Byzantine (that is, malicious) nodes, with f being no more than (N - 1) / 3 (a worked example follows this list).
v: The current view number (each view is a new round or attempt at consensus).
b: The proposed block of atomic transactions, the execution of which transitions the system to the next valid state.
p: The index of the speaker, that is, the leader for this view, which proposes the block. The speaker and the remaining delegates together comprise the N consensus nodes.
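A quick worked example under these definitions: with NEO’s seven consensus nodes, f = (7 - 1) / 3 = 2, so up to two nodes can be Byzantine. dBFT is commonly described as requiring agreement from at least N - f nodes before a block is accepted, which here means five signatures; treat that quorum figure as an assumption to confirm against the source.

using System;

public static class DbftMath
{
  // Maximum tolerated Byzantine nodes: f = (N - 1) / 3 (integer division).
  public static int MaxFaulty(int n) => (n - 1) / 3;

  // Assumed quorum of N - f agreeing nodes for a block to be accepted.
  public static int Quorum(int n) => n - MaxFaulty(n);

  public static void Main()
  {
    Console.WriteLine(MaxFaulty(7)); // 2
    Console.WriteLine(Quorum(7));    // 5
  }
}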
At a conceptual level, dBFT comprises the following steps:
1. A cryptographically signed transaction is “broadcast” by a
client to the nodes in the distributed system.
2. The N consensus nodes receive the transaction and collect
them into their in-memory pool of transactions.
3. For the current view, the unique speaker p packages the trans-
Figure 1 The MakePrepareRequest Method
public ConsensusPayload MakePrepareRequest()
{
  byte[] buffer = new byte[sizeof(ulong)];
  random.NextBytes(buffer);
  // Pull verified transactions from the memory pool, up to the per-block limit.
  List<Transaction> transactions = Blockchain.Singleton.MemPool
    .GetSortedVerifiedTransactions()
    .Take((int)NativeContract.Policy.GetMaxTransactionsPerBlock(Snapshot))
    .ToList();
  TransactionHashes = transactions.Select(p => p.Hash).ToArray();
  Transactions = transactions.ToDictionary(p => p.Hash);
  Block.Timestamp = Math.Max(TimeProvider.Current.UtcNow.ToTimestampMS(),
    PrevHeader.Timestamp + 1);
  Block.ConsensusData.Nonce = BitConverter.ToUInt64(buffer, 0);
  // Sign the prepare request and cache it as this node's preparation payload.
  return PreparationPayloads[MyIndex] = MakeSignedPayload(new PrepareRequest
  {
    Timestamp = Block.Timestamp,
    Nonce = Block.ConsensusData.Nonce,
    TransactionHashes = TransactionHashes
  });
}
Figure 2 The SendPrepareRequest Method
private void SendPrepareRequest()
{
  Log($"send prepare request: height={context.Block.Index} view={context.ViewNumber}");
  // Broadcast the signed prepare request built in Figure 1.
  localNode.Tell(new LocalNode.SendDirectly { Inventory = context.MakePrepareRequest() });
  // A single-validator network can complete its preparations immediately.
  if (context.Validators.Length == 1) CheckPreparations();
  if (context.TransactionHashes.Length > 0)
  {
    // Advertise the proposed transactions so peers can fetch any they're missing.
    foreach (InvPayload payload in InvPayload.CreateGroup(InventoryType.TX,
      context.TransactionHashes))
      localNode.Tell(Message.Create(MessageCommand.Inv, payload));
  }
  // Arm the view-change timer; the timeout doubles with each failed view.
  ChangeTimer(TimeSpan.FromMilliseconds((Blockchain.MillisecondsPerBlock <<
    (context.ViewNumber + 1)) - (context.ViewNumber == 0 ?
    Blockchain.MillisecondsPerBlock : 0)));
}
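Note the expression passed to ChangeTimer: it implements an exponential back-off, giving each successive view twice as long before the next view change. Assuming NEO’s 15,000 ms block interval, the sketch below reproduces the arithmetic: view 0 waits one block interval (30,000 - 15,000 ms), view 1 waits 60,000 ms, view 2 waits 120,000 ms, and so on.

using System;

public static class TimerMath
{
  const int MillisecondsPerBlock = 15_000; // assumed NEO block interval

  // Mirrors the expression passed to ChangeTimer in SendPrepareRequest.
  public static int TimeoutMs(int viewNumber) =>
    (MillisecondsPerBlock << (viewNumber + 1)) -
    (viewNumber == 0 ? MillisecondsPerBlock : 0);

  public static void Main()
  {
    for (int v = 0; v <= 3; v++)
      Console.WriteLine($"view {v}: {TimeoutMs(v):N0} ms");
    // view 0: 15,000 ms; view 1: 60,000 ms; view 2: 120,000 ms; view 3: 240,000 ms
  }
}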