
Cogsworth: Byzantine View Synchronization

Published on Oct 22, 2021


Most methods for Byzantine fault tolerance (BFT) in the partial synchrony setting divide the local state of the nodes into views, and the transition from one view to the next dictates a leader change. In order to provide liveness, all honest nodes need to stay in the same view for a sufficiently long time. This requires view synchronization, a requisite of BFT that we extract and formally define here.

Existing approaches for Byzantine view synchronization incur quadratic communication (in n, the number of parties). A cascade of O(n) view changes may thus result in O(n^3) communication complexity. This paper presents a new Byzantine view synchronization algorithm named Cogsworth that has optimistically linear communication complexity and constant latency. Faced with benign failures, Cogsworth has expected linear communication and constant latency.

The result here serves as an important step towards reaching solutions that have overall quadratic communication, the known lower bound on Byzantine fault tolerant consensus. Cogsworth is particularly useful for a family of BFT protocols that already exhibit linear communication under various circumstances, but suffer quadratic overhead due to view synchronization.

Keywords: Distributed systems

1. Introduction

Logical synchronization is a requisite for progress to be made in asynchronous state machine replication (SMR). Previous Byzantine fault tolerant (BFT) synchronization mechanisms incur quadratic message complexities, frequently dominating over the linear cost of the consensus cores of BFT solutions. In this work, we define the view synchronization problem and provide the first solution in the Byzantine setting, whose latency is bounded and communication cost is linear, under a broad set of scenarios.

1.1 Background and Motivation

Many practical reliable distributed systems do not rely on network synchrony, because networks go through outages and periods of Distributed Denial-of-Service (DDoS) attacks, and because synchronous protocols have hard-coded steps that wait for a maximum delay. Instead, asynchronous replication solutions via state machine replication (SMR) [1] usually optimize for stability periods. This approach is modeled as partial synchrony [2]. It allows for periods of asynchrony in which progress might be compromised, but consistency never is.

In the crash-failure model, this paradigm underlies most successful industrial solutions, for example, the Google Chubby lock service [1], Yahoo’s ZooKeeper [1], etcd [3], Google’s Spanner [1], Apache Cassandra [4], and others. The algorithmic cores of these systems, e.g., Paxos [5], Viewstamped Replication [6], or Raft [7], revolve around a view-based paradigm. In the Byzantine model, where parties may act arbitrarily, this paradigm underlies many blockchain systems, including VMware’s Concord [8], Hyperledger Fabric [9], Cypherium [10][11], Celo [12], PaLa [13], and Libra [14]. The algorithmic cores of these BFT systems are view-based, e.g., PBFT [15], SBFT [16], and HotStuff [17].

The advantage of the view-based paradigm is that each view has a designated leader from the parties that can drive a decision efficiently. Indeed, in both models, there are protocols that have per-view linear message and communication complexity, which is optimal.

In order to guarantee progress, nodes must give up when a view does not reach a decision after a certain timeout period. Mechanisms for changing the view whose communication is linear exist both for the crash model (all the above) and, recently, for the Byzantine model (HotStuff [17]). An additional requirement for progress is that all nodes overlap in the same view for a sufficiently long period. Unfortunately, all of the above protocols incur quadratic message complexity for view synchronization.

In order to address this, we first define the view synchronization problem independently of any specific protocol and in a fault-model agnostic manner. We then introduce a view synchronization algorithm called Cogsworth whose message complexity is linear in expectation, as well as in the worst case under a broad set of conditions.

1.2 The View Synchronization Problem

We introduce the problem of view synchronization. All nodes start at view zero. A view change occurs as an interplay between the synchronizer, which implements a view synchronization algorithm, and the outer consensus solution. The consensus solution signals that it wishes to end the current view via a wish_to_advance() notification. The synchronizer eventually invokes a propose_view(v) signal to indicate when a new view v starts. View synchronization requires eventually bringing all honest nodes to execute the same view for a sufficiently long time, so that the outer consensus protocol can drive progress.

The two measures of interest to us are latency and communication complexity between these two events. Latency is measured only during periods of synchrony, when a bound δ on message transmission delays is known to all nodes, and is expressed in δ units.

View synchronization extends the PaceMaker abstraction presented in [17], formally defines the problem it solves, and captures it as a separate component. It is also related to the seminal work of Chandra & Toueg [18], [19] on failure detectors. Like failure detectors, it is an abstraction capturing the conditions under which progress is guaranteed, without involving explicit engineering details such as packet transmission delays, timers, and computation. Specifically, Chandra & Toueg define a leader election abstraction, denoted Ω, where eventually all non-faulty nodes trust the same non-faulty node as the leader. Ω was shown to be the weakest failure detector needed in order to solve consensus. Whereas Chandra & Toueg’s seminal work focuses on the possibility/impossibility of an eventually elected leader, here we care about how long it takes for a good leader to emerge (i.e., the latency), at what communication cost, and how to do so repeatedly, allowing the extension of single-shot consensus to SMR.

We tackle the view synchronization problem against asynchrony and the most severe type of faults, Byzantine [20][21]. This makes the synchronizers we develop particularly suited for Byzantine Fault Tolerance (BFT) consensus systems relevant in today’s cryptoeconomic systems.

More specifically, we assume a system of n nodes that need to form a sequence of consensus decisions that implement SMR. We assume up to f < n/3 nodes are Byzantine, the upper bound on the number of Byzantine nodes under which Byzantine agreement is solvable [22]. The challenge is that during “happy” periods, progress might be made among a group of Byzantine nodes cooperating with a “fast” sub-group of the honest nodes. Indeed, many solutions advance when a leader succeeds in proposing a value to a quorum of 2f+1 nodes, but it is possible that only the f+1 “fast” honest nodes learn it and progress to the next view. The remaining f “slow” honest nodes might stay behind, and may not even advance views at all. Then at some point, the f Byzantine nodes may stop cooperating. A mechanism is needed to bring the “slow” nodes to the same view as the f+1 “fast” ones.

Thus, our formalism and algorithms may be valuable for the consensus protocols mentioned above, as well as others, such as Casper [23] and Tendermint [24][25], which reported problems around liveness [26][27].

1.3 View Synchronization Algorithms

We first extract two synchronization mechanisms that borrow from previous BFT consensus protocols, casting them into our formalism and analyzing them.

One is a straw-man mechanism that requires no communication at all and achieves synchronization albeit with unbounded latency. This synchronizer works simply by doubling the duration of each view. Eventually, it guarantees a sufficiently long period in which all the nodes are in the same view.
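The doubling argument can be made concrete with a small simulation; everything below (the start times, the overlap target) is an illustrative sketch of the mechanism, not part of any protocol:

```python
def view_interval(start, k):
    # With doubling durations 1, 2, 4, ..., view k occupies
    # [start + 2^k - 1, start + 2^(k+1) - 1): earlier views sum to 2^k - 1.
    return (start + 2**k - 1, start + 2**(k+1) - 1)

def first_synchronized_view(starts, min_overlap):
    """Smallest view k that all nodes share for at least min_overlap time."""
    k = 0
    while True:
        lo = max(view_interval(s, k)[0] for s in starts)  # last node to enter k
        hi = min(view_interval(s, k)[1] for s in starts)  # first node to leave k
        if hi - lo >= min_overlap:
            return k, hi - lo
        k += 1

# Nodes begin execution at different (hypothetical) times; no messages are
# sent, yet view durations eventually dwarf the start-time spread.
k, overlap = first_synchronized_view([0.0, 3.0, 7.5], min_overlap=10)
```

For a start-time spread s, the shared portion of view k is 2^k − s, so synchronization is reached once 2^k exceeds s plus the required overlap; the latency is unbounded because s is not known in advance.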

The second is the broadcast-based synchronization mechanism built into PBFT [15] and similar Byzantine protocols, such as [16]. This synchronizer borrows from the Bracha reliable broadcast algorithm [28]. Once a node hears of f+1 nodes who wish to enter the same view, it relays the wish reliably so that all the honest nodes enter the view within a bounded time.
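The amplification step of this synchronizer can be sketched as follows; the function and state names are ours, and signature verification is elided:

```python
def on_wish(state, sender, view, f):
    """Record a wish for `view`; relay once f+1 distinct senders back it.

    state maps view -> set of senders seen so far. Returns True exactly when
    this call triggers the one-time relay: f+1 wishes guarantee that at least
    one honest node wished to advance, so relaying is safe.
    """
    seen = state.setdefault(view, set())
    already_relayed = len(seen) > f
    seen.add(sender)
    return not already_relayed and len(seen) > f

state = {}
# Duplicate wishes from the same sender ("b") do not count twice.
fired = [on_wish(state, s, view=7, f=2) for s in ["a", "b", "b", "c", "d"]]
```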

The properties of these synchronizers in terms of latency and communication costs are summarized in Table 1. For brevity, these algorithms and their analysis are deferred to Appendix A.

Table 1: Comparison of the different protocols for view synchronization. t is the number of failures, δ is the upper bound on message delivery after GST.

Cogsworth: leader-based synchronizer

The main contribution of our work is Cogsworth, which is a leader-based view synchronization algorithm. Cogsworth utilizes views that have an honest leader to relay messages, instead of broadcasting them. When a node wishes to advance a view, it sends the message to the leader of the view, and not to all the other nodes. If the leader is honest, it will gather the messages from the nodes and multicast them (send the same message to all the other nodes) using a threshold signature [29][30][31], incurring only a linear communication cost. The protocol implements additional mechanisms to advance views despite faulty leaders.

The latency and communication complexity of this algorithm depend on the number of actual failures and their type. In the best case, the latency is constant and communication is linear. Faced with t benign failures, the communication is linear in expectation and O(t·n) in the worst case, as mandated by the lower bound of Dolev & Reischuk [32]; the latency is constant in expectation and O(t·δ) in the worst case. Byzantine failures do not change the latency, but they can drive the communication to an expected O(n^2) complexity and in the worst case up to O(t·n^2). It remains open whether a worst-case linear synchronizer whose latency is constant is possible.

To summarize, Cogsworth performs just as well as a broadcast-based synchronizer in terms of latency and message complexity, and in certain scenarios shows up to O(n) better results in terms of message complexity. Table 1 summarizes the properties of all three synchronizers.

1.4 Contributions

The contributions of this paper are as follows:

  • To the best of our knowledge, this is the first paper to formally define the problem of view synchronization.

  • It includes two natural synchronizer algorithms cast into this framework and uses them as a basis for comparison.

  • It introduces Cogsworth, a leader-based Byzantine synchronizer exhibiting linear communication complexity and constant latency, both in the faultless case and in expectation.


The rest of this paper is structured as follows: Section 2 discusses the model; Section 3 formally presents the view synchronization problem; Section 4 presents the Cogsworth view synchronization algorithm with a formal correctness proof, latency, and communication cost analysis; Section 5 describes real-world implementations where the view synchronization algorithms can be integrated; Section 6 presents related work; and Section 7 concludes the paper. The descriptions of the two natural view synchronization algorithms, view doubling and broadcast-based, are presented in Appendix A.

2. Model

We follow the eventually synchronous model [2] in which the execution is divided into two durations: first, an unbounded period of asynchrony, where messages do not have a bounded time until delivered; and then, a period of synchrony, where messages are delivered within a bounded time, denoted δ. The switch between the first and second periods occurs at a moment named the Global Stabilization Time (GST). We assume all messages sent before GST arrive at or before GST + δ.

Our model consists of a set Π = {P_i}_{i=1}^{n} of n nodes, and a known mapping Leader(·): ℕ → Π that continuously rotates among the nodes. Formally, ∀j ≥ 0: ⋃_{i=j}^{∞} Leader(i) = Π. We use a cryptographic signing scheme, a public key infrastructure (PKI) to validate signatures, as well as a threshold signing scheme [29][30][31]. The threshold signing scheme is used in order to create a compact signature of k-of-n nodes and is used in other consensus protocols such as [30]. Usually k = f+1 or k = 2f+1.

We assume a non-adaptive adversary who can corrupt up to f < n/3 nodes at the beginning of the execution. This corruption is done without knowledge of the mapping Leader(·). The set of remaining n−f honest nodes is denoted H. We assume the honest nodes may start their local execution at different times.

In addition, as in [1][30], we assume the adversary is polynomial-time bounded, i.e., the probability it will break the cryptographic assumptions in this paper (e.g., the cryptographic signatures, threshold signatures, etc.) is negligible.

3. Problem Definition

We define a synchronizer, which solves the view synchronization problem, to be a long-lived task with an API that includes a wish_to_advance() operation and a propose_view(v) signal, where v ∈ ℕ. Nodes may repeatedly invoke wish_to_advance(), and in return get a possibly infinite sequence of propose_view(·) signals. Informally, the synchronizer should be used by a high-level abstraction (e.g., a BFT state machine replication protocol) to synchronize view numbers in the following way: all nodes start in view 0, and whenever they wish to move to the next view they invoke wish_to_advance(). However, they move to view v only when they get a propose_view(v) signal.
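A minimal sketch of this API, with class and callback names of our own choosing; the `LocalSynchronizer` below is a trivial single-node, failure-free stand-in used only to exercise the interface, not one of the paper's synchronizers:

```python
from abc import ABC, abstractmethod
from typing import Callable

class Synchronizer(ABC):
    """View synchronization API: the consensus layer calls wish_to_advance();
    the synchronizer answers, possibly much later, with propose_view(v)."""

    def __init__(self, propose_view: Callable[[int], None]):
        self.propose_view = propose_view  # upcall into the consensus layer
        self.propose_view(0)              # all nodes start in view 0

    @abstractmethod
    def wish_to_advance(self) -> None: ...

class LocalSynchronizer(Synchronizer):
    """Single-node stand-in: every wish is granted immediately."""

    def __init__(self, propose_view: Callable[[int], None]):
        self.view = 0
        super().__init__(propose_view)

    def wish_to_advance(self) -> None:
        self.view += 1
        self.propose_view(self.view)

views = []
sync = LocalSynchronizer(views.append)
for _ in range(3):
    sync.wish_to_advance()
```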

Formally, a time interval I consists of a starting time t_1, an ending time t_2 ≥ t_1, and all the time points between them. I’s length is |I| = t_2 − t_1. We say I′ ⊆ I″ if I′ begins after or when I″ begins, and ends before or when I″ ends. We denote by t^{prop}_{P,v} the time when node P gets the signal propose_view(v), and assume that all nodes get propose_view(0) at the beginning of their execution. We denote by t = 0 the time when the last honest node began its execution, formally max_{P ∈ H} t^{prop}_{P,0} = 0. We further denote by Δ^{exec}_{P,v} the time interval in which node P is in view v, i.e., Δ^{exec}_{P,v} begins at t^{prop}_{P,v} and ends at t^{end}_{P,v} ≜ min_{v′ > v} {t^{prop}_{P,v′}}. We say node P is at view v at time t, or executes view v at time t, if t ∈ Δ^{exec}_{P,v}.
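These definitions translate directly into code; the propose_view timestamps below are hypothetical, and views may be skipped (a node can jump from view 1 to view 3):

```python
def exec_interval(prop_times, v):
    """Delta^exec_{P,v}: begins at t^prop_{P,v} and ends at the earliest
    t^prop_{P,v'} with v' > v (or never, if no later view was proposed)."""
    start = prop_times[v]
    later = [t for u, t in prop_times.items() if u > v]
    end = min(later) if later else float("inf")
    return start, end

def executes_at(prop_times, v, t):
    """True iff node P is at view v at time t, i.e. t in Delta^exec_{P,v}."""
    start, end = exec_interval(prop_times, v)
    return start <= t < end

# Hypothetical propose_view(v) times for one node; view 2 was skipped.
prop = {0: 0.0, 1: 4.0, 3: 9.0}
```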

We are now ready to define the two properties that any synchronizer must achieve. The first property, named view synchronization, ensures that there is an infinite number of views with an honest leader that all the correct nodes execute for a sufficiently long time:

Property 1 (View Synchronization): For every c ≥ 0 there exist α > 0 and an infinite number of time intervals and views {I_k, v_k}_{k=1}^{∞}, such that if the interval between every two consecutive calls to wish_to_advance() by an honest node is α, then for any k ≥ 1 and any P ∈ H the following holds:

  1. |I_k| ≥ c

  2. I_k ⊆ Δ^{exec}_{P,v_k}

  3. Leader(v_k) ∈ H

The second property ensures that a synchronizer will only signal a new view if an honest node wished to advance to it. Formally:

Property 2 (Synchronization Validity): The synchronizer signals propose_view(v′) only if there exist an honest node P ∈ H and some view v s.t. P calls wish_to_advance() at least v′ − v times while executing view v.


The parameter α, which is used in Property 1, is the time an honest node waits between two successive invocations of wish_to_advance(), and may differ between view synchronization algorithms. This parameter is needed to make sure that wish_to_advance() is called an infinite number of times in an infinite run. In reality, it is likely that in most view synchronization algorithms α is larger than some value d which is a function of the message delivery bound δ, and also of c from Property 1, i.e., the synchronization algorithm will work for any α ≥ d(δ, c). In this case, a consensus protocol using the synchronizer can execute the same view as long as progress is made, and trigger a new view synchronization in case liveness is lost. See Appendix A.3 for concrete examples.

The requirement that the leader of all the synchronized views is honest is needed to ensure that once a view is synchronized, the leader of that view will drive progress in the upper-layer protocol, thus ensuring liveness. Without this condition, a synchronizer might only synchronize views with faulty leaders.

Synchronization validity (Property 2) ensures that the synchronizer does not suggest a new view to the upper-layer protocol unless an honest node running that upper-layer protocol wanted to advance to that view.

Latency and communication complexity

In order to define how the latency and communication complexity are calculated, we first define I_k^{start} to be the time at which the k-th view synchronization is reached. Formally, I_k^{start} ≜ max_{P ∈ H} {t^{prop}_{P,v_k}}, where v_k is defined according to Property 1.

With this we can define the latency of a synchronizer implementation:

Definition 3.1 (Synchronizer Latency): The latency of a synchronizer is defined as lim_{ℓ→∞} ((I_1^{start} − GST) + Σ_{k=2}^{ℓ} (I_k^{start} − I_{k−1}^{start})) / ℓ.
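Over a finite prefix of ℓ synchronizations this average can be computed directly; note that the sum telescopes, so it equals (I_ℓ^{start} − GST) / ℓ. A small sketch with hypothetical synchronization times:

```python
def synchronizer_latency(gst, sync_starts):
    """Finite-prefix version of Definition 3.1: sync_starts[k-1] is
    I_k^start, the time the k-th view synchronization is reached."""
    l = len(sync_starts)
    total = (sync_starts[0] - gst) + sum(
        b - a for a, b in zip(sync_starts, sync_starts[1:])
    )
    return total / l  # telescopes to (sync_starts[-1] - gst) / l

# Hypothetical times: GST at 10, synchronizations at 14, 20, 26, 34.
latency = synchronizer_latency(gst=10.0, sync_starts=[14.0, 20.0, 26.0, 34.0])
```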

Next, in order to define communication complexity, we first need to introduce a few more notations. Let M_{P, v_1 → v_2} be the total number of messages P sent between t^{prop}_{P,v_1} and t^{prop}_{P,v_2}. In addition, denote by M_{P, → v} the total number of messages sent by P between the beginning of P’s execution and t^{prop}_{P,v}.

With this, we define the communication complexity of a synchronizer implementation:

Definition 3.2 (Synchronizer Communication Complexity): Denote by v_k the k-th view in which view synchronization occurs (Property 1). The message communication cost of a synchronizer is defined as lim_{ℓ→∞} (Σ_{P ∈ H} M_{P, → v_1} + Σ_{k=2}^{ℓ} (Σ_{P ∈ H} M_{P, v_{k−1} → v_k})) / ℓ.

This concludes the formal definition of the view synchronization problem. Next, we present Cogsworth, a view synchronization algorithm with expected constant latency and linear communication complexity in a variety of scenarios.

4. Cogsworth: Leader-Based Synchronizer

Before presenting Cogsworth, it is worth mentioning that we assume that all messages between nodes are signed and verified; for brevity, we omit the details about the cryptographic signatures. In the algorithm, when a node collects messages from x senders, it is implied that these messages carry x distinct signatures. We also assume that the Leader(·) mapping is based on a permutation of the nodes such that every f+1 consecutive views have at least one honest leader, e.g., Leader(v) = (v mod n) + 1. The algorithm can be easily altered for a scenario where this is not the case.
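This property of the round-robin mapping is easy to check mechanically: any f+1 consecutive views have f+1 distinct leaders, and at most f of them can be Byzantine. A quick sanity check (the Byzantine set is an arbitrary example):

```python
def leader(v, n):
    return (v % n) + 1  # round-robin over node ids 1..n

n = 7
f = (n - 1) // 3                  # f = 2, so f < n/3
byzantine = {2, 5}                # any set of at most f node ids
for v in range(3 * n):            # every window of f+1 consecutive views...
    leaders = {leader(u, n) for u in range(v, v + f + 1)}
    assert leaders - byzantine    # ...contains at least one honest leader
```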

4.1 Overview

Cogsworth is a new approach to view synchronization that leverages leaders to optimistically achieve linear communication. The key idea is that instead of nodes broadcasting synchronization messages all-to-all and incurring quadratic communication, nodes send messages to the leader of the view they wish to enter. If the leader is honest, it will relay a single broadcast containing an aggregate of all the messages it received, thus incurring only linear communication.

If the leader of a view v is Byzantine, it might not help as a relay. In this case, the nodes time out and then try to enlist the leaders of subsequent views, one by one, up to view v+f+1, to help with relaying. Since at least one of those leaders is honest, one of them will successfully relay the aggregate.

The full protocol is presented in Algorithm 1, and consists of several message types. The first two are sent from a node to a leader. They are used to signal to the leader that the node is ready to advance to the next stage in the protocol. Those messages are named “WISH, v” and “VOTE, v”, where v is the view the message refers to.

Algorithm 1

The other two message types are sent from leaders to nodes. The first is called “TC, v” (short for “Time Certificate”) and is sent when the leader receives f+1 “WISH, v” messages; the second is called “QC, v” (short for “Quorum Certificate”) and is sent when the leader receives 2f+1 “VOTE, v” messages. In both cases, a leader aggregates the messages it receives using threshold signatures, such that each broadcast message from the leader contains only one signature.
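Certificate formation at the leader can be sketched with signer-id sets standing in for the aggregated threshold signature; all names here are ours, and a real implementation would verify and combine actual signature shares:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Certificate:
    kind: str           # "TC" (from f+1 WISH) or "QC" (from 2f+1 VOTE)
    view: int
    signers: frozenset  # stand-in for one aggregated threshold signature

def try_form(kind, view, messages, f):
    """Leader-side aggregation: messages are (sender, view) pairs standing in
    for signed WISH/VOTE shares; a certificate forms at the threshold."""
    threshold = f + 1 if kind == "TC" else 2 * f + 1
    signers = frozenset(s for s, u in messages if u == view)
    return Certificate(kind, view, signers) if len(signers) >= threshold else None

f = 1  # n = 4
tc = try_form("TC", 8, [(1, 8), (2, 8), (2, 8)], f)  # 2 distinct signers >= f+1
qc = try_form("QC", 8, [(1, 8), (2, 8), (3, 7)], f)  # only 2 votes for view 8
```

Note that duplicate shares from the same sender count once, matching the requirement that collected messages carry distinct signatures.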

The general flow of the protocol is as follows: When wish_to_advance() is invoked, the node sends “WISH, v” to Leader(v), where v is the view succeeding curr (Line 5). Next, there are two options: (i) If Leader(v) forms a “TC, v”, it broadcasts it to all nodes (Line 7). The nodes then respond with a “VOTE, v” message to the leader (Line 10). (ii) Otherwise, if 2δ time elapses after sending “WISH, v” to Leader(v) without receiving “TC, v”, a node gives up and sends “WISH, v” to the next leader, i.e., Leader(v+1) (Line 24). It then waits again 2δ before forwarding “WISH, v” to Leader(v+2), and so on, until “TC, v” is received.

Whenever “TC, v” has been received, a node sends “VOTE, v” (even if it did not send “WISH, v”) to Leader(v). Additionally, as above, it enlists leaders one by one until a “QC, v” is obtained. Here, the node sends leaders “TC, v” as well as “VOTE, v”. When a node finally receives “QC, v” from a leader, it enters view v immediately (Line 17).
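On the happy path (an honest Leader(v) and timely delivery), this flow costs 4n messages per synchronization: n WISH, n TC, n VOTE, and n QC. A minimal accounting sketch (names are ours; message handling is collapsed into direct send calls):

```python
class CountingNetwork:
    """Counts point-to-point sends; a broadcast is n individual sends."""
    def __init__(self):
        self.messages = 0
    def send(self, _src, _dst, _payload):
        self.messages += 1

def happy_path_sync(n, v):
    net = CountingNetwork()
    leader = (v % n) + 1
    nodes = range(1, n + 1)
    for p in nodes:                      # every node wishes to enter v
        net.send(p, leader, ("WISH", v))
    for p in nodes:                      # leader relays one aggregated TC
        net.send(leader, p, ("TC", v))
    for p in nodes:                      # nodes vote for v at the leader
        net.send(p, leader, ("VOTE", v))
    for p in nodes:                      # leader relays one aggregated QC
        net.send(leader, p, ("QC", v))
    return net.messages

msgs = happy_path_sync(n=10, v=3)  # 4n = 40 messages, i.e., O(n)
```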

4.2 Correctness

We will prove that Cogsworth achieves eventual view synchronization (Property 1) for any α ≥ 4δ, as well as synchronization validity (Property 2). Thus, the claims and lemmas below assume this.

We start by proving that if an honest node entered a new view, and the leader of that view is honest, then all the other honest nodes will also enter that view within a bounded time.

Claim 4.1: After GST, if an honest node enters view v at time t, and the leader of view v is honest, then all the honest nodes enter view v by t+4δ, i.e., if Leader(v) ∈ H then max_{P_i ∈ H} {t^{prop}_{P_i,v}} − min_{P_j ∈ H} {t^{prop}_{P_j,v}} ≤ 4δ.

PROOF: Let P_i be the first honest node that entered view v at time t. P_i entered view v since it received “QC, v” from Leader(r) such that v ≤ r ≤ v+f+1 (Line 17).

If r = v then we are done, since when Leader(v) sent “QC, v” it also sent it to all the other honest nodes (Line 16); it will be received by t+δ, and all the honest nodes will enter view v.

Next, if r > v then the only way for Leader(r) to send “QC, v” is if it gathered 2f+1 “VOTE, v” messages, meaning at least f+1 of the “VOTE, v” messages were sent by honest nodes. An honest node will send a “VOTE, v” message only after first receiving “TC, v” from Leader(r′) s.t. v ≤ r′ ≤ v+f+1 (Line 10).

Since upon receiving a “TC, v” an honest node sends the “TC, v” to Leader(v) (Line 12), Leader(v) will receive “TC, v” by t+δ and will forward it to all other nodes by t+2δ; they will send “VOTE, v” to Leader(v) by t+3δ, and by t+4δ all honest nodes will receive “QC, v” from Leader(v) and enter view v.

Next, assuming an honest node entered a new view, we bound the time it takes for at least f+1 honest nodes to enter the same view. Note that this time we do not assume anything about the leader of the new view, and it might be Byzantine.

Claim 4.2: After GST, when an honest node enters view v at time t, at least f+1 honest nodes enter view v by t+2δ(f+2), i.e., after GST for every v there exists a group S of honest nodes s.t. |S| ≥ f+1 and max_{P_i ∈ S} {t^{prop}_{P_i,v}} − min_{P_j ∈ S} {t^{prop}_{P_j,v}} ≤ 2δ(f+2).

PROOF: Let P_i be the first node that entered view v at time t. P_i entered v since it received “QC, v” from Leader(r), where v ≤ r ≤ v+f+1 (Line 17). If Leader(r) is honest then we are done, since Leader(r) multicast “QC, v” to all honest nodes (Line 16), and all honest nodes will enter view v by t+δ.

Next, if Leader(r) is Byzantine, then it might have sent “QC, v” to a subset of the honest nodes, potentially only to P_i. In order to form a “QC, v”, Leader(r) had to receive 2f+1 “VOTE, v” messages (Line 14), meaning that at least f+1 honest nodes sent “VOTE, v” to Leader(r). Denote by S the group of those f+1 honest nodes.

Each node in S sent a “VOTE, v” message since it received “TC, v” from Leader(r′) for v ≤ r′ ≤ v+f+1 (Line 10). Note that different nodes in S might have received “TC, v” from a different leader, i.e., Leader(r′) might not be the same leader for each node in S.

After a node in S sent “VOTE, v”, it will either receive a “QC, v” within 2δ and enter view v, or time out after 2δ and send “VOTE, v” with “TC, v” to Leader(v+1) (Line 30). The nodes will continue to do so when not receiving “QC, v” for the next f+1 views after v. This ensures that at least one honest leader will receive “TC, v” after at most t+2δf+δ. Then, this honest leader will multicast the “TC, v” it received (Line 7), and at most by t+2δ(f+1), all the honest nodes will receive “TC, v”. The honest nodes will then send “VOTE, v” to the honest leader, which will be able to create a “QC, v” and multicast it. The “QC, v” will thus be received by all the honest nodes by t+2δ(f+2) and we are done.

Next, we show that during the execution, an honest node will enter some new view.

Claim 4.3: After GST, some honest node P_i enters a new view.

PROOF: From Claim 4.2, if an honest node enters some view v, the time by which at least f other honest nodes also enter v is bounded. Eventually, those honest nodes will time out and wish_to_advance() will be invoked (Line 5), which will cause them to send “WISH, v+1” to Leader(v+1).

If Leader(v+1)\text{Leader}(v+1) is honest, then it will send TC,v+1\text{``} \textsf{TC},v+1\text{''} to all the nodes (Line 7), followed by the leader sending QC,v+1\text{``} \textsf{QC},v+1\text{''}, and all honest nodes will enter view v+1v+1.

If Leader(v+1)\text{Leader}(v+1) is not honest, then the protocol dictates that the honest nodes that wished to enter v+1v+1 will continue to forward their WISH,v+1\text{``}\textsf{WISH},v+1\text{''} message to the next leaders (up to Leader(v+f+1)\text{Leader}(v+f+1)) until each of them receives TC,v+1.\text{``} \textsf{TC},v+1\text{''}. This is guaranteed since at least one of those f+1f+1 leaders is honest.

The same process is then followed for QC,v+1\text{``} \textsf{QC},v+1\text{''} (Line 28), and eventually all of those f+1f+1 honest nodes will enter view v+1v+1.

Lemma 4.4: Cogsworth\text{Cogsworth} achieves eventual view synchronization (Property 1).

PROOF: From Claim 4.3, an honest node eventually enters a new view, and by Claim 4.2 at least f+1f+1 honest nodes will enter the same view within a bounded time. By applying Claim 4.3 recursively, eventually a view with an honest leader is reached, and by Claim 4.1 all honest nodes will enter that view within 4δ4\delta.

Thus, for any c0c \ge 0, if the Cogsworth\text{Cogsworth} protocol is run with α=4δ+c\alpha= 4\delta+ c it is guaranteed that all honest nodes will eventually execute the same view for I=c\left| \mathcal{I}\right| = c.

The above arguments can be applied inductively, i.e., there exists an infinite number of such intervals and views in which view synchronization is reached, ensuring also that the synchronized views have an honest leader.

Lemma 4.5: Cogsworth\text{Cogsworth} achieves synchronization validity (Property 2).

PROOF: To enter a new view vv, QC,v\text{``} \textsf{QC},v\text{''} is needed, which consists of 2f+12f+1 VOTE,v\text{``} \textsf{VOTE},v \text{''} messages, i.e., at least f+1f+1 are from honest nodes. An honest node sends a VOTE,v\text{``} \textsf{VOTE},v \text{''} message only when it receives a TC,v\text{``} \textsf{TC},v\text{''} message, which requires f+1f+1 WISH,v\text{``}\textsf{WISH},v\text{''} messages, meaning at least one of those messages came from an honest node.

An honest node sends WISH,v\text{``}\textsf{WISH},v\text{''} only when the upper-layer protocol invokes wish_to_advance()\textsf{wish\_to\_advance}() while it is in view v1v-1.

This concludes the proof that Cogsworth\text{Cogsworth} is a synchronizer for any α4δ\alpha\ge 4\delta. Similar to the broadcast-based synchronizer, it allows upper-layer protocols to determine the time they spend in each view.

4.3 Latency and communication

Let vmaxGSTv^{\text{GST}}_{\textit{max}} be the maximum view an honest node is in at GST\text{GST}, and let XX denote the number of consecutive Byzantine leaders after vmaxGSTv^{\text{GST}}_{\textit{max}}. Assuming that leaders are randomly allocated to views, XX is a geometrically distributed random variable with mean n/(nf)n / (n-f). In the worst case of t=f=n/3t = f = \left\lfloor n/3 \right\rfloor, this gives E(X)=(3f+1)/(2f+1)3/2{\mathbb{E}(X) = (3f+1)/(2f+1) \approx 3/2}.
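The claimed expectation can be checked numerically; a small sketch, where the function name is illustrative and exact arithmetic is used to avoid rounding:

```python
from fractions import Fraction

# Expected number of consecutive Byzantine leaders after v_max^GST, assuming
# leaders are assigned to views uniformly at random (sketch of the text's claim).
def expected_consecutive_byzantine(n, f):
    p_honest = Fraction(n - f, n)  # probability a random leader is honest
    return 1 / p_honest            # mean of a geometric distribution: n / (n - f)

f = 10
n = 3 * f + 1                      # worst case t = f = floor(n/3)
print(expected_consecutive_byzantine(n, f))  # 31/21, i.e., (3f+1)/(2f+1) ≈ 3/2
```

For any f, (3f+1)/(2f+1) lies strictly below 3/2 and approaches it as f grows, which is the constant used in the latency analysis below.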

When f+1f+1 honest nodes at view vv want to advance to view v+1v+1 and Leader(v+1)\text{Leader}(v+1) is honest, all honest nodes enter view v+1v+1 in constant time (Claim 4.1); hence the latency for view synchronization, in general, is O(Xδ)O(X {\cdot} \delta). By the same reasoning, this also holds for any interval between two view synchronizations (see Definition 3.1).

In the worst-case of X=tX = t, where tt is the number of actual failures during the run, then latency is linear in the view duration, i.e., O(tδ)O(t {\cdot} \delta). But, in the expected case of a constant number of consecutive Byzantine leaders after vmaxGSTv^{\text{GST}}_{\textit{max}}, the expected latency is O(δ)O(\delta).

For communication complexity, there is a difference between Byzantine failures and benign ones. If a Byzantine leader of a view rr obtains TC,v\text{``} \textsf{TC},v\text{''} for r(f+1)vrr-(f+1) \le v \le r, then it can forward TC,v\text{``} \textsf{TC},v\text{''} to all the f+1f+1 leaders that follow view vv, and those leaders will multicast the message (Line 7), leading to expected O(n2)O(n^2) communication complexity when there is at least one Byzantine leader after vmaxGSTv^{\text{GST}}_{\textit{max}}. In the worst case of a cascade of tt failures after vmaxGSTv^{\text{GST}}_{\textit{max}}, the communication complexity is O(tn2)O(t{\cdot}n^2).

In the case of benign failures, communication complexity depends on XX, since the first correct leader after vmaxGSTv^{\text{GST}}_{\textit{max}} will get all nodes to enter its view and achieve view synchronization; the benign leaders before it only cause delays in terms of latency, but do not increase the overall number of messages sent. Thus, in general, the communication complexity with benign failures is O(Xn)O(X {\cdot} n). In the worst case of X=tX = t the communication complexity is O(tn)O(t {\cdot} n), but in the average case it is linear, i.e., O(n)O(n). By the same reasoning, this also holds between any consecutive occurrences of view synchronization (see Definition 3.2).

To sum up, the expected latency under both benign and Byzantine failures is O(δ)O(\delta), and worst-case O(tδ){O(t {\cdot} \delta)}. Communication complexity under Byzantine failures is optimistically O(n)O(n), expected O(n2){O(n^2)}, and worst-case O(tn2)O(t{\cdot}n^2); under benign failures it is expected O(n)O(n) and worst-case O(tn)O(t {\cdot} n).


Cogsworth\text{Cogsworth} achieves expected constant latency and linear communication under a broad set of assumptions. It is another step in the direction of reaching the quadratic communication lower bound of Byzantine consensus in an asynchronous model [32].

In addition to Cogsworth\text{Cogsworth} we present in Appendix A two more view synchronization algorithms. The first one is view doubling, where nodes simply double their view duration when entering a new view, which guarantees that eventually all nodes will be in the same view for sufficiently long. The other algorithm is borrowed from consensus protocols such as PBFT [15] and SBFT [16]. In Appendix A.3 we present a comprehensive discussion on all three algorithms.

5. Usages and Implementations of Synchronizers

In this section, we describe real-world usages of view synchronization algorithms. The terms “phase,” “round,” and “view” are often used interchangeably in different works. In this work, “view” means that all the nodes agree on some integer value, mapped to a specific node that acts as the leader.

Some SMR protocols do not change the leader as long as it drives progress in the protocol. This corresponds to all the nodes staying in the same view, and this view can be divided into many phases; e.g., in PBFT [15] a single-shot consensus consists of two phases. In an SMR protocol based on PBFT, a view can consist of many more phases, all with the same leader as long as progress is made, and there is no bound on the view duration.

As mentioned in Section 1.2, HotStuff [17] encapsulates the view synchronization logic in a module named a PaceMaker, but provides neither a formal definition of what the PaceMaker does nor an implementation. The most developed work that adopted HotStuff as the core of its consensus protocol is LibraBFT [33]. In LibraBFT, a module also named a PaceMaker is in charge of advancing views. In this module, whenever a node times out in its current view, say view vv, it sends a message named “TimeoutMsg, vv,” and whenever it receives 2f+12f+1 of these messages, it advances to view vv. In addition, the node sends an aggregated signature of these messages to the leader of view vv, which, according to the paper, guarantees that if the leader of vv is honest, all other nodes will enter view vv within 2δ2 \delta. The current implementation of the PaceMaker has linear communication as long as leaders are honest, but quadratic upon reaching a view with a Byzantine one. The latency is constant.

Many other works on consensus rely on view synchronization as part of their design. For example, in [34] a doubling view synchronization technique is used: “For the view-change process, each replica will start with a timeout δ\delta and double this timeout after each view-change (exponential backoff). When communication becomes reliable, exponential backoff guarantees that all replicas will eventually view-change to the same view at the same time.”

6. Related Work

View synchronization in consensus protocols

The idea of doubling round duration to cope with partial synchrony borrows from the DLS work [2], and has been employed in PBFT [15] and in various works based on DLS/PBFT [33][25][17]. In these works, nodes double the length of each view when no progress is made. The broadcast-based synchronization algorithm is also employed as part of the consensus protocol in works such as PBFT.

HotStuff [17] encapsulates view synchronization in a separate module named a PaceMaker. Here, we provide a formal definition, concrete solutions, and performance analysis of such a module. HotStuff is the core consensus protocol of various works such as Cypherium [11], PaLa [13], and LibraBFT [33]. Other consensus protocols such as Tendermint [25] and Casper [23] reported issues related to the liveness of their design [26][27].

Notion of time in distributed systems

Causal ordering is a notion designed to give partial ordering to events in a distributed system. The most known protocols to provide such ordering are Lamport Timestamps [35] and vector clocks [36]. Both works assume a non-crash setting.

Another line of work stemmed from Awerbuch’s work on synchronizers [37]. The synchronizer in Awerbuch’s work is designed to allow an algorithm that is designed to run in a synchronous network to run in an asynchronous network without any changes to the synchronous protocol itself. This work is orthogonal to the work in this paper.

Recently, Ford published preliminary work on Threshold Logical Clocks (TLC) [38]. In a crash-fail asynchronous setting, TLC places a barrier on view advancement, i.e., nodes advance to view v+1v+1 only after a threshold of them reached view vv. A few techniques are also described on how to convert TLCs to work in the presence of Byzantine nodes. The TLC notion of a view “barrier” is orthogonal to view synchronization, though a 2-phase TLC is very similar to our reliable broadcast synchronizer.

Failure detectors

The seminal work of Chandra & Toueg [18][19] introduces the leader election abstraction, denoted Ω\Omega, and proves it is the weakest failure detector needed to solve consensus. By using Ω\Omega, consensus protocols can usually be written in a more natural way. The view synchronization problem is similar to Ω\Omega, but differs in several ways. First, it lacks any notion of a leader and isolates the view synchronization component. Second, view synchronization adds recurrence to the problem definition. Third, it has a built-in notion of view duration: nodes commit to spend a constant time in a view before moving to the next. Last, this paper focuses on the latency and communication costs of synchronizer implementations.

Latency and message communication for consensus

Dutta et al. [39] look at the number of rounds it takes to reach consensus in the crash-fail model after a time defined as GSR (Global Stabilization Round), which only correct nodes enter. This work provides an upper and a lower bound for reaching consensus in this setting. Other works such as [40][41] further discuss the latency of reaching consensus in the crash-fail model. These works focus on the latency of reaching consensus after GST\text{GST}. Both bounds are tangential to our performance measures, as they analyze round latency. GIRAF [42][43] is a view-based framework for analyzing consensus protocols, and specifically analyzes protocols in the crash-fail model.

Dolev et al. [32] showed a quadratic lower bound on the communication complexity of deterministic Byzantine broadcast, which can be reduced to consensus. This lower bound is an intuitive baseline for work like ours, though it remains open to prove a quadratic lower bound on view synchronization per se.

Clock synchronization

The clock synchronization problem [44] in a distributed system requires that the maximum difference between the local clocks of the participating nodes is bounded throughout the execution, which is possible since most works assume a synchronous setting. The clock synchronization problem is well-defined and well-studied, and there are many different algorithms to ensure it in different models, e.g., [45][46][47]. In practical distributed networks, the most prevalent protocol is NTP [48]. Again, clock synchronization is orthogonal to view synchronization: the latter guarantees that nodes enter and stay in a view within a bounded window, but does not place any bound on the views of different nodes at any point in time.

7. Conclusion

We formally defined the Byzantine view synchronization problem, which bridges classic works on failure detectors aimed at solving one-time consensus and SMR, which consists of multiple one-time consensus instances. We presented Cogsworth\text{Cogsworth}, a view synchronization algorithm that achieves linear communication cost and constant latency under a broad variety of scenarios.


This project was partially funded by a grant from the Technion Hiroshi Fujiwara Cyber Security Research Center.


A. Protocols for View Synchronization

In this section we place into the view synchronization framework two view synchronization algorithms which are used in various consensus protocols, and prove their correctness, as well as discuss their latency and message complexity.

All protocol messages between nodes are signed and verified; for brevity, we omit the details about the cryptographic signatures.

A.1 View Doubling Synchronizer

A.1.1 Overview

A solution approach inspired by PBFT [15] is to use view doubling as the view synchronization technique. In this approach, each view has a timer, and if no progress is made the node tries to move to the next view and doubles the timer duration for the next view. Whenever progress is made, the node resets its timer. This approach is intertwined with the consensus protocol itself, making it hard to separate, as the messages of the consensus protocol are part of the mechanism used to reset the timer.

We adopt this approach and turn it into an independent synchronizer that requires no messages. First, the nodes need to agree on some predefined constant β>0\beta> 0, which is the duration of the first view. Next, there exists a global view duration mapping VD():NR+\textit{VD}(\cdot): \mathbb{N} \mapsto \mathbb{R}^+, which maps a view vv to its duration: VD(v)=2vβ\textit{VD}(v) =2^v \beta. A node in a certain view must move to the next view once this duration passes, regardless of the outer protocol's actions.

The view doubling protocol is described in Algorithm 2. A node starts at view 00 with a view duration of β>0\beta> 0 (Line 4). Next, when wish_to_advance()\textsf{wish\_to\_advance}() is called, a counter named wish\textit{wish} is incremented (Line 5). This counter guarantees validity: a node moves to a view vv only when the wish\textit{wish} counter reaches vv. Every time a view ends (Line 7), an internal counter curr\textit{curr} is incremented, and if wish\textit{wish} allows it, the synchronizer outputs propose_view(v){\textsf{propose\_view}}(v) with a new view vv.

Algorithm 2
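The logic of Algorithm 2 can be sketched as follows; the class and method names are illustrative, not the paper's pseudocode:

```python
# Minimal sketch of the view doubling synchronizer (Algorithm 2); names are
# illustrative. No messages are sent: progress is driven purely by local timers.
class ViewDoubling:
    def __init__(self, beta):
        self.beta = beta  # duration of view 0
        self.curr = 0     # current view counter
        self.wish = 0     # highest view the upper layer wished to reach

    def view_duration(self, v):
        return (2 ** v) * self.beta  # VD(v) = 2^v * beta

    def wish_to_advance(self):
        self.wish += 1

    def on_view_timer_expired(self):
        # When the current view's duration elapses, advance; propose the new
        # view only if the upper layer already wished to reach it (validity).
        self.curr += 1
        if self.wish >= self.curr:
            return self.curr  # corresponds to propose_view(curr)
        return None

s = ViewDoubling(beta=1.0)
s.wish_to_advance()
print(s.view_duration(3), s.on_view_timer_expired())  # 8.0 1
```

The `wish` counter is what enforces synchronization validity: the timer alone advances `curr`, but a view is only proposed once `wish` has caught up to it.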

A.1.2 Correctness

We show that the view doubling protocol achieves the properties required by a synchronizer.

Lemma A.1: The view doubling protocol achieves view synchronization (Property 1).

PROOF: Since this protocol does not require sending messages between nodes, the Byzantine nodes cannot affect the behavior of the honest nodes, and we can treat all nodes as honest.

Recall that t=0t=0 denotes the time by which all the honest nodes started their local execution of the protocol. Let initi\textit{init}_{i} be the view at which node Pi\mathcal{P}_{i} is at t=0t=0. W.l.o.g. assume init1init2initn{\textit{init}_{1} \le \textit{init}_{2} \le \cdots \le \textit{init}_{n}} at time t=0t=0. It follows from the definition of initi\textit{init}_{i} and the sum of a geometric series that

tPi,vprop=β(2v2initi).t^{\textit{prop}}_{\mathcal{P}_{i},v} = \beta\left( 2^v -2^{\textit{init}_{i}} \right).(1)
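Equation 1 can be sanity-checked against the direct sum of the view durations VD(init), …, VD(v−1); a small sketch with hypothetical values:

```python
# Sanity check of Equation 1: the time at which a node that started at view
# init proposes view v equals the sum of the durations of views init..v-1.
def t_prop(beta, init, v):
    return beta * (2 ** v - 2 ** init)  # closed form, Equation 1

def t_prop_sum(beta, init, v):
    return sum(beta * 2 ** u for u in range(init, v))  # direct geometric sum

beta, init, v = 2, 3, 7
print(t_prop(beta, init, v), t_prop_sum(beta, init, v))  # 240 240
```

Both computations agree, since Σ_{u=init}^{v-1} β·2^u = β(2^v − 2^init).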

We begin by showing that for every iji \le j the following condition holds: tPi,vproptPj,vpropt^{\textit{prop}}_{\mathcal{P}_{i},v} \ge t^{\textit{prop}}_{\mathcal{P}_{j},v} for any view vv. Let k=initik = \textit{init}_{i} and l=initjl = \textit{init}_{j}. From the ordering of the node starting times, klk \le l. We get:

tPi,vproptPj,vpropβ(2v2k)β(2v2l)lk.t^{\textit{prop}}_{\mathcal{P}_{i},v} \ge t^{\textit{prop}}_{\mathcal{P}_{j},v} \Leftrightarrow \beta\left( 2^v-2^k \right) \ge \beta\left( 2^v - 2^l \right) \Leftrightarrow l \ge k.

Hence, for iji \le j, since at t=0t=0 node Pj\mathcal{P}_{j} had a view number larger than Pi\mathcal{P}_{i}, then Pj\mathcal{P}_{j} will start all future views before Pi\mathcal{P}_{i}.

Next, let k=init1k =\textit{init}_{1} and l=initnl = \textit{init}_{n}, i.e., the minimal view and the maximal view at t=0t=0 respectively. To prove that the first interval of view synchronization is achieved, it suffices to show that for any constant c0c \ge 0 there exists a time interval I\mathcal{I} and a view vv such that Ic\left| \mathcal{I}\right| \ge c and tn,v+1propt1,vpropIt^{\textit{prop}}_{n,v+1} - t^{\textit{prop}}_{1,v} \ge | \mathcal{I}|. Using this, we will show that there exists an infinite number of such intervals and views that will conclude the proof. This also ensures that there is an infinite number of such views with honest leaders.

Indeed, first note that as shown above, node Pn\mathcal{P}_{n} will start view vv before any other node in the system. The left-hand side of the equation is the time length in which both node Pn\mathcal{P}_{n} and node P1\mathcal{P}_{1} execute together view vv. If the left-hand side is negative, then there does not exist an overlap, and if it is positive then an overlap exists.

We get

tn,v+1propt1,vpropIβ(2v+12l)β(2v2k)Iβ[2v+(2k2l)]I.t^{\textit{prop}}_{n,v+1} - t^{\textit{prop}}_{1,v} \ge | \mathcal{I}| \Leftrightarrow \beta\left( 2^{v+1} -2^l \right) - \beta\left( 2^v -2^k \right) \ge | \mathcal{I}| \Leftrightarrow \beta\left[ 2^v + \left( 2^k -2^l \right) \right] \ge | \mathcal{I}|.(2)

For any c0c \ge 0 there exists a minimum view number vv' such that the inequality holds, and since kk is the minimum view number at t=0t = 0 this solution holds for any other node Pi\mathcal{P}_{i} as well. In addition, for any vvv \ge v' the inequality also holds, meaning there is an infinite number of solutions for it, including an infinite number of views with an honest leader.

If wish_to_advance()\textsf{wish\_to\_advance}() is called in intervals with 0<αβ0< \alpha \le \beta, then by the time the value of curr\textit{curr} reaches some view value vv, wish\textit{wish} will always be bigger than curr\textit{curr}, meaning the condition will always be true, and the synchronizer will always propose view vv by the time stated in Equation 1.

Lemma A.2: The view doubling protocol achieves synchronization validity (Property 2).

PROOF: The if condition in Line 10 ensures that the output of the synchronizer will always be a view that a node wished to advance to.

This concludes the proof that view doubling is a synchronizer for any 0<αβ0 < \alpha\le \beta.

A.1.3 Latency and communication

Since the protocol sends no messages between the nodes, it is immediate that the communication complexity is 00.

As for latency, the minimal vv^* satisfying Equation 2 grows with cc and with the initial gap 2initn2init12^{\textit{init}_{n}} - 2^{\textit{init}_{1}}. Since the initial view-gap initninit1\textit{init}_{n} - \textit{init}_{1} is unbounded, so is the view vv^* in which synchronization is reached. The latency to synchronization is tP1,vprop=β(2v2init1)t^{\textit{prop}}_{\mathcal{P}_{1}, v^*} = \beta\left(2^{v^*} - 2^{\textit{init}_{1}}\right), which is also unbounded.

A.2 Broadcast-Based Synchronizer

A.2.1 Overview

Another leaderless approach is based on the Bracha reliable broadcast protocol [28] and is presented in Algorithm 3. In this protocol, when a node wants to advance to the next view vv it multicasts a WISH,v\text{``}\textsf{WISH},v\text{''} message (multicast means to send the message to all the nodes including the sender) (Line 3). When at least f+1f+1 WISH,v\text{``}\textsf{WISH},v\text{''} messages are received by an honest node, it multicasts WISH,v\text{``}\textsf{WISH},v\text{''} as well (Line 5). A node advances to view vv upon receiving 2f+12f+1 WISH,v\text{``}\textsf{WISH},v\text{''} messages (Line 7).

Algorithm 3
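The two thresholds of Algorithm 3 can be sketched as follows; the class and method names are illustrative, and message delivery is modeled by the caller feeding received wishes into `on_wish()`:

```python
# Sketch of the broadcast-based synchronizer's threshold logic (Algorithm 3);
# names are illustrative. n = 3f + 1 nodes; f may be Byzantine.
class BroadcastSync:
    def __init__(self, f):
        self.f = f
        self.view = 0
        self.wishes = {}        # view -> set of distinct senders of "WISH,view"
        self.amplified = set()  # views for which we already multicast "WISH,view"

    def on_wish(self, v, sender):
        self.wishes.setdefault(v, set()).add(sender)
        out = []
        # Amplification: f+1 wishes guarantee at least one honest wisher, so echo.
        if len(self.wishes[v]) >= self.f + 1 and v not in self.amplified:
            self.amplified.add(v)
            out.append(("multicast", "WISH", v))
        # Entering: 2f+1 wishes mean every honest node will soon see f+1 of them.
        if len(self.wishes[v]) >= 2 * self.f + 1 and v > self.view:
            self.view = v
        return out

s = BroadcastSync(f=1)
for sender in range(3):  # 2f+1 = 3 distinct wishes for view 1
    s.on_wish(1, sender)
print(s.view)  # 1
```

The f+1 echo rule is what makes entry within 2δ of the first honest node possible (Claim A.3 below): once one honest node holds 2f+1 wishes, every node holds at least f+1 within δ and re-multicasts.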

A.2.2 Correctness

We start by showing that the broadcast-based synchronizer achieves eventual view synchronization (Property 1) for any α2δ\alpha\geq 2\delta. Thus, the claims and lemmas below assume this.

Claim A.3: After GST, whenever an honest node enters view vv at time tt, all other honest nodes enter view vv by t+2δt+2\delta, i.e., maxPiH{tPi,vprop}minPjH{tPj,vprop}2δ.\max_{\mathcal{P}_{i} \in H} \left\lbrace t^{\textit{prop}}_{\mathcal{P}_{i}, v} \right\rbrace - \min_{\mathcal{P}_{j} \in H} \left\lbrace t^{\textit{prop}}_{\mathcal{P}_{j},v} \right\rbrace \le 2 \delta.

PROOF: Suppose an honest node PiH\mathcal{P}_{i} \in H enters view vv at time tPi,vprop=tt^{\textit{prop}}_{\mathcal{P}_{i}, v} = t, then it received 2f+12f+1 WISH,v\text{``}\textsf{WISH},v\text{''} messages, from at least f+1{f+1} honest nodes (Line 7).

Since the only way for an honest node to disseminate a WISH,v\text{``}\textsf{WISH},v\text{''} message is by multicasting it, by t+δt + \delta all nodes will receive at least f+1f+1 WISH,v\text{``}\textsf{WISH},v\text{''} messages. Any remaining honest nodes (at most ff nodes) will thus receive enough WISH,v\text{``}\textsf{WISH},v\text{''} messages to multicast the message on their own (Line 5), which will be received by all the nodes by t+2δt + 2 \delta. This ensures that all the honest nodes receive 2f+12f+1 WISH,v\text{``}\textsf{WISH},v\text{''} messages and enter view vv by t+2δt + 2\delta.

Claim A.4: After GST, eventually an honest node Pi\mathcal{P}_{i} enters some new view.

PROOF: All honest nodes begin their local execution at view 00, potentially at different times. Based on the protocol, eventually at least f+1f+1 nodes (some of them might be Byzantine) send WISH,1\text{``}\textsf{WISH},1\text{''}. This is because wish_to_advance()\textsf{wish\_to\_advance}() is called every α\alpha. Thus, eventually all honest nodes will reach view 11, and from Claim A.3 the difference between their entry times after GST\text{GST} is at most 2δ2\delta.

The above argument can be applied inductively. Suppose at time tt node Pi\mathcal{P}_{i} is at view vv. We again know that by t+2δt+2\delta all other honest nodes are also at view vv, and once f+1f+1 WISH,v+1\text{``}\textsf{WISH},v+1\text{''} are sent all honest nodes will eventually enter view v+1v+1, and we are done.

Lemma A.5: The broadcast-based protocol achieves view synchronization (Property 1).

PROOF: From Claim A.4 an honest node will eventually advance to some new view vv, and from Claim A.3 all other honest nodes will join it within 2δ2\delta. For any c0c \ge 0, if the honest nodes call wish_to_advance()\textsf{wish\_to\_advance}() every α=2δ+c\alpha= 2\delta+ c, then it is guaranteed that all the honest nodes will execute view vv together for at least I=c\left| \mathcal{I}\right| = c time, since moving to view v+1v+1 requires f+1f+1 messages, i.e., at least one message sent from an honest node.

This argument can be applied inductively, and each view after GST\text{GST} is synchronized, thus making an infinite number of time intervals and views which all honest leaders execute at the same time.

Lemma A.6: The broadcast-based synchronizer achieves synchronization validity (Property 2).

PROOF: In order for an honest node to advance to view vv it has to receive 2f+12f+1 WISH,v\text{``}\textsf{WISH},v\text{''} messages (Line 7). From those, at least f+1f+1 originated from honest nodes. An honest node can send WISH,v\text{``}\textsf{WISH},v\text{''} on two scenarios:

(i) wish_to_advance()\textsf{wish\_to\_advance}() was called when the node was at view v1v-1 (Line 3) and we are done.

(ii) It received f+1f+1 WISH,v\text{``}\textsf{WISH},v\text{''} messages (Line 5), meaning at least one honest node which already sent the message was at view v1v-1 and called wish_to_advance()\textsf{wish\_to\_advance}() and again we are done.

This concludes the proof that the broadcast-based synchronizer is a view synchronizer for any α2δ{\alpha\ge 2\delta}.

A.2.3 Latency and communication

The broadcast-based algorithm synchronizes every view after GST\text{GST} within 2δ.2\delta. Since the leaders of each view are allocated by the mapping Leader()\text{Leader}(\cdot), in expectation an honest leader is reached within 3/2\approx 3/2 views (see the communication complexity analysis done for Cogsworth\text{Cogsworth} in Section 4). Therefore, the broadcast-based synchronizer takes an expected constant time to reach view synchronization after GST\text{GST}, as we have proved, and likewise between every two consecutive occurrences of view synchronization. Thus, the latency of this protocol is expected O(δ)O(\delta). In the worst case of tt consecutive failures, the latency is O(tδ)O(t{\cdot}\delta).

For communication costs, the protocol requires that every node sends one WISH,v\text{``}\textsf{WISH},v\text{''} message to all the other nodes, and since the latency is expected constant, the overall communication cost is also expected quadratic, i.e., O(n2)O(n^2). In the worst case of tt consecutive failures, the communication complexity is O(tn2)O(t{\cdot}n^2).

A.3 Discussion

The three synchronizers presented in the paper have tradeoffs in their latency and communication costs, summarized in Table 1. Hence, a protocol designer may choose a synchronizer based on their needs and constraints. It might be possible to combine the three protocols and achieve hybrid characteristics; we leave such variations for future work.

In addition, there are differences in the constraints on the parameter α\alpha in these protocols, which is the time interval between two successive calls to wish_to_advance()\textsf{wish\_to\_advance}() (see Property 1). The view doubling synchronizer prescribes a precise α\alpha, which results in each view duration to be exactly twice as its predecessor. In the other two synchronizers there is only a lower bound on α\alpha: in the broadcast-based it is 2δ2\delta, and in Cogsworth\text{Cogsworth} it is 4δ4\delta.

This difference is significant. Suppose an upper-layer protocol utilizing the synchronizer wishes to spend an unbounded amount of time in each view as long as progress is made, and triggers a view-change upon detecting that progress is lost. While the broadcast-based and Cogsworth\text{Cogsworth} algorithms allow this upper-layer behavior, the view doubling technique does not, and thus may influence the decision on which view synchronization algorithm to choose.

Another difference between the algorithms is that the view doubling and the broadcast-based synchronizers both guarantee that after the first synchronized view, all subsequent views are also synchronized, regardless of whether the leaders are honest or not. Cogsworth\text{Cogsworth} only guarantees synchronization after GST in views that have an honest leader. For most leader-based consensus protocols this guarantee suffices to ensure progress; other protocols using a synchronizer might find the stronger guarantee preferable.
