Smart contracts allow flexible new forms of trust-free cooperation. This gives mechanism designers a new platform in which they can apply game-theoretical principles to engineer desirable outcomes. However, as we have seen with the emergence of “dark DAOs,” hostile attackers can just as easily deploy smart contracts that coordinate collusion and otherwise attempt to undermine the goals of a given game-theoretical structure. Indeed, attacks on incentive structures that would previously have seemed elaborate and difficult to organize can now execute automatically via smart contracts, requiring no trust in the attacker. In this work, we will analyze the “p + ε attack” on Schelling point-based cryptoeconomic systems. Particularly, we will consider design choices that one can apply, slightly modifying these systems, to increase resistance to these types of attacks.
In a coordination game, participants are allowed to choose from several options, and they have an incentive to choose the option that is also chosen by other participants. A standard example of this phenomenon is the process of deciding whether to drive on the left or right side of a road. To the degree that the payoff matrix is symmetric between the various options, which option participants coordinate on can be thought of as somewhat arbitrary.
However, as was noted by Thomas Schelling [1], when one of the options is somehow distinguished, it can serve as a “focal point” or “Schelling point” on which participants can coordinate. This effect is particularly relevant when participants lack other means of communication that they could use to coordinate. Then, a “Schelling game” is a coordination game where one option is such a focal point, that is distinguished for reasons external to the payoff structure of the game. This is in contrast to approaches based on participants coordinating based on asymmetries in the payoff structure of the game, such as when equilibria are Pareto-ranked; see e.g., [2], [3], [4] for work in this direction.
Schelling games are the basis for many blockchain “oracles,” i.e., systems used to import information onto a blockchain platform that is not checked as part of the consensus protocol of that blockchain. Here, the principle is that if a group of voters are asked to choose between options while reporting on some question about the external world, they will tend to gravitate towards the true response, if only because they view it as a distinguished answer that other voters will expect them to choose. Such oracles are important for reporting on-chain results of elections for prediction market contracts, rainfall amounts for drought insurance contracts, etc. Moreover, this structure also arises implicitly in fundamental ways in blockchain systems; in [5] it is pointed out that in consensus algorithms based on proof-of-work and the longest chain rule such as Bitcoin [6], the choice of which fork to follow can be thought of as a “vote” having the structure of a Schelling game. See Section 4 for further details on these examples.
An intriguing new attack, the p + ε attack, designed to warp the incentive structures of these games was proposed in a blog post by Vitalik Buterin [5], crediting the idea for the attack to Andrew Miller. In general, a p + ε attack that attempts to make the malicious choice Y win should pay to participants who vote Y an additional ε more than what they would receive for voting X, contingent on X winning. In the case of the game of Figure 1, this yields a payoff matrix for the participant of:
Then voting Y is a weakly dominant strategy in this game, so the attacker might hope that the vast majority of participants will vote Y, causing the attacker’s choice to win even as she is not required to pay out any bribes. Hence, when participants are rational and self-interested this attack can warp the incentives of the system with no costs to the attacker, i.e., “for free.” Indeed, one observes that there are (at least) the following two equilibria in this game:
all voters vote Y;
a bare majority of votes for X and the remaining votes for Y, so that X wins by a single vote and every voter who voted Y receives the bribe.
In [5], several defenses to attempt to resist such attacks are considered, including, notably, a defense that builds on the second type of equilibrium above, where one is at a “tipping point” between X and Y. Here, if a rational actor with multiple votes thinks, based on her knowledge of how other participants were likely to vote, that she has the decisive vote, then she maximizes her return by splitting up her votes so that X wins by a single vote. Thus, as many votes as possible receive the additional ε while remaining in the situation where the attacker loses and actually has to pay out the bribes. Then this idea, when considered from the perspective of a group of voters coordinating rather than a single participant, is the basis for the response of “counter-coordination” considered in [5]. As pointed out in [5], a problem with counter-coordination in games of this form is that each counter-coordinator, if they believe that the counter-coordination will succeed and the attack will fail independently of whether they individually participate or not, is strictly better off taking the full bribe. Hence, counter-coordination can only function to the degree that participants are either altruistic or believe their participation is likely to make a difference in the success of the endeavor. Thus, this situation exhibits characteristics of a tragedy of the commons [7].
More generally, this idea of a bribe that is only paid out in the case that the bribe is unsuccessful in changing the outcome was already considered in a thought experiment proposed by Warren Buffett. As part of an editorial written for the New York Times in 2000, Buffett highlighted absurd aspects of the United States campaign finance system and argued for campaign finance reform [8]. It has been remarked that situations similar to the p + ε attack have existed in other pre-blockchain situations, particularly surrounding share buying offers in acquisitions of companies; indeed, there is even an example of such a situation where the intended victim of the attack launched a second attack in support of its side [9]. Also, the p + ε attack has been considered in classifications of types of attacks on cryptoeconomic systems [10], particularly from the perspective of whether these attacks are viable on proof-of-stake consensus systems as well as proof-of-work systems [11], [12]. Nonetheless, considering the widespread use of Schelling-based structures (explicitly or implicitly) in blockchain-based systems, this attack has so far received relatively little attention from the research community.
In this section, we will recall various elements that can be considered in the design of a Schelling game. In all of these situations we will take .
Users can be required to place a deposit d, which they lose if their vote disagrees with the ultimate result. This allows voters to be penalized as well as rewarded.
We can also have appeals. Appeals result in delaying the result of a vote by an additional fixed number of blocks per appeal, up to some maximum number of appeal rounds k_max. Each appeal will typically consume some additional resources, with the idea that more resources can be put towards this ruling. So, ideally, this ruling is more likely to be correct. Concretely, if we suppose that there are n_i voters in the i-th appeal round, a Schelling game of the form of Figure 1 might require resources on the order of n_i·p to provide for payouts to voters. Of particular note for our purposes, we will generally consider the winner of a game, as used for determining which payouts are made, to be the option that receives the most votes in the last appeal round.
Appeals can be triggered in a variety of ways, such as by allowing users to pay a fee, or by random chance.1 Some of our results will assume the model where users can trigger an appeal in the i-th appeal round by paying a fee of (at most) f_i. Nevertheless, a user that decides to trigger such an appeal may ultimately pay less than the full f_i; for example, if there are multiple users who decide to appeal, they might be able to split the fee between them, one of them might be randomly chosen to pay the full fee, or they might all have to pay the full fee with the excess amounts burned, depending on the appeal system used. While our results are general enough to handle these different possibilities, we nevertheless assume the following continuity property: if v_1, …, v_m are users that appeal in round i with probabilities q_1, …, q_m respectively, then the expected amount that v_1 ultimately pays, as given by Equation 1,
is continuous in q_1, …, q_m. Furthermore, we assume .
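The fee-sharing conventions just mentioned all give expected appeal payments that vary continuously in the appeal probabilities. The following is a minimal sketch of this point (all function and parameter names are ours, and the three models are simplified stand-ins for the possibilities described above), computing the expected payment of a user who appeals with probability q while the other potential appellants appeal independently with given probabilities:

    from itertools import product

    def expected_appeal_payment(fee, q, other_probs, model="split"):
        """Expected appeal payment of a user who appeals with probability q.

        model -- "split":   all appellants split the fee equally
                 "lottery": one appellant, chosen uniformly at random, pays the full fee
                 "burn":    every appellant pays the full fee (the excess is burned)
        """
        expectation = 0.0
        for outcome in product([0, 1], repeat=len(other_probs)):
            prob = 1.0
            for appealed, r in zip(outcome, other_probs):
                prob *= r if appealed else 1.0 - r
            k = 1 + sum(outcome)               # number of appellants, including this user
            payment = fee if model == "burn" else fee / k
            expectation += prob * payment
        return q * expectation

Note that “split” and “lottery” give the same expected payment (fee/k when k users appeal), and that all three expressions are polynomial, hence continuous, in the probabilities, as assumed above.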
In our work, we will only consider binary Schelling games: namely, games where a set of voters select between two different outcomes. Nonetheless, one can construct different payoff matrices for games that preserve the basic idea of a Schelling game. The payoff matrix that we saw in the Introduction, when modified to include deposits is:
Definition 2.1: We define a simple Schelling game to be one where a participant can vote for one of two options X and Y, with the payoffs given by:
A p + ε attack transforms the payoff matrix to:
Concretely, this is achieved as the attacker pays, to a voter who votes Y, an amount of p + d + ε if she finds herself in the case of the bottom-left entry of the table, i.e., if she voted Y but X nevertheless won.
Another alternative that is employed in some applications (see Section 4) is the following structure:
Definition 2.2: We define a symbiotic Schelling game to be one where a participant contributes a deposit d and can vote for one of two options X and Y, with the payoffs given by:
where m is the number of votes (other than that of the participant in question) for the winning option.
A p + ε attack on this game gives the following payoff table:
Finally, as another alternative, we could also divide all amounts—both the positive sum resources that the participants can receive and the lost deposits—between voters (in a given appeal round) on the winning side. This has the effect that rewards are larger in the event of narrow decisions.
Definition 2.3: We define a redistributive Schelling game to be one where a participant contributes a deposit d and can vote for one of two options X and Y, with the payoffs given by:
where m is the number of votes (other than that of the participant in question) for the winning option.
A p + ε attack on this game gives the following payoff table:
Note that, in a redistributive Schelling game, if a given voter is the only person to vote for X in her round and X then ultimately wins, she receives all of the n·p in rewards and all of the lost deposits for her round, where n is the number of voters in that round. Then, in order for an attacker to make voting Y a dominant strategy and to incentivize voters to not take a chance on trying to get this “lone voice of reason jackpot,” an attacker needs to be willing to offer bribes of more than n·p + (n − 1)·d to each voter, for a total of more than n·(n·p + (n − 1)·d), in the most extreme case.
Note that these three structures are normalized so that they have the same maximum total payout to voters, namely n·p per round. However, the full n·p is only guaranteed to be paid out in the redistributive Schelling game case.
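To fix ideas, the following sketch models a single voter’s payoff in each of the three structures, together with the effect of a p + ε bribe. It is a simplified model with our own variable names: it assumes a per-round reward pool of n·p, as in the normalization above, and that the bribe tops a Y voter up to ε more than her counterfactual payoff from voting X, in line with the description of the attack in the Introduction.

    def payoff(game, vote, others_for_x, n, p, d, winner=None):
        """Payoff of one voter in a round of a binary Schelling game.

        game         -- "simple", "symbiotic", or "redistributive"
        vote         -- "X" or "Y", this voter's vote
        others_for_x -- how many of the other n - 1 voters in the round vote X
        winner       -- the overall winning option; if None, this round's own
                        majority decides (with appeals, the last round decides)
        """
        for_x = others_for_x + (1 if vote == "X" else 0)
        if winner is None:
            winner = "X" if for_x > n - for_x else "Y"
        winners = for_x if winner == "X" else n - for_x
        if vote != winner:
            return -d                               # losers forfeit their deposit
        if game == "simple":
            return p                                # flat reward for coherent voters
        if game == "symbiotic":
            return p * winners / n                  # reward grows with agreement
        # redistributive: winners split the pool n*p plus the losers' deposits
        return (n * p + (n - winners) * d) / winners

    def bribed_payoff(game, vote, others_for_x, n, p, d, eps, winner=None):
        """The same game under a p + eps attack in favour of Y: a voter who votes Y
        while X nevertheless wins is topped up to eps more than what she would have
        received by voting X (holding the other votes fixed)."""
        if vote == "Y":
            x_wins = winner == "X" or (winner is None and others_for_x > n - others_for_x)
            if x_wins:
                return payoff(game, "X", others_for_x, n, p, d, winner="X") + eps
        return payoff(game, vote, others_for_x, n, p, d, winner)

For instance, payoff("redistributive", "X", 0, n, p, d, winner="X") evaluates to n·p + (n − 1)·d, the “lone voice of reason jackpot” discussed above, which is exactly why a flat bribe of p + d + ε does not make voting Y dominant in the redistributive case.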
We will see in Section 6 and Section 7 that redistributive Schelling games with appeal systems have a certain level of robustness against p + ε attacks, essentially due to the nature of the “lone voice of reason jackpot.” Hence, we also propose adapted attacks, which, while they can be applied to any of the payoff structures discussed above, are designed to preserve the spirit of a p + ε attack while removing or limiting the “lone voice of reason jackpot” in redistributive games:
“Last round only attack”: an attacker can only pay bribes to participants in the last, decisive appeal round—without specifying what this round is in advance.
“Capped attack”: an attacker can offer a bribe in all rounds, but she caps the amount that a given voter can receive as a bribe at some fixed value.
Deposits and (a form of) appeal systems2 were already considered as possible defenses against p + ε attacks in [5] in the context of Schelling games of the form of Definition 2.1. The possibility that participants can lose deposits has also been invoked as a basis for increased security against such attacks in the context of proof-of-stake consensus algorithms [11] (compared to proof-of-work, where miners do not typically stake a deposit3). However, the interplay between these elements, and particularly how they interact with the choice of games of the form of Definition 2.1, Definition 2.2, or Definition 2.3, seems to have not been fully explored.
In this work we extend the analysis presented in [5] on simple Schelling games of the form of Definition 2.1 to symbiotic Schelling games of the form of Definition 2.2 and redistributive Schelling games of the form of Definition 2.3. Moreover, we will see how various defenses against p + ε attacks, such as deposits and appeal systems, interact with each other and with the choice of payoff structure.
In Section 6 and Section 7, we see that, under a variety of measures of difficulty of an attack, redistributive games, particularly when there is an appeal structure, have increased resistance to p + ε attacks. Specifically, in Section 6, we see that, under certain conditions, a p + ε attack on a redistributive Schelling game with an appeal system has budget requirements which grow quadratically in the number of participants and hence are likely to be prohibitive in many cases. In Section 7, we see that there are “counterattacks” against p + ε attacks on redistributive Schelling games with appeal that are stable in a mixed strategy equilibrium and result in the attack failing with positive probability greater than a lower bound that we are able to calculate explicitly.4
However, in Section 9, we will analyze the “last round only” and “capped” attacks, where the attacker adapts the spirit of a p + ε attack to avoid some of these defenses. While we will see that these adapted versions are less prohibitive, even in the context of a redistributive Schelling game with an appeal system, accepting the bribe is no longer a dominant strategy under them, and hence these attacks are no longer “free.” Thus, these attacks seem to be somewhat less dangerous. In Section 8, we consider the situation where rational participants assign a moral (or cognitive effort) cost to accepting the bribe, and we observe mixed strategy equilibria where the attack fails some percentage of the time.
Finally, in Section 10, we will consider the results of some preliminary experiments around p + ε attacks conducted on the Kleros platform.
A number of blockchain-based systems involve voting that, implicitly or explicitly, can be seen in the context of Schelling point based games.
A number of token-curated lists involve voting on candidates for admission to the list where voters are rewarded via a Schelling point game, see [13]. Notable examples include lists of true or fake news for decentralized news aggregation services or lists of products that meet certain criteria for sale in decentralized marketplaces. Here the voters for the list, typically a pool of token holders, vote on whether a given entry deserves admission to the list, with rewards (and in some cases penalties) applied to voters based on whether or not they agree with the majority vote. This process serves as an oracle for blockchain applications that use these lists to gain information about whether submissions satisfy the listing criteria. See [14], [15] for examples.
Truthcoin [16] proposes a more general purpose oracle system based on a Schelling point mechanism; however, its payoff structures are somewhat more complicated than those considered in Section 2 as Truthcoin creates an interdependence of votes on different questions, which are then resolved using a singular value decomposition on the matrix of votes.
Kleros [17] is a blockchain-based system for dispute resolution that functions on a Schelling point-based mechanism. It is, then, a type of oracle that is specialized in bringing on-chain information necessary to resolve disputes. In any given case, for example between two parties to a small-scale freelancing dispute, a number of “jurors” are drawn randomly from among a pool of token holders. These “jurors” then vote on the result of the dispute, and they are rewarded or penalized based on whether or not they vote for the majority result. Note that Kleros uses a redistributive Schelling game structure and has an explicit appeal mechanism satisfying the properties described in Section 1.
One of the original examples of [5] is that of the proof-of-work consensus mechanism. A given block producer can expect to receive a certain amount in block rewards, but only if her blocks are ultimately in the longest chain. Then suppose that we have two rival chains on which one can mine, chains A and B, where on each chain a transaction has been accepted that conflicts with the other chain. Over short time frames (compared to the difficulty adjustment period), then, mining a block on each chain would require approximately equal resources and produce a constant reward dependent only on the corresponding chain ultimately winning out. Hence, for each block that someone accepting the bribe mines on the chain B, the attacker must pay the block reward plus ε in the event that the chain A ultimately becomes the longest chain. This structure then resembles that of Definition 2.1.
However, imagine heuristically that we have very short block times (relative to the period of time for which the attacker needs to wait to have a sufficient number of block confirmations) and that difficulty adjusts continuously. Then suppose a single miner Bob, who represents a fraction α of all mining resources, is mining on the chain A for some period of time while all other miners, representing the remaining 1 − α of resources, are mining on the chain B. This can be thought of as a fraction α of the “votes” being cast for A and a fraction 1 − α being cast for B over this period. By considering longer amounts of time (heuristically discretizing them into distinct periods) we can think of the votes as being broken into appeal periods in the sense of Section 1. From the perspective of the chain A, Bob should get all of the block rewards over this period of time. If the option A ultimately gives the longest chain (beyond the number of blocks required for “coinbase maturity”), then all of these rewards that Bob receives will be usable. In order to incentivize rational miners who are willing to accept bribes, but do not think that the attacker has the resources to maintain the attack over the long term, to not be tempted by this “lone voice of reason jackpot,” it is not sufficient for the attacker to pay the miners who support B on a per-block basis. Rather, she must pay them the amount that they would have received by mining on the other chain during this period, up to the total block rewards of the period plus ε. This structure resembles that of Definition 2.3.
The Serenity proof-of-stake version of Ethereum [18] (currently in Phase 0 of its release) uses the symbiotic Schelling game model of Definition 2.2. This choice was motivated to make the protocol less vulnerable to “discouragement attacks” [18], [19]. Note that as this protocol, in contrast to proof-of-work, is designed to have finalization guarantees [20], one cannot consider waiting extra blocks as an “appeal” in the same way.
In this work, our attacker will generally have the ability to offer some type of bribe. In situations where there is an appeal structure, one typically thinks of this structure as being a defense; however, it is a double-edged sword, as we may then also allow our attacker the ability to make malicious appeals. However, we do not consider attackers with the ability to tamper directly with votes. In particular, we do not consider attackers capable of performing attacks on, or denial of service attacks against, the consensus protocol of an underlying blockchain platform where the vote is taking place, nor do we consider attackers that have votes themselves or that are capable of directly controlling users’ votes, e.g., via malware.
Moreover, in Section 7, where we discuss “counterattacks” against p + ε attacks, we will need to assume a more rigid pattern for attacker behavior. Indeed, there we consider a p + ε attack in a multi-round situation, and we model the attacker as offering the bribe in each round until
Her attack succeeds and Y wins the game; or
The attacker stops, either because she has reached the maximum appeal round or because the required bribes exceed some predetermined budget,5
and not taking other actions.
For some results we will require specific behavioral assumptions of voters. A particular set of hypotheses that we will often make on voters is the following:
Definition 5.1: Voters are said to satisfy * if:
Each vote (across all rounds) is controlled by a unique entity.6
Voters are economically rational, namely, they take actions that maximize their expected return in a risk-neutral fashion (possibly assigning extra, specified internal costs to immoral or cognitively difficult actions, see Section 8).
The structures of the games, including the presence of any attacks, are common knowledge among the voters.7
If the payoff matrix is symmetric between the “honest choice” X and the “dishonest choice” Y, for example for participants in an appeal round where the bribe is no longer provided, voters vote for the “honest choice” X with probability one.
In this section we will begin to consider measures of the difficulty for an attacker to launch p + ε attacks in the models of Definition 2.1, Definition 2.2, and Definition 2.3, with and without appeal. Particularly, we will calculate the required capital that must be locked up in each of these scenarios.
Proposition 6.1:
A p + ε attack on a simple Schelling game of the form of Definition 2.1 with N voters and no appeal requires a budget of . More generally, a p + ε attack on a simple Schelling game of the form of Definition 2.1 with successive appeal rounds of n_1, …, n_k voters respectively requires a budget of
A p + ε attack on a symbiotic Schelling game of the form of Definition 2.2 with N voters and no appeal requires a budget of . More generally, a p + ε attack on a symbiotic Schelling game of the form of Definition 2.2 with successive appeal rounds of n_1, …, n_k voters respectively requires a budget of
A p + ε attack on a redistributive Schelling game of the form of Definition 2.3 with N voters and no appeal requires a budget of
with the stated asymptotic behavior as the number of voters N increases.
More generally, a p + ε attack on a redistributive Schelling game of the form of Definition 2.3 with successive appeal rounds of n_1, …, n_k voters respectively requires a budget of
PROOF: We consider the redistributive Schelling game case. Take y_i to be the number of participants who vote for Y in the i-th round. The attacker must pay out the promised bribe to each of the y_i participants who vote Y in the i-th round; however, she only has to make these payments if Y fails to obtain a majority in the final round, causing the attack to fail. Maximizing the resulting total payout over the possible choices of the y_i gives the stated budget. Note that this upper bound is in fact realized when all voters in rounds before the last vote Y and as many voters as possible vote Y in the final round while X still wins. The other cases are similar.
Hence, even when d = 0 and with no appeal mechanism, moving from the simple Schelling game to the redistributive model roughly doubles the budget required by the attacker, from roughly N·p/2 to roughly N·p. The difference in budget required between a redistributive Schelling game and the other models is much more dramatic in the case of appeals. The simple Schelling game has a required budget that grows linearly in the number of voters n_i of each round, whereas the redistributive Schelling game has a required budget that grows quadratically in n_i for every round before the last.
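As a rough cross-check of these comparisons, one can compute the single-round worst case directly by enumerating vote splits. The sketch below reuses the hypothetical payoff function from Section 2 and assumes that, for every voter who votes Y, the attacker must be able to cover her lost deposit plus ε more than her counterfactual payoff from voting X whenever the attack fails; the numbers at the end are purely illustrative.

    def required_budget(game, n, p, d, eps):
        """Worst-case total payout of a p + eps attack on one round of n voters:
        the maximum, over vote splits in which X still wins, of the total bribes
        owed to the Y voters."""
        worst = 0.0
        for x_votes in range(n // 2 + 1, n):        # splits in which X wins
            y_votes = n - x_votes
            # payoff a Y voter would have obtained by voting X instead
            counterfactual = payoff(game, "X", x_votes, n, p, d, winner="X")
            worst = max(worst, y_votes * (counterfactual + d + eps))
        return worst

    for game in ("simple", "symbiotic", "redistributive"):
        print(game, required_budget(game, n=101, p=1.0, d=0.0, eps=0.01))

With these illustrative parameters (and d = 0), the redistributive figure comes out roughly twice the simple-game figure, consistent with the discussion above.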
Example 6.2: We consider the implications of Proposition 6.1 for some of the examples we considered in Section 4, particularly those related to consensus mechanisms. Namely, for each of proof-of-work as it exists for both Bitcoin and Ethereum, the heuristic version of proof-of-work presented in Section 4 where difficulty adjusts continuously, and the Serenity proof-of-stake system, we will consider the budget required to perform each of two different attacks: a censorship attack and a double spend attack. See Figure 2 for a summary of the results of these calculations.
In a censorship attack on a proof-of-work system, the attacker incentivizes all miners to mine a fork where a given transaction is not included; denote this branch by B. A priori, starting from any block in B a miner could insert the transaction in a child block, resulting in several different branches competing with B to be accepted by the network. However, we consider the worst case for the attacker, where all mining resources that accept the transaction are concentrated in a single branch A. Then, if the attacker wants to censor the transaction in the branch accepted by the network at a fixed time T, this is equivalent to having B win a simple Schelling game with no appeal and with N equal to the number of blocks until T, allowing one to apply Proposition 6.1 directly. As of April 2021, for Bitcoin these values are an average block time of roughly 10 minutes and a block reward of 6.25 BTC plus average fees, and for Ethereum one has an average block time of roughly 13 seconds with a base block reward of 2 ETH plus fees. [21]
A double spend attack on a proof-of-work network has a similar structure. Here an attacker wants two conflicting transactions tx1 and tx2 to receive enough confirmations to be accepted by two different vendors so that she receives products corresponding to each of these transactions, while only paying for one of them. We analyze the budget required for an attacker strategy where the attacker waits for one of the transactions, for example tx1, to receive enough confirmations on branch A and then launches a p + ε attack to incentivize miners to revert to the last block where tx1 was not included and mine an alternative branch B until it surpasses A to become the longest chain and also gives tx2 (at least) the required number of confirmations. Suppose the attacker offers the bribe only to miners who mine blocks on B (the attacker could offer the bribe over a wider range, allowing her to still win even if some minority of miners mine on A, at the expense of requiring a larger budget). Then, this corresponds to a simple Schelling game between A and B in which the first votes are already set to A. Due to this restriction on the voting pattern, one cannot apply Proposition 6.1 directly; nonetheless one sees by a similar argument that the payouts required by this attack in the most extreme case give the same required total payouts as those in Proposition 6.1. For the values for Bitcoin that we present in Figure 2, we take the standard choice of six confirmations [6], and for Ethereum we take a number of confirmations corresponding to that required by large exchanges to process deposits [22].
We now consider similar attacks under the heuristic model of proof-of-work with rapidly adjusting difficulty discussed in Section 4. We develop on this heuristic, noting additional approximations as necessary. Particularly, we suppose that the collective mining power of the community consists of a number of identical ASICs, each of which can attempt a fixed number of nonces per minute, and that this collective mining power remains fixed over the period of the attack. Furthermore, as the difficulty of different blocks varies over the course of a confirmation period, we assume that nodes verifying the confirmation status of a block check for a total amount of difficulty in subsequent blocks rather than simply a given number of subsequent blocks. Suppose T is the time by which the attacker wants the transaction to be censored. Denote by t0 the time at which the transaction becomes known, and denote by t1 a point at which the difficulty on the fork A, where the transaction is included, has adjusted so that a given nonce has a certain fixed chance of producing a block. (In particular, this supposes that enough miners mine on A during the transition between t0 and t1 to trigger the difficulty adjustment.) Then suppose that the attacker manages to incentivize all miners to mine on the fork B, where the transaction is not included, until a time t2, when all miners switch to mining the fork A. For the attacker to incentivize miners to not defect, she needs to provide them with a reward of at least
at each of the increments per minute at which they mine between and . Hence, to cover bribes in this scenario, the attacker needs
where we note that, as all miners are mining on B between t1 and t2 and are all mining on A after t2, and discounting the assumed-to-be-short periods of difficulty adjustment, these periods, which produce roughly equal total difficulty, are of roughly equal length.
We take the assumptions related to difficulty adjustment from the heuristic above and consider the same structure of double spending attacks as for standard proof-of-work. Similarly to above, we denote by t0 the point at which tx1 is mined in a block and by t1 the point at which the chain A has accumulated enough difficulty for tx1 to be viewed as confirmed, at which time the attacker launches the attack. Denote by t2 the time at which the difficulty on chain B has adjusted to reflect the entire community mining on it and at which the difficulty on chain A has adjusted correspondingly. We consider a scenario where all miners mine on B until some time t3 when enough total difficulty has been mined on chain B, at which point all miners switch back to mining on A. For the attacker to incentivize miners to not defect, she now needs to be able to cover total bribes of
where we note that, as all miners are mining on B between t2 and t3, and discounting the assumed-to-be-short periods of difficulty adjustment, t3 − t2 is roughly the time that would normally be required for a transaction to be confirmed.
We now consider a similar censorship attack on the proof-of-stake system of Serenity [18], adapted to account for its somewhat different structure. Here blocks are grouped into “epochs,” and the participants of the validator pool are tasked with providing an attestation to one block per epoch. Then the protocol attempts to determine a sequence of “checkpoint” blocks, consisting of a single checkpoint per epoch. Attestations contain references to the previous block, a “target” checkpoint for the current epoch, and a “source” checkpoint from a previous epoch [23]. Then a checkpoint c is considered “justified” if at least two thirds of validators (weighted by deposit) provide an attestation with target c and source c′, where c′ is a justified checkpoint for a previous epoch. An epoch is considered “finalized” if its checkpoint c is justified and at least two thirds of validators attest to a checkpoint in the immediately following epoch with c provided as the source [24]. In the event of a fork, (self-consistent) attestations to the different branches would disagree on the previous block; on the previous block and the target checkpoint; or on all three elements, depending on the depth of the divergence point of the forks; validators are rewarded or penalized based on how many of these elements they correctly specify, conforming to the state of the chain as it is eventually finalized in some future epoch.8
In the censorship attack that we consider, we suppose that the attacker’s goal is to prevent a given transaction from appearing in a block in a finalized epoch by a fixed time T. Suppose that the transaction to be censored9 appears in epoch e. Then, considering that, in order to be finalized, a checkpoint on some fork must be included as a target and then a source in two-thirds of attestations of two consecutive epochs, to prevent finalization of an epoch including the transaction it is sufficient for an attacker to offer bribes to validators who attest a fork that does not include it in every other epoch. Hence we assume that the attacker only offers bribes for these epochs.10 Note that if a fork that includes the transaction in epoch e ultimately finalizes, then any attestation for the censoring fork will be incorrect on the previous block, the target, and the source (assuming the corresponding checkpoint is justified) in any of the epochs considered. Serenity has a base reward that is calculated in terms of the number of active validators [18]. It uses11 a symbiotic Schelling game under “normal operations,” with the choice between X and Y corresponding to the two forks. However, if too many epochs pass without being finalized, then the incentive structure changes to a simple Schelling game whose penalty includes the “inactivity leak,” an amount proportional to the validator’s remaining balance that increases with the number of epochs that are not finalized. Then, in the most extreme case where all validators accept the bribe in each of the bribed epochs, but the transaction is nevertheless finalized by having checkpoints in two later consecutive epochs justified, the attacker must pay out:
where taking the continuous approximation to the inactivity leak of [23]
with a fixed parameter, and where the final term is the total bribe that must be paid out in the last round. As the last round finalizes and hence converts back to a symbiotic Schelling game, we have12 . In Figure 2, we take a base reward in gwei based on current values as of April 2021 [25], calculated using the formulas of [23], an Initial Validator Balance of 32 ETH, an Epoch Time of 6.4 min, and the (anticipated, long-term) value given in [23].
To the degree that a double spend on a system using Serenity would require two conflicting transactions to be finalized, this is only possible insofar as it leads to a permanent fork of the network. One could consider attacks that attempt to trigger such a split by incentivizing validators to attest in such a way that the inactivity leak eventually results in each fork perceiving the validators on the other forks as being penalized to the point where the remaining validators suffice to finalize [18], [24]. However, as such an attack requires some, but not all, validators voting for each fork, it does not fit into the structures that we have considered in this section.
So to the degree that the specific censorship attacks we consider on the PoW and the PoS versions of Ethereum are comparable, we see in Figure 2 that censorship attacks on the PoS version require smaller budgets at short intervals of time, but substantially larger budgets on longer time scales due to the inactivity leak. On the other hand, the structure of Serenity provides greater resistance to double spending attacks, even removing the types of attacks we consider on PoW as possibilities. Note that it remains possible for an attacker to launch adapted versions of these attacks that might have different budget requirements; see Section 9 and particularly Example 9.1, where we consider some modified versions of the censorship attack that provide a different set of tradeoffs for the attacker.
In the context of performing a p + ε attack on a proof-of-work system, one can compare these deposit sizes to those of other smart contract-enforced attacks described in [26]. Note that in some such applications on proof-of-work systems, particularly those in which a payout is made to “uncle blocks” as in Ethereum, an attacker could reduce her payouts by these amounts, hence reducing the costs of a failed attack in a manner similar to the uncle-mining based attacks presented in [27]. Nonetheless, as uncle block rewards on Ethereum are only made to the extent that the uncle blocks are subsequently referenced in blocks in the main chain [28], and are hence not guaranteed, we have not taken them into account in the budget requirements shown in Figure 2. Note that the quadratic costs manifest themselves in the heuristic version of PoW that we consider with quickly adjusting difficulty in that, even when one accounts for balancing the average block reward against the block production rate to produce some average reward per unit time, there is an additional factor of the number of active miners in the required budgets.
Remark 6.3: Note that in systems such as [29] and [17] where deposits are made in a system token, bribes must also be paid in this token.13 Then, by having quadratic budget requirements in the redistributive case, one can have situations where the budget requirement of a sufficiently high appeal round exceeds the number of total tokens in circulation and the attack is hence impossible. For example, in the Kleros “general court,” where (as of April 2021) jurors stake deposits in PNK tokens, of which the total circulating supply is 524 million [30], an appealable redistributive Schelling game vote with sufficiently many participants would require a bribe budget exceeding the supply.
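A sketch of this supply-cap effect: the per-round requirement below is approximated by the quadratic worst case n·(n·p + (n − 1)·d) suggested by the “lone voice of reason jackpot” discussion, and the reward, deposit, and growth figures are purely illustrative rather than actual Kleros court parameters; only the 524 million circulating supply is taken from the remark above.

    def first_unaffordable_round(p, d, n_first, total_supply, growth=2):
        """First appeal round whose worst-case bribe requirement for a
        redistributive Schelling game exceeds the total token supply, assuming
        the number of voters is multiplied by `growth` at each appeal."""
        i, n = 1, n_first
        while n * (n * p + (n - 1) * d) <= total_supply:
            i, n = i + 1, n * growth
        return i

    print(first_unaffordable_round(p=1_000, d=2_000, n_first=3,
                                   total_supply=524_000_000))

Past this round, the attacker cannot possibly commit to the promised bribes in the system token, independently of her wealth.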
Remark 6.4: The large budget requirements of p + ε attacks on redistributive Schelling games with an appeal system represent an implicit cost to the attacker from the opportunity costs of having her capital locked up during the vote.
We will further see that the large budgets required to bribe participants in a redistributive Schelling game with appeals can be used as a weapon against the attacker. Specifically, we describe a counterattack that is possible in Schelling games where the appeal system is such that a voter in round i can trigger an appeal from the i-th round to the (i+1)-st round by paying f_i. Here the actions available to voters in round i are:
Vote X or vote Y;
Independent of her choice of vote, trigger an appeal or not.
See Section 5 for our model of possible actions taken by the attacker. Then, voters can take the following strategy:
Namely, voters are said to be counterattackers if they vote Y to accept the bribe and then appeal in their voting round. Counterattack 1 depends on the idea that if there is an appeal to a round where the attacker can no longer commit to enough money to pay the required bribes, voters will revert back to the honest choice, X. Then the attacker’s own bribes are turned against her, as voters are willing to pay appeal fees in exchange for some percent chance of ultimately receiving the bribe. Consequently, as we will see below, this counterattack is particularly effective in redistributive Schelling games. See Figure 3 for an illustration of this idea.
Concretely, if an attacker has a given budget B, this imposes limits on the number of appeal rounds for which she can commit to paying out bribes against a given game. For our purposes, voters will not know the attacker’s budget exactly. However, for some of our results, we will model voters as viewing this budget as a random variable B. Namely, a given attacker’s budget is a sample from some fixed, known probability distribution D that captures the variability of budgets seen across the universe of all possible attackers.
Then, we define another random variable L as a function of B by
Namely, L reflects a voter’s probabilistic assessment of the first round in which the attacker will not be capable of locking up the amounts required to continue the attack. These costs to continue the attack include the bribes considered in Proposition 6.1 as well as potentially other costs (such as appeal fees, depending on the appeal fee model).
Then, participants update their beliefs on this distribution as the attacker continues the attack in successive rounds. Specifically, if the attacker continues the attack through the j-th round, voters will estimate her probability of continuing it further to the (j+k)-th round, for k ≥ 1, as the conditional probability Pr[L > j + k | L > j].
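As a minimal sketch of this updating rule (function and parameter names are ours, and we assume that the cumulative cost of sustaining the attack through a given round is a known, nondecreasing function of the round), the voters’ estimate is simply a ratio of tail probabilities of the budget distribution:

    def continuation_probability(survival, cost, j, k):
        """Voters' estimate that an attacker who has sustained the attack through
        round j can also sustain it through round j + k.

        survival -- function b -> Pr[B >= b], the prior on the attacker budget B
        cost     -- function i -> cumulative amount the attacker must lock up to
                    continue the attack through round i (nondecreasing in i)
        """
        # Reaching round j reveals that B >= cost(j); voters condition on this.
        p_reach_j = survival(cost(j))
        if p_reach_j == 0.0:
            return 0.0
        return survival(cost(j + k)) / p_reach_j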
Remark 7.1: If an attacker committed to a large attack budget at the beginning of an attack—for example, by placing it in a smart contract that could only be used to fund bribes—then we could consider this information as simply being included in the voters’ prior beliefs on (namely that voters imagine the attacker as being drawn from some corresponding subpopulation). For this work, we do not consider more sophisticated behavior in how the attacker provides information about her budget, such as by committing an additional amount, beyond what is required of her, in a smart contract during an intermediate round after the attack has begun, see Section 5.
Remark 7.2: One could have potential complications from situations where the attacker has the budget required to pay partial, but not full, bribes in a given round. In this section, we suppose that in any given appeal round the attacker either credibly commits to paying all of the bribes for that round or does not commit to paying any bribes; see Section 5. Indeed, heuristically, one could imagine that each participant in Counterattack 1 places sufficient funds in a smart contract to automatically appeal the following two rounds, avoiding effects from rounds with partial bribes at the expense of increasing the amount that she must lock up. Attackers who spread out their budget further to pay partial bribes over a larger number of rounds are considered in Section 9.
While voters observe L with uncertainty, there are sometimes natural upper bounds that they will have on it. Indeed, if the Schelling game permits a maximum total number of rounds k_max, then L can be at most k_max + 1. Moreover, we define
Then L ≤ L_max. One sometimes has better natural upper bounds on L_max. Particularly, in Schelling games where bribes must be paid in terms of an asset with a limited supply, the quadratic growth of the required bribes seen in Proposition 6.1 can cause them to exceed the total supply, as discussed in Remark 6.3. In this case, there is a round, possibly well below the maximum appeal round, at which it is impossible to continue the attack even though appeal is still possible.
As we will require the same set of assumptions on our parameters in several of our results, we will condense notation by saying:
Definition 7.3: The parameters of the Schelling game are said to be well-designed if
any user can trigger an appeal from the i-th round to the (i+1)-st round by paying f_i;
for all , for some .
for all .
for some .
, where .
Note that the constraints that make up the property of being “well-designed” are generally not very burdensome. Indeed, the designer of a Schelling game based system can ensure that the appeal fees and the numbers of voters per round grow reasonably and can require sufficiently large deposits relative to payoffs, so that these types of constraints hold before, and independently of, any eventual attacks.
Then we see:
Proposition 7.4: Suppose that an attacker with a budget B drawn from a distribution D launches a p + ε attack on a multi-round redistributive Schelling game such as in Definition 2.3 where round-i voters can trigger an appeal from the i-th round to the (i+1)-st round by paying f_i. Suppose:
Voters satisfy *. Moreover, D is common knowledge to voters.
The parameters of the game are well-designed.
The game has a finite maximum number of appeal rounds, and in particular L is finite.
Then there exists a pure strategy equilibrium where some (distinguished) voter executes Counterattack 1 (i.e., takes the bribe and then appeals) in each round until the attacker is no longer capable of locking up sufficient resources to continue the bribe. Hence, in this equilibrium, the honest choice X eventually wins with probability one.
PROOF: We consider a situation where all voters vote Y in any round where the bribe is available, and one (distinguished) voter per round performs Counterattack 1. Hence, this voter pays
to cover any required appeal fee in the i-th round. We will show that this situation is an equilibrium.
Eventually, one arrives at a round (at latest, the maximum appeal round) where the attacker can no longer fund her attack. At this point, X wins. In any round where the attack bribes are available, voting X, whether subsequently appealing or not, is a dominated strategy.
Hence, we need only consider the strategies of voting Y and performing Counterattack 1, or voting Y without performing Counterattack 1. Then, following the payoffs shown in Definition 2.3, participants who vote Y receive −d from the coherence game if X wins, plus the bribe.
For the distinguished voter, if she deviates from the equilibrium by not paying the appeal fees, Y wins and her total payoff is her payoff from Y winning. On the other hand, if she pays the appeal fees and all of the other voters follow the candidate equilibrium, X wins and the distinguished voter receives the bribe less the appeal fees she pays. Hence, as the bribe is large relative to these amounts by the assumption that the parameters are well-designed,
so the distinguished voter does not have an incentive to deviate.
Similarly, we take the perspective of one of the non-distinguished voters. Then, as long as the other participants do not deviate from the candidate equilibrium, X wins regardless of her own choice. Hence, if the non-distinguished voter does not counterattack, she receives the bribe, and if she counterattacks she receives the bribe less any appeal fees she pays. Hence, the non-distinguished voters also do not have an incentive to deviate.
Nonetheless, the equilibrium of Proposition 7.4 seems to be one that voters would be unlikely to fall into. Specifically, coordinating on which voters should be distinguished to pay the appeal fees seems difficult, as while each voter would prefer that some voter perform Counterattack 1, they would prefer that it not be themselves. Here equilibrium formation is particularly difficult as one does not necessarily have repeated games where the same groups of voters are offered a bribe again. As users have the additional infrastructure of smart contracts at their disposal, one might propose that the voters in each round place a deposit in a smart contract that chooses one of them and obliges her to pay the appeal fees; however, this merely displaces the issue as then voters prefer that some voter(s) join this smart contract coordination while preferring it to not be themselves.
In fact, in any given appeal round, our situation is similar to a multiplayer version of the anti-coordination game of “Chicken” [31]; Proposition 7.4 essentially corresponds to the pure strategy equilibria observed in that game. In the following result we see that there can also exist a mixed strategy equilibrium where voters (of a given appeal round) act symmetrically, each paying the required appeal fees with some positive probability, mirroring the mixed equilibria observed in “Chicken.” Due to its symmetry, this equilibrium seems somewhat more natural and more likely to arise in practice than that of Proposition 7.4. However, in order to show that voters will estimate the probability of the counterattack continuing a sufficient number of rounds beyond their own as being high enough that the counterattack is likely to eventually succeed and that it is worthwhile for them to counterattack themselves, we will require additional assumptions on the budget of the attacker.
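As a point of comparison, the symmetric mixed equilibrium of a stylized n-player anti-coordination game of this type is easy to compute. The sketch below uses a “volunteer’s dilemma” simplification rather than the exact payoff table of Figure 5: each player can pay a cost c to guarantee a common benefit b > c, and in the symmetric equilibrium each volunteers with the probability that makes her indifferent between volunteering and free riding.

    def volunteer_probability(n, c, b):
        """Symmetric mixed equilibrium of an n-player volunteer's dilemma.

        Indifference between volunteering (pay c, benefit b guaranteed) and not:
            b - c = b * (1 - (1 - q)**(n - 1)),
        which gives (1 - q)**(n - 1) = c / b.
        """
        assert n >= 2 and 0 < c < b
        return 1.0 - (c / b) ** (1.0 / (n - 1))

    def prob_nobody_volunteers(n, c, b):
        """Probability that no player volunteers in that equilibrium."""
        return (1.0 - volunteer_probability(n, c, b)) ** n

In our setting, “volunteering” loosely corresponds to funding an appeal and the “benefit” to pushing the game into a round that the attacker can no longer afford; the probability that at least one voter volunteers in each round until the attacker’s budget runs out is what underlies the explicit lower bounds on the attack failing that follow.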
Theorem 7.5: Suppose that an attacker with a budget B drawn from a distribution D launches a p + ε attack on a multi-round redistributive Schelling game such as in Definition 2.3 where round-i voters can trigger an appeal from the i-th round to the (i+1)-st round by paying f_i. Suppose:
Voters satisfy *. Moreover, D is common knowledge to voters.
The parameters of the game are well-designed.
The game has a finite maximum number of appeal rounds, and in particular L is finite.
The distribution D is such that, for all relevant rounds, the tail bound given by Inequality 2 holds.
Then there exists a (potentially mixed strategy) equilibrium where each voter in round i funds appeals following Counterattack 1 with some probability q_i, and
where
Here if , for the sake of computing the supremum, we take the convention that . However, as , there is some value of for which this supremum is finite. Indeed, for sufficiently large relative to , , , and the distribution , Inequality 3 is positive. We will explore this bound in the case of specific distributions that satisfy Inequality 2 in Corollary 7.7.
Remark 7.6: While this counterattack requires no altruism on the part of its participants, there are several practical limitations to this approach. The amounts that the counterattackers must pay in appeal fees become quite large (though much smaller than the budget required by the attacker as in Proposition 6.1), and may exceed the liquidity or risk thresholds of the counterattackers. Also, an important element of this scheme is that counterattackers are incentivized to appeal because their potential reward is very large relative to the chance that their individual participation changes the result. This depends on the “lone voice of reason jackpot” that comes from all voters in these rounds voting Y. In practice, a substantial number of voters will vote X (and not appeal) either because they are altruistic or because they are unaware of the attack. To the degree that the percentage of such voters is significant but less than half, this actually works against the counterattack.
Before we continue with the proof of this result, we see that the condition of Theorem 7.5 given by Inequality 2 is satisfied by reasonable models of the attacker’s budget. Indeed, one expects that the size of the budget for an attack roughly corresponds to the attacker’s overall wealth. Then, standard, classical models for distributions of wealth are given by Pareto distributions [32]. More recent work [33] suggests that wealth is distributed exponentially for less wealthy segments of the population and then distributed by a Pareto distribution in the tail of wealthier individuals. As the attack requires large budgets (see Proposition 6.1), we would expect attackers to already be drawn from a wealthier subpopulation of the general population of users; hence, in the following corollary and example, we will model attacker wealth by a Pareto distribution for simplicity. Nonetheless, as we have assumed a maximum appeal round, k_max, at which the attacker can possibly commit to paying the bribe, a model for the attacker budget with an infinite tail is inappropriate. However, as such upper bounds often arise in practical applications, truncated versions of the Pareto distribution have also been widely applied [34]. Then, we consider truncated Pareto distributions:
for budgets in the support of the distribution. We see that such distributions satisfy Inequality 2. Note that the lower bound we obtain does not depend on the upper truncation point, so one can approximate a standard Pareto distribution arbitrarily well.
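As a sketch of how such a prior feeds into the updating rule above, the survival function of a truncated Pareto distribution can be written out directly; the parameter names b_min, b_max, and alpha, as well as the numbers in the usage lines, are ours and purely illustrative.

    def truncated_pareto_survival(b_min, b_max, alpha):
        """Survival function b -> Pr[B >= b] of a Pareto(alpha) distribution with
        support truncated to the interval [b_min, b_max]."""
        tail_at_max = (b_min / b_max) ** alpha
        def survival(b):
            if b <= b_min:
                return 1.0
            if b >= b_max:
                return 0.0
            return ((b_min / b) ** alpha - tail_at_max) / (1.0 - tail_at_max)
        return survival

    # Illustrative use with the continuation_probability sketch above and a
    # stand-in quadratic cost schedule for the per-round bribe requirements.
    survival = truncated_pareto_survival(b_min=100.0, b_max=1e7, alpha=2.0)
    cost = lambda i: 100.0 * (2 ** i) ** 2
    print(continuation_probability(survival, cost, j=3, k=2))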
Corollary 7.7: Let D be a truncated Pareto distribution with support between two given values and with a given exponent. Then suppose that an attacker with a budget B drawn from the distribution D launches a p + ε attack on a multi-round redistributive Schelling game such as in Definition 2.3 where round-i voters can trigger an appeal from the i-th round to the (i+1)-st round by paying f_i. Suppose:
Voters satisfy *. Moreover, D is common knowledge to voters.
The parameters of the game are well-designed.
, and for all (i.e., the number of voters grows exactly exponentially).
The game has a finite maximum number of appeal rounds, and in particular L is finite.
There exist constants such that the additional budget to continue the attack, from the st round to the th round, is exactly for all .
Then there exists a (potentially mixed strategy) equilibrium where each voter in round i funds appeals following Counterattack 1 with some probability q_i, and
where we compute the explicit quantity
Moreover, if these constants are chosen so that the additional budget of continuing the attack in the i-th round corresponds exactly to the amounts required for the bribes seen in Proposition 6.1, then, by minimizing the bound over all admissible choices of the remaining parameter, we see
where we compute the explicit quantity
The proof of this corollary depends on showing that the tail of the distribution decreases quickly enough that Inequality 2 holds in this case. Then one computes the explicit lower bound given by Inequality 3. This is a rather straightforward calculus argument, and is included in Appendix A for completeness.
Example 7.8: Take the attacker budget to be distributed by a truncated Pareto distribution with exponent given by the value observed for the UK wealth distribution in [33], the cost of continuing the attack from the (i−1)-st to the i-th round to be exactly the amount that must be locked up in bribes according to Proposition 6.1, and suitable choices of the remaining parameters. Then, one can use Corollary 7.7 to see that, in the mixed strategy equilibrium, X wins and the attack fails at least a fixed, explicitly computable percentage of the time. This implies a corresponding non-zero lower bound on the expected cost of the attack. Note that this lower bound holds regardless of the choices available to the attacker. Generally, there is a trade-off that can be made between decreasing the size of the deposit relative to the payout and increasing the lower bound on the rate at which attacks fail; see Figure 4.
We will now develop the tools necessary to prove Theorem 7.5, fixing the following notation that we will use towards this proof over the remainder of this section. We will show the existence of probabilities q_i such that each voter in round i choosing to trigger an appeal with probability q_i gives rise to the equilibrium discussed in the statement of Theorem 7.5. For given choices of the q_i, we denote by
the number of participants who choose to appeal in the i-th round. Similarly, fixing a distinguished participant v, we denote by
the number of participants other than v who choose to appeal in the i-th appeal round.
Moreover, again from the perspective of v, who considers counterattacking, namely triggering an appeal, in round i, we denote
etc.
In any round where the attack bribes are available, voting X, whether followed by counterattacking or not, is a dominated strategy, so it can be excluded. Hence, we can calculate the payoff table for v given by appealing in round i versus not counterattacking, under the assumption that there are zero votes for X; see Figure 5.
Note the appeal fee terms in the payoff table, which are given by the partial appeal payments of Equation 1. These values implicitly depend on the q_i; however, by our assumptions on our appeal fee models this dependence is continuous. Also note that we have .
We begin with a lemma that explores the probability that at least one voter appeals in a given round. In all of the following lemmas we assume that voters satisfy * and that D is common knowledge to voters.
Lemma 7.9: Let , . Suppose that are well-designed, and
holds for all indices in the relevant range. Then there exists an equilibrium where each round-i voter triggers an appeal independently with probability q_i for some q_i > 0, with
PROOF: Take a voter v in round i. We calculate the expected returns from v’s perspective using the payoffs of Figure 5:
and
The system is structured so that a single counterattacker in the i-th round is sufficient for the game to progress to the (i+1)-st round. Then
In particular, our assumed lower bound for gives us .
Moreover,
We define
Then, if v believes that no other participant will counterattack, she estimates
where the last inequality uses
Note that is independent of for all (though it depends on the