# Competitive Equilibria Between Staking and On-chain Lending

## Abstract

# 1. Introduction

### Transitioning Security from PoW to PoS

# 2. Protocol Assumptions

**Assumption 1** (Cryptographic)

**Assumption 2** (Distributed Systems)

**Assumption 3** (Identity)

**Assumption 4** (Time)

**Assumption 5** (No External Markets)

**Assumption 6** (Money Supply is Deterministic)

**Assumption 7** (No Transaction Fee Revenue)

**Assumption 8** (PoS)

# 3. Model

## 3.1 PoS Model

## 3.2 Lending Model

### 3.2.1 Two-State Lending Model

### 3.2.2 Three-State Model

**Assumption 9** (Borrowing demand is independent of staking and lending)

## 3.3 State Transition

**Assumption 10** (Martingale Ordering)

### 3.3.1 Optimal Portfolio Construction

**Assumption 11**

# 4. Formal Properties

**Assumption 12** (Bounded Size)

**Assumption 13** (Number of Agents)

#### Claim 1

#### Claim 2

#### Claim 3

#### Claim 4

#### Claim 5

#### Agent-level Behavior

#### Claim 6

## 4.1 Deflationary Monetary Policy

## 4.2 Inflationary Monetary Policy

### 4.2.1 Polynomial Inflation

### 4.2.2 Exponential Inflation

## 4.3 Conclusions

# 5. Agent-Based Simulations

## 5.1 Initialization

## 5.2 Simulation Loop

## 5.3 Borrowing Demand Distributions

## 5.4 Results

Published on Apr 05, 2021


Proof of Stake (PoS) is a burgeoning Sybil resistance mechanism that aims to have a digital asset (“token”) serve as security collateral in crypto networks. However, PoS has so far eluded a comprehensive threat model that encompasses both Byzantine attacks from distributed systems and financial attacks that arise from the dual usage of the token as a means of payment and a Sybil resistance mechanism. In particular, the existence of derivatives markets makes malicious coordination among validators easier to execute than in Proof of Work systems. We demonstrate that it is also possible for on-chain lending smart contracts to cannibalize network security in PoS systems. When the yield provided by these contracts is more attractive than the inflation rate provided from staking, stakers will tend to remove their staked tokens and lend them out, thus reducing network security. In this paper, we provide a simple stochastic model that describes how rational validators with varying risk preferences react to changes in staking and lending returns. For a particular configuration of this model, we provide a formal proof of a phase transition between equilibria in which tokens are predominantly staked and those in which they are predominantly lent. We further validate this emergent adversarial behavior (*e.g.*, reduced staked token supply) with agent-based simulations that sample transitions under more realistic conditions. Our results illustrate that rational, non-adversarial actors can dramatically reduce PoS network security if block rewards are not calibrated appropriately above the expected yields of on-chain lending.

There is currently an intense effort to improve the scalability of blockchains and other decentralized value systems known as crypto networks. These networks use cryptographic proofs and game-theoretic constructions to provide tamper-resistant updates to a global ledger. While there are a variety of research and engineering challenges in setting up these systems, one of the major bottlenecks to network throughput is the cost of Sybil resistance mechanisms within a decentralized consensus protocol. Proof of Work (PoW) networks achieve Sybil resistance by requiring consensus-participating nodes to provably burn energy to compute many iterations of a particular cryptographic hash function. PoW, while effective and permissionless, expends a large amount of natural resources and has resulted in concentrated ownership of the underlying digital assets (*e.g.,* Bitcoin). Proof of Stake (PoS) was first introduced as an alternative in a 2012 BitcoinTalk post [1] that showed the equivalence between a PoW miner who could immediately reinvest her block rewards into hash power within the network and a PoS validator who can reinvest their validation earnings into network security. PoS works by instead allowing users to ‘lock’ a digital asset, known as a token, into a smart contract that provides them with token-denominated returns in exchange for validating transactions and providing network security. Using shared, verifiable randomness, all network participants can use a multi-party computation protocol to sample the distribution of asset ownership locked into the contract, and choose participant(s) who receive the block reward emitted by the network. This is analogous to how PoW can be thought of as a protocol that samples the distribution of hash power to choose the next block producer [2]. One of the main benefits of PoS is that one does not have to commit a costly natural resource to participate in the network. 
Instead, a purely digital asset is used as collateral for the network and the network can control its supply to provide the desired properties. For an introductory background on PoS protocols and their complex security models, please see [3][4][5][6][7].

In this paper, we show that these purported benefits do not come for free. Because PoS algorithms inherently connect a decentralized network’s security to the capital cost of a digital asset, PoS protocols tie their security to the cost of capital rather than to the cost of a natural resource. Volatility in the cost of capital, which is usually higher than that of natural resources [8], can have adverse effects on capital commitments to PoS networks. The main result we show is that alternative sources of yield can drive staking-token capital allocators to collectively drain a network’s security, akin to a bank run. In particular, we find that PoS in deflationary systems is unstable and unlikely to work, and that for more reasonable inflation rates, the effectiveness of PoS depends on the relationship between staking and lending rates. This relationship should inform the further design of PoS systems, especially as a large number of networks are launching in 2020.

The move from PoW to PoS presents a plethora of challenges. In a PoS system, the network relies on participants who are staked in the system to stay online in order to achieve liveness. In practice, this is implemented by slashing participants—redistributing or burning a participant’s stake that is committed for validation rewards when they perform a malicious act—who go offline or miss a block that they are supposed to produce. Moreover, there are attacks that are unique to PoS, such as the *nothing-at-stake* and *long-range* attacks [9][10]. These attacks are impossible in PoW because they exploit the near-zero cost of reusing a digital asset across competing forks, in contrast to the substantial cost of natural resources [11]. Lastly, as the asset used for staking is also the medium of exchange, a malicious validator need only aggregate 33% of the tokens to perform a Byzantine attack.1

However, if there exist physically settled futures contracts on PoS tokens, then it is possible for an attacker to buy futures that let staking participants sell their staked tokens in the future. This attacker can aggregate this stake and, upon reaching an attack threshold, begin to perform a double spend or another malicious attack [12]. As these derivatives can be settled off-chain (*e.g.*, using a centralized exchange like BitMEX or Deribit), monitoring of this type of attack can be difficult. In PoW, one would instead need to aggregate the hash power needed to produce 50% of the network’s hashrate, which is a much harder task that relies on aggregating data centers, specialized hardware, cheap electricity, and a favorable country of residence. Moreover, PoS systems are vulnerable to *financial attacks*, or attacks that utilize the fact that PoS tokens serve as both the instrument of security and the medium of exchange. Such attacks are often feasible due to emergent and unexpected coordination between participants in a PoS network and an external market.

Given the vulnerability of PoS to cartel-like behavior that can be coordinated via an external market, one might naturally ask if there are also any endogenous financial risks on PoS protocols that support smart contracts. Recently, there has been an uptick in interest in Decentralized Finance (‘DeFi’), which uses smart contracts to implement standard financial primitives in a purely on-chain manner [13]. These primitives, such as exchanges [14], lending [15], and stable reserve currencies [16][17][18], decentralize banking functions by creating incentives that encourage rational participants to receive arbitrage profits for maintaining the system’s security, while also meting out financial punishments for misbehavior. Instead of explicitly punishing fraud via legal recourse, these protocols use purely financial modes of recourse to encourage network participation and growth. One of the biggest sectors within DeFi is the on-chain lending market, in which the largest single platform is Compound [15], an Ethereum smart contract that allows users to lend and borrow assets that conform to the ERC-20 token standard. The Compound smart contract has held up to US $175 million of assets, has had over 40% of the float of the Dai stablecoin [18], and saw double digit asset growth during 2019. Given that Ethereum is likely to transition to PoS soon, one must evaluate: are there any financial attacks against chain security that result from an on-chain lending system? A simple Gedankenexperiment (a thought-experiment) for answering this question from the view of on-chain lending might be of the form:

> Suppose that we assume that validators are rational financial agents. Would they not simply move their assets between staking and on-chain lending, depending on which has a higher yield?

In particular, it is clear that there is a relationship between the price and availability of capital and participants’ willingness to stake, as stakers have to earn more than a risk-adjusted market rate on their staked capital. However, unlike Proof of Work, there are no physical limits that prevent validators in staking networks from rapidly moving their assets into higher-yielding activities. On-chain lending, such as Compound, makes this particularly efficient, as validators simply have to post a single Ethereum transaction to unbond their tokens and can begin earning yield within a single block time. High lending yields would likely lead to a reduction in network security (*e.g.*, a financial attack), as these yields encourage rational actors to unexpectedly coordinate and reduce network security by optimizing for financial gain.

We answer these questions with a stochastic model that can be theoretically solved in certain situations and is easily simulated via Monte Carlo methods. Our aim is not to model realistic network parameters perfectly, but rather to show that even in the most simplified model of agents optimizing portfolios composed of staked and lent tokens, on-chain lending can cause dramatic volatility in network security. In particular, we construct an agent-based model, where each network participant is represented via an agent with a utility and decision function. Agent-based modeling has been previously used for modeling censorship properties in sharded PoW chains [19] and can serve as a conduit for comparing theoretical results to empirical data in a statistically rigorous manner. Our model for rational network agents involves having each participant view their total token wealth as a two-component portfolio of tokens staked and tokens lent. We assume that the agents have different risk preferences and locally optimize their token portfolios using mean-variance optimization [20], which allows agents to adjust their portfolios based on observed returns and risk preferences. Figure 2 illustrates that there is a phase transition in inflation rates that leads to tokens going from being predominantly lent in deflationary regimes to predominantly staked in inflationary regimes. Moreover, Figure 3 shows that the spread between borrowing and lending rates is significantly worse for deflationary PoS assets, further confirming the existence of a phase transition. We explicitly state our simplifying assumptions in Section 2 and note that agent-based modeling allows one to relax these assumptions and analyze how these results carry over to real PoS protocols, such as Cosmos [21] and Tezos [22]. In Section 4, we prove properties of this model that match the observed phase transition from Figure 2 and Figure 5.
A single sample path is depicted in Figure 1, which visually shows that agents participating in on-chain lending and staking can cause a “flippening,” where there are more assets lent out than staked. The observed volatility in the amount of assets staked is tantamount to dramatically reducing the cost of taking over a staking network, which implies that the security model of PoS networks needs to account for attacks that stem from a reduced cost of capital. The combination of theoretical and simulation-based results demonstrates that the threat model for PoS networks needs to be expanded to include financial attacks that result from yield competition with on-chain financial products.

For reference, the mathematical notation used in this paper is documented in Appendix A.

We will make several simplifying assumptions. These assumptions hold for all models analyzed in this paper, and they focus on properties of the underlying distributed ledger rather than on the economic behavior of participants, which is described in the following section.

**Assumption 1** (Cryptographic). All sampling processes use true randomness and not pseudorandomness.

This differs from many standard cryptographic threat models that assume pseudorandomness and provide an $\epsilon$-approximation to true random sampling [23]. In the Appendix, we use this property to ensure that our PoS algorithm is non-anticipating (*e.g.*, adapted to a suitably chosen filtration), allowing for conditional probabilities to be computed without having to aggregate the $\epsilon$ error terms. One can remove this assumption, in a similar manner to [24], with a significant increase in proof complexity.

**Assumption 2** (Distributed Systems). All communication between participants is synchronous.

This can be relaxed to partial synchrony, with an increase in complexity in the proofs and simulations described in the following section.

**Assumption 3** (Identity). All pseudonymous identities are known by all participants, and the number of participants (measured by unique addresses), $n$, is fixed.

**Assumption 4** (Time). The entire system will update at discrete time intervals, with each tick to be thought of as a block update.

While participants will likely execute strategies in continuous time, assuming discrete time evolution ensures that participants only respond to event updates that are received on-chain. One can also remove this assumption at the cost of increased variance using ‘Poissonization’ techniques [25] and ‘sleepy consensus’ assumptions [26].

**Assumption 5** (No External Markets). Only on-chain lending, borrowing, and staking will be considered.

We are ignoring off-chain lending (*e.g.*, OTC desks or lending businesses, such as Galaxy and Tagomi) and are assuming that there always exists block space for any participant’s action to succeed. We are omitting this to reduce the complexity of our model, as off-chain lending has varied pricing models and term structures.

**Assumption 6** (Money Supply is Deterministic). The block reward at time $t$, $R_t$, and the money supply at time $t$, $S_t = \sum_{s=0}^{t} R_s$, are deterministic and known to all participants.

In particular, we avoid assuming that there exist governance mechanisms for changing the block reward, which have been proposed for protocols such as Algorand [27] and Celo [16].

**Assumption 7** (No Transaction Fee Revenue). The only revenue that validators receive from staking comes from the block reward.

This is a model assumption that removes the complexity of modeling crypto network fee markets, which currently have unstable dynamics and are poorly understood.

**Assumption 8** (PoS). The following properties hold for our idealized PoS algorithm:

- *No compounding*: The PoS mechanism uses epoch-based sampling and does not immediately reinvest block rewards. This is to avoid the concentration behavior described in [28]. One can relax our model to handle Pólya urn processes, at the cost of significantly worse variance.
- *Single validator per block*: To simplify the model, we avoid using committees [27][29][16][22][21][30][31][7] and verifiers [32][33], as they add more variance and make both formal and simulation methods more difficult.
- *Constant slashing probability*: We assume that each validator has a fixed probability $p_{\text{slash}}$ of being slashed on a block that they produce. This is simplistic, as it assumes that all validators have the same chance of being slashed, regardless of stake and validation history. However, in practice, we have seen very few live slashes, and this model permits simpler formal and simulation analysis.
- *Sharded state is synchronously traversed*: Any sharded state in our PoS blockchain is read synchronously, and we assume linearizability in our blockchain. This can be relaxed to non-linearizable protocols like Avalanche [29], but will depend on the scoring function used for each branch.
- *No unbonding period*: We do not assume that the PoS protocol has an unbonding period like that of Cosmos [21] or Tezos [22].

Let $S_t$ be the total outstanding token supply of a PoS protocol at time $t$ and let $S_t = \zeta_t + \ell_t$, where $\zeta_t$ is the number of staked tokens and $\ell_t$ is the number of lent tokens. We use a simple model of a PoS system that samples a single block producer from a discrete time-series of stake distributions, denoted by $\pi_{\text{stake}}(t) \in \zeta_t \Delta^n$, where $\Delta^n$ is the $n$-dimensional probability simplex (*see* Appendix A). The main input parameters are:

- $\pi_{\text{stake}}(0)$: Initial asset distribution
- $R_t$: Staking block reward at block height $t$
- $S(i, \pi_{\text{stake}})$: Slash that validator $i$ receives if the staking distribution is $\pi_{\text{stake}}$

The formal specification of the algorithm can be found in Algorithm 1 in Appendix B. The algorithm state includes the current stake distribution, the current epoch’s reward set, the current epoch’s slash set, and the current block time. At a high-level, for each block, we select a validator who should receive a block reward and decide if they are to be slashed by flipping a coin with probability $p_{\text{slash}}.$ If they are slashed, we add their *id* and the amount that they are to be slashed to the current epoch’s slash set. Otherwise, we add them to the block reward set. The algorithm updates the stake distribution on a per-epoch basis.
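The per-block sampling and slashing logic described above can be sketched in a few lines. The sketch below is illustrative, not a transcription of Algorithm 1: the function name `run_epoch`, the proportional `slash_frac` penalty, and the parameter values are assumptions made for the example.

```python
import random

def run_epoch(stake, reward, p_slash, epoch_len, slash_frac=0.01, rng=random):
    """One epoch of an idealized PoS chain.

    stake: dict mapping validator id -> staked tokens (pi_stake).
    Rewards and slashes accumulate in per-epoch sets and are applied
    only at the epoch boundary (no intra-epoch compounding).
    """
    rewards, slashes = {}, {}
    for _ in range(epoch_len):
        # Sample a single block producer proportionally to stake.
        producer = rng.choices(list(stake), weights=list(stake.values()))[0]
        if rng.random() < p_slash:
            # Slashed: record a penalty proportional to the producer's stake
            # (an illustrative slashing rule, not the paper's S(i, pi_stake)).
            slashes[producer] = slashes.get(producer, 0.0) + slash_frac * stake[producer]
        else:
            rewards[producer] = rewards.get(producer, 0.0) + reward
    # Apply the epoch's reward and slash sets in one batch update.
    for v in stake:
        stake[v] += rewards.get(v, 0.0) - slashes.get(v, 0.0)
    return stake

stake = run_epoch({"a": 60.0, "b": 40.0}, reward=1.0, p_slash=0.05,
                  epoch_len=100, rng=random.Random(0))
```

Because the stake distribution is only refreshed at epoch boundaries, a validator's sampling weight is fixed within an epoch, matching the no-compounding property above.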

We will study two models for lending, one involving an explicit model for borrowing demand and one that is implicit. The implicit model, which is simpler, provides a prototype for constant borrowing demand and is amenable to theoretical results. On the other hand, the model involving an explicit description of borrowing demand is more realistic and amenable to being fit by historical data.

In the two-state lending model, we track the time evolution of two token distributions, $\pi_{\text{stake}}(t), \pi_{\text{lend}}(t) \in S_t \Delta^n,$ which respectively represent the distribution of staked tokens and those locked in a lending contract. The $i$th component of the stake distribution, $\pi_{\text{stake}}(t)_i,$ corresponds to the amount of tokens that the $i$th agent has staked. Each agent has a wealth $W_i(t)$ at time $t$ that is equal to the sum of their portfolio of staked and lent tokens, *i.e.*, $W_i(t) = \pi_{\text{stake}}(t)_i + \pi_{\text{lend}}(t)_i.$ By definition, the total lending supply is equal to the sum of all lent portfolios, *i.e.*, $\ell_t = \Vert \pi_{\text{lend}}(t) \Vert_1,$ and the total money supply is equal to the sum of all portfolios, *i.e.*, $S_t = \Vert \pi_{\text{stake}}(t)\Vert_1 + \Vert \pi_{\text{lend}}(t) \Vert_1.$ At each time step, agents update their portfolios based on returns accrued from the previous time step and, after portfolios are updated, the lending rate, $\gamma_t,$ is updated based on the total amount lent. This means that we are making two implicit assumptions:

- *Constant relative borrowing demand*: We are assuming that the ratio of borrowing demand (represented via a quantity of tokens) to the total token supply stays constant, as the rate only depends on the lending and staking supplies. Formally, this means that the demand at time $t$ is equal to $kS_t.$
- *Flows are the only determining factor*: Participants who move tokens from staking to lending or vice versa are the only causes of changes to the lending rate.

We draw inspiration from Compound [15], which provides simple formulas for the borrowing rate, $\beta_t,$ and the lending rate, $\gamma_t.$ The Compound model computes a utilization rate $U_t$ at block height $t,$ which is the ratio of the borrowing demand to the token supply locked in the contract, and uses it to update $\beta_t$ and $\gamma_t.$ Mathematically, the utilization rate is defined as

$U_t = \frac{kS_t}{\ell_t+kS_t}$

We compute the borrow and lend rates using the following formulas, where $\beta_0, \beta_1 \in (0,1)$ are interest-rate parameters and $\gamma_0 \in (0,1)$ is a measure of the spread between lending and borrowing (*i.e.*, $1-\gamma_0$ is the relative spread).

$\beta_t = U_t(\beta_0+\beta_1U_t)$ (1)

$\gamma_t = (1-\gamma_0)\beta_t$ (2)

For reference, the Compound V2 contract uses the values $\beta_0 = 5\%$ and $\beta_1 = 45\%.$ As depicted in Figure 2, there can be an enormous amount of volatility in the fraction of the token supply that is lent, $\frac{\ell_t}{S_t}.$ In Section 4, we prove tail bounds on the inflows and outflows of lent tokens over a time step, *e.g.*, $\mathsf{Pr}[|\ell_t - \ell_{t-1}|> \epsilon S_t],$ that explicitly depend on the block reward, $R_t,$ and the interest rate parameters. These bounds suggest that even for the overly simplistic setting of the two-state model, PoS protocols need to carefully choose their block rewards if they desire to have a large fraction of the outstanding token supply staked at all times.2
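Equations 1 and 2 are easy to sanity-check numerically. The sketch below uses the Compound V2 parameters quoted above ($\beta_0 = 5\%$, $\beta_1 = 45\%$); the spread value $\gamma_0 = 10\%$ is an illustrative assumption, not a value from the paper.

```python
def utilization(lent, borrow_demand):
    """Two-state utilization: U_t = kS_t / (ell_t + kS_t), where
    borrow_demand plays the role of the constant relative demand kS_t."""
    return borrow_demand / (lent + borrow_demand)

def borrow_rate(U, beta0=0.05, beta1=0.45):
    """Equation 1: beta_t = U_t * (beta0 + beta1 * U_t)."""
    return U * (beta0 + beta1 * U)

def lend_rate(U, gamma0=0.10, beta0=0.05, beta1=0.45):
    """Equation 2: gamma_t = (1 - gamma0) * beta_t."""
    return (1.0 - gamma0) * borrow_rate(U, beta0, beta1)

# Example: 100 tokens lent against demand of 400 gives U_t = 400/500 = 0.8.
U = utilization(lent=100.0, borrow_demand=400.0)
```

Note how the quadratic term $\beta_1 U_t$ makes the borrowing rate climb steeply as lent supply drains, which is the mechanism that pulls stakers back toward lending.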

Instead of assuming that there is constant relative borrowing demand, we can relax this by specifying an additional distribution, $\pi_{\text{borrow}}(t),$ such that $\pi_{\text{token}}= \pi_{\text{stake}}+ \pi_{\text{lend}}+ \pi_{\text{borrow}}$ and $S_t = \zeta_t + \ell_t + \xi_t,$ where $\xi_t = \Vert \pi_{\text{borrow}}(t)\Vert_1$ is the total amount borrowed at time $t.$ In this world, the utilization ratio is now defined as:

$U_t = \frac{\xi_t}{\ell_t + \xi_t}$

Formally analyzing this model is difficult because we have to explicitly model the borrowing demand distribution and disentangle how it couples to each participant’s local model of risk, which is defined in the next section. In Section 5, we simulate this model with a variety of different borrowing demand distributions, but all formal proofs that follow only analyze the two-state lending model. Since the pool of borrowers (*e.g.*, arbitrageurs) is often disjoint from the pool of stakers and lenders, we will make the following assumption:

**Assumption 9** (Borrowing demand is independent of staking and lending). The borrowing demand distribution $\pi_{\text{borrow}}$ is probabilistically independent of the lending and staking distributions.
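Under this independence assumption, a simulation can draw the borrowed supply from any distribution and plug it into the three-state utilization ratio. A minimal sketch, where the lognormal demand draw and its parameters are purely illustrative:

```python
import random

def three_state_utilization(lent, borrowed):
    """Three-state model: U_t = xi_t / (ell_t + xi_t)."""
    return borrowed / (lent + borrowed)

rng = random.Random(0)
# Assumption 9: sample borrowing demand independently of staking/lending.
borrowed = rng.lognormvariate(4.0, 0.5)  # illustrative demand distribution
U = three_state_utilization(lent=300.0, borrowed=borrowed)
```

Swapping in a different demand distribution only changes the `borrowed` draw; the rate formulas downstream are untouched.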

The final task needed to completely specify this model is to define the state transition rule that sends $\pi_{\text{lend}}(t), \pi_{\text{borrow}}(t),$ $\pi_{\text{token}}(t)$ to $\pi_{\text{lend}}(t+1), \pi_{\text{borrow}}(t+1), \pi_{\text{token}}(t+1).$ As per Assumption 9, the evolution of $\pi_{\text{borrow}}(t)$ is independent of staking and lending and will be specified separately in Section 5. We therefore need to specify state transition rules on a per-agent basis, where each agent’s state is their token portfolio $(\pi_{\text{stake}}(t)_i, \pi_{\text{lend}}(t)_i).$ Traditionally, the strategy space of rational actors is described via an expected utility function that an agent aims to maximize by taking various allowable actions. Before specifying the strategy space that we will sample, let us consider a few examples to motivate the need for agents who have varying risk preferences. If, at time $t,$ staking is returning more than lending and every agent moves their entire portfolio from lending to staking, then we will observe a correlated spike in lending rates: $\ell_t$ will go to zero and $U_t$ will go to $1.$ Moreover, the relative staking reward to any agent will decrease, as the staked supply will increase by the amount of tokens that flow from lending to staking, and the expected return on an epoch for an agent is their staked tokens divided by the staked supply, $\frac{\pi_{\text{stake}}(t)_i}{S_t - \ell_t}.$ Thus the greedy strategy of moving all of one’s assets to the higher-yielding activity, causing drastic swings in the relative yields of staking and lending, is unstable and does not accurately represent reality, where token holders have differing risk preferences and will not immediately move their entire portfolio from staking to lending (or vice versa). Furthermore, cryptocurrency holders are often looking for returns that are multiples of their initial investment and have a long time-preference [34].
In our evolution of each agent’s portfolio, we also assume that an agent making a decision at time $t$ can only use the information about all portfolios up to time $t,$ $\{\pi_i(t) : i \in [n]\},$ and the implied rate $\gamma_t.$ Since we are dealing with on-chain lending only, this assumption says that players cannot use strategies that look into the future and that all agent portfolios are public. In order to capture strategies that are independent of front-running and latency arbitrage, we will make the following assumption:

**Assumption 10** (Martingale Ordering). There exists an $[n]$-valued martingale $Z_t$ that chooses the ordering in which participants are allowed to update their portfolios at time $t$ to those at time $t+1.$

Under this assumption, agents receive no advantage in expected returns by trying to predict when their strategy is executed (*e.g.*, is agent 1’s strategy executed before agent 2’s strategy because agent 1 has more staked than agent 2?). This assumption is reasonable as our goal is to figure out if rational, but non-Byzantine, agents will cause PoS network security to decrease when on-chain lending activity is sizeable.

Mean-variance methods, pioneered by Markowitz’s Nobel Prize-winning work on portfolio theory [20], provide a way for rational traders of risky assets to construct portfolios that trade off individual preferences for maximizing returns against risk minimization. These methods, which underpin trillions of dollars of passive portfolios and statistical arbitrage strategies, provide a simple, easy-to-solve model that involves two parameters for constructing portfolios of $n$ assets: an expected return vector $\mu \in \mathbb{R}^n$ and a positive-definite covariance matrix $\Sigma \in S_+^{n\times n}.$ Given these parameters, one solves a strongly convex program that computes the fraction of an agent’s wealth that should be allocated to each asset while ensuring that the allocations sum to 1 and each entry is nonnegative. In particular, the seminal work of Markowitz aimed to optimize the quadratic form $f : \mathbb{R}^n \times \mathbb{R}^n \times S_+^{n\times n} \times \mathbb{R}\rightarrow \mathbb{R},\; f(w, \mu, \Sigma, \lambda) = w^T \Sigma w - \lambda \mu^T w,$ where $\lambda$ is a parameter that controls the riskiness of the output portfolio and $w$ is the portfolio allocation. As $\lambda$ is varied, the *efficient frontier* of admissible portfolios is defined as the surface $S(\mu, \Sigma) = \{w \in \mathbb{R}^n : \exists \lambda \text{ such that } w \in \mathop{\mathrm{arg\,min}}_w f(w, \mu, \Sigma, \lambda)\}.$ The original work of Markowitz [20] focused on the single-period allocation problem, where an investor aims to find the optimal portfolio over a single time period, which corresponds to assuming that $\mu$ and $\Sigma$ do not change over time.
Further work on multiple-period [35] and continuous-time methods [36] for mean-variance optimization allows $\mu$ and $\Sigma$ to vary as functions of time, with the continuous-time methodology drawing $\mu$ from an Itô process, such as a solution to the Black-Scholes equation for options pricing. As blockchain systems have incremental updates with independent games per update (*e.g.*, transaction fee markets can differ wildly from block to block), we will necessarily have to consider the multiple-period model and define how each agent’s mean vector and risk preferences evolve over time. Finally, we note that our methodology is directly comparable to that of multistrategy backtesting in quantitative trading [37].
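For the two-asset case used in this paper, the mean-variance program on the simplex has a closed form: with a diagonal covariance $\mathrm{diag}(\sigma^2_{\text{stake}}, \sigma^2_{\text{lend}})$ and weights summing to one, substituting $w_2 = 1 - w_1$ into $w^T\Sigma w - \lambda\mu^T w$ and setting the derivative to zero gives the clipped stationary point below. This is a sketch under those assumptions, not the paper's exact solver.

```python
def two_asset_weights(mu_stake, mu_lend, var_stake, var_lend, lam):
    """Minimize w' Sigma w - lam * mu' w subject to w1 + w2 = 1, w >= 0,
    for diagonal Sigma = diag(var_stake, var_lend).

    With w2 = 1 - w1, the objective is a scalar quadratic in w1 whose
    stationary point is
        w1 = (2*var_lend + lam*(mu_stake - mu_lend)) / (2*(var_stake + var_lend)),
    clipped to [0, 1] to respect the no-short-selling constraint.
    """
    w1 = (2.0 * var_lend + lam * (mu_stake - mu_lend)) / (2.0 * (var_stake + var_lend))
    w1 = min(1.0, max(0.0, w1))  # project onto the feasible interval
    return w1, 1.0 - w1

# With equal variances and no return preference (lam = 0), the split is 50/50;
# raising the staking return and lam tilts the portfolio toward staking.
w_stake, w_lend = two_asset_weights(0.3, 0.1, 1.0, 1.0, 2.0)
```

The clipping is where the greedy all-or-nothing behavior reappears: for extreme $\lambda$ or return spreads, the optimizer pins the whole portfolio to one activity.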

We will assume that each agent treats their token wealth, $(\pi_{\text{lend}}(t)_i, \pi_{\text{stake}}(t)_i)$, as a Markowitz portfolio and updates, on receiving tokens from staking and lending, an estimate for a time-dependent return vector $\mu_i(t) \in \mathbb{R}^2.$ We also assume that each agent has a different, time-independent covariance matrix $\Sigma_i$ that is drawn from a random matrix ensemble.3 In other words, the expected return vector adjusts with time (it depends on the staked and lent quantities), while the covariance stays fixed in time. As long as there is some variance in the chosen random matrix ensemble (*e.g.*, $\exists i, j \in [n]$ such that $\mathsf{Pr}[\Sigma_i \neq \Sigma_j] > 0$), the dynamics will not deadlock into a state where all participants end up with the same portfolio (*e.g.*, all outstanding tokens are 100% allocated to staking or lending). If agents simply move all of their assets from one pool to another, as opposed to some risk-adjusted proportion, then the system can deadlock quickly when borrowing demand is constant. Using the notation above, we define $\mu_i(t) = \mu_i(\pi_{\text{stake}}(t), \pi_{\text{lend}}(t))$ as:

$\mu_i(t) = \left[\begin{array}{c} \mu_{\text{stake}}(t)_i \\ \mu_{\text{lend}}(t)_i \end{array}\right] = \left[\begin{array}{c} \frac{\pi_{\text{stake}}(t)_i}{S_t - \ell_t} \\ \gamma_t \end{array}\right] = \left[\begin{array}{c} \frac{\pi_{\text{stake}}(t)_i}{\zeta_t} \\ \gamma_t \end{array}\right]$ (3)

Let us motivate this choice of expected return vector. Recall that the return vector is supposed to represent the relative rate of return (usually referred to as an alpha in the quantitative finance literature) over a riskless asset. In this situation, the riskless asset is holding our tokens (as bearer instruments), and we expect to earn yields4 of $\frac{R_t}{S_t-\ell_t}$ and $\gamma_t$ for staking and lending, respectively. However, note that the former yield can be greater than 1 $($when $S_t - \ell_t < R_t),$ whereas the latter yield cannot, as per Equation 1. In order for Markowitz optimization to be well-defined, we need to choose yields that are directly comparable (*e.g.*, have the same range). Moreover, we note that the naïvely calculated staking yield loses the dependence on the $i$th party’s current wealth. Since our system allows for slashing and each validator incurs a variance in reward proportional to the inverse square root of the epoch length, it is preferable to find a yield that depends on the $i$th party’s wealth to reflect this variance. The simplest estimator for a staking validator’s yield that is in $[0,1]$ (comparable to the lending yield) and captures a component of the variance is the probability of winning the block reward, $\frac{\pi_{\text{stake}}(t)_i}{S_t - \ell_t},$ which is exactly what Equation 3 describes.
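Under these definitions, each agent's return vector is cheap to compute from public on-chain state. The sketch below is a direct transcription of Equation 3 (the function name is illustrative):

```python
def expected_returns(stake_i, total_staked, lending_rate):
    """Equation 3: mu_i(t) = [ pi_stake(t)_i / zeta_t, gamma_t ].

    The staking component is the probability that agent i wins the next
    block reward; the lending component is the current lending rate.
    Both lie in [0, 1], so the two yields are directly comparable.
    """
    return (stake_i / total_staked, lending_rate)

mu = expected_returns(stake_i=50.0, total_staked=500.0, lending_rate=0.05)
```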

In order to describe the covariance matrix, we will first state an assumption that aims to connect the variances in the system with the expected staked and lent times:

The $i$th agent’s covariance matrix $\Sigma_i$ will be static (*e.g.*, does not vary with block height), diagonal, and will have variances connected to the expected stake time $\tau_{\text{stake}}$ and expected lent time $\tau_{\text{lend}}.$

Formally we use the following model for the covariance matrix:

$\Sigma^{-1}_i = \left[\begin{array}{cc}
\alpha_i & 0 \\
0 & \beta_i
\end{array}\right]$(4)

$\alpha_i \sim \mathsf{Exp}\left(\tau_{\text{stake}}\right)$(5)

$\beta_i \sim \mathsf{Exp}\left(\tau_{\text{lend}}\right)$(6)

One can interpret $\tau_{\text{stake}}$ as the expected epoch length (*see* Algorithm 1), while $\tau_{\text{lend}}$ represents the withdrawal window under which a lender can remove their tokens from the lent pool.5 By connecting the covariance matrix to these quantities, we encode the connection between a validator’s expected risk preference and the time that capital is locked into either staking or lending. Equation 4 represents the fact that risk and time preferences are, to first order, inversely correlated (*e.g.*, one is willing to lock up capital for the longest duration in the least risky assets) [38]. To explain these choices of random variables, first recall that the Markowitz objective function is of the form $f(\mu, \Sigma, w, \lambda) = w^{T} \Sigma w - \lambda \mu^{T} w,$ where $\lambda$ parametrizes the agent’s preference for return maximization over risk minimization. If we divide this objective function by $\lambda,$ provided that $\lambda > 0,$ we get an equivalent optimization problem for $\lambda^{-1}f.$ Note that $\lambda$ can be thought of as encoding ‘collective’ risk preferences for all agents (*e.g.*, a ‘bull market’ when $\lambda$ is large, a ‘bear market’ when $\lambda$ is small). As such, we need a methodology for choosing $\lambda$ that depends on the network size and/or number of agents. We take $\lambda \sim \chi^2(n)$ in order to capture the fact that as the number of participants increases, so should the expected number that are risk-seeking. Once we do this, we can directly interpret the variables $\alpha_i, \beta_i$ as encoding both this risk preference and the inherent duration difference between lending risk and staking risk. The static nature of the covariance matrix in Assumption 11 implies that each agent has an ‘equilibrium’ risk preference.
Moreover, this implies that the distribution of risk preferences amongst agents is not changing, which is what one can expect in the limit as the number of agents goes to infinity.6 Making this assumption allows for significantly faster simulation (*e.g.*, less compute is needed to reduce the variance of the estimator below an error threshold $\epsilon$).
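To make this concrete, the ensemble of Assumption 11 (Equations 4–6) and the collective risk preference $\lambda \sim \chi^2(n)$ can be sampled in a few lines. This is a minimal sketch rather than the authors’ implementation; in particular, parametrizing $\mathsf{Exp}(\tau)$ by its mean $\tau$ (NumPy’s `scale`) is our reading of the notation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_risk_parameters(n, tau_stake, tau_lend):
    """Draw per-agent entries of the diagonal inverse covariance (Eqs. 4-6).

    We parametrize Exp(tau) by its mean tau, so longer lock-up times
    correspond to larger expected alpha_i, beta_i.
    """
    alpha = rng.exponential(tau_stake, size=n)  # staking risk terms
    beta = rng.exponential(tau_lend, size=n)    # lending risk terms
    # Sigma_i^{-1} = diag(alpha_i, beta_i), so only the diagonals are stored.
    return alpha, beta

def sample_lambda(n):
    """Collective risk preference, lambda ~ chi^2(n)."""
    return rng.chisquare(n)

alpha, beta = sample_risk_parameters(n=100, tau_stake=10.0, tau_lend=5.0)
lam = sample_lambda(100)
```

Because the $\Sigma_i^{-1}$ are diagonal, each agent’s two-asset problem stays decoupled, which is what makes the closed-form updates later in the paper tractable.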

To summarize, we evolve the system by having each agent update their Markowitz estimate at each time step, which changes the lending and staking distributions for the next time step. In simulation, agents will only be able to migrate their staking tokens at times $k\tau_{\text{stake}}, k \in \mathbb{N}.$ Explicitly, we evolve the system via the following loop:

Initialize distributions $\{\pi_{\text{stake}}(0)_i\}_{i\in[n]}, \{\pi_{\text{lend}}(0)_i\}_{i \in [n]}$

Initialize empirical distributions $\{\hat{\pi}_{\text{stake}}(0)_i\}_{i\in[n]}, \{\hat{\pi}_{\text{lend}}(0)_i\}_{i \in [n]}$

For $t = 1, 2, 3, \ldots$:

1. Observe empirical distributions $\hat{\pi}_{\text{stake}}(t)_i, \hat{\pi}_{\text{lend}}(t)_i$ (generated via the actual staking and lending rewards accrued in the epoch)

2. Compute Markowitz weights $w_{i,t} = (p, 1-p),$ where $p = p(\hat{\pi}_{\text{stake}}(t)_i, \hat{\pi}_{\text{lend}}(t)_i, \alpha_i, \beta_i) \in [0, 1]$

3. Define the new portfolio as $\pi_{\text{stake}}(t)_i = p\left(\pi_{\text{stake}}(t-1)_i + \pi_{\text{lend}}(t-1)_i\right)$ and $\pi_{\text{lend}}(t)_i = (1-p)\left(\pi_{\text{stake}}(t-1)_i + \pi_{\text{lend}}(t-1)_i\right)$
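For a single agent, one pass of this loop can be sketched as follows. The clip-and-renormalize projection of the unconstrained solution $w = \lambda\Sigma_i^{-1}\mu_i$ onto weights in $[0,1]$ is an illustrative assumption, not the paper’s exact constrained solver:

```python
import numpy as np

def markowitz_weight(mu_stake, mu_lend, alpha, beta, lam=1.0):
    """Unconstrained solution w = lam * Sigma^{-1} mu for diagonal
    Sigma^{-1} = diag(alpha, beta), projected onto [0, 1] weights.

    The projection (clip, then renormalize) is our assumption; any
    simplex projection would do for this two-asset sketch.
    """
    w = lam * np.array([alpha * mu_stake, beta * mu_lend])
    w = np.clip(w, 0.0, None)
    total = w.sum()
    if total == 0.0:
        return 0.5  # indifferent: split evenly
    return w[0] / total  # fraction p allocated to staking

def rebalance(stake, lend, p):
    """Move the agent's whole wealth into proportions (p, 1-p)."""
    wealth = stake + lend
    return p * wealth, (1.0 - p) * wealth

p = markowitz_weight(mu_stake=0.02, mu_lend=0.05, alpha=10.0, beta=5.0)
stake, lend = rebalance(stake=60.0, lend=40.0, p=p)
```

Note how the weight depends on the product of the yield and the agent’s risk term, so a higher lending yield only dominates if it is not offset by a larger staking-side $\alpha_i.$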

We will describe a few formal properties (proved in the Appendix) of the two-state model, as it is feasible to analyze this model analytically. Our goal in this section is to bound the amount of turnover in the stake distribution that is caused by on-chain lending becoming more attractive than staking rewards to rational stakers. A secondary goal is to understand which properties of the stochastic processes that represent the evolution of the lending and staking distributions are necessary and/or sufficient for avoiding excessive volatility in agent token portfolios. This is important because volatility in these portfolios implies that PoS networks have volatile security, which is a distinct defect when compared to PoW. We will make a few additional assumptions that are necessary to provide analytical results:

There is a minimum fraction $\delta > 0$ of the money supply that needs to be staked, *e.g.*, $S_t - \ell_t > \delta S_t.$

If no one is staking, then the on-chain lending contract has no value (as it doesn’t have any security), so this is a realistic assumption that matches the practical parameters chosen in live networks such as Tezos [22] and Cosmos [21]. Note that one can directly interpret $\delta$ as the fraction of altruistic validators who will never reallocate or rebalance their assets.

The number of agents $n$ is larger than a constant multiple of the product of the exponential parameters, *e.g.*, $n = \Omega(\tau_{\text{stake}}\tau_{\text{lend}}).$

This assumption is required for purely technical reasons that are explained in the proofs in the Appendix.7 If we solve the *unconstrained* problem8 for the Markowitz objective function, $f(\mu, \Sigma, w, \lambda) = w^{T} \Sigma w - \lambda \mu^{T} w,$ then $w$ needs to satisfy the first-order condition, $\nabla_w f(\mu, \Sigma, w, \lambda) = 0.$ This yields,

$%\label{eq:markowitz_update_rule}
w = \lambda\Sigma^{-1} \mu$(7)
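For completeness, the first-order condition can be expanded in one line (writing the return term as $\lambda\mu^{T}w$ so that the objective is scalar-valued; the factor of 2 is conventionally absorbed into $\lambda$):

```latex
\nabla_w f(\mu, \Sigma, w, \lambda)
  = \nabla_w\left(w^{T}\Sigma w - \lambda\,\mu^{T}w\right)
  = 2\Sigma w - \lambda\mu = 0
\quad\Longrightarrow\quad
w = \tfrac{\lambda}{2}\,\Sigma^{-1}\mu \;\propto\; \lambda\,\Sigma^{-1}\mu.
```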

Given that we are in the multi-period Markowitz setting, this means that we can estimate the $i$th participant’s portfolio weights via $w_i(t) = \lambda \Sigma_i^{-1}\mu_i(t).$ As a measure of volatility in the security of the underlying staking mechanism, we will first look at how $w_i(t)$ changes in time. Intuitively, this change corresponds to how large rebalancing events (*e.g.*, moving tokens from staking to lending, or vice-versa) are between subsequent blocks. If this rebalancing is large, then the network could dramatically reduce its security as holders move their assets from staking to lending. On the other hand, if this rebalancing is small and decreases over time, then we know that the staked token supply is stable. Using the Markowitz update rule (Equation 7), we have the following bound:

$\Vert w_i(t+1) - w_i(t) \Vert_1 = \Vert \Sigma_i^{-1} (\mu_i(t+1) - \mu_i(t)) \Vert_1
\leq \Vert \Sigma_i^{-1}\Vert_{1\rightarrow 1} \Vert \mu_i(t+1) - \mu_i(t) \Vert_1$(8)

where $\Vert A \Vert_{1\rightarrow 1} = \max_{v\in\mathbb{R}^n, \Vert v\Vert_1 = 1} \Vert Av\Vert_1$ is the $L^1$ operator norm. This simple inequality implies that the volatility in portfolio weights, represented by the single block difference in weights, is controlled by the difference in the mean vector, as $\mathsf{E}[\Vert \Sigma^{-1}\Vert_{1\rightarrow 1}] = \mathsf{E}[\alpha \vee \beta] \leq \tau_{\text{stake}}\vee \tau_{\text{lend}}.$ Therefore, we focus on trying to bound the difference in expected returns as a function of the block reward at time $t,$ $R_t,$ the total token supply $S_t,$ the lent supply $\ell_t,$ and the Compound lending curve parameters $\beta_0, \beta_1.$
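As a numeric sanity check of this bound (our own, under the diagonal model of Equation 4): for $\Sigma_i^{-1} = \mathrm{diag}(\alpha_i, \beta_i),$ the $1 \rightarrow 1$ norm is the maximum absolute column sum, *i.e.*, $\alpha_i \vee \beta_i$:

```python
import numpy as np

def op_norm_1(A):
    """L1 operator norm: maximum absolute column sum."""
    return np.abs(A).sum(axis=0).max()

rng = np.random.default_rng(7)
alpha, beta = 3.0, 8.0
Sigma_inv = np.diag([alpha, beta])

# For a diagonal matrix, the 1->1 norm is max(alpha, beta).
assert op_norm_1(Sigma_inv) == max(alpha, beta)

# Check ||Sigma^{-1} d||_1 <= ||Sigma^{-1}||_{1->1} * ||d||_1 on random
# return differences d = mu(t+1) - mu(t).
for _ in range(1000):
    d = rng.normal(size=2)
    lhs = np.abs(Sigma_inv @ d).sum()
    rhs = op_norm_1(Sigma_inv) * np.abs(d).sum()
    assert lhs <= rhs + 1e-12
```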

Let $\pi_{\text{stake}}(t+1)_i - \pi_{\text{stake}}(t)_i = \Delta^{\text{stake}}_{i}(t)$ and $\ell_{t+1} - \ell_t = \Delta_{\text{lend}}(t).$ Since the PoS algorithm is adapted to a filtration $\mathcal{F}_t$ on $S_t \Delta^n$ and the covariance matrices are constant as time evolves,9 $\pi_{\text{stake}}(t), \pi_{\text{lend}}(t)$ are also $\mathcal{F}_t$-adapted random variables. Note that by definition, $\Delta^{\text{stake}}_{i}(t) \leq S_t$ and $\Delta_{\text{lend}}(t) \leq S_t,$ since we cannot change the amount staked or lent by more than the outstanding money supply. We will first bound the difference in expected returns as a function of $\Delta^{\text{stake}}_{i}(t), \Delta_{\text{lend}}(t),$ and $S_t$:

*There exist constants $C, C' > 0$ such that*

$\Vert \mu_i(t+1) - \mu_i(t)\Vert_1 < \frac{C|\Delta^{\text{stake}}_{i}(t)|}{S_{t+1}} + C'\Delta_{\text{lend}}^2(t)$

Note that these constants are allowed to depend on $\delta$ from Assumption 12. If we have a compounding and inflationary rewards schedule, *e.g.*, $S_t = e^{\lambda t},$ then this claim implies the following:

*If there exists $\lambda > \tau_{\text{stake}}$ such that $S_t = \Omega(e^{\lambda t})$ and $\min_i W_i(t) > 0,$ then as $t \rightarrow \infty,$ the maximum change in stake is bounded above by the lending volatility, $\Delta_{\text{lend}}^2(t),$ with high probability.*

This claim implies that if there is not much variance in the lending rate, either due to choosing small parameters $\beta_0, \beta_1$ or because borrowing demand is minimal, then we should not expect portfolios to rebalance regularly and rational stakers will tend to keep their tokens locked in a staking contract. Another natural quantity to look at is the variance of the lent assets. We show that the money supply and time-preference for lending, $\tau_{\text{lend}},$ control the variance of lent assets.

*Let $\mathcal{F}_t$ be the filtration to which the lending process $\ell_t$ is adapted. Then we have:*

$\mathsf{Var}[\ell_{t+1} \vert \mathcal{F}_t] = \frac{\gamma_t}{\tau_{\text{lend}}^2} \Vert W(t) \Vert_2^2$

*Moreover, we have the following bounds:*

$\frac{\gamma_t^2 S_t^2}{\tau_{\text{lend}}^2 \sqrt{n}} \leq \mathsf{Var}[\ell_{t+1} \vert \mathcal{F}_t] \leq \frac{\gamma_t^2 S_t^2}{\tau_{\text{lend}}^2}$

Note that if $k$ is the constant representing the ratio of borrowing demand to $S_t$ (*cf.* Section 3.2.1), there exists a constant10 $\aleph$ that depends on $k$ such that $\gamma_t \geq \aleph$. As such, Claim 3 implies that as long as $S_t = \Omega(n^{\frac{1}{4}}),$ we have reallocation from staking to lending. Thus, any monetary policy that grows sufficiently quickly with the number of users of the network will *always* have assets moving into and out of lending. If we place constraints on the demand $k,$ we can strengthen this result into a bound on how much $\Delta_{\text{lend}}^2$ oscillates:

*Let $\eta_t = 1 + \frac{\Vert W(t)\Vert_2^2}{S_t^2} \geq 1 + \frac{1}{\sqrt{n}}$ and $\alpha_t = \frac{\ell_t \tau_{\text{lend}}}{S_t \eta_t}.$ If for all $t$, $k \geq \frac{\alpha_t}{1 - \alpha_t}$ and the hypotheses of **Claim 2** hold, then $\Delta_{\text{lend}}^2(t)$ is a submartingale and we have*

$\mathsf{Pr}\left[\max_t \Delta_{\text{lend}}^2(t) > \lambda\right] < \frac{1}{\lambda^2}\mathsf{Var}[\Delta_{\text{lend}}^2(t)]$

*and subsequently,*

$%\label{eq:tailb}
\mathsf{Pr}\left[\max_t\Vert \mu_i(t+1) - \mu_i(t)\Vert_1 > \lambda\right] < \frac{1}{\lambda^2}\mathsf{Var}[\Delta_{\text{lend}}^2(t)]$(9)

In words, this claim says that as long as we have inflation and there is enough borrowing demand, then we can be sure that the *worst-case* rebalancing is bounded by the variance of lending volatility. If we add another constraint on the behavior of the increments $\Delta_{\text{lend}}^2(t),$ then we can strengthen this claim to get a phase transition that resembles the Galton-Watson phase transition [39].

*Suppose that for all $t > 0,$ $\Delta_{\text{lend}}(t) < \frac{\ell_t^2}{2 \ell_{t-1} \eta_t},$ $\mathsf{E}[\Delta_{\text{lend}}(t)] > 0,$ and let $\eta_t$ be as in **Claim 4**. Define $r_{\pm}$ as follows:*

$r_{\pm} = \frac{\ell_t \tau_{\text{lend}}}{S_t \eta_t} \left(1 \pm \sqrt{1 + \frac{\eta_t \ell_{t-1}(\ell_{t-1}-2\ell_t)}{\ell_t^2}}\right)$

*Then $\ell_t^2$ is a supermartingale when $\gamma_t \in (r_-, r_+),$ a submartingale when $\gamma_t \in [0, r_-) \cup (r_+, 1],$ and a martingale when $\gamma_t \in \{r_{-}, r_{+}\}.$*

The intuition for this is as follows:

- When there is either too little or too much borrowing demand (*e.g.*, $\gamma_t < r_-$ or $\gamma_t > r_+$), the expected lent supply either increases (on average) to one or decreases to zero. This is analogous to a gambler’s wealth after playing a game with win probability $p < 1/2$ for $n$ rounds: the wealth concentrates into either the house (staked supply) or the gambler (lent supply).
- When there is a moderate amount of borrowing demand, $\gamma_t \in [r_-, r_+],$ we have stable, potentially oscillatory behavior. Doob’s Supermartingale Convergence Theorem [40] intimates that the distribution is stationary as $t\rightarrow \infty.$ This corresponds to a gambler playing a game in which their chance of winning is $p > 1/2.$

These results for the simpler two-state model suggest that the phase transitions we observe in simulation reflect a deeper, underlying phase transition.

In order to get a stronger understanding of what happens at the agent level under different monetary policies, one needs to understand the probability that a single agent has a large rebalancing event that affects the staked portion of their portfolio. This involves studying how the staking component of $\mu_i(t)$ changes over time. Slightly abusing notation, let $\mu_{\text{stake}}(t)_i$ denote the difference in the staking component of $\mu_i(t)$ between times $t$ and $t+1.$ We can bound the probability that an agent rebalances their portfolio by an $\epsilon$ fraction via the following claim:

*Let $\mu_{\text{stake}}(t)_i = \frac{\pi_{\text{stake}}(t+1)_i}{S_{t+1} - \ell_{t+1}} - \frac{\pi_{\text{stake}}(t)_i}{S_t - \ell_t}.$ Then for all $t > 0,$ we have*

$%\label{eq:claim2}
\mathsf{Pr}\left[\,|\mu_{\text{stake}}(t)_i| < \epsilon \;\middle|\; \mathcal{F}_{t-1}\right] = \Omega\left(1 - \left(\frac{\epsilon S_{t+1}}{\gamma_t}\right)^n e^{-\tau_{\text{stake}}\tau_{\text{lend}}\frac{\epsilon S_{t+1}}{\gamma_t}}\right)$(10)

Bounds like Claim 6 are of the form $\mathsf{Pr}[X_t < \epsilon] = Y_t,$ which imply that the extrema of $Y_t$ provide a guaranteed bound on how large $X_t$ can be when $Y_t$ is minimized. This means that we can try to bound the first hitting time $t^*$ for the maxima of $|\mu_{\text{stake}}(t)_i|$ by analyzing the minima of the right-hand side of Equation 10. Note that the function $1-(kx)^n e^{-k'kx}$ has a minimum at $x = \frac{n}{kk'},$ which means that we can estimate when, as a function of $S_t, \gamma_t, \epsilon,$ we have maximal deviations in stake. In the following section, we examine this claim for different monetary policies $S_t.$
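This critical point can be verified numerically (our check; the parameter values are arbitrary). Setting the log-derivative of $(kx)^n e^{-k'kx}$ to zero gives $\frac{n}{x} - k'k = 0$:

```python
import numpy as np

def g(x, n, k, kp):
    """The deviation term (k x)^n * exp(-k' k x) from Eq. 10's right-hand side."""
    return (k * x) ** n * np.exp(-kp * k * x)

n, k, kp = 5, 2.0, 3.0
x_star = n / (k * kp)  # zero of d/dx log g = n/x - k'k

# Grid search over x confirms the analytic maximizer of g (equivalently,
# the minimizer of 1 - g).
xs = np.linspace(1e-6, 5.0, 200_001)
x_grid = xs[np.argmax(g(xs, n, k, kp))]
assert abs(x_grid - x_star) < 1e-3
```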

In this case, $R_t = k r^{t}, r < 1$ and $S_t = \frac{k(1-r^{t+1})}{1-r} = C_{\infty} - C'r^{t+1},$ where $C_{\infty}$ is the final money supply (*e.g.*, 21 million for Bitcoin). Letting $\epsilon = \delta S_{t+1},$ plugging this into the right-hand side of Equation 10, and optimizing for $t$ using the chain rule gives the condition $t^* = -\log\left(C- \sqrt{\frac{n\delta}{\gamma_t \tau_{\text{stake}}\tau_{\text{lend}}}}\right).$ Requiring that $t^* > 0$ gives the condition $C - \sqrt{\frac{n\delta}{\gamma_t \tau_{\text{stake}}\tau_{\text{lend}}}} \in (0,1)$ or

$\delta > \frac{(C_{\infty} -1)\gamma_t \tau_{\text{stake}}\tau_{\text{lend}}}{n}$

The above condition implies that at the time with the highest expected amount of removed stake, we expect that quantity to be on the order of $\frac{C_{\infty}}{n}.$ If there are fewer terminal coins than participants, $C_{\infty} \ll n,$ then we should expect large rebalances for every staker. This formalizes the intuition that totally deflationary currencies are vulnerable to large rebalances that depend strongly on the number of coins in the system. It also confirms the intuition that if there are fewer coins than participants, then borrowing demand will be high and we should again expect large rebalances.

Next, we consider what Claim 6 implies about inflationary monetary policies. We will consider two types of inflationary policies: Polynomial ($S_t = \Omega(t^k)$) and Exponential ($S_t = \Omega(e^{\lambda t})$).

For polynomial inflation, we have $R_t = c t^{k-1}$ and $S_t = \Omega(t^k)$. Optimizing Equation 10 with this form of $S_t$ gives $t^* = \left(\frac{n\delta}{\gamma_t \tau_{\text{stake}}\tau_{\text{lend}}}\right)^{1/k}.$ Plugging this into the right-hand side of Equation 10 gives a lower bound of

$1 - \frac{\delta^2 n}{\gamma_t \tau_{\text{stake}}\tau_{\text{lend}}}
e^{-\tau_{\text{stake}}\tau_{\text{lend}}\delta^2 n / \gamma_t}$

which means that if $\delta = O(n^{-1/2}),$ individual agents will not have rebalances of size $\delta S_t$ with high probability. These guarantees do not depend on the polynomial degree $k,$ which means that even simple linear policies $(k=1)$ are sufficient to get low stake turnover.

In the case of exponential inflation, $S_t = C_0 e^{\lambda t}$ for some $\lambda > 0$ and initial token distribution $C_0.$ Following the logic of Section 4.1, we arrive at a conclusion of,

$\delta > \frac{C_0\gamma_t \tau_{\text{stake}}\tau_{\text{lend}}}{n}$

This states that the worst-case rebalancing is bounded by the *initial token distribution* as opposed to the final token distribution. Given that most networks assume that there will be more users than the *initial* distribution, $C_0,$ this implies that large rebalancing events should become rarer once the network achieves a scale of $n \gg C_0.$

The results of this section show that the model of Section 3.3 has a volatility that is mainly dependent on lending volatility. This volatility, if demand is sufficiently shallow, is small enough to help bound the worst-case portfolio rebalancing for any validator in the system. However, the turnover in staked quantity, measured by how each agent’s expected reward changes over each epoch period, is sensitive to the precise nature of the monetary policy, $S_t.$ In particular, deflationary policies cannot support on-chain lending, as the worst-case rebalancing rate depends on the terminal money supply, whereas the rebalancing rate for exponentially inflationary monetary policies only depends on the initial money supply (*e.g.*, Ethereum’s pre-mine). Finally, we note that polynomial inflation provides particularly good rebalancing guarantees that are independent of the rate of growth of the money supply. Many of these results rely on asymptotic behavior of agents in this system and all of these results depend on simple models for borrowing demand. The remainder of the paper will focus on using simulation to provide a more realistic picture of Section 3.3.

In order to test the three-state model in a quantitatively rigorous and realistic fashion, we turn to agent-based simulations. One of the main reasons to use agent-based simulation over formal methods is that it is hard to formally prove which block reward growth rate is ideal for mitigating volatility in portfolio allocation. For instance, it is difficult to evaluate whether $R_t = \Omega(t^2)$ or $R_t = \Omega(t)$ provides better mitigation for on-chain lending with parameters $\beta_0, \beta_1.$ Moreover, we can test various realistic demand distributions, including ones that are atomic and do not have a probability density. Prior work on agent-based simulations of blockchain systems [19] has focused on analyzing consensus protocols via event-based simulation. We follow a similar event-based framework for simulating staking and lending, albeit ignoring the details of peer-to-peer networking and consensus. Our goal is to sample as many trajectories $X_t \vert \mathcal{F}_t = (\pi_{\text{stake}}(t), \pi_{\text{lend}}(t), \gamma_t)$ as possible for different combinations of parameters $\beta_0, \beta_1,$ and block reward schedules $R_t.$

In order to generate sample paths $X_t,$ we need to specify the following variables:

Initial distributions: $\pi_{\text{stake}}(0), \pi_{\text{lend}}(0)$

Initial interest rate: $\gamma_0 \in (0,1)$

Interest rate parameters: $\beta_0, \beta_1 \in (0, 1)$

Demand-generating distribution parameters (vary depending on the demand generating distributions chosen): $\mathcal{P}_{\xi}$

Staking and lending time scale: $\tau_{\text{stake}}, \tau_{\text{lend}}$

Slashing probability: $p_{\text{slash}}$

PRNG seed: $s \in \mathbb{Z}_{2^{64}}$

We chose to model $\pi_{\text{stake}}(0)_i \sim \mathsf{Exp}(\lambda_{\text{stake}})$ and $\pi_{\text{lend}}(0)_i \sim \mathsf{Exp}(\lambda_{\text{lend}}),$ as these distributions exemplify the extreme concentration of wealth that accompanies most token distributions. Exponential distributions are also useful in that the order statistics are also exponential,11 which represents the idea that the $k$th entrant to a network should have a decaying fraction (in this case $\frac{1}{k}$) of the total token supply. We note that we do not use power law distributions as there are conflicting reports of the wealth distribution in Bitcoin actually following a power law [41][42]. Given the statistical difficulty of discerning if one has a power law versus an exponential decay [43][44] and the extra parameter in a power law (*e.g.*, $p(x = t) \propto x_{\min} t^{-a}$), we decided to use exponential initial distributions.
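The order-statistic property can be checked with a short Monte Carlo experiment of our own: by the Rényi representation, the gap between the $k$th and $(k+1)$th largest of $n$ i.i.d. $\mathsf{Exp}(1)$ draws is exponential with mean $\frac{1}{k},$ which is exactly the $\frac{1}{k}$ decay mentioned above:

```python
import numpy as np

rng = np.random.default_rng(1)

def spacing_means(n_agents, k_max, reps):
    """Monte Carlo means of the top-k spacings of Exp(1) order statistics.

    By the Renyi representation, the gap between the k-th and (k+1)-th
    largest of n i.i.d. Exp(1) draws is exponential with mean 1/k.
    """
    samples = rng.exponential(1.0, size=(reps, n_agents))
    samples.sort(axis=1)
    ordered = samples[:, ::-1]                   # descending order statistics
    gaps = ordered[:, :k_max] - ordered[:, 1:k_max + 1]
    return gaps.mean(axis=0)                     # ~ [1, 1/2, ..., 1/k_max]

means = spacing_means(n_agents=10, k_max=3, reps=200_000)
```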

In our simulations, we kept the initial interest rate at 10%, as we saw very little sensitivity to the initial choice. In particular, agents quickly rebalance their portfolio into lending, if the rate is high enough, and this appeared to equilibrate within a small number of time steps. We swept the other parameters, $\beta_0, \beta_1, \tau_{\text{stake}}, \tau_{\text{lend}}, p_{\text{slash}},$ over realistic ranges and for each choice of parameter, we sampled trajectories for a variety of seeds $s$ to get an ensemble average.

The main simulation loop follows the one described at the end of Section 3.3. We exactly evaluate the Markowitz update rule, with constraints, via the exact solution for optimal portfolios [37]. Given that we are solving $n$ independent, two-dimensional Markowitz problems, we evaluated the constraints analytically (as opposed to using a convex solver such as Gurobi or CVX). The main event loop has the following causal ordering:

If $t = k \tau_{\text{stake}}$ for some $k\in\mathbb{N},$ allow agents to update their Markowitz portfolios12

Sample the borrowing demand distribution

Update $\gamma_t$ as a function of the new borrowing demand

Run Algorithm 1 to determine who wins the reward $R_t$ and/or if they get slashed
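The causal ordering above can be sketched as follows; the rebalancing rule, the lognormal demand sampler, and the linear utilization-based rate are illustrative placeholders for the Markowitz update, the distributions of Section 5.3, and Equation 1, respectively:

```python
import numpy as np

rng = np.random.default_rng(42)

def lending_rate(borrow_demand, lent_supply, beta0, beta1):
    """Compound-style rate, linear in utilization (an illustrative
    stand-in for the paper's Equation 1)."""
    utilization = min(borrow_demand / max(lent_supply, 1e-12), 1.0)
    return beta0 + beta1 * utilization

def step(t, stake, lend, R_t, tau_stake, beta0, beta1, p_slash):
    """One iteration of the event loop (Section 5.2 ordering)."""
    n = len(stake)
    # 1. At epoch boundaries, agents may rebalance (placeholder policy:
    #    move a small fraction of lent tokens back into staking).
    if t % tau_stake == 0:
        shift = 0.05 * lend
        stake, lend = stake + shift, lend - shift
    # 2. Sample borrowing demand (illustrative lognormal choice).
    demand = rng.lognormal(mean=np.log(lend.sum() + 1e-12), sigma=0.1)
    # 3. Update the lending rate from the new borrowing demand.
    gamma = lending_rate(demand, lend.sum(), beta0, beta1)
    # 4. Award the block reward proportionally to stake; maybe slash.
    winner = rng.choice(n, p=stake / stake.sum())
    if rng.random() < p_slash:
        stake[winner] *= 0.5  # slashed: lose half the stake
    else:
        stake[winner] += R_t
    return stake, lend, gamma
```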

We sampled a variety of borrowing demand distributions for deflationary and inflationary block rewards, as illustrated via the sample paths in Figure 4. These stochastic borrowing demand paths each have four parameters: mean, variance, maximum, and minimum demand. We define the maximum and minimum demand parameters as multipliers of the token supply, so that the minimum is $(1-\eta_0)S_t$ and the maximum is $(1+\eta_1)S_t$ (illustrated via the lower and upper bounds in Figure 4, respectively).

In order to evaluate a variety of realistic conditions, we swept through the many parameters illustrated above. The most interesting results come from looking at individual trajectories, such as Figure 1, and heatmaps of how certain scalar functions behave as we vary the lending rate parameters $\beta_0, \beta_1.$ We generated heatmaps for a number of random seeds and took averages over these random instantiations to generate Figure 6 and Figure 7. The two main measurements that we looked at were $f(\beta_0, \beta_1) = \mathsf{E}_t\left[\frac{S_t - \ell_t}{S_t+\ell_t} \Bigg| \beta_0, \beta_1, \tau\right]$ and $g(\beta_0, \beta_1) = \mathsf{E}_t[\gamma_t - \beta_t | \beta_0, \beta_1, \tau],$ where given a non-anticipating stochastic process $X_t$ and a function stopped at time $\tau,$ $\mathsf{E}_t[f(X_t) | \tau] = \int_0^{\tau} f(X_t) dX_t,$ for the stochastic measure $dX_t.$ We ran all trajectories for $\tau=250\:000$ blocks and approximated $\mathsf{E}_t[f(X_t) | \tau] \approx \sum_{i=0}^{\tau / \Delta} f(X_i) \xi_i,$ where $\xi_i$ is sampled from $dX_{t+\Delta} - dX_t.$ The first quantity, $f(\beta_0, \beta_1),$ is the normalized difference between the staked supply and the lent supply; when it is greater than zero, more tokens are staked than lent, on average. It is normalized relative to the total money supply $S_t + \ell_t$ so that $\forall t, \frac{S_t - \ell_t}{S_t+\ell_t}\in (-1,1)$ and thus we can compare the relative staking and lending proportions at different times. The second quantity, $g(\beta_0, \beta_1),$ measures the linear spread between borrowing and lending rates, which varies over time even though $1-\gamma_0 = \frac{\beta_t-\gamma_t}{\beta_t}$ is constant. If the market for borrowing is efficient and there is little churn, we should expect this spread to be quite low. Note that in all simulations, we set the relative spread, $1-\gamma_0,$ to 0.5%.

In Figure 5, we see plots of $f(\beta_0, \beta_1)$ for different inflation rates $r \in \{0.25, 0.5, 1.0\}.$ Note that when $r = 1.0,$ we have a linear polynomial inflation rate. Even though the dependence on $\beta_0, \beta_1$ appears random, this figure demonstrates that the deflationary regimes tend to have significantly more lending than staking, while the linearly inflating regime enjoys a significant advantage with regards to staked supply. The dependence on $\beta_0, \beta_1$ appears mostly random because the panels share a common color scale, which obscures the scale at which each figure should be read (on the order of $\mathsf{Var}(f(\beta_0, \beta_1))$) and which varies dramatically as a function of the other sampled parameters, such as the block reward. In some of the later plots, we use independent color scales for each panel to emphasize the variation as a function of $\beta_0$ and $\beta_1.$

On the other hand, in Figure 6, we see an array of plots that show how the borrowing spread, $g(\beta_0, \beta_1)$, changes as a function of inflation rate and borrowing threshold. The third column, which is the linearly increasing block rewards regime, demonstrates very tight spreads, suggesting that even changing the borrowing demand threshold causes little variation in spreads. On the other hand, the deflationary regime is much more sensitive to shocks in borrowing demand, as illustrated by the plots in the upper left hand corner of Figure 6. These empirical results validate Claim 2 and Claim 3, as we directly see that lending volatility has a much more muted effect on inflationary systems.

Finally, in Figure 7, we see heatmaps of $f(\beta_0, \beta_1)$ for different borrowing thresholds and inflation rates. Note that these plots have different scales, unlike Figure 5, and that the deflationary figures are mainly negative. From the first two columns, it becomes clear that as borrowing demand increases, $g(\beta_0, \beta_1)$ becomes increasingly monotone decreasing in $\beta_0, \beta_1.$ This means that as borrowing demand increases, the system’s loss of security to lending grows in a more predictable manner. On the other hand, the linearly inflating regime does not have this issue and continues to have a relatively random, non-monotonic dependence on $\beta_0, \beta_1$ as we adjust the borrowing threshold.