Rational Expectations Voting in Agent-based Models: An Application to Tax Ceilings



Introduction
Laws to constrain property taxes have been an American phenomenon since at least California's Proposition 13 in 1978. Over the next twenty-five years, 31 states passed similar laws.1 There is ample economic evidence that these laws effectively restrict local government.2 Why do voters support laws to constrain local government? Economic models provide one explanation, Leviathan governments: governments with both the desire and the power to raise taxes beyond the median ideal level.3 We extend a model introduced in Anderson and Pape [2013] which offers a second reason: voter uncertainty over tax payments. The agent-based model we use here allows us to incorporate empirical data to estimate how important both tax payment uncertainty and the extent of Leviathan power are in generating popular support for tax ceilings.4 We use empirical data from two American cities: Minneapolis, MN and Binghamton, NY. Minneapolis and Binghamton have different property assessment regimes, which in turn generate different profiles of tax payment uncertainty across their populations. According to our model, the different property assessment regimes imply that Binghamton residents are as much as five times more tolerant of Leviathan extraction than the residents of Minneapolis. We can find restrictive ballot initiatives that would pass with nearly one hundred percent support in Minneapolis but would not pass in Binghamton. This suggests that property tax assessment regimes, and the resulting tax payment uncertainty, could be a key factor determining voter support for these laws.
The methodological contribution of this paper is to introduce a new kind of sophisticated voter to agent-based policy modeling. The voters in this model forecast the impact of alternative policies before they vote, using the agent-based model that they are themselves embedded in. Full rational expectations [Muth, 1961, Lucas, 1973] require that agents use the correct distributions of all random variables to make their forecasts; here, the true distributions can only be computationally approximated. We simulate rational expectations by computationally sampling the space of exogenous random variables and, for each policy, calculating the associated endogenous random variables. The resulting joint distribution over all variables as a function of policy is then endowed to all agents as a common prior. Finally, all agents choose their favorite ex-ante policy by choosing the policy which maximizes expected utility with regard to this common prior.5 In Section 3, we describe this approach for a general agent-based model and show how it works in the Tax Ceilings model specifically.
Investigating the impacts of voter sophistication has a long tradition. As mentioned above, sophistication is a central tenet of rational expectations and the accompanying Lucas critique. One vein of literature investigates the impact of economic variables on voting outcomes6 or the sophistication of voters' conceptual models.7 Farquharson [1969] and related works8 investigate strategic voting, which they call 'sophisticated voting,' contrasted with 'sincere voting' as in the Downsian model [Downs, 1957]. Voters in our model, like in Downs, vote sincerely, so they are not sophisticated in the Farquharsonian sense. The sophistication of the agents in our models is in their understanding of economic policy models. This paper is more akin to Gomez and Wilson [2001], in which voters learn to attribute causality to variables in the economy. There are some agent-based models that investigate agent sophistication, typically with adaptive learning in an abstract political landscape.9 Our contribution to this literature is the specificity of our policy question and the incorporation of relevant data into the agent-based model to address questions of voting behavior.
While the policy considered here is a property tax ceiling, this method of simulating rational expectations to calculate policy support could be paired with other agent-based models to predict support in other economic policy settings. Agent-based models of economic policy design could be attached to a simulated rational expectations voting model like ours to predict voter preferences over these policies.10 Simulated rational expectations voters could be used to forecast support for a variety of market-related policies, such as taxes, price restrictions, redistribution, social insurance, or a minimum wage in agent-based market models.11 Or simulated rational expectations could be attached to a number of environmental economic agent-based models12 to predict support for resource rationing, user fees, or land-use policy.
The economic policy contribution of this paper is to show that, all else equal, assessment policy can significantly change the freedom that a local government has to raise revenues, which could plausibly explain which cities support, and which don't support, a given tax ceiling. In particular, we estimate that the city of Binghamton can raise property taxes nearly five times as far above the median ideal level as can Minneapolis before facing a tax ceiling, and that difference is largely attributable to differences in tax assessment policy which generate different levels of tax price volatility. We also find that this result is robust to modeling the utility of agents using risk aversion or loss aversion, both calibrated to functional forms found in the literature.
The economic policy contribution is noteworthy because existing models in the literature, with the exceptions of Vigdor [2004] and Anderson and Pape [2013], offer Leviathan extraction as the only explanation of support.13,14 We are the first to empirically show how uncertainty in tax payments affects support for tax ceilings. Moreover, the data we use are new: the empirical literature on tax ceilings has never used household-level, panel tax-price data.15 We proceed in seven sections. In Section 2, we briefly discuss how the basic institutions of property taxation can embed idiosyncratic tax price uncertainty. In Section 3, we describe an approach to rational expectations voting in a general agent-based model. We also introduce the particular Tax Ceiling ABM that is adapted from Anderson and Pape [2013], and we use it as an ongoing example. In Section 4, we describe the data used in the Tax Ceiling ABM and how we incorporate these data. In Section 5, we present and discuss the results of the Tax Ceiling ABM, including the implications of Binghamton's and Minneapolis's tax assessment regimes. In Section 6, we return to the general approach to rational expectations voting and discuss broader implications. In Section 7, we conclude.

Property Tax Background
Understanding the model of property taxes presented here requires understanding some details of property taxation. We consider tax bills first from the perspective of the individual taxpayer and second from the perspective of the jurisdiction. Then we discuss the implications for tax price volatility.
Tax bills from the perspective of an individual taxpayer. Taxpayer i's property tax bill (T_i) is an accounting identity defined as the product of the property tax rate (τ_j) in her jurisdiction j and her property's taxable value (v_i):

T_i = τ_j · v_i.

In the United States, most states define taxable value, v, to be something different than the current market value of the property. These differences arise from infrequent revaluations, exemptions, exclusions from the property's taxable value, and both intentional and unintentional assessment errors.
Tax bills from the perspective of the jurisdiction. A jurisdiction's property tax base is defined as the sum of the taxable values of all properties located within the jurisdiction. Unlike the income or sales tax, the property tax base (B_j) is known ex-ante to policy makers. As a result of this ex-ante knowledge, most local governments select their level of desired revenue R_j. (R_j is also known as 'the property tax levy.') The ratio of desired revenue to the ex-ante tax base produces the statutory property tax rate:

τ_j = R_j / B_j.

In practice, a jurisdiction's statutory tax rate often changes annually as both the tax base and desired revenue change.16 We use the definition of the tax rate to express an individual's tax bill as a function of revenue and the tax price p at time t:

T_it = (v_it / B_jt) · R_jt = p_it · R_jt.

Individual i's tax share (v_i / B_j) is her tax price (p_i) of an additional dollar of property tax revenue. This is a price in a very real way: it is what individual i must pay for an additional unit of revenue.
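As a concrete illustration, the accounting identities above can be sketched in a few lines of code. The numbers and function names here are ours and purely hypothetical; the variables follow the notation in the text (B_j, τ_j, p_i, T_i).

```python
# Illustrative sketch of the property tax accounting identities.
def tax_bills(taxable_values, desired_revenue):
    """Given taxable values v_i and the levy R_j, return the rate, tax prices, and bills."""
    base = sum(taxable_values)                     # B_j: ex-ante tax base
    rate = desired_revenue / base                  # tau_j = R_j / B_j
    prices = [v / base for v in taxable_values]    # tax prices p_i = v_i / B_j
    bills = [p * desired_revenue for p in prices]  # T_i = p_i * R_j = tau_j * v_i
    return rate, prices, bills

rate, prices, bills = tax_bills([100_000, 200_000, 300_000], desired_revenue=12_000)
# rate = 0.02, and the bills (2000, 4000, 6000) sum exactly to the levy of 12000.
```

Note that the bills mechanically exhaust the levy: since the tax prices sum to one, the jurisdiction raises exactly R_j by construction.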
in the same state. This effect has not been included in this model but may be included as a future extension.
15 There are many noteworthy papers empirically investigating support for tax ceilings. Citrin [1979], Courant et al. [1980], and Ladd and Wilson [1982] use survey data to ascertain factors that affect desire for a tax ceiling. Ladd [1978] and Alm and Skidmore [1999] find evidence that high property tax burdens and growth in local expenditures increase support for tax ceilings. Temple [1996] finds support for tax ceilings among communities with low income voters, higher tax prices, and modest property tax revenue growth.
16 For instance, Anderson [2012] demonstrates that from one year to the next, property tax rates changed for over 90% of MN cities for the period 2000 to 2003.
This equation captures the basic ways that tax bills can vary over time and across a community. Since the tax price is an increasing function of one's home value, at any point in time the highest value home also has the highest tax price. Critically, however, this logic does not carry over to changes over time.17 This implies that individuals' tax prices need not be correlated with home value over time. Empirical evidence suggests that there are substantial differences in price appreciation rates across properties even within relatively small geographic areas like counties, cities, and school districts.18

Model: Rational Expectations Voting
In Subsection 3.2, we describe a general approach to rational expectations voting in an agent-based model. Before discussing the general approach, we share some details of the Tax Ceiling ABM, where we use this approach; we use it as an ongoing example throughout the explanation of the general method. After discussing this general approach, we discuss risk aversion versus loss aversion voting in Subsection 3.3. Then we conclude this section with possible extensions in Section 6.
When talking about agent-based models, it is convenient to have notation for a profile variable; i.e. a variable that has a value for each agent.We use the notation z to denote such a variable; if there are I agents, it is a vector of length I with z i being the value associated with agent i.

The Tax Ceiling ABM
This model is an agent-based extension of the analytical model introduced in Anderson and Pape [2013] (hereafter Anderson/Pape).Because of the extensive treatment this model is given in that paper, the discussion here will be brief.
The setting for the model is a local jurisdiction of property-owning citizens, who use the local government to fund a collective good G. The collective good is financed through a property tax, which is levied on the citizens and subject to their approval. The local government may have some agenda-setting power, and may be able and willing to extract revenues beyond what the median citizen desires. Also, the citizens face some uncertainty over their individual tax payments. For both of these reasons, the citizens may desire to restrict future revenue levels. They have a mechanism for doing so: a property tax ceiling. In our approach, agents construct rational expectations of the implications of the tax ceiling before they vote.
There are two goods in this economy: a private good x and the collective good G. The private good is produced with a constant marginal cost of 1, and each individual must pay for the private good out of her own wealth. The collective good G is made with public revenues R, paid for by property taxes. We assume G has a constant marginal cost of 1, like the private good, but that there is an unknown fixed cost d (i.e. G = R − d). d is a binary random variable that takes on either a value of 0 or D > 0. This random variable is called a calamity. For example, in 2006, the City of Binghamton experienced a flood which destroyed a fair number of city streets and bridges, so to achieve the same level of public streets as before, the city had to spend more money.19 Calamities are the source of common risk in the model. D is called the severity of the calamity, and π_D ∈ [0, 1) is the likelihood of the calamity. (π_0 = 1 − π_D is the likelihood of no calamity.) The agents in the economy are a population of citizens, indexed by i ∈ I. The outcome of this model is a final allocation of goods, a pair (x, G), where x is a profile of private good levels and G is the level of the collective good. We assume that agents have identical utility functions and each agent i only values her own consumption, x_i, and the level of the collective good; i.e.

Ũ_i(x, G) = U(x_i, G)

for some utility function U. In Subsection 3.3 we discuss the functional form of U, which is different for risk aversion versus loss aversion.
Each citizen i has wealth, ω_i, to spend on the private good and on the tax payment, since the collective good is paid for by property taxes. The tax payment of an individual is her tax price times revenue, i.e. p_i · R (see Section 2). Moreover, since there are only two goods, assuming a binding budget constraint implies x_i = ω_i − p_i R; i.e. private consumption is determined by the size of the tax bill. We assume the tax price is a random variable of the form p_i = μ_i + ε_i, where μ_i is a known constant which can vary across citizens and ε_i is an independent, normally-distributed, mean-zero random variable with variance σ²_i, which can also vary across citizens. The epsilons are the source of individual risk in this model.
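A minimal sketch of this individual risk structure follows. The function names are ours, not the paper's code; the logic is just p_i = μ_i + ε_i with ε_i ~ N(0, σ²_i) and the binding budget constraint x_i = ω_i − p_i R.

```python
import random

def draw_tax_prices(mu, sigma, rng):
    """Draw realized tax prices p_i = mu_i + eps_i, eps_i ~ N(0, sigma_i^2)."""
    return [m + rng.gauss(0.0, s) for m, s in zip(mu, sigma)]

def private_consumption(wealth, prices, revenue):
    """Binding budget constraint: x_i = omega_i - p_i * R."""
    return [w - p * revenue for w, p in zip(wealth, prices)]

rng = random.Random(0)
prices = draw_tax_prices(mu=[0.2, 0.3, 0.5], sigma=[0.01, 0.02, 0.0], rng=rng)
x = private_consumption(wealth=[50.0, 60.0, 90.0], prices=prices, revenue=100.0)
# The third agent has sigma_3 = 0, so p_3 = mu_3 = 0.5 exactly and x_3 = 90 - 50 = 40.
```

Setting one σ_i to zero, as for the third agent here, recovers the deterministic case and makes the budget identity easy to verify by hand.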
We model this situation in two stages. In the first stage, agents vote on a policy called a tax ceiling. A tax ceiling c ∈ R_+ represents the maximum level of (future) revenues R allowed by law. A particular tax ceiling appears on the ballot, and agents either vote for or against that tax ceiling. In the second stage, price errors and calamity d are determined, and citizens then vote directly for a level of revenues R. The voted level of revenues is then subject to the tax ceiling (if it is in place) and also subject to possible Leviathan extraction γ by the local government. Anderson/Pape define Leviathan extraction γ as the ability to cause revenue to be (1 + γ) times the median ideal level R.
As explained in Anderson/Pape, calamity and Leviathan extraction have opposite roles in this model (at least in the restricted setting they consider). The more severe the calamity, the more voters wish to leave the government unconstrained, to deal with the calamity if it arises. On the other hand, the larger the level of Leviathan extraction, the more voters wish to constrain the government. Also, the higher the idiosyncratic risk, i.e. the variance of ε, the more agents wish to curb government ex ante, as they are risk or loss averse over the possibility that they get a large draw of ε_i which results in a higher tax payment.
The key question that this model answers is: if the government chooses an extraction level γ, what is the probability that it will be blocked by a tax ceiling? This is answered by calculating the tax ceilings c which get a majority vote in the first stage of the model, when agents have forecasts given by rational expectations of the outcomes with and without each possible tax ceiling.

A General Approach
We consider Rational Expectations Policy Voting as a method for selecting policies in an ABM context. We consider two stages. The first stage is a voting stage over policies that are pursued in the second stage. The second stage is an agent-based model of any number of periods, where the activity of the second-stage ABM depends in part on the policy from the first stage. Agents in the first stage have utility functions over second-stage outcomes (for example, an agent may value direct implications of the policy as well as her own future consumption, which may be affected by the policy).

19 A calamity can alternatively be thought of as destroying a certain fraction of the collective good.
We define how an agent-based model can provide rational expectations for agents voting for a policy in the first stage, where the policy gets implemented in the second stage. The mechanism is: agents repeatedly run the second-stage agent-based model with different policy values, thereby sampling the space of future outcomes. Said differently, rational expectations requires that agents in the model are as sophisticated as the modeler and use the model to forecast the implications of policy. In this case, the model is an agent-based model, so the agents populate their forecasts with that same agent-based model.
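In code, this mechanism amounts to tabulating the second-stage ABM over every (policy, state) pair. The sketch below uses our own illustrative names; the toy lambda stands in for any deterministic second-stage map from policy and state to an outcome.

```python
from itertools import product

def tabulate_abm(abm, policies, states):
    """Run the second-stage ABM once per (policy, state) pair and record the outcome."""
    return {(y, s): abm(y, s) for y, s in product(policies, states)}

# Toy stand-in for the second stage: the outcome is the voted revenue y,
# capped by a state-determined ceiling s.
toy_abm = lambda y, s: min(y, s)
table = tabulate_abm(toy_abm, policies=[1, 2, 3], states=[2])
# table == {(1, 2): 1, (2, 2): 2, (3, 2): 2}
```

Because the table is computed once and then shared, every agent forecasts from the same runs; this is what makes the resulting beliefs a common prior.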

Setting:
Suppose that there are I < ∞ agents in the economy, indexed i = 1, . . ., I. As previously stated, we use vector notation to denote a profile, meaning for any variable k, k = (k_1, k_2, . . ., k_I).
There is a set of final outcomes O with typical element o. These are assumed to be final outcomes after the second stage. In the Tax Ceilings model, the outcome set is R_+^(I+1), with typical element o = (x, G); that is, a level of consumption for each agent and the amount of the collective good G.
Agents are assumed to have utility functions u over outcomes, so u_i : O → R. This allows for the possibility that agents care about the consumption of other agents, for example for reasons of altruism or spite. In the Tax Ceilings model, we assume pure selfishness: agents value the collective good and their own consumption of the private good, and do not value others' consumption of the private good.20 In particular, we assume that each agent i in the model has a utility function Ũ_i over outcomes O of the form Ũ_i((x, G)) = U(x_i, G), for some common, quasiconcave function U. (More on the functional form of utility in Subsection 3.3, where we discuss risk aversion versus loss aversion.)

There is also a policy space Y with typical element y. In the Tax Ceilings model, the policy being voted on is a tax ceiling, i.e. the highest allowable revenue level R_max, where R_max ∈ Y_tax = R_+ ∪ {∞}, and ∞ represents no limit. Although the policy in the Tax Ceilings model is unidimensional, it need not be the case in other applications.
In the first stage, each agent will vote. We define V ⊆ Y as a vote space, which represents the list of available policies that agents can select from, and we define v ∈ V^I as a profile of votes. The Downsian or Median Voter Theorem approach [Downs, 1957] would define V = Y and, assuming Y is unidimensional, interpret y = Median(v) as the prevailing policy. Alternatively, votes could be defined more narrowly, as a ballot measure that is accepted or defeated. Empirically, tax ceilings have overwhelmingly been selected by ballot measure, so we take this approach in the Tax Ceilings model. We define the ballot measure for ceiling c ∈ Y_tax as V(c) = {c, ∞}. In other words, we restrict attention to two alternatives: a tax ceiling c or the alternative ∞, which represents no ceiling. Then if we let v(c) be the vector of votes when the ballot measure is V(c), we can define objects like the most restrictive tax ceiling with majority support, which equals

c* = min { c ∈ Y_tax : #{i : v_i(c) = c} > I/2 }.

This is a way to characterize the set of v which we find, and it allows us to find parameter settings in which there is (or isn't) majority support for a tax ceiling to restrict the government's behavior significantly.
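The most restrictive passing ceiling can be computed by brute force over the candidate ceilings. This sketch uses our own hypothetical names, with `math.inf` playing the role of ∞ (no ceiling):

```python
import math

def most_restrictive_passing_ceiling(ceilings, vote, n_agents):
    """vote(i, c) returns agent i's choice from the ballot {c, inf};
    return the smallest ceiling with strict majority support."""
    passing = [c for c in ceilings
               if sum(1 for i in range(n_agents) if vote(i, c) == c) > n_agents / 2]
    return min(passing) if passing else math.inf

# Toy preferences: agent i accepts any ceiling of at least i (in revenue units).
toy_vote = lambda i, c: c if c >= i else math.inf
c_star = most_restrictive_passing_ceiling([1, 2, 3, 4], toy_vote, n_agents=5)
# Agents 0, 1, and 2 accept c = 2, a strict majority of 5, so c_star == 2.
```

The strict inequality in the majority test mirrors the paper's convention that indifferent agents vote against the ceiling.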
The exercise pursued here reveals each agent's ideal policy in the first stage, as constrained by the vote space V. We suppose that there is some aggregation method H : V^I → Y which transforms votes into a selected policy. In the median voter or ballot measure settings described above, H(v) = Median(v); however, H could take on any form, such as different vote weights for different citizens. (For example, shareholder voting in a firm.)

The second stage, as stated above, is an agent-based model. We wish to abstract as much as possible from the details of the second-stage agent-based model, in order to treat it like a 'black box' from the first-stage point of view.
To do so, we first define a second-stage state space S, with typical element s. We assume that all randomness in the model is isolated in the definition of the state space. Therefore, it includes any variables exogenous to the second-stage ABM except the policy. In the Tax Ceilings model, these are the value of the random variable d, the calamity, and the value of each agent's price error ε. It is possible that agents' behavior is still random after these exogenous variables are determined; in this case the value of any variables they use for randomization must also be included in the definition of the state space. In the Tax Ceilings model, agents' second-stage behavior is deterministic given the policy and tax price errors. Note that this definition is consistent with Tesfatsion [2006], who defines agent-based models as deterministic maps, given that ABMs are implemented as algorithms and so use pseudorandom numbers, which are generated deterministically from a value called the 'random seed.' This random seed could be included in the definition of a state space if one desired.
Given this definition of the state space, we can define the agent-based model itself. It is a function which takes as given an implemented policy y ∈ Y and a state of the world s ∈ S and delivers an outcome o ∈ O. We call this function

L : Y × S → O.

We also assume that there is a common prior and true distribution F over S.
Given these primitives, we can define rational expectations in the following way. Recall S is the state space. Let S′ ⊆ S be a sampled subset of S. Sampling might be desirable for certain applications, so we allow for that possibility in the following treatment. (Note that we also allow for S′ = S.) In the Tax Ceilings problem, S′ is sampled according to the following scheme: we select the universe of possible values of calamity severity d, which is {0, D}, and, for each value 0, D, we sample 200 draws of the random vector (ε_i)_{i=1}^I, with ε_i ~ N(0, σ²_i). Let V ⊆ Y be the vote space. Define the following family of functions, one for each agent i ∈ I: let π_i(o|y) be a conditional probability distribution over O, given a policy value y ∈ V. π_i represents agent i's ex-ante belief over what outcomes would occur given the policy. Rational expectations requires that, for each agent i in I, her belief π_i must satisfy:

π_i(o|y) = Σ_{s ∈ S′} 1[L(y, s) = o] · F(s),

where 1[·] is an indicator function which takes on 1 when the argument is true and 0 when false. The interpretation is: agent i's belief about the probability that outcome o occurs, given that the policy is y, must be equal to the true probability that this outcome occurs, given the true distribution over S and the true mapping from policies and states to outcomes, which is given by the agent-based model L. Note that there are two distinct terms for each state: first, a deterministic component (coefficient) 1[·], and second, a stochastic component F(s). This reflects the fact that, by design, the function L (which appears inside the indicator) is deterministic given s.
Rational expectations requires that agents have forecasts given the true model and the true distribution over exogenous variables. In an agent-based model, the 'true model' is the agent-based model L itself. In order to construct the probabilities π, we run all combinations of s ∈ S′ and y ∈ V to calculate L(y, s) over these sets. Formally, this is equivalent to backwards induction. The spirit of backwards induction, or having forward-looking agents, is that the future must be solved 'first' and then knowledge of the future must be provided to the agents in the past to inform their decisions. We discuss more on this comparison in Section 6.
Running the agent-based model for all s ∈ S′ and y ∈ V generates the set:

{ (y, s, L(y, s)) : y ∈ V, s ∈ S′ }.

This set is sufficient to define all agents' rational expectations beliefs π, as defined in Equation 4. Using these beliefs, we can find the rational-expectations-consistent voting behavior of the first-period game using the following rule, which is simply utility maximization given beliefs π:

v_i = argmax_{y ∈ V} Σ_{o ∈ O} Ũ_i(o) · π_i(o|y),

where, as above, Ũ_i is agent i's utility function over outcomes. Given this formulation, the winning policy chosen in the first period is y = H(v).
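Putting the pieces together, the voting rule can be sketched as follows. All names here are hypothetical: `runs` stands for the tabulated set of ABM outcomes and `F` for the distribution over sampled states.

```python
def optimal_vote(utility, runs, policies, states, F):
    """Return the policy in V maximizing expected utility under the common prior."""
    def expected_u(y):
        # Sum over sampled states; runs[(y, s)] plays the role of L(y, s).
        return sum(F[s] * utility(runs[(y, s)]) for s in states)
    return max(policies, key=expected_u)

# Toy example: two policies, two equally likely states, a risk-neutral agent.
runs = {("a", 0): 1.0, ("a", 1): 1.0, ("b", 0): 3.0, ("b", 1): -2.0}
F = {0: 0.5, 1: 0.5}
vote = optimal_vote(lambda o: o, runs, ["a", "b"], [0, 1], F)
# E[u|a] = 1.0 beats E[u|b] = 0.5, so the agent votes "a".
```

In the toy example the safe policy wins even for a risk-neutral agent; with a concave utility in place of the identity, the margin in favor of "a" would only widen.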
Modelers should carefully consider the possibility that v_i may not be unique for some or all agents. For example, in the Tax Ceilings model, it is possible to consider tax ceilings c that simply never bind. Such tax ceilings are, of course, as good as 'no ceiling' (i.e. ∞) for all agents. In that case, agents' optimal votes are non-unique. For the purposes of the Tax Ceilings model, we adopted the following rule: we assumed that any agent who was indifferent between a ceiling c and the non-ceiling ∞ would vote for ∞ and against c. This is because we were most interested in cases in which a vote for a tax ceiling was an expression of strict preference for that ceiling. Such a rule may be useful for modelers applying this method to other ABMs.

Risk Aversion vs. Loss Aversion
The citizens i are expected utility maximizers, and we model them as risk averse or loss averse agents. There is now ample empirical evidence that people are loss averse, so integrating loss aversion and a reference level of utility into voting can help forecast realistic voting outcomes for a well-informed and sophisticated populace.
In the Tax Ceilings model, risk averse agents have the following identical von Neumann-Morgenstern utility function over final bundles of goods:

U(x, G) = x^(1−r)/(1−r) + G^(1−r)/(1−r).

It is additively separable, constant relative risk aversion (CRRA) utility, where r is the coefficient of relative risk aversion.21 For loss averse agents, we define loss aversion as a transformation of U. In particular, let u be an arbitrary level of utility and let Ū_i be a reference level of utility for agent i. Then define the loss aversion function as follows:

v(u; Ū_i) = l(u − Ū_i), where l(z) = z^α if z ≥ 0 and l(z) = −λ(−z)^α if z < 0,

with α = 0.88 and λ = 2.25. The function v incorporates the loss aversion elements, in particular the kink at the reference point; that kink is abstracted into the function l for notational simplicity. The values of λ and the exponent α are taken from the literature: Tversky and Kahneman [1992] and Barberis et al. [2001].
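The two utility specifications can be sketched directly, assuming the standard CRRA form and the parameter values α = 0.88 and λ = 2.25 cited above; the function names are ours.

```python
import math

def crra_utility(x, G, r):
    """Additively separable CRRA utility; r = 1 is the log limit."""
    if r == 1.0:
        return math.log(x) + math.log(G)
    return x ** (1 - r) / (1 - r) + G ** (1 - r) / (1 - r)

def loss_averse(u, u_ref, alpha=0.88, lam=2.25):
    """Kinked transformation of total utility around the reference level u_ref."""
    z = u - u_ref
    return z ** alpha if z >= 0 else -lam * (-z) ** alpha

# Losses loom larger than equal-sized gains:
gain, loss = loss_averse(1.0, 0.0), loss_averse(-1.0, 0.0)
# gain == 1.0 while loss == -2.25
```

Note that the transformation is applied to total utility u, not to the x and G components separately, matching the modeling choice discussed below.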
In the Tax Ceilings model, the utility function U(x, G) is additively separable. This suggests an alternative formulation, in which loss aversion could be applied separately to each component. (This could be followed in any agent-based model with separable utility.) We do not model loss aversion this way because, although this formulation is well-defined, it undercuts the spirit of loss aversion, in the opinion of the authors. The spirit of loss aversion is to view outcomes monolithically as a loss or a gain, so that a 'bad' outcome is considered a loss, which is treated differently than a 'good' outcome. Applying loss aversion to the components separately means that, given that there is generally a trade-off between consumption x and the collective good G (because G is paid for in taxes which reduce consumption), agents will experience, simultaneously, a loss and a gain for most bundles of x and G.
In loss aversion, the reference level of utility over the second-stage outcomes plays a key role. What is the correct choice for the reference level? One modeling choice, which we do not pursue, would be to assume that agents have some current level of utility which may be only tangentially related to the second-stage outcome, and it is this current level of utility which provides a reference level. In the Tax Ceilings model, this would be equivalent to assuming that each agent has some first-stage level of consumption and there is a first-stage level of the collective good, and these first-stage values are used to calculate the reference utility of each agent.
Instead of this approach, we choose to incorporate the spirit of rational expectations, or at least backwards induction, into reference utility selection. In the Tax Ceilings model, we begin with the second stage and find all agents' optimal policy. We assume, then, that this optimal policy wins out, with appropriate changes for Leviathan extraction. Then, we assume each agent's expected tax price and the optimal second-stage policy, as a pair, provide the reference level of utility in the first stage. This can be done simply in the Tax Ceilings model because there is no uncertainty in the second stage, so loss aversion plays no role there. That is, in the Tax Ceilings model, agents' optimal policy in the second stage, as a function of the realized random variables, is the same under risk aversion and loss aversion. This arises from the fact that we apply loss aversion to total utility and not to the additively separable components of utility, and from the fact that, in the Tax Ceilings model, agents are explicitly comparing an ex-ante and an ex-post ideal policy, which may not be the case in other second-stage ABMs.
What is the logic? In the simulation of the second stage, we find the ideal revenue (policy) of each agent under CRRA utility at expected prices and expected calamity, find the population median of that value, and declare the population median ideal revenue to be the reference level of revenue. This defines, for each agent, a reference level of utility equal to her utility at the reference level of revenue and at expected prices and damages. (It is possible to show this formally; please see Appendix A.1.) Choosing these values means we have scaled to the CRRA values, so this becomes as close as we can get to an 'all else equal' exercise.
What of a model with a more general second stage, beyond the Tax Ceilings model? How can the first-stage loss aversion reference utility level be worked out in this case? Rational expectations of outcomes given policy are still available for calculation of first-stage reference levels. The open question is: what policy (or distribution over policies) should be used to construct the expected outcome, and therefore the reference level of utility? The modeler has three choices. The first is to construct a 'second-stage ideal policy' as we do in the Tax Ceilings model; however, this may not be available if the second stage is sufficiently complex. The second choice is to make an assumption about the probability distribution over available policies used to construct the reference level of utility: for example, the modeler could assume one policy is the status quo. A modeler making this choice would be well-advised to consider alternative status quo policies to see the effect on voting outcomes. A third choice is to assume that the policy which wins the first-stage vote will be used as the reference policy; however, this is problematic, as it introduces a feedback loop: which policy wins the first stage is a function of agents' utility functions, and agents' utility functions are a function of which policy wins the first stage. In order to resolve this feedback loop, the modeler would have to search the available policy space for equilibrium (fixed-point) policies which satisfy this loop, or, if the loop provides negative feedback (which may or may not be the case), iteratively run the model until it converges to a fixed-point policy. This would likely increase the computational complexity of this approach considerably, although, importantly, the second stage would not need to be re-run for each iteration of the loop, because the second stage is already run for each available policy.

Varying Parameters
The previous section provides a well-defined algorithmic approach to rational expectations policy voting in an agent-based model. These results are most useful for answering questions such as: as some parameter z varies, how does support for a particular policy vary?
In the Tax Ceilings model, the three fundamental parameters are: γ, the level of Leviathan extraction by the local government; a level of idiosyncratic risk represented by a variable s, defined below; and a level of common risk represented by the severity of the calamity, D. For a given level of idiosyncratic and common risk, if the government chooses an extraction level γ, what is the probability that it will be blocked by a tax ceiling, assuming the passage of the most restrictive tax ceiling that has majority support? We call this probability of being blocked ρ.
In order to calculate whether a particular level γ is blocked by a tax ceiling, we must first calculate which tax ceilings have support in the first stage, given the amounts of idiosyncratic and common risk described above. We supply the agent-based model with empirical data from the cities of Binghamton and Minneapolis and vary the amounts of common and idiosyncratic risk; for each combination, we calculate whether each agent in the first round supports each possible tax ceiling. The amount of common risk is given by D, the severity of the calamity. The amount of idiosyncratic risk is constructed by assuming tax price uncertainty is proportional to observed tax price variability and varying that proportion. The proportion is measured by a variable s, called the log tax price scalar: the base level of variance in the data is multiplied by a scalar equal to 10^s to find tax price uncertainty, and we vary s to vary the amount of tax price uncertainty (i.e., idiosyncratic risk). A formal definition of ρ: suppose π_D is fixed at .1, and suppose s, D, and γ are allowed to vary. All other parameters we assume are defined by the jurisdiction j. Define c, the most restrictive majority-supported tax ceiling, as a function of the relevant parameters: c(j, s, D). We can then define the key outcome variable ρ(γ | j, s, D): the probability that extraction level γ will be blocked by voters under the ceiling c(j, s, D). After describing the empirical data in the next section (Section 4), we share our estimates and interpretation of ρ in the results section, Section 5.
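As a rough illustration of how such a ρ can be estimated by Monte Carlo sampling, consider the sketch below. It is heavily stylized: the revenue rule, the parameter values, and the assumption that the ceiling is already known are all our own simplifications, whereas in the model the ceiling c(j, s, D) itself emerges from the first-stage votes.

```python
import numpy as np

rng = np.random.default_rng(0)

def rho(gamma, ceiling, base_revenue, pi_D=0.1, damage=0.05, n_draws=10_000):
    """Monte Carlo estimate of rho: the probability that extraction level
    gamma is blocked by the ceiling. Stylized: revenue demanded is the
    base revenue scaled up by gamma, plus repair costs if a calamity hits."""
    calamity = rng.random(n_draws) < pi_D          # common risk, prob pi_D
    demanded = base_revenue * (1 + gamma) + damage * calamity
    return np.mean(demanded > ceiling)             # fraction of runs blocked

# With a ceiling that binds only when a calamity occurs, rho is close to
# pi_D = .1 -- one of the three levels discussed in the results section.
print(rho(gamma=0.02, ceiling=1.03, base_revenue=1.0))
```

In the actual model, the inner step (whether the ceiling binds) is evaluated on the sampled joint distribution of states that also underlies the agents' common prior.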

Data
Our simulations require that we set levels of tax price, tax-price uncertainty, wealth, and the size and probability of a calamity. Below, we discuss each in turn.

Tax price and wealth
We use administrative data to establish the wealth, tax-price levels, and variance of the individual agents in our simulations. These data allow us to observe how the tax prices of individual taxpayers evolve over time. Rather than calibrate the model with wealth and tax prices from only one city, we use data from two cities: Binghamton, New York, and Minneapolis, Minnesota. As we explain below, taxpayers in these two cities experience dramatically different levels of tax-price variance.
Using two cities where taxpayers experience different levels of tax-price variance allows us to better understand how tax-price uncertainty affects the level of Leviathan power that voters are willing to tolerate before they use a tax ceiling to block Leviathan government. These administrative data contain the estimated market values of all taxable properties in Binghamton and Minneapolis. For Minneapolis, we have these data for the years 2001 to 2009. For Binghamton, we were able to acquire these data for the years 2007 to 2012. Data for Minneapolis are provided by the Minnesota Department of Revenue. Data for Binghamton are provided by the Broome County GIS Portal Website (2013). To calculate the tax price for each property in each year, we divide a property's estimated market value by the sum of all estimated market values. Thus, each property's tax price equals its share of total market value.22 After we calculate tax prices, for our simulations we restrict each dataset to a set of 200 residential homes. For each city we randomly select these 200 homes from the set of residential single-family homes that remained in the sample in all years.23 We calculate the tax price variance for each home and then multiply that value by the log tax price scalar to produce tax price uncertainty (see the next section for details). In both cities, we use one year of estimated market value (in Binghamton, 2012, and in Minneapolis, 2005) as a proxy for an agent's wealth.24 Thus, the relationship between wealth and tax price in one year is linear, but the relationship between wealth and tax price variance is generally non-linear. As explained above, relative changes in assessed values (i.e., estimated market values) cause the tax prices of individual taxpayers to change over time. The uncertainty experienced by a homeowner in these cities is the risk of an unexpected change in tax prices.
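The construction of tax prices and scaled tax-price uncertainty can be sketched as follows. The data here are randomly generated stand-ins for the administrative data, and all variable names are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical assessed market values: 200 properties observed over 6 years
# (stand-ins for the Binghamton/Minneapolis administrative records).
values = rng.lognormal(mean=12.0, sigma=0.5, size=(200, 6))

# Tax price: each property's share of the total assessed value in each year.
tax_price = values / values.sum(axis=0)

# Base tax-price variance per home over the observed years...
base_variance = tax_price.var(axis=1)

# ...scaled by 10**s (s is the 'log tax price scalar') to produce the
# idiosyncratic tax-price uncertainty fed to the simulation.
s = 2.0
tax_price_uncertainty = base_variance * 10**s

print(tax_price.sum(axis=0))  # each column sums to 1 by construction
```

Because tax price is a share, the uncertainty each agent faces is relative: a home whose assessment never moves relative to the total has near-zero base variance, as in most of Binghamton.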
Comparing Binghamton and Minneapolis is interesting because the two cities use different assessment regimes that produce different levels of tax-price variance and uncertainty. State law and the behavior of the local assessor combine to determine the assessment regime. The state law that governs assessments in New York and Minnesota is similar. The most important state legal requirement common to both cities is that, each year, the local assessor must estimate the current market value of each taxable property. Additional state laws, which are less relevant here, determine the percentage of that estimated market value that is taxable.
Although both states require the good-faith estimation of market value, the assessor in Minneapolis appears to obey state law while the Binghamton assessor does not. In Binghamton, the majority of properties are never reassessed during the six-year period we observe. On average, in any year only 3.5 percent of Binghamton properties experience any change in estimated market value. The assessor did not change these estimated market values even though evidence establishes that actual market values were changing.25 On the other hand, virtually all Minneapolis properties are reassessed every year during the ten years we observe.26 In a conversation, the Binghamton assessor's office stated that it had not completed a city-wide reassessment since 1993. Why do assessors in Binghamton fail to update estimated market values? If a state government does not provide resources and/or enforce the law, the local assessor may lack adequate resources and adequate incentives to comply. In sum, Binghamton and Minneapolis have very different assessment regimes.
The main implication of these different regimes is that, for most individual properties, tax prices vary over time in Minneapolis but not in Binghamton. Of course, tax-price variance may differ between the two cities for other reasons. That is, even if Binghamton and Minneapolis used identical assessment regimes, some differences in tax-price variance would remain because of, for example, differences in the path of real estate prices between the two cities. Because we cannot observe actual market values for all properties (not all properties sell each year), our data do not allow us to identify precisely the extent to which the different assessment regimes are responsible for differences in tax-price variance. However, we believe that the fact that Binghamton assessors rarely update estimated market values to reflect changes in sales prices, while Minneapolis assessors regularly do, is the main cause of the different levels of tax-price variance between the two cities.
Tables 1 (Binghamton) and 2 (Minneapolis) display the different levels of wealth, tax price, and tax price variance in the two cities used in this analysis. Wealth is much greater in the city of Minneapolis than in the city of Binghamton. Tax-price variance in Minneapolis is greater than in Binghamton. This last difference largely reflects the different de facto assessment regimes. Figure 1 demonstrates that, in both cities, the distribution of wealth is skewed and looks roughly exponential.

A Calamity
The concept of 'calamity' is intended to capture the intuitive idea that a tax ceiling has a cost. If tax ceilings had no cost, agents would have no reason to object to a tax ceiling. The cost of a ceiling is the loss of budget flexibility. The risk that the government may experience unexpected increases in the cost of delivering a certain quality or quantity of public services makes budget flexibility valuable. We call such an unexpected increase in cost a 'calamity.' These cost increases may be caused by increases in input prices (e.g., gasoline, skilled labor, fire engines, snow plows) or by events such as natural disasters or severe weather that place a strain on municipal services. A tax ceiling may prevent the government from responding to such an increase in costs by raising taxes to avoid reductions in public services.
In our simulations, we explore how different sizes and probabilities of calamity affect voters' use of tax ceilings. In general, the more severe and more probable the calamity, the more voters value flexibility and the less they value tax ceilings. Of course, if we set the size or probability of a calamity high enough, voters will never demand tax ceilings. To put some upper bound on the size and probability of a calamity, we consider the extreme example of natural disasters such as tornadoes, earthquakes, or hurricanes. These risks can be substantial and occur frequently; according to the National Flood Insurance Program, high-risk areas have a 25% chance of experiencing a flood over a 30-year mortgage, and many of these high-risk properties are not covered by flood insurance. We use a 10% probability of a calamity occurring to the cities. These risks may put substantial strain on local governments to rebuild needed infrastructure such as public buildings, schools, and roads, or to update other public goods earlier than expected. And the damage caused by natural disasters tends to be large. For example, rebuilding a school for $10 million would equate to roughly 2% of wealth in Binghamton. A 2012 flood in Binghamton caused losses that greatly exceeded this level. However, not all of these losses will be covered by property taxes, because other governments (state or national) may provide relief. With all of this in mind, we explore multiple levels of severity of the calamity, ranging from 0% to 1% of the value of the total housing stock.

25 In Minneapolis, between 2001 and 2009, the COD decreased from 12.7 to 11.4 (source: Minnesota Department of Revenue, Assessment Sales Ratio Study). The increasing COD in Binghamton is consistent with its failure to update estimated market values to reflect heterogeneous changes in market value among Binghamton residential properties.

26 Over the period 2000-2006, about 94% of properties in Minneapolis change value. In 2007 and thereafter, this falls to around 50% (which is still much larger than the rate of reassessment in Binghamton). We believe that the reason for this fall is the 'popping' of the real estate bubble: if actual market values are falling, assessors may hesitate to lower property value assessments.

Results
In this section, we present empirical estimates of the probability landscape ρ_j(γ | s, D) faced by the government of city j ∈ {bing, mpls}. This probability ρ_j is the probability that extraction level γ will be blocked by a tax ceiling, given idiosyncratic risk scalar s and common risk D. After providing more details about ρ, we interpret a representative graph (Section 5.1) and use that interpretation to define a 'Leviathan Extraction Choice Set.' We use this definition to establish our key result: we estimate that if there is any common risk, Binghamton voters will be more tolerant of Leviathan extraction than Minneapolis voters (Section 5.2). Then we establish secondary results: that Leviathan extraction tolerance increases as common risk increases (Section 5.3), and that loss aversion results in increased tolerance, but only for low to moderate levels of idiosyncratic risk (Section 5.4).
The probability ρ that some extraction level γ will be blocked by a tax ceiling varies across jurisdictions because s does not fully capture idiosyncratic uncertainty. We assume that the idiosyncratic uncertainty of each voter is proportional to the empirical tax price variance experienced by that voter (see the previous section); s governs this proportion.27 This means an increase in s does (weakly, as we see below) increase uncertainty for all agents, but there are still differences in base levels of empirical tax price variance. In particular, the empirical tax price variance profiles σ²_mpls and σ²_bing vary in both mean tax price variance and the distribution of that variance, and these differences are directly influenced by housing assessment policy. As described in Section 4, because of different tax assessment regimes, Minneapolis has a much larger mean tax price variance, and all citizens in Minneapolis have non-zero variance. On the other hand, Binghamton has a much smaller tax price variance: most citizens have a very low tax price variance, while a small number of others have one that is markedly higher. So the population distributions of tax price variance in Minneapolis and Binghamton are significantly different.

Interpretation of a representative graph
In principle, ρ could take on any level between zero and one. However, we find that, by and large, three levels of ρ emerge: zero, .1, and ≈ 1. These three levels correspond to the common risk in the problem. There is a fixed, ten percent chance of a calamity of severity D, with a corresponding reduction in the collective good. This means the three levels we observe correspond to three cases: tax ceilings which never bind, tax ceilings which bind only when a calamity occurs, and tax ceilings which always bind. The first two categories correspond to a local government which is unconstrained in the normal (non-calamity) state of affairs, while the third category corresponds to a local government that is constrained in the normal state of affairs. Therefore, for simplicity, we say that in the first case there is "no majority support for a binding tax ceiling" (unless a calamity occurs), and in the second case there is "majority support for a binding tax ceiling." We think it is useful to think of the set in which there is no majority support for a binding tax ceiling as providing a "Leviathan Extraction Choice Set" for the government. A Leviathan Extraction Choice Set can be thought of as the set of options for Leviathan Extraction γ that the government has without fear of facing a tax ceiling.
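The three-way mapping just described can be expressed as a small classifier; the tolerance, the function name, and the treatment of borderline values below are our own illustrative assumptions, not part of the model.

```python
def classify_ceiling(rho, pi_D=0.1, tol=0.02):
    """Map an estimated blocking probability rho onto the three regimes
    observed in the simulations (tol is an arbitrary tolerance we assume)."""
    if rho <= tol:
        return "never binds"                 # no majority support for a binding ceiling
    if abs(rho - pi_D) <= tol:
        return "binds only in a calamity"    # binds with probability pi_D
    if rho >= 1 - tol:
        return "always binds"                # majority support for a binding ceiling
    return "intermediate"

print(classify_ceiling(0.0))   # never binds
print(classify_ceiling(0.1))   # binds only in a calamity
print(classify_ceiling(0.99))  # always binds
```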
Figure 2 depicts a representative Leviathan Extraction Choice Set. The amount of support is represented by ρ_bing(γ | s, D), where D is set equal to .5% of total housing wealth and agents have only risk aversion, not loss aversion. Idiosyncratic Risk, on the x-axis, reflects the amount of tax-price uncertainty voters experience, measured by the log tax price scalar s; in this instance, the tax-price variance is that of Binghamton, NY. Leviathan Extraction γ, on the y-axis, is the local government's choice variable given a value of s and a value of D (which is constant in this graph). This can also be thought of as the degree to which the government is able to raise revenue in excess of the majority-preferred amount.
Note that the Leviathan Extraction Choice Set is, unsurprisingly, a contiguous area that includes the origin. Also note that, as idiosyncratic risk increases, the allowable level of Leviathan Extraction falls. This is broadly consistent with the theory model. The additional insight provided by our agent-based model comes from the incorporation of data, so these sets can be shown for the actual cities of Binghamton and Minneapolis and for different parameter values.
There is some variation in ρ that does not fall neatly into the three categories of 0, .1, and ≈ 1; we discuss some of this variation below. Moreover, a full set of graphs is available in Appendix Section A.3, including three-dimensional graphs (of which the contour graphs depicted here are projections), so the full variation in ρ can be seen.
Intermediate levels of ρ, other than the three levels found above, would be a result of idiosyncratic risk. The idiosyncratic risk does imply that, conditional on the level of calamity (0 or D), there is a distribution of median ideal revenue levels. And at high levels of s, there is a fair amount of variability. But these figures show that voters rarely choose to exclude particular levels of revenue within these distributions, choosing instead to contain either all revenue levels consistent with a certain level of calamity, or none of them. This suggests it is reasonable to assume that the ex-post median ideal revenue level, conditional on damages, is essentially deterministic for the purposes of considering extraction levels. (This does not mean that idiosyncratic tax price variance does not matter; if it did not, the line between extraction levels would be parallel to the x-axis.) For the remainder of this section, we compare the Leviathan Extraction Choice Sets between and within Binghamton and Minneapolis for different parameter values.

Binghamton versus Minneapolis: The Key Result
Figures 3 and 4 depict our key result: the differences in tax price variance brought about by differences in assessment policy between Binghamton and Minneapolis result in contrasting empirical estimates of the choice sets faced by their local governments. If there is any common risk, the government of Binghamton has much more freedom to extract than does Minneapolis at all levels of idiosyncratic risk s. And as common risk increases, the edge that the Binghamton government has over Minneapolis increases.
Moreover, we find this result holds over both risk aversion (Figure 3) and loss aversion (Figure 4).
For the highest severity of calamity (1% of housing wealth), we find that the Binghamton government can extract as much as five times as many cents on the dollar as the Minneapolis government can. This result strongly suggests that assessment policy could significantly change voter support for a tax ceiling in a given jurisdiction. Note that scales differ between Figures 5a and 5b; see Figure 3 for a direct comparison across cities.

Within-city comparison of Leviathan Extraction Choice Sets as Common Risk Varies

Binghamton is on the left (Figure 5a), and Minneapolis is on the right (Figure 5b). The x-axis in these graphs is idiosyncratic risk s and the y-axis is the level of Leviathan Extraction γ; note that the y-axis range depicted for Binghamton is from zero to one, while for Minneapolis it ranges only from zero to .1.
In both figures, it can be seen that, by and large, the choice sets are flat or downward sloping and are nested, one within the next, as the severity of calamity increases. In Binghamton this effect can be seen quite clearly and cleanly; however, in Minneapolis the ordering collapses after idiosyncratic risk s exceeds 4. This overall effect (ignoring, for a moment, the collapse of the pattern in Minneapolis for high levels of s) indicates that, as common risk increases, the local governments have more freedom to tax in those times when a calamity does not occur.
The exception to the downward slope and nesting occurs in Minneapolis for high levels of s (> 4). The reason for this is truncation of the tax price at (close to) zero. When a tax price error applied to an individual's average tax price brings it near zero, it is truncated at a small, positive value.28 This means that when the empirical level of tax price variance in Minneapolis, σ²_mpls, is scaled too high, many agents become truncated at this identical, low tax price level at the same time. The truncation turns the idiosyncratic risk into a kind of common risk. Therefore, when calamity severity is low enough, this artificial common risk, which is increasing in s, has some impact on which tax ceilings are supported. Why doesn't this truncation effect occur in Binghamton? The reason is that, in Binghamton, the majority of properties experience no change in home value, so these agents have a tax price variance of zero. This means that even at high levels of s, not enough properties are simultaneously truncated to induce a significant amount of this artificial common risk. (In Appendix Section A.3, three-dimensional graphs for Minneapolis show that s ∈ [4, 6] results in a noisy 'spike' in the choice set.)

Figure 6 depicts the two Leviathan Extraction Choice Sets in Binghamton, with calamity severity D = 1% of housing wealth, under risk aversion and under loss aversion. The pattern here is typical of Minneapolis as well (see Figure 16 in the Appendix for the Minneapolis graphs). Loss aversion changes an individual's evaluation of a risky outcome by magnifying the downside, i.e., by making the worst outcomes worse. Here there are two sources of risk, and the relative magnitude of these two sources determines which is worse. It appears that the turning point occurs around a level of idiosyncratic risk s ≈ 6.
Below 6, the more substantial risk seems to be the common risk that too little revenue will be collected because of a tax ceiling, so voters allow for much more Leviathan Extraction at those levels. On the other hand, above 6, the more substantial risk seems to be idiosyncratic risk, so voters are more concerned about having to pay too much in taxes, and they are willing to accept much less Leviathan Extraction.
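The truncation mechanism described above for Minneapolis can be illustrated directly. The floor value, distributional form, and parameter values below are our own assumptions, not the model's exact specification.

```python
import numpy as np

rng = np.random.default_rng(2)

FLOOR = 1e-6  # truncation point; the actual small positive value is an assumption

def shocked_tax_prices(mean_price, variance, n=200):
    """Apply normally distributed idiosyncratic tax-price errors and
    truncate near zero, as the simulation does for tax prices."""
    draws = rng.normal(mean_price, np.sqrt(variance), size=n)
    return np.maximum(draws, FLOOR)

# Moderate variance: essentially no agent hits the floor.
low = shocked_tax_prices(mean_price=0.005, variance=1e-6)

# Variance scaled far up (a high log tax price scalar s): a large share of
# agents pile up at the identical floor value, so the nominally idiosyncratic
# risk starts to behave like a common shock.
high = shocked_tax_prices(mean_price=0.005, variance=1e-2)

print((low == FLOOR).mean(), (high == FLOOR).mean())
```

In Binghamton, where most agents have zero base variance, scaling by 10^s leaves those agents untouched, which is why the pile-up at the floor (and the resulting artificial common risk) does not appear there.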

Discussion
We employ rational expectations policy voting in an agent-based model: a forward-looking model in which the future is calculated first and then used to make 'past' decisions. In the previous section, we analyzed the empirical results which arise from our particular application, tax ceilings in Minneapolis and Binghamton. This provides a template for how to interpret this kind of model (rational expectations policy voting in an ABM), particularly one that incorporates empirical data as ours does. In this section, we discuss the larger methodological implications of this approach.
First, we discuss other appearances of rational expectations in agent-based models; we explicitly contrast this approach with adaptive learning; we note a comparison with other instances of rational expectations and explore two resulting future extensions; and then we discuss the difficulties in extending this approach to a general method for finding rational expectations in an agent-based model.
There are a small number of agent-based modeling papers that explicitly engage with rational expectations. With one exception, these papers calculate rational expectations outcomes or equilibria using analytical methods and then test under what conditions the agent-based model converges to those equilibria, and either use this to validate/test the ABM or to criticize rational expectations.29 (A piece in Nature [Farmer and Foley, 2009], for example, describes ABM as an alternative to rational expectations modeling in macroeconomics.) These papers are very different from our approach, since we provide a new way to find rational expectations in some ABMs. One hallmark of this distinction is that our method calculates the future first, followed by the present; we have found no other agent-based models that use this framework. The one exception is Guilfoos et al. [2013], who first build an agent-based model of farmers who draw water from a groundwater aquifer, taking taxes on water use as given, and then search the space of alternative taxes to choose the profile of tax rates that maximizes discounted future social welfare. In some sense, it could be said that the social planner in that paper chooses tax rates with rational expectations of their implications. However, the main distinction is that, unlike in the model presented here, the phase of policy choice is not tied to voting by individual agents, so the individual agents do not have or use rational expectations.
One way to approach this model with adaptive learning would be to repeat the voting and resolution phases until voting behavior settles down, while applying adaptive learning techniques to the payoffs associated with the votes. For any non-trivial population size, this would be problematic: because individual votes so rarely affect the outcome, most learning techniques would either conclude that votes are irrelevant or characterize differences based on a very small number of cases where an individual vote had an impact.
Another way to approach this model with adaptive learning would be to require that agents vote honestly, as we require in our model (and as is common in the voting literature). How could this work in an adaptive context? Agents could use the policy choice, not their individual vote, as the thing about which they learn, and adapt their vote over time to match their current estimate of the best policy. This would introduce path dependence, which may or may not converge to the rational expectations outcome(s) as we have defined them here.
However, this does not mean adaptive learning and this approach are at odds. On the contrary, the second-stage ABM could itself have adaptive learning (in which case agents would be constructing rational expectations of how they and other agents will learn after the policy is decided). Moreover, it is interesting to ask when an adaptive learning approach to voting would, or would not, converge to the rational expectations voting outcome. This may suggest a kind of 'learnability' of certain voting outcomes.
From a certain point of view, full rational expectations would be given by the π, v, and y consistent with the sampled set S equal to the full space of exogenous states. That is well-defined in this approach, but may not be computationally feasible. In any case, this merely means that any given sample S provides an approximation of rational expectations.
Although we do not explore this in the Tax Ceilings model, it is also possible to have samples S_i which vary across agents. The subtlety of an agent's model of the law of nature depends on the nature of the sampling: its size, of course, but also its representativeness. Therefore, varying the size or elements of S_i could reflect different sophistication, knowledge, or past experiences of agents. This would be an interesting avenue for future research, especially if one were interested in directly modeling heterogeneous sophistication of voters. Another extension would be to view the sampled runs associated with S as a memory in the sense of case-based decision theory [Gilboa and Schmeidler, 1995]. This would allow a form of vote selection consistent with human choice in classification learning laboratory experiments from psychology [Pape and Kurtz, 2013].
Finally, this model is specifically about policy voting in an agent-based context. A natural question is: could this method be expanded to rational expectations agent-based modeling in general? Consider how one would approach this. One could run an entire agent-based model in reverse chronological order: i.e., identify all possible end states, then iteratively find all possible paths (histories of actions of agents) that can result in those end states, applying the rule that agents select actions to maximize expected utility, given future outcomes/paths and the behavior of other agents. There are two major challenges with expanding the method in this way, both of which are avoided by restricting attention to policy voting.
To make this discussion concrete, suppose we consider an arbitrary two-stage ABM in which we wish to have each agent i select an action from a set A_i, using rational expectations of future outcomes in the second stage, and suppose that second-stage outcomes are influenced by those actions and a state of the world s ∈ S. Compare this to our current model, in which agents select votes v_i ∈ V and the second-stage outcome is a function of the votes v and the state s ∈ S.
The first challenge with this approach is computational feasibility. In the policy-voting approach, the second-stage ABM need only be run for each possible policy outcome and state of the world, i.e., for each element of V × S. The arbitrary alternative ABM would need to be run for every element of the set ⋃_{i∈I} A_i × S. Assuming the size of V is roughly the size of A_i, this results in an I-fold increase in the number of runs required. For a model with a non-trivial number of agents, say one thousand, this could make an otherwise feasible problem infeasible.
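A back-of-the-envelope calculation makes the feasibility gap concrete. All set sizes below are illustrative assumptions, not values from the paper.

```python
# Run counts for the two designs discussed above (illustrative sizes).
n_policies = 20      # |V|: candidate tax ceilings on the ballot
n_states = 100       # |S|: sampled exogenous states of the world
n_agents = 1_000     # I: number of agents
n_actions = 20       # |A_i|: per-agent actions, assumed roughly |V|

policy_voting_runs = n_policies * n_states             # |V| * |S|
general_abm_runs = n_agents * n_actions * n_states     # |union of A_i| * |S|

print(policy_voting_runs)                      # 2000 second-stage runs
print(general_abm_runs)                        # 2000000 second-stage runs
print(general_abm_runs // policy_voting_runs)  # 1000: the I-fold increase
```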
The second challenge is the simultaneous determination of behavior. In the policy-voting approach, we seek agents' honest votes about a future policy. Calculation of honest votes does not require agents to be strategic: one's honest (non-strategic) vote is simply the policy one prefers, and it is not contingent on how other people vote. Therefore, there is no need to seek a Nash Equilibrium (or something of that nature), because agents' choices are independent. However, in the arbitrary alternative ABM considered above, agents are required not only to have rational expectations of future outcomes, but also rational expectations of each other's current behavior. This likely means applying some kind of equilibrium-solution mechanism at each stage. This makes the model even less computationally feasible, and it undermines one of the major benefits of ABMs, which is avoiding the search for fixed points.

Conclusion
Our agent-based model incorporates empirical data to estimate the importance of tax payment uncertainty and Leviathan extraction to popular support for tax ceilings. We use empirical data from two American cities, Minneapolis, MN and Binghamton, NY, to inform our model. These cities have different property assessment regimes, which in turn generate different profiles of tax price variance across their populations. According to our model, the different property assessment regimes imply that Binghamton residents would be as much as five times as tolerant of Leviathan extraction as the residents of Minneapolis, because tax price variance is much higher in Minneapolis, where properties are assessed more frequently. This suggests that property tax assessment regimes, and the resulting tax payment uncertainty, could be a key factor determining voter support for these laws.
The methodological contribution of this paper is to introduce voters embedded in an agent-based model who forecast the impact of alternative policies before they vote. We call this 'rational expectations voting in an agent-based model,' because we sample the space of possible outcomes and generate a joint belief distribution over all random variables as a function of policy. The resulting belief distribution is then endowed to all agents as a common prior, and all agents maximize expected utility with regard to this common prior. While the policy considered here is a property tax ceiling, this method could be paired with other agent-based models in completely different economic settings, such as market models to predict support for the minimum wage, or an environmental model to predict support for resource rationing.
The policy contribution of this paper is to show that, all else equal, assessment policy can significantly change the freedom that a local government has to raise revenues, which could plausibly explain which cities support, and which do not support, a given tax ceiling. This is a noteworthy contribution because existing models in the literature, with the exceptions of Vigdor [2004] and Anderson and Pape [2013], offer Leviathan extraction as the only explanation of support for tax ceilings. Unlike the existing literature investigating voter support for tax ceilings, we estimate how uncertainty in tax payments affects support for tax ceilings, and we use new data: household-level, panel tax-price data. Future research could focus on whether variation in assessment policies predicts passage of these laws.

Figure 2 :
Figure 2: Voter support for a binding tax ceiling. Idiosyncratic Risk (s) is the uncertainty faced by voters as a function of actual tax price variance. Leviathan Extraction (γ) is the level of Leviathan Extraction chosen by the local government. Calamity Severity D is set equal to 0.5% of Wealth.

Figure 3: Relative Size of Leviathan Extraction Choice Sets Under Risk Aversion: Binghamton vs. Minneapolis

Figure 5: How Leviathan Extraction Choice Sets Vary as Calamity Severity D Varies from 0% to 1% of Wealth under Risk Aversion

Figure 5 depicts how Leviathan Extraction Choice Sets vary as the severity of calamity D varies. Binghamton is on the left (Figure 5a) and Minneapolis is on the right (Figure 5b). The x-axis in these graphs is idiosyncratic risk s and the y-axis is the level of Leviathan Extraction γ; note that the y-axis range depicted for Binghamton runs from zero to one, while that for Minneapolis only ranges from zero to 0.1. In both figures, the choice sets are by and large flat or downward sloping and are nested, one within the next, as the severity of calamity increases. In Binghamton this effect can be seen quite clearly and cleanly; in Minneapolis, however, the ordering collapses after idiosyncratic risk s exceeds 4. This overall effect (ignoring, for a moment, the collapse of the pattern in Minneapolis at high levels of s) indicates that, as common risk increases, local governments have more freedom to tax in those times when the calamity does not occur. The exception to the downward slope and nesting occurs in Minneapolis at high levels of s (> 4). The reason is truncation of the tax price at (close to) zero: when a tax price error applied to an individual's average tax price would bring it near zero, the price is truncated at a small, positive value.28 This means that when the empirical level of tax price variance in Minneapolis σ²_mpls is raised too high, many agents become truncated at this identical, low tax price at the same time. The truncation turns idiosyncratic risk into a kind of common risk. Therefore, when calamity severity is low enough, this artificial common risk, which is increasing in s, has some impact on which tax ceilings are supported. Why doesn't this truncation effect occur in Binghamton? The reason is that, in Binghamton, the majority of properties experience no
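The truncation mechanism described above can be made concrete with a small simulation. All parameter values here are hypothetical, not the Minneapolis calibration: the point is only that as idiosyncratic noise s grows relative to the mean tax price, an increasing share of agents is pinned at the same truncation floor in any given draw, which is what converts idiosyncratic risk into a kind of common risk.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical illustration of the truncation effect: when the tax-price
# noise is large relative to the mean tax price, many agents land below
# the floor in the same draw and are truncated to the identical value.
N_AGENTS, N_DRAWS = 500, 4000
MEAN_TAX_PRICE = 1.0
FLOOR = 0.01                      # small positive truncation point

def share_truncated(s):
    noise = rng.normal(0.0, s, size=(N_DRAWS, N_AGENTS))
    tax_price = np.maximum(MEAN_TAX_PRICE + noise, FLOOR)
    return (tax_price == FLOOR).mean()   # fraction pinned at the floor

for s in [0.5, 2.0, 4.0, 8.0]:
    print(f"s = {s}: {share_truncated(s):.1%} of tax prices truncated")
```

As s rises, the mass of agents sitting at the identical floor value grows, so a larger block of the population responds to any policy in lockstep.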

Figure 6: Binghamton, Probability That Most Restrictive Passing Limit Binds: Risk v. Loss Aversion

Thanks to the faculty of the Department of Economics at Binghamton University for useful comments and suggestions. Thanks to the students of the Binghamton University graduate economics course "ECON 696H: Agent-based Policy Modeling," who helped develop this model and provided research assistance: Huong Do, Yangyang Ji, Huan Li, Olu Omodunbi, Daniel Parisian, Tuan Pham, Apoorva Rama, Mikhail-Ann Urquhart, Xiaohan Zhang.
Figure 10: Binghamton, Probability That Most Restrictive Passing Limit Binds, Loss Aversion

Figure 11: Minneapolis, Probability That Most Restrictive Passing Limit Binds, Risk Aversion

Table 2: Minneapolis Sample Summary Statistics

A.2 Results: Statistics