Climate Change and Peak Demand for Electricity: Evaluating Policies for Reducing Peak Demand Under Different Climate Change Scenarios

This research focuses on the relative advantages and disadvantages of using price-based and quantity-based controls for electricity markets. It also presents a detailed analysis of one specific approach to quantity-based controls: the SmartAC program implemented in Stockton, California. Finally, the research forecasts electricity demand under various climate scenarios and estimates the potential cost savings that could result from a direct quantity control program over the next 50 years in each scenario. The traditional approach to dealing with the problem of peak demand for electricity is to invest in a large stock of excess capital that is rarely used, thereby greatly increasing production costs. Because this approach has proved so expensive, there has been a focus on identifying alternative approaches for dealing with peak demand problems. This research focuses on two approaches: price-based approaches, such as real-time pricing, and quantity-based approaches, whereby the utility directly controls at least some elements of electricity used by consumers. This research suggests that well-designed policies for reducing peak demand might include both price and quantity controls. In theory, sufficiently high prices during periods of peak demand and/or low supply can cause the quantity of electricity demanded to decline until demand is in balance with system capacity, potentially reducing the total amount of generation capacity needed to meet demand and helping meet electricity demand at the lowest cost. However, consumers need to be well informed about real-time prices for the pricing strategy to work as well as theory suggests. While this might be an appropriate assumption for large industrial and commercial users, who have potentially large economic incentives, there is not yet enough research on whether households will fully understand and respond to real-time prices.
Thus, while real-time pricing can be an effective tool for addressing peak load problems, pricing approaches are not well suited to ensuring system reliability. This research shows that direct quantity controls are better suited for avoiding the catastrophic failure that results when demand exceeds supply capacity. Real-time pricing has many advantages, but consumer response to real-time prices is not reliable enough to protect against catastrophic system failure. The reason lies in the distinction between higher (but well-behaved) increases in marginal supply costs and outright system failure. Peak demand problems do not develop smoothly and gradually. Instead, they are characterized by infrequent but serious crises whose timing is largely unpredictable. It is the potential for system failure that requires rapid temporary changes, and it is here that pricing measures appear to be subject to some severe practical limitations. Real-time pricing cannot guarantee a sufficient demand reduction to avoid system failure. The price elasticity of electricity demand is largely unknown, particularly at extreme temperatures. A one-time high hourly price may not produce the necessary reduction in demand quickly or predictably enough to avoid catastrophe. This suggests one major advantage of direct quantity controls: if the control is effective and can be deployed quickly, regulators can be assured of avoiding system catastrophe. For these reasons, the ideal peak demand policy might contain a mixture of tools, with real-time pricing and direct load controls to reduce peak demand and maintain system reliability under different climate change scenarios.
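The argument above can be made concrete with a small simulation. The sketch below (all numbers are illustrative assumptions, not estimates from this study) shows why an uncertain price elasticity means real-time pricing cannot guarantee that demand stays below capacity, while a direct quantity control can:

```python
import random

def rtp_demand(base_demand, price_ratio, elasticity):
    """Constant-elasticity demand response to a peak price signal."""
    return base_demand * price_ratio ** elasticity

def failure_rate(trials=10_000, seed=1):
    """Fraction of trials in which real-time pricing fails to pull demand
    below capacity, when the true elasticity is uncertain (drawn from a
    wide range around a small mean)."""
    random.seed(seed)
    base_demand, capacity, price_ratio = 110.0, 100.0, 10.0
    failures = sum(
        rtp_demand(base_demand, price_ratio, random.uniform(-0.10, 0.0)) > capacity
        for _ in range(trials)
    )
    return failures / trials

# A direct load control simply caps demand at system capacity: it never fails.
controlled_demand = min(110.0, 100.0)
```

On these illustrative parameters, pricing fails to avert the shortfall in a large share of draws, whereas the quantity control holds demand at capacity by construction, which is the reliability distinction the text draws.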


Chapter 1. Introduction
This study evaluates the impact of a direct load control policy intended to reduce peak demand for electricity by limiting residential consumers' demand for electricity for air conditioning during critical peak hours under different future climate change scenarios. In order to put the research in context, the study first provides background on the electricity industry, including the technology for producing and delivering power, the history of policy and regulation directed toward it, and recent experience with restructuring in the United States. The study then presents the unique attributes of electricity that create problems for those charged with supplying power, particularly the problem of balancing supply and demand in real time, which is essential to maintaining reliability. With this background, a major objective of this study is to assess the relative advantages and disadvantages of using real-time prices for electricity to balance supply and demand, as compared to using direct load controls. The study also examines how electricity demand for cooling will change as global climate change warms the planet, and the role for current strategies to manage peak demand.
The rapid growth of peak demand in recent decades has brought peak demand to the fore in discussions of electricity and energy policy. Although the emphasis on addressing peak demand is fairly recent, the issues are as old as the electric utility system itself. First, the necessity for real-time production and the large divergence between peak and off-peak demand create the need for considerable excess capacity, much of which is idle for a large fraction of the time. For example, in 2006, the highest peak load year on record in New England, 15% of all generation capacity ran 0.9% of the time or less, and 25% of all capacity ran 2.9% of the time or less. This, combined with high costs and long lead times for capacity expansion, means that it is very expensive to meet peak load without some form of demand management that smooths consumption over time. Another issue is that modern society is highly dependent on a reliable electricity supply. Thus, there is, in essence, a political mandate to meet demand virtually 100% of the time. These issues combine to make the connection between system peak demand and system reliability an important driver of public policy in the electric utility sector.
Electricity demand varies minute-to-minute. Cost considerations necessitate that base load power generation use technologies that have low operating costs and high capital costs because the plants are highly efficient. In contrast, since peak load generation capacity operates only a small fraction of the time, cost considerations dictate the use of technologies with lower capital costs and higher operating costs. Hence, the cost of electricity production varies considerably over time. However, the highly variable marginal cost of generating electricity is not reflected in the flat retail electricity prices paid by end users. Because retail prices are fixed, balancing supply and demand in real time requires generators to produce sufficient electricity to meet customer demand at each point in time. As demand increases, so does the likelihood of a system outage, which is highest during peak times.1 As a result, utilities must build capacity to supply electricity for the most extreme peak loads, which may occur for only one or two weeks of the year.
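The capital-cost/operating-cost trade-off described above can be illustrated with a simple "screening curve" calculation. All cost figures below are hypothetical assumptions for illustration, not data from this study:

```python
# Screening-curve sketch: annualized cost of serving one kW for h hours/year.
#   total cost = capital ($/kW-year) + operating ($/kWh) * hours

def annual_cost_per_kw(capital, operating, hours):
    return capital + operating * hours

def crossover_hours(cap_a, op_a, cap_b, op_b):
    """Hours of use per year above which the capital-intensive plant (a)
    becomes cheaper than the low-capital plant (b)."""
    return (cap_a - cap_b) / (op_b - op_a)

baseload = dict(capital=200.0, operating=0.02)  # high capital, low operating cost
peaker   = dict(capital=80.0,  operating=0.10)  # low capital, high operating cost

h_star = crossover_hours(baseload["capital"], baseload["operating"],
                         peaker["capital"], peaker["operating"])
# Below h_star hours of operation per year, the peaker is the cheaper
# technology; above it, the baseload plant is.
```

This is why a cost-minimizing system mixes both technologies: capacity that runs only a few hundred hours a year is cheapest as a peaker, while continuously running capacity is cheapest as a baseload plant.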
An analogy with another form of infrastructure helps highlight this problem. Building sufficient electrical generation capacity to serve peak demand is akin to expanding a highway to 10 lanes each way in order to accommodate rush hour traffic without congestion. While this alleviates congestion during the morning and afternoon rush, for the rest of the day those 20 lanes will be sparsely populated with vehicles. The costs of building such a highway are prohibitively high and do not make sense when all 20 lanes are needed for only a few hours a day. Peak demand problems on highways are handled by tolerating highly congested roads during rush hours, when traffic greatly exceeds the highway's carrying capacity. This is not done with electricity because the electrical system would fail on a regular basis if demand exceeded system capacity, and this is viewed as unacceptable by society at large. Generators must build the infrastructure, regardless of cost, in order to meet peak demand, leading to higher than necessary wholesale and retail electricity prices and bills. In this traditional model there is no way of ensuring reliability except by building capacity that is rarely used, and building such large amounts of excess capacity has been very costly. A more cost-effective approach to reliability and meeting peak load would be to involve customers in the decision-making. If customers were faced directly with a choice between paying the full costs of building new capacity that would run only a few hours per year, versus the alternative of shifting a fraction of their consumption to off-peak hours, perhaps many customers would choose to reduce their peak use. The relative costs of building new capacity versus reducing peak demand emphasize this point. Using the capital cost of a simple-cycle gas turbine as the basis of the cost for peaking capacity, Spees (2008) estimates that the price of peaking capacity is $94/kW-year. Thus, the capital cost associated with a generator that runs exactly 1 hour per year is $93,720/MWh.2

1 The electric utility sector has traditionally focused on peak demand because the likelihood of system outages (often measured by the "loss of load probability" or LOLP) is by far greatest at peak times (Koomey & Brown, 2002).
By contrast, a survey of utilities finds that the cost of coincident peak load reductions ranges from $18 to $25/kW-year, roughly one fourth of the $94/kW-year it costs to build new capacity. Clearly, reducing peak demand could be a much cheaper means of meeting peak demand than building more capacity.
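The $93,720/MWh figure follows directly from the per-kW capacity cost. The sketch below reproduces the arithmetic, assuming the underlying estimate is $93.72/kW-year (rounded to $94 in the text):

```python
# Capital cost per MWh for a peaker that runs exactly one hour per year.
# $93.72/kW-year is an assumed unrounded value consistent with the text's
# $94/kW-year and $93,720/MWh figures.
capital_cost_per_kw_year = 93.72   # $/kW-year (Spees 2008, rounded to $94)
kw_per_mw = 1000                   # kW in one MW
hours_run_per_year = 1

cost_per_mwh = capital_cost_per_kw_year * kw_per_mw / hours_run_per_year
# A generator running 1 hour/year must recover its entire annual capital
# cost in that single MWh, hence the enormous per-MWh figure.
```

Spreading the same capital cost over more running hours shrinks the per-MWh cost proportionally, which is why rarely used peaking capacity is so expensive relative to demand reduction.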
This topic is of particular importance in the face of climate change. Hotter temperatures and extended heat waves will lead to more frequent and harder-to-predict peaks.3 Many systems are already beginning to feel the strain of rising temperatures, causing policy makers at all levels of government to start examining policies aimed at reducing peak demand and increasing the resiliency of the grid. Thus, there is considerable potential benefit to implementing demand management strategies to reduce episodic peak demand.
There are two general categories of policies that can be used to reduce peak demand: price-based and quantity-based. Price-based policies focus on charging customers for electricity based on contemporaneous competitive wholesale market prices that reflect marginal supply costs. The resulting price response by customers to changes in real-time marginal supply costs would reduce electricity demand during peak hours and increase demand during off-peak hours. With quantity-based policies, the utility directly controls the amount of electricity that is used by the household, business, or factory during peak periods. In one approach, consumers are paid incentives to join a program under which the utility controls appliances within the home, most commonly air conditioners. Quantity-based policies are a more direct tool for reducing peak demand, offering customers incentive payments to reduce their electricity consumption during peak periods.

2 By contrast, a base load generator operating all the time would be a capital-intensive coal plant with a lower operating cost than a peaking gas generator. The per unit cost of this generator is $26.06/MWh.

3 The chaotic nature of weather makes it unpredictable beyond a few days. A major limiting factor to the predictability of weather beyond several days is a fundamental dynamical property of the atmosphere. Hence, changes in the properties of the atmosphere could potentially make it harder to predict the weather.
The remainder of this dissertation is organized as follows: Chapter 2 provides background on the electricity industry, including the technology for producing and delivering power, the history of policy and regulation regarding electricity markets, and some recent experience with restructuring of electricity markets in the U.S. Chapter 2 also explains the significance of peak demand in the electric power industry. Chapter 3 reviews two different types of policy tools used to reduce peak demand: real-time pricing and direct quantity controls. Chapter 3 also introduces the SmartAC™ program, a direct control program that limits the amount of electricity demanded by participating consumers' air conditioning systems. Chapter 4 reviews the economic literature on how price-based policies are used to manage peak demand, and then compares the relative efficiency of using price-based versus quantity-based policies for reducing peak demand. Chapters 5 and 6 focus on the analysis of the quantity-based SmartAC™ program for reducing peak demand. Chapter 5 illustrates several of the issues and challenges associated with this type of analysis. Chapter 6 reports the results of estimating the impact that the SmartAC program had on peak demand in 2007, and presents projections for how effective this type of direct control policy might be under various climate change scenarios. Finally, Chapter 7 presents policy recommendations, suggestions for areas of future research, and conclusions.

Chapter 2. Understanding the Structure of the Electricity Industry
In order to put this research in context, this chapter provides background on the electricity industry, including the technology for producing and delivering power, the history of policy and regulation directed toward it, and recent experience with restructuring in the United States. This chapter concludes that the long lead times associated with building new capacity, the lack of price response in the face of time-varying costs, the large differences between peak demand and average demand, and the necessity for real-time delivery of electricity all make the connection between system reliability and peak demand an important driver of public policy in the electricity sector.
This chapter also concludes that climate change will exacerbate the peak demand problem.

Brief History of the Electricity Industry
For most of its history, the electricity industry was characterized by utilities operating as regulated geographic monopolies. These utilities were vertically integrated, meaning that the same company owned and operated all of the infrastructure necessary for electricity generation, long-distance transmission, and final distribution and sale to end users. The industry was regulated by federal and state governments, the former generally controlling the "wholesale" side and the latter controlling the "retail" side. Wholesale refers to the generation and transmission of electricity, while retail refers to the sale of electricity to residential, commercial, and industrial end users. Operating under monopoly conditions meant that only one electricity provider was available in most states or regions. This arrangement avoided the costly duplication of transmission wires and power plants, and for the privilege of being the sole provider of electricity, the utility submitted itself to the oversight of the state Public Utilities Commission (PUC), which set retail electricity prices at a level that allowed the utility to earn a limited profit. Subject to rate-of-return regulation, the utilities charged their retail customers average-cost rates that included the return on the utilities' investments in generation, transmission, and distribution. The process works a little differently for publicly owned electric companies, such as municipal utilities and rural cooperatives. Because municipal utilities ("munis") are owned by the local government in the area they serve, and cooperatives are owned by the customers themselves, they may have less incentive than investor-owned utilities to take undue advantage of their monopoly position. Accordingly, in most states, publicly owned utilities and cooperatives set their own prices.

5 Ancillary services are the power-related functions necessary to keep the grid working and reliable. Examples include maintaining central control over generators to adjust power instantaneously to deal with momentary power surges and reductions in demand; adjusting generation to adapt to predictable hour-to-hour and daily variation in demand; and providing power in response to unexpected generator or transmission system failure. Ancillary services have traditionally been supplied by vertically integrated utilities with the cost included in the price of electricity.
Today, most restructured electricity markets follow a generic model. This model is characterized by a competitive wholesale electricity market in which sellers (generators) and buyers (utilities) transact by making supply and demand bids. An Independent System Operator (ISO) oversees the wholesale electricity market by managing the high-voltage transmission owned by newly created regulated transmission and distribution (T&D) companies, which descend from the formerly integrated utilities. In turn, the distribution utilities resell electricity to final users.

Characteristics of Electricity
Electricity is an unusual commodity in many respects. One is that the power produced by a particular generator does not necessarily go to that generator's customers. More precisely, if a generator sells N kilowatts of power to its customers, it is merely committing to inject N kilowatts of electricity into the overall electricity system at the same time that its customers are pulling N kilowatts from the grid. It is as if Starbucks sold M cups of coffee by dumping that volume of coffee into a common vat mixed with coffee from every other coffee shop, out of which its customers had the right to pour M cups. As a consequence, the distinction between the grid and the power pooled within it can become blurry. If coffee were sold as in the Starbucks scenario, one might well expect that the owner of the vat-the grid-might find itself becoming involved with the wholesale purchase and retail sale of the coffee within it. This blurriness could be especially pronounced when it comes to ensuring a reliable supply of electricity. This section discusses the characteristics of electricity that affect the structure of wholesale electricity markets.

Perhaps the most crucial feature that distinguishes electricity from other commodities is the need to keep supply equal to demand on virtually a minute-by-minute basis. For most other commodities, buyers can wait if the item is not on the shelf or if the telephone line is busy. Sellers sometimes may have to backorder a "hot" item or keep inventory around a little longer when items do not sell as fast as expected. Both of these can be costly and inconvenient, to be sure, but they are not catastrophic in the way that a mismatch between electricity demand and supply can be. If more electricity is demanded than generated, brownouts or blackouts follow. If more electricity is supplied than used, the heat from the extra energy can damage transmission and distribution lines.
Keeping electricity supply just equal to demand, by varying either production or use, is called load balancing. Two properties of electricity exacerbate the problem of keeping loads balanced. First, the cost of storing electricity in substantial quantities is prohibitive.
With most commodities, if a seller thinks that demand could be stronger than expected, he can keep an inventory of the commodity available on the shelves or in a warehouse.
This tactic is not available for suppliers of electricity. Batteries are too expensive to store much power for most users, and at least up to now, generation on-site is prohibitively costly for all but large industrial users or commercial facilities that cogenerate electricity as a by-product of energy available from other production processes or space heating systems. Hence, when users want electricity, the generators have to be producing it at that moment, and the transmission and distribution systems have to be able to deliver it.
The second problem is that load imbalances can take down the entire grid, or entire regions within the grid, not just those who are customers of a particular distribution utility that happens not to have procured enough to meet their demands. The power procured by everyone essentially becomes part of a common pool from which all users draw. If what is there does not suffice, all customers on that grid lose, even if the cause of the insufficient supply is a failure to produce by one generator or unanticipated demand from just one utility's customers.
In short, the inability to store large amounts of electricity means that supply must constantly be kept equal to demand. The systemwide nature of the effect means that the costs of failing to keep loads in balance are borne by everyone on a grid and not just the utility that happens to be out of balance. Accordingly, utilities are responsible for anticipating customer demand and procuring sufficient power to cover demand, as necessary to maintain the appropriate balance.

Wholesale Market Structure
In order to perform well, markets need to be structured around the key elements of the commodity being sold. In this case, the need to maintain a constant balance of supply and demand, coupled with customers' highly variable demand, requires that power systems be characterized by a range of generating technologies in terms of their capital and operating costs. These range from highly capital-intensive baseload plants that are designed to run continuously at low operating costs, to peaker plants that are relatively inexpensive to build and can start up quickly in order to meet peak demand, but generally have high fuel costs during the relatively few hours per year that they operate. Emissions are also higher during peak hours: nitrogen oxide emissions are about 30% higher and sulfur dioxide emissions just over 20% higher than during off-peak hours (Independent System Operator-New England, 2008).6 This is most likely because the additional generation brought on line to meet the higher demand during peak periods has higher emissions rates; these are typically older resources with lower thermal efficiency.

Minimizing Costs through Economic Dispatch
The practice of meeting demand by sequentially activating technologies with the lowest marginal operating costs is called "economic dispatch." Economic dispatch benefits electricity customers in a number of ways. By systematically seeking the lowest cost of energy production consistent with electricity demand, economic dispatch reduces total electricity costs. To minimize costs, economic dispatch typically increases the use of the more efficient generation units, which can lead to better fuel utilization, lower fuel usage, and lower greenhouse gas emissions than would result from a less efficient generation mix. In principle, retail customers will benefit if the savings are passed through in lower retail rates. Economic dispatch methods are also flexible enough to incorporate policy goals such as promoting fuel diversity or respecting demand reductions as well as supply resources.
Economic dispatch principles and operation are the same in both vertically integrated utilities and deregulated wholesale markets. In wholesale power markets, generators offer blocks of electricity for various time periods at prices that reflect their marginal operating costs. System operators match supply bids to demand forecasts, determining which generators to dispatch and setting hourly wholesale market prices as the highest supply bid accepted (all of the accepted bids are paid the market clearing price). If the auction is competitive, the market-clearing price is equal to the short-run marginal cost of the most expensive generator dispatched. Dispatch decisions also account for generator operating constraints such as ramp rate (how quickly the generator can be brought on line). Real-time dispatch is performed by the regional transmission operator (RTO) or independent system operator (ISO). The regulator monitors the hourly dispatch schedule, load, generation, and interchange (imports and exports) to ensure the balance of supply and demand. Often, the regulator must modify the merit order to account for grid conditions and operational reliability needs. In real time, many of the adjustments to least-cost dispatch are made to prepare for, or respond to, contingencies that affect grid reliability.
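The merit-order mechanism described above can be sketched in a few lines. All bids below are illustrative; real dispatch also enforces the operating constraints (such as ramp rates) mentioned in the text:

```python
# Merit-order dispatch sketch: sort supply bids by price, accept bids until
# demand is met; the clearing price is the bid of the last (marginal) unit,
# and all accepted bids are paid that uniform clearing price.

def clear_market(bids, demand):
    """bids: list of (marginal_cost $/MWh, quantity MW) supply offers."""
    dispatched = []
    remaining = demand
    clearing_price = None
    for price, quantity in sorted(bids):      # cheapest offers first
        if remaining <= 0:
            break
        take = min(quantity, remaining)       # partially accept the marginal bid
        dispatched.append((price, take))
        clearing_price = price                # marginal unit sets the price
        remaining -= take
    if remaining > 0:
        raise RuntimeError("demand exceeds total offered capacity")
    return clearing_price, dispatched

# Illustrative supply stack: baseload, mid-merit, peaker, extreme peaker.
bids = [(20.0, 500), (35.0, 300), (90.0, 200), (250.0, 100)]  # ($/MWh, MW)
price, schedule = clear_market(bids, demand=900)
# 500 MW at $20 + 300 MW at $35 + 100 MW of the $90 unit are accepted,
# so $90/MWh clears the market and is paid to every accepted bid.
```

Note how the clearing price jumps as demand climbs the supply stack: small demand increases near the peak move the marginal unit from a cheap plant to an expensive one, which is the supply inelasticity discussed later in this chapter.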

The Significance of Peak Demand
Peak demand issues came to the fore at the beginning of the decade. System planners are concerned not only with meeting the system peak demand, but also with local and regional peak demands that may result in outages due to local transmission, distribution, and generation constraints (in fact, local outages are far more common than system outages). Beyond system reliability, there are additional reasons why peak demand is an important public policy issue. Peak demand raises environmental concerns because the system's highest marginal cost plants operate during peak hours, and these plants are often the most inefficient, producing more greenhouse gas emissions per unit of electricity than baseload plants. Many peaking plants are fired by natural gas or fuel oil, raising issues of fuel security (for oil) and diversity/price stability (for natural gas).
Facility siting is another concern. As the magnitude of peak demand increases, the size of the electricity system must also grow, leading to more generators, transmission, and distribution lines. There is growing resistance (and growing competition from other uses) to using scarce land resources for siting this infrastructure.
A major concern, at least for economists, with peak demand is economic efficiency. In the traditional situation of unresponsive demand, the only way to ensure reliability is to build capacity that will rarely be used. The process of building large amounts of excess capacity has been very costly. Consider the following set of figures, which illustrate the peak load problem. Figure 2-6 shows power use chronologically for every hour in 2006.
Figure 2-7 shows those hourly loads rank ordered from the lowest load hour to the highest load hour (peak) for all 8,760 hours in the year. In 2006, the highest peak load year on record in ISO-New England (ISO-NE), 15% of all capacity ran 0.7% of the time or less, 20% of capacity ran 0.97% of the time or less, and 25% of all capacity ran 3.0% of the time or less. Figure 2-8 shows how much capacity is needed just to serve peak load. In the figure, the peak load hours are highlighted and the width of the textured bands indicates the number of MW that serve only peak hours: the bands' widths show the quantity of capacity that must exist just to serve demand during peak hours. The last band from the left indicates the amount of capacity built just to serve the top 30 hours (corresponding to a capacity factor of 0.34%); the combination of all of the bands indicates the amount of capacity that exists to serve the top 500 hours (corresponding to a capacity factor of 5.7%).

6 These data are based on ISO day-ahead and real-time data for the ISO-NE Control Area, so standard offer prices and customers with independent bilateral contracts will not have the same energy costs or price signals. However, all generators have access to comparable market information and fuel costs, so over the long run standard offer and bilateral contract prices are likely to approximate the ISO market prices. The ISO New England 2006 Annual Market Report indicated that 51% of loads were served by bilateral contracts, 45% by the day-ahead market, and 4% by the real-time market.
Hourly costs are converted to prices by dividing the cost to serve the hour by the real-time demand:

Price per MWh = Cost to serve hour / Real-time demand

If load curves could be flattened (through load management or responses to time-varying prices), then a more efficient use of capital could result. In addition, when the system is close to peak, small increases in demand can lead to large increases in marginal costs per kWh because of the inelasticity of supply at that time.

The California Electricity Crisis
Concerns about peak demand can be seen more broadly as a need to ensure that supply and demand remain in balance at any instant. The California power crisis in 2000 and 2001 illustrates the magnitude of the problems that can arise when markets are not well-structured. In 1998, California began a process to open retail electricity markets to competitive suppliers. During the transition from regulated to competitive retail markets, regulators capped retail electricity rates to protect customers from high prices. At this time, regulators also created the California ISO (CA ISO) to oversee the transmission system and an independent power exchange, the PX, where wholesale electricity would be traded. Retail restructuring was going reasonably well until the early summer of 2000, when wholesale electricity prices began to skyrocket as a result of a combination of factors. Generation capacity had not kept pace with demand, which had increased 11% since the 1990s. The problem of inadequate capacity was exacerbated by drought conditions in the West, which diminished hydroelectric production to less than 75% of 1999 levels. Furthermore, the price of natural gas, which was becoming more commonly used, tripled. Wholesale prices began to spike to levels nearly 10 times those reached in the previous two years.
While wholesale prices skyrocketed, retail electricity prices were kept low through price caps. Low retail rates provided no incentive for customers to curtail or shift their demand to off-peak or lower-cost hours. Thus, utilities were forced to purchase wholesale power at prices roughly five times the capped retail rate in order to meet demand: at one point the average wholesale price far exceeded the $0.054/kWh that Pacific Gas & Electric was allowed to charge its retail customers. Unable to cover their costs, California's three large utilities (Pacific Gas & Electric (PG&E), San Diego Gas & Electric (SDG&E), and Southern California Edison (SCE)) began to declare bankruptcy. Fearing they would not get paid, independent power producers refused to deliver electricity, and rolling blackouts ensued. In the end, PG&E and SDG&E were left $13 billion in debt, Governor Gray Davis declared a state of emergency, and the state itself began to buy power on behalf of the utilities. It was estimated that a one-day blackout cost $100 million in losses for California businesses.
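An order-of-magnitude sketch shows how quickly the wholesale/retail squeeze could accumulate to billions of dollars. The retail cap is taken from the text; the wholesale multiple and the daily load figure are illustrative assumptions:

```python
# The utilities' squeeze: buying at wholesale and selling at a capped retail
# rate. Numbers other than the $0.054/kWh cap are illustrative assumptions.

retail_cap = 0.054            # $/kWh cap on what PG&E could charge (from text)
wholesale = 5 * retail_cap    # $/kWh, roughly five times the cap (per text)

loss_per_kwh = wholesale - retail_cap   # loss on every kWh sold at the cap
assumed_daily_load_kwh = 500_000_000    # hypothetical 500 GWh/day of load
daily_loss = loss_per_kwh * assumed_daily_load_kwh
# On these assumptions the loss is on the order of $100 million per day,
# which compounds toward the multi-billion-dollar debts described above.
```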
The California crisis drew national attention not only to the problem of peak demand, but also to the potential for wholesale market manipulation. Independent generators unilaterally had the ability and incentive to exercise market power and withhold output in order to raise wholesale prices. Enron, a leading player in the California energy markets that controlled 3,500 MW of electricity (enough for more than 2.6 million homes), was eventually convicted of intentionally shutting down generators and withholding supply in order to manipulate energy prices.
The California crisis brought renewed attention to the problem of peak demand and the disconnection between highly variable wholesale prices and static retail rates. In the opinion of many, the worst effects of the energy crisis could have been alleviated if customers had access to real-time prices that provided incentives for customers to shift their electricity use to off-peak periods. In fact, the International Energy Agency estimated that a 5% decrease in peak demand would have reduced peak wholesale prices by as much as 50% (Harrington, 2003). The crisis underscored the notion that well-functioning electricity markets must include mechanisms to handle peak load problems and showed that reducing peak demand can improve system reliability, avoid costly fuel expenditures, reduce capital expenditures on generating capacity, and reduce generators' ability to exercise market power.

Drivers of Peak Demand
Weather tends to be the most important driver of peak demand. In warmer regions of the U.S., air conditioning loads drive peak demand on the hottest summer afternoons. In colder regions, peak demand occurs in the winter and is driven by the demand for electric heating on the coldest mornings of the year. For example, planners at ISO-NE report that between 40 and 50% of peak summer demand is due to air conditioning load; during 2006, which had the highest peak on record, the system peak was more than 50% greater than the average system load. Peak loads are largely determined by temperature, and the highest peak loads often coincide with the hottest days of the year. Figures 2-12 and 2-13 illustrate the relationship between temperature and electricity demand. In New England, the system peak almost always occurs during the summer months.

Climate Change and Peak Demand
Climate change-induced temperature increases may exacerbate existing peak demand problems. The first issue is that rising summer temperatures and more frequent extreme-heat events are likely to increase air conditioner ownership and use, leading to increasingly peaky summer demand and raising the risk of power shortages during heat waves. The peaks are likely to be greater in magnitude and frequency, cutting into existing capacity margins. Also, as the unpredictability of yearly climate patterns increases, peak demand will be harder to anticipate. For example, if California had to meet its future electricity demand with the generation resources it has available today, given rising temperature forecasts there is the potential for peak electricity demand to exceed supply by as much as 17%. The number of extreme heat days for some parts of California is predicted to increase fourfold by the end of the century.
Though demand continues to grow, new development of traditional generation options is becoming increasingly limited; many coal plants have been deferred or cancelled in the drive to reduce greenhouse gas emissions from fossil fuel combustion.9 Furthermore, changes in precipitation levels and changes in the patterns and timing of snowmelt would alter the amount of electricity that hydroelectric facilities could generate, particularly in the late spring and summer months when demand is the highest. In regions that depend on hydropower generation, this could have a significant impact. For example, hydropower generation currently contributes about 15% of California's in-state electricity production, with a range from 9 to 30% due to variations in climatic conditions. Two recent studies project losses in annual hydropower generation on the order of 10 to 30% by the end of this century if precipitation levels in California decline.
The value of hydroelectric power will also fall as more precipitation in California falls as rain instead of snow. Snow plays an important role in equalizing water flows, since virtually all precipitation in California falls in the winter. Big storms drop a lot of snow and rain; the snow in the mountains stays frozen until spring, when it melts slowly over the spring and early summer. This helps equalize stream flows over time and avoids major river flood events during storms, which would otherwise overflow dams and release large amounts of water to the ocean in uncontrolled flood events (Philipson & Willis, 1999). Even without decreased precipitation, warming will result in less efficient patterns of water flow into reservoirs, and thus less opportunity for generating hydroelectric power.

9 In 2007, the construction of at least 59 proposed coal-fired power plants was cancelled (Sierra Club, 2009).

Conclusion
Electricity planners and regulators have traditionally focused on peak demand because the likelihood of system outages is by far the greatest at peak times, but society is rightly concerned about peak demand for other reasons as well, including economic efficiency, environmental impacts, and fuel security and diversity. This chapter concludes that the long lead times associated with building new capacity, the lack of price response in the face of time-varying costs, the large difference between peak demand and average demand, and the necessity for real-time delivery of electricity all make the connection between system reliability and peak demand an important driver of public policy in the electricity sector. The California energy crisis illustrates that the combination of limited generation capacity, very inelastic demand, impediments to flexible pricing, and an inability to store electricity was a recipe for soaring marginal supply costs and rolling blackouts.

This chapter also concludes that weather is an important driver of peak demand and that one important pathway through which climate change is likely to affect the electric power system is growth in cooling demand.
Peak load management programs are one way to balance electricity supply and demand, reduce the strain on the electric system, and limit the use of the more expensive and least efficient power plants. The following chapter introduces peak demand management strategies, including time-varying price signals and quantity controls for reducing peak demand.

Chapter 3. Strategies for Reducing Peak Demand
This chapter explains two strategies for reducing peak demand: time-varying prices and direct control. This chapter explains that time-varying prices can reduce peak demand by giving customers incentives to shift some of their peak electricity consumption to off-peak hours by charging higher prices during peak hours. Also, this chapter explains that distribution utilities can use direct control strategies to reduce peak demand by providing subsidies to their customers for investments intended to reduce their peak electricity consumption.
As discussed in Chapter 2, considerable cost savings are possible if peak demand for electricity can be reallocated to off-peak periods. At least in theory, an incentive-based approach might accomplish this by having prices that vary temporally to reflect the real-time marginal costs of electricity production. Peak demand management policies are different from policies that encourage broad energy conservation or improved efficiency.
Policies that fall under the latter category tend to focus on reducing overall energy demand, while those in the former category focus on smoothing demand over time. Both are important parts of energy conservation efforts. There are two approaches used to reallocate demand from peak to off-peak periods. One way to facilitate peak demand reductions is to offer customers time-varying prices that charge customers a higher price for electricity consumed during peak periods and a lower price for electricity during offpeak periods. Another approach is to pay customers an incentive in exchange for direct control over the customers' appliances. Then, the utility directly controls the amount of electricity that is used by the household or business during peak periods. This chapter will describe both approaches.

Real-time pricing
Currently, consumers are able to make informed choices about when to use their cell phones; most plans offer a certain number of minutes to use during peak hours and unlimited off-peak usage, and plans with more peak minutes are more expensive. Customers can choose a plan based on their willingness to pay for the option to talk during peak hours. Although electricity prices fluctuate just as much over the course of a day, the vast majority of customers are not on billing plans that charge different prices for using electricity at different times of the day. If customers' bills reflected the costs of electricity at different times, they would have an incentive to make more informed decisions about when and how they use electricity throughout the day. This is known as real-time pricing (RTP). The fundamental idea behind RTP is to provide accurate price signals to customers that convey the true cost of supplying electricity. Since electricity cannot be stored economically and must be consumed immediately, and since generation plants of varying efficiency are used to meet peak demand, the cost of power varies by time-of-day and day-of-year. If clear price signals were conveyed to customers, they could decide whether to continue buying power at higher prices or reduce their demand during peak hours. This promotes economic efficiency in the consumption of electricity. It can also lead to substantial savings in the aggregate for society, making RTP an important public policy issue. Because electricity supply and demand have to be balanced in real-time and because electricity cannot be economically stored, to meet peak demand generators must run the most expensive and least efficient power plants. The generators used to serve peak demand, referred to as "peaking plants", have relatively low capital costs and high variable costs. But, because peaking plants run for only a fraction of the hours in the year, the capital cost per operating hour is high.
The marginal cost of supplying electricity rises to PW as demand increases to Q0, but the retail price PR is fixed, so customers have no incentive to reallocate their demand from peak to off-peak hours as marginal supply costs rise. The result is costly investment in generating infrastructure that sits idle during all but a few hours each year and expenditure on high-cost fuel to meet peak demand. RTPs can mitigate peak load problems by giving customers incentives to shift their electricity demand from peak to off-peak periods. Figure 3-2 illustrates the same peak demand scenario as Figure 3-1, except that here the customer faces RTPs, which change in response to wholesale market conditions. As PR rises to reflect the increasing cost of supplying electricity to meet peak demand, customers demand less electricity at the higher price. Instead of demanding the amount of electricity Q0, peak demand is only Q0'. As a result, the system does not need as much peaking capacity or costly fuel to serve peak demand. Reallocating demand from peak to off-peak improves system reliability, facilitates reductions in capital investments in capacity, and helps avoid costly fuel expenditures. Shifting demand from peak to off-peak also increases consumer surplus, the amount by which customers benefit from buying electricity at a price that is less than they would be willing to pay. How much will be saved by RTP depends on two things: first, how much demand customers will shift from peak to off-peak and, second, how much generation investment and fuel expenditure can be offset by this demand reduction. The first item itself depends on two things: how rapidly utilities and regulators move to install new pricing designs that provide RTPs to customers and how sensitive customers' demand is to the price signals.

Advanced Metering Infrastructure
A prerequisite to the provision of RTP is the installation of advanced metering infrastructure (AMI). In a recent report on AMI, the Federal Energy Regulatory Commission (FERC) presented hardware and total capital cost information in Table 5-4, updated here for inflation and annualized over 20 years using an 8% cost of capital (Federal Energy Regulatory Commission, 2006b). Before annualizing, average AMI costs range from $100 to $200 for just the meter, and between $200 and $300 including installation, communications, and infrastructure costs. Utilities, along with state public utility commissions, are uncertain whether customers will respond to price signals, and some also fear customer backlash against potentially volatile price signals.
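The annualization applied to these FERC cost figures can be sketched with a standard capital recovery factor. This is a minimal illustration of the method described above (8% cost of capital over 20 years); the specific table values are not reproduced here.

```python
def annualize(capital_cost, rate=0.08, years=20):
    """Levelized annual cost of an upfront investment using the
    capital recovery factor (CRF) at the stated cost of capital."""
    crf = rate / (1 - (1 + rate) ** -years)
    return capital_cost * crf

# A $200-$300 installed AMI meter annualizes to roughly $20-$31 per year.
low, high = annualize(200), annualize(300)
```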

Potential for Reducing Peak Demand
Since the 1970s there have been many research efforts to determine small residential and commercial customers' sensitivity to changes in electricity prices (Barbose, Goldman, & Neenan, 2004; Faruqui & Sergici, 2009; McDonough & Kraus, 2007). There has been a good deal of skepticism that small customers, who constitute the majority of electricity users, will respond to RTPs by reducing their demand during peaks. Recent research, however, shows that even if customer demand is not very sensitive to changes in price, surprisingly large peak load reductions can be achieved. For example, at elasticities of -0.1 and -0.2, peak demand can be reduced by 10.4% and 15.1%, respectively. The magnitude of these peak load reductions translates into dollar savings of $15-$43 billion (based on the 2006 capital costs of gas and coal generation) (Spees, 2008).
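The scale of such reductions can be illustrated with a constant-elasticity demand curve. This is only a sketch: Spees (2008) uses a market simulation rather than this formula, and the threefold peak-price ratio below is an assumed value chosen purely for illustration.

```python
def peak_reduction(elasticity, price_ratio):
    """Fractional drop in peak demand when the peak price rises to
    `price_ratio` times the flat rate, under constant-elasticity demand:
    Q_new / Q_old = price_ratio ** elasticity."""
    return 1 - price_ratio ** elasticity

# With an assumed tripling of the peak price, an elasticity of -0.1
# yields roughly a 10% reduction in peak demand.
reduction = peak_reduction(-0.1, 3.0)
```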
To illustrate this computation, consider the value of a 5% reduction in peak demand. The first benefit is the reduction in the need to install peaking generation capacity. This is a long run benefit and consists of the sum of avoided capacity and energy costs. It can be estimated based on the capacity cost of a simple cycle combustion turbine. The second benefit is the avoided energy costs that are associated with the reduced peak load. Third is the reduction in transmission and distribution (T&D) capacity. This is also a long run benefit, but is harder to quantify and is very dependent on local distribution constraints.
If transmission lines carry power beyond their designed capacity, they can overheat and fail. A 5% reduction in U.S. peak demand of 757,056 MW amounts to 37,853 MW of peak demand (Faruqui et al., 2007). The amount of peaking capacity needed to meet this peak demand can be computed by allowing for a reserve reliability margin of 15% and T&D line losses of 7.1% (Spees, 2008). This equates to 47,013 MW, or roughly 625 combustion turbines. Estimates of the avoided cost of peaking capacity range from low to high. Using a conservative value of $52/kW-year, the total value of avoided capacity costs is $2.4 billion per year (Faruqui et al., 2007). Using a higher value of $94/kW-year (Spees, 2008), the total value of avoided capacity costs is $4.4 billion per year.
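The avoided-capacity arithmetic can be reproduced approximately as follows. The exact gross-up formula combining the reserve margin and line losses is not stated in the source, so the sketch below is one plausible formulation and recovers the 47,013 MW figure only to within a few hundred MW.

```python
us_peak_mw = 757_056              # U.S. peak demand (Faruqui et al., 2007)
reduction_mw = 0.05 * us_peak_mw  # a 5% reduction: ~37,853 MW

# Gross up for a 15% reserve reliability margin and 7.1% T&D line losses
# (assumed combination; the source reports 47,013 MW).
capacity_mw = reduction_mw * (1 + 0.15) / (1 - 0.071)

# Avoided capacity cost per year at the conservative and higher unit values.
low_usd = capacity_mw * 1000 * 52   # $52/kW-yr -> roughly $2.4 billion
high_usd = capacity_mw * 1000 * 94  # $94/kW-yr -> roughly $4.4 billion
```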
Using the relationship that was observed between annual capacity and energy benefits in a recent analysis of the Pennsylvania-New Jersey-Maryland ISO (PJM), the annual value of avoided energy costs is estimated at $300 million (Faruqui et al., 2007).
In addition, there would be a reduction in T&D capacity needs. As noted earlier, T&D needs are location-dependent and much harder to estimate. Still, they are unlikely to be zero, and a conservative estimate puts them at 10% of the savings in generation capacity and energy costs (Faruqui et al., 2007). Using this estimate, the range of potential savings in T&D costs from a 5% reduction in peak demand is between $240 million and $440 million per year.
Using the conservative value of avoided peaking capacity, adding the three components yields long-run benefits of $3 billion per year, as shown in Figure 3-4. Over a 20-year time horizon, these represent a discounted present value of $35 billion (assuming an 8% discount rate).

Figure 3-4. Annual long run benefits of a 5% reduction in peak demand
These long run benefits from shifting demand from peak to off peak can be viewed as an efficiency gain because they involve real savings in total resource costs on average over time. In theory, there should also be an immediate reduction in the wholesale market price for energy and capacity because of the reduction in demand. In areas that are capacity constrained, the short run benefits could be larger than the long run benefits.
These price-mitigation benefits would persist only temporarily until generation capacity adjusts to the new lower peak demand.
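The present-value conversion quoted above is a standard annuity calculation, sketched below. Note that a textbook annuity factor at 8% over 20 years is about 9.8, which turns $3 billion per year into roughly $29 billion; the $35 billion figure cited above presumably reflects somewhat different timing or growth assumptions in the underlying analysis.

```python
def pv_annuity(annual, rate=0.08, years=20):
    """Discounted present value of a constant annual benefit
    (ordinary annuity, payments at the end of each year)."""
    return annual * (1 - (1 + rate) ** -years) / rate

pv = pv_annuity(3.0)  # $ billions; ~29.5 under the textbook formula
```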
The benefits of reducing peak load can be compared to the cost of installing the enabling AMI. Assuming an approximate cost of $200 per meter (Federal Energy Regulatory Commission, 2006b), and assuming that AMI replaces the remaining 94% of the 138.4 million meters in the U.S., an investment of $27 billion will be necessary. If 50% to 80% of these costs are recovered through operational benefits, the remaining cost of the AMI is between $5.4 billion and $13 billion. Therefore, the net costs of AMI that would need to be recovered through savings from peak demand reductions are 15% to 37% of the $35 billion in long run savings. If, however, most of the peak demand reductions come from a small number of customers, most of the benefits can be achieved by placing only a small number of customers on AMI at a much reduced cost. Recent research indicates that most of the peak demand reductions come from the largest 20% of customers on RTP (Spees, 2008). This means that AMI cost is justified for industrial, commercial, and a fraction of the largest residential customers. With large industrial customers, the administrators of a demand-management program can examine a large quantity of energy use under one roof, rather than incurring the costs of interacting with many small residential customers in order to affect the same total load.
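The AMI cost-benefit comparison above can be checked directly. The multiplication below yields about $26 billion gross, which the source rounds to $27 billion; the 15% and 37% shares follow from the recovery assumptions.

```python
meters = 138.4e6       # U.S. meter count
replace_share = 0.94   # fraction not yet on AMI
cost_per_meter = 200   # $, approximate FERC (2006b) figure

gross_b = meters * replace_share * cost_per_meter / 1e9  # ~$26 billion

# 50-80% of AMI costs recovered through operational benefits,
# leaving roughly $5-13 billion to be justified by peak savings.
net_low_b = gross_b * (1 - 0.80)
net_high_b = gross_b * (1 - 0.50)

savings_b = 35.0  # long-run savings from a 5% peak reduction ($ billions)
share_low = net_low_b / savings_b    # ~15%
share_high = net_high_b / savings_b  # ~37%
```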
There are additional benefits of RTP that are not captured above. These include more competitive energy and capacity markets, reduced price volatility, improved system reliability resulting in fewer outages, and fewer GHG emissions during peaks. For example, the average emissions rates in the U.S. from natural gas-fired generation are 1135 lbs/MWh of carbon dioxide, 0.1 lbs/MWh of sulfur dioxide, and 1.7 lbs/MWh of nitrogen oxides (United States Environmental Protection Agency, 2007a). Therefore, a 37,853 MW reduction in peak demand avoids approximately 21,481 tons of carbon dioxide emissions, 3,785 lbs of sulfur dioxide, and 64,350 lbs of nitrogen oxides.
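The avoided-emissions figures follow from multiplying the MW reduction by the per-MWh rates. The time basis is not stated in the source; the arithmetic matches one hour of avoided peak generation, which is the assumption made in this sketch.

```python
reduction_mw = 37_853  # peak demand reduction from a 5% cut

# EPA average U.S. emissions rates for natural gas-fired generation (lbs/MWh)
rates_lbs_per_mwh = {"CO2": 1135, "SO2": 0.1, "NOx": 1.7}

# Avoided emissions per assumed hour of avoided peak generation
avoided_lbs = {gas: reduction_mw * r for gas, r in rates_lbs_per_mwh.items()}
co2_short_tons = avoided_lbs["CO2"] / 2000  # ~21,481 short tons
```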

Barriers to the Adoption of RTP
There are several barriers to the adoption of RTP. Chief among them is the entrenchment of average cost pricing. Historically, the role of state regulators has been to design electricity prices based on allowable cost recovery and on allocating those costs fairly across customer classes. Despite the potential for saving $35 billion by reallocating peak demand to off-peak, the economic efficiency of rate design, in the sense of setting price equal to marginal supply costs, has typically been given low priority. There is resistance from regulators and consumer advocates to exposing customers to unstable prices. Recent research shows, however, that even if customers have no means of knowing or responding to the RTP, over the course of the year the low off-peak rates balance out the extremely high peak rates. Even customers with high coincident peak demand would not see a large change in average price (Spees, 2008). This indicates that regulators should not worry about the effect of RTP on poor or unresponsive customers.
Just as consumers have learned to respond to the volatile prices of gasoline, airline tickets, and other commodities, they can learn to respond to electricity prices. The largest difference is that customers purchase electricity every hour of the year, and therefore some customers will want to automate their response to changing prices. Further, for customers who place a high value on price stability, utilities could provide any combination of hedges or flat rates; these rates would charge a premium above the RTP rate reflecting the higher cost of service.
Interestingly, a move toward time-varying rates might get the push that it needs from the automobile industry. The transportation sector is responsible for 124 million of the 346 MMTCO2e generated annually in New England, or 35% of total greenhouse gas emissions (Environment Northeast, 2006). Plug-in hybrid electric vehicles (PHEVs) and electric vehicles (EVs) such as GM's Chevy Volt have the potential to reduce GHG emissions from the transportation sector, reduce the nation's dependence on oil imports, and improve air quality, because PHEVs are far more efficient than internal combustion engine vehicles. For example, if PHEVs comprise 50% of New England's light duty vehicle fleet by 2050, regional GHG emissions will be reduced by 11 to 15 million tons of CO2e, a reduction of 8%-12%. There is an abundant supply of off-peak generation and transmission capacity to supply electricity for transportation; recent research shows that if 60% of the U.S. light duty vehicle fleet were electrified, it would use 7%-8% of grid-supplied electricity in 2050. A lower, off-peak electricity price will encourage PHEV drivers to charge their cars during off-peak hours, and the additional off-peak demand will smooth utilities' load profiles. This will lead to a more efficient use of generation and transmission capacity and lower the average cost of supplying electricity. PHEVs can also be used to enable intermittent renewable resources, which often provide the most power during off-peak periods.12

Introduction to the PG&E SmartAC Program™
This section will use a case study to help explain direct control strategies for reducing peak demand. Direct control strategies work by limiting consumers' electricity consumption during peak hours. This section describes the SmartAC™ program in order to illustrate how this strategy works. This research also analyzes the past and future effectiveness of the SmartAC program in reducing peak demand under different climate change scenarios. Thus, this section also provides background on the SmartAC program that is important for understanding the analysis in Chapter 6 of this research.

12 There is growing interest and momentum in a concept known as "vehicle-to-grid" or V2G. V2G describes a system in which power can be sold to the grid by a PHEV that is connected to the grid when it is not in use for transportation. Alternatively, when the car batteries need to be fully charged, the flow can be reversed and electricity can be drawn from the grid to charge the battery. Since most vehicles are parked 95% of the time, their batteries could be used to let electricity flow from the car to the power lines and back. Better Place is one company that is proposing to provide this service. Better Place is also proposing to provide utility companies with energy demand management capabilities that can minimize charging requirements during peak electricity consumption hours by leveraging connectivity with the car and known user profiles.
Ideally, all customers would have the opportunity to purchase energy at prices reflective of real-time marginal supply costs. In the absence of RTPs, some utilities and ISOs offer customers payments for allowing household air conditioners to be retrofitted with a device that allows remote control by the utility during periods of peak electricity demand.
PG&E's SmartAC Program is an active demand management program designed to reduce peak demand by limiting the amount of electricity used for air conditioning. The utility does this by installing programmable thermostats on participating customers' central air conditioners. When the energy situation becomes critical, PG&E sends a signal to incrementally raise the temperature setting on the thermostat. This reduces the power required by air conditioners, helping to reduce the overall drain on the power system. Stockton is located 45 miles east of San Francisco and south of Sacramento. Stockton is notoriously hot in the summer, partly because the Coastal Range mountains block the cool ocean breezes from cooling the city. Stockton is the fourth largest inland city in California (behind Sacramento, Fresno, and Bakersfield) with a population of 290,000 and 690,000 in the surrounding metropolitan area. PG&E recruits residential and small commercial (< 200 kW) customers13 with central air conditioning14 for the program. PG&E explains that on hot summer days, when hundreds of thousands of air conditioners are used, demand for electricity is at its highest and approaching system capacity. By reducing the power air conditioners require, the program reduces the risk of a power outage. The program also advertises other benefits for customers who participate: participants are given a one-time incentive payment of $25 for participating. In addition, all system customers benefit from reduced air pollution and smog, improved system reliability from reduced pressure on power plants during critical peak demand hours, and avoided expenditures on expensive peak period electricity.
Participants are guaranteed that the program will only operate during system peak periods between 10 am and 6 pm, Monday through Friday (excluding holidays and weekends), on those summer days when electricity demand threatens to exceed supply capacity. The program operates no more than 100 hours per year and no more than 6 consecutive hours. Although most participants do not notice the temperature increase when the SmartAC is activated (in a recent survey, only 6% reported a change in temperature during activation), participants who do become uncomfortable during an event can opt out for the day by going online to manage their SmartAC devices or by calling a toll-free phone number. If a participant opts out, the air conditioner operations and thermostat settings are returned to their pre-event condition.

13 The program does not allow customers on life support or medical baseline customers to participate.

14 All of the customers participating in the SmartAC™ program have central air conditioners. Central air conditioners are split systems: the condenser and compressor are located in an outdoor unit, while the evaporator is mounted in the air handler unit. The air conditioner transports heat out of the home using a refrigerant such as a hydrofluorocarbon (HFC). The compressor converts the refrigerant into a high-temperature, high-pressure gas. As that gas flows through the condenser coil, it loses heat and condenses into a high-temperature, high-pressure liquid. This liquid refrigerant travels through copper tubing into the evaporator coil, where it expands. The sudden expansion turns the refrigerant into a low-temperature, low-pressure gas, which then absorbs heat from the air circulating in the ductwork. The cooled air is then distributed back through the home or building. Meanwhile, the heat absorbed by the refrigerant is carried back outside through copper tubing and released into the outside air.

Air conditioners also dehumidify the air. As the warm air circulating through the ducts passes over the evaporator coil, it is quickly cooled and can no longer hold as much moisture as it did at a higher temperature. The excess moisture condenses on the outside of the coils and is carried away through a drain, similar to the moisture that condenses on the outside of a glass of ice water on a hot, humid day.

How the SmartAC Program Works
The programmable thermostat control technology controls the central air conditioner, which in turn controls the indoor temperature. When the utility activates the thermostats, the thermostats increase the temperature to which the house is cooled. If the air conditioner is in cooling mode when the temperature setting is raised, the air conditioner may turn off until the house reaches the new temperature setting. If the air conditioner is already off, it may remain off for a longer period so as to allow the inside temperature to reach the higher temperature setting. The advantage of this approach is that it allows the utility to control how much the inside temperature rises. No customer should experience an indoor temperature increase greater than the thermostat setpoint. In theory, raising the temperature settings on all participating air conditioners distributes the temperature increase evenly across the participating population regardless of house and air conditioner characteristics.
Raising the temperature setting has an indirect effect on air conditioner energy use. How a particular air conditioner responds to the increased temperature setting depends on the physical characteristics of the home, such as insulation, age, and square footage, that determine how quickly the home heats up. How much electricity the air conditioner uses to cool the home also depends on the characteristics of the air conditioner itself, mainly its age, efficiency, and size. Usually, when SmartAC raises the temperature setting, the air conditioner either shuts off or stays off until the indoor temperature reaches the new setting. When the house reaches the new temperature setting, the air conditioner begins cooling again in order to maintain that temperature. Air conditioner usage is fundamentally a function of the differential between the outside temperature and the inside temperature, so increasing the temperature setpoint will reduce the amount of energy needed relative to the cooler temperature setting.
When programmable thermostats were first deployed for peak demand management programs, the most common control strategy was a single temperature setting increase of three, four, or five degrees Fahrenheit. An increase of this sort has two important implications for peak demand management. First, the demand reduction is greatest immediately following the temperature setting increase. During the initial period of readjustment, the majority of participating air conditioners shut off as the indoor temperature slowly increases to the new setting. The peak demand reduction is large and is maintained until the air conditioners reach the new temperature setting, at which point the air conditioners begin cooling again. From a system perspective, this inability to maintain a constant peak demand reduction can be a limitation. Second, a "block" temperature increase such as this can cause customers to be hot and uncomfortable. As described above, the majority of air conditioners shut off as the household temperature rises to the new setting. For what could be a substantial period of time, the circulation system does not blow cool air. From a customer comfort perspective, the block temperature setting increase also has disadvantages.
Utilities have experimented with different strategies to overcome the limitations of the block increase. Increasing the temperature setting by one or two degrees every hour or every couple of hours should, in theory, mitigate the problematic aspects of the block temperature setting increase. Instead of experiencing a large, immediate peak demand reduction and resulting lack of cooling in one long period, the periods of temperature equilibrium readjustment are shorter and spread out through the afternoon and evening.
The SmartAC program chose to use a gradual temperature change for all participating customers during the summer of 2007; customers' temperature settings were raised 1°F at the beginning of the first, third, and fifth hours of the system contingency. This strategy is referred to here as the "gradual" strategy. PG&E also experimented with a "steep" strategy in which the temperature setting increased 1°F per hour for each of the first four hours of the system contingency. After the fourth hour, the "steep" strategy holds the 4°F increase for the duration of the emergency. All else being equal, the steep strategy should provide a bigger peak demand reduction during the early hours of a system emergency than the gradual strategy.
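The two control strategies can be encoded as hour-by-hour setpoint offsets. The sketch below (the function name and six-hour horizon are our own choices, not part of the program specification) reproduces the schedules described above:

```python
def setpoint_offsets(strategy, hours=6):
    """Cumulative thermostat offset (degrees F above normal) for each hour
    of a system contingency, under the two strategies described in the text."""
    offsets = []
    current = 0
    for hour in range(1, hours + 1):
        if strategy == "gradual":
            # +1 degree F at the start of the first, third, and fifth hours
            if hour in (1, 3, 5):
                current += 1
        elif strategy == "steep":
            # +1 degree F per hour for the first four hours, then hold at +4
            if hour <= 4:
                current += 1
        offsets.append(current)
    return offsets

print(setpoint_offsets("gradual"))  # [1, 1, 2, 2, 3, 3]
print(setpoint_offsets("steep"))    # [1, 2, 3, 4, 4, 4]
```

The printout makes the difference concrete: in the early hours the steep schedule has already imposed a larger offset, which is why it should yield a larger early peak demand reduction.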
Chapter 6 returns to the SmartAC™ program, analyzing the impact of thermostat reset programs on peak demand and exploring the potential for using such programs to reduce peak demand in a future characterized by rising temperatures and climate change.

Conclusions
This chapter concludes that real-time prices and direct control strategies can both reduce peak demand. Real-time prices give customers incentives to shift some of their peak-period electricity consumption to off-peak hours by charging higher prices during peak periods. In the absence of RTPs, some utilities provide subsidies to their electricity customers for investments intended to reduce peak electricity consumption, including payments for allowing a household air conditioner to be retrofitted with a smart thermostat that lets the utility remotely limit the air conditioner's electricity consumption during periods of peak demand.
This chapter demonstrated that the regulated price of electricity to customers, based on average cost, is often below the marginal cost of producing electricity, particularly in peak periods or when costs of pollution are not taken into account. The difference between marginal cost and price means that customers have insufficient incentives to reallocate consumption from peak periods to off-peak periods. All of the benefits of peak demand management, including cost savings, reduced price volatility, improved system reliability, more competitive markets, and fewer GHG emissions during peak hours, are large enough to warrant attention by policymakers and regulators.
However, there are several barriers to the adoption of both RTP and direct control programs. These barriers include regulatory policies and rate freezes, customers' and policymakers' apprehensions about price volatility, and perceptions about the availability and cost of enabling technologies. Unless these barriers are addressed, the full potential of peak demand management will not be realized.

Chapter 4. Literature Overview
This chapter will provide an overview of the economic literature pertaining to two issues related to peak demand. The first issue covered in the economic literature is peak load pricing. This chapter will provide an overview of how pricing methods evolved to manage peak demand. The second issue covered in the literature is the relative efficiency of price-based and quantity-based methods for managing peak demand problems. This chapter will provide an overview of the Weitzman framework for comparing the relative efficiency of these methods and then apply the framework to the problem of peak demand for electricity. This chapter concludes that direct control methods are more efficient than pricing methods for reducing peak demand for electricity, maintaining reliability, and avoiding catastrophic blackouts.
As discussed in Chapter 2, the crux of the peak demand problem is that there are high fixed capital costs associated with increasing capacity, low marginal variable costs, and highly variable demand. Further, because electricity cannot be economically stored and a constant balance between supply and demand must be maintained, electricity must be generated at the moment it is demanded.
In this traditional situation, the only way to ensure reliability is to build sufficient capacity to meet demand at all times, even though that capacity is rarely used. Consequently, there is excessive capacity during low-demand periods, yet there will still be rare instances when demand exceeds capacity. The economic literature on peak load pricing addresses situations such as this. Peak load pricing is commonly used in telecommunications, airlines, hotels, and theater ticketing to reallocate demand from peak to off-peak periods by charging a higher price for peak demand and a lower price for off-peak demand, thereby providing incentives to smooth demand over time with the goal of achieving more efficient capital utilization.
Peak load pricing has been studied extensively by economists (Crew, Fernando, & Kleindorfer, 1995), both in the context of regulated industries (telecommunications, electric utilities) and unregulated sectors. This chapter summarizes key issues related to peak load management and reviews the literature on peak load pricing. First, this chapter reviews the economic literature on peak load pricing. Second, it examines the relative efficiency of using peak load pricing or direct demand management to reduce peak electricity demand. The economic theory of peak load pricing originates in the seminal papers of Boiteux (1949) and Steiner (1957). As applied to electricity, the Boiteux-Steiner model, in its basic form, postulates that throughout the year there is a uniform demand for electricity in each of two 12-hour periods, shown as day and night in Figure 4-1. A single generating technology is available. The marginal operating cost of generating each kilowatt-hour of electricity is equal to b. In addition, total capacity must be sufficient to meet peak demand. The annual capital cost of capacity per kilowatt is equal to c, or β = c/(12 × 365) dollars per kilowatt for each hour of peak use (Mitchell, Manning, & Acton, 1978). Thus, β represents the marginal cost of peak generation capacity.
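As a small numeric sketch of the capacity-cost term (the dollar figure below is hypothetical, chosen only to make the arithmetic transparent):

```python
# Illustrative: convert an annual capital cost of capacity, c ($/kW-year),
# into beta, the capacity cost per kWh of peak use, assuming capacity is
# used for 12 peak hours on each of 365 days, as in the Boiteux-Steiner setup.
c = 87.6                      # hypothetical annual capital cost, $ per kW per year
peak_hours = 12 * 365         # 4,380 peak hours per year
beta = c / peak_hours         # $ per kWh of peak-period use
print(round(beta, 4))         # 0.02, i.e., 2 cents per peak kWh
```

The peak-period marginal cost in the model is then b + beta, while the off-peak marginal cost is b alone.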

Peak Load Pricing
The levels of demand in each period are assumed to vary inversely with the price of electricity in that period, and to be independent of the price in the other period.[16] Thus, there are two demand curves for the quantities of electricity demanded during the day (12·QD) and night (12·QN) in Figure 4-2. In this model, the marginal cost per kWh of output at night (off-peak) is the operating cost, b, since generating additional electricity requires more fuel but no additional capacity. In the peak period, however, capacity is constrained and generating additional electricity requires building more capacity, so the peak-period marginal cost is b + β per kWh (Bergstrom & MacKie-Mason, 1991; Mitchell et al., 1978; Steiner, 1957; Turvey, 1968).
If a single price per kWh is charged in both periods, and the utility is allowed to recover costs but not to make excess profit, then, as shown in Figure 4-2, an average price p must be set equal to the average cost per kWh, and customers will demand 12·QD during the day and 12·QN at night.[17] If, however, prices are set equal to marginal cost in each period, the equilibrium quantities of electricity supplied and demanded will be 12·Q*D and 12·Q*N, as shown in Figure 4-3. In contrast to the single-price case, the peak price is higher and less capacity is needed to meet 12·Q*D. This market equilibrium is the optimal pricing solution in the following sense. Of all the possible pairs of day and night prices, the prices equal to marginal costs maximize the difference between the value that consumers place on the amount of electricity they use and the cost of its production. This difference is the economic surplus that is realized by having electricity available to the community.
[16] This assumption is probably too strong. A well-known counter-example occurred in 1964, when AT&T began lowering rates for long-distance phone calls after 5 p.m. and found itself deluged with calls from people who formerly called during the day. The interdependence of demands can be handled with an increase in mathematical complexity; this section touches on the issue below.
[17] The value of the average price is found by solving Total Revenue = Total Cost: p·QD(p) + p·QN(p) = (b + β)·QD(p) + b·QN(p).
Marginal cost pricing ensures that in each period productive resources will be used to supply electricity up to, but no further than, the point at which the value of the last unit of electricity consumed is just equal to the cost of its supply.
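The two pricing regimes can be compared with a small numeric sketch. All demand and cost numbers below are hypothetical; the break-even uniform price solves the zero-excess-profit condition p·(QD + QN) = (b + β)·QD + b·QN, while the peak-load prices are simply b + β and b:

```python
def solve_uniform_price(b, beta, demand_day, demand_night, lo=0.0, hi=0.1):
    """Bisect for the single price p at which total revenue equals total cost:
       p*(QD + QN) = (b + beta)*QD + b*QN."""
    def gap(p):
        qd, qn = demand_day(p), demand_night(p)
        return p * (qd + qn) - ((b + beta) * qd + b * qn)
    for _ in range(100):       # gap() is negative at lo and positive at hi
        mid = (lo + hi) / 2
        if gap(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

b, beta = 0.05, 0.02                 # hypothetical $/kWh operating and capacity costs
day = lambda p: 100 - 400 * p        # hypothetical linear day (peak) demand
night = lambda p: 60 - 400 * p       # hypothetical linear night (off-peak) demand

p_uniform = solve_uniform_price(b, beta, day, night)
print(round(p_uniform, 4))           # ~0.0637, between b and b + beta
print(b + beta, b)                   # marginal-cost peak and off-peak prices
# Less capacity is needed under peak-load pricing: day(b + beta) < day(p_uniform)
```

Consistent with the text, the marginal-cost peak price (0.07) exceeds the uniform price, so peak demand, and hence required capacity, is smaller under peak-load pricing.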
The Boiteux-Steiner model is a simplification that does some violence to reality. Its virtue is the clarity with which it links the design of the optimal rate structure (two periods with separate prices, each equal to marginal cost) to the structure of the costs of production and to the demand conditions facing the utility.

More General Models
The economic theory expressed in the Boiteux-Steiner model has been extended and made more realistic in several directions. Papers have generalized the demand assumptions to provide for any number of periods of varying lengths (Williamson, 1966), and to accommodate variations in demand within periods (Wenders, 1976). A number of authors have addressed the potential complication of a "shifting peak," where higher peak prices shift so much peak demand to the off-peak period that the formerly off-peak period becomes the new peak (Bailey & White, 1974; Bergstrom & MacKie-Mason, 1991; Berlin, Cicchetti, & Gillen, 1974; Hirshleifer, 1958; Mitchell et al., 1978).[18] Extensive efforts have also been made to expand the analysis to encompass stochastic variations in demand (Chao, 1983). The Boiteux-Steiner model has likewise been generalized to include more realistic supply conditions. As discussed earlier, generators minimize total costs by using a range of generating technologies, in terms of capital and operating costs, to meet demand. Models that incorporate this feature have been developed by Crew and Kleindorfer (1976) and Wenders (1976). By working with models that incorporate continuous production coefficients, rather than fixed proportions of capital and fuel, Boiteux (1949) and Panzar (1976) have incorporated the nonlinear response of generators to levels of demand.
[18] "Peak shifting" can especially be a problem at the boundaries of the peak and off-peak periods. For example, if cell phone calls are free after 5 p.m., many people will wait until 5:01 p.m. to make calls, thereby overwhelming the system. Demand at 5:01 p.m. could be much higher than demand during the "peak" at 4:59 p.m. But most people probably will not delay their calls until 2 a.m.
As the theoretical economic models have grown more realistic, their normative prescriptions have become increasingly detailed and are not easily summarized. The optimal prices for each period incorporate considerations of expected shortages, the long-run substitution of capital for fuel, and constraints that ensure that revenues will cover costs. In many of the more general models, the optimal prices require that the off-peak as well as peak-period prices include some elements of capacity costs (Mitchell et al., 1978).

Deriving Peak and Off-Peak Prices
(The peak-shifting example in note 18 implies that demand is more continuously variable over time, not just simple on-peak vs. off-peak as in the basic Boiteux-Steiner model; it also provides a lesson for charging off-peak prices that are implemented as simple step functions.)
Bergstrom and MacKie-Mason use simple analytics to derive the appropriate peak and off-peak prices for a situation characterized by two periods, a fixed level of capacity, and highly variable demand. In the off-peak period there is excess capacity, but in the peak period the capacity constraint is binding. As demonstrated above, the marginal cost of generating electricity in the off-peak period is only the marginal operating cost (fuel costs). But when the capacity constraint binds in the peak period, the marginal cost of generating electricity is the marginal cost of building new capacity plus the marginal operating cost. Off-peak demand is a substitute for peak demand, and vice versa.
Similar to the illustration in section 4.1, marginal operating costs are equal to b, and the full marginal cost of peak supply, operating cost plus capacity cost, is equal to k = b + β. PP and PO denote the prices charged in the peak and off-peak periods, respectively, and dP and dO denote electricity consumption in each period. Bergstrom and MacKie-Mason also assume that customers have a utility function for electricity consumption of the form U(dP, dO). The utility function is homothetic, twice differentiable, and strictly quasi-concave.[19] The assumption of homothetic separability makes the ratio of peak demand to off-peak demand a function of the ratio of the peak price to the off-peak price.
Peak and off-peak demand are functions of the prices in both periods, dP(PP, PO) and dO(PO, PP). The marginal rate of substitution between peak and off-peak consumption is:
MRS(dP, dO) = (∂U/∂dP) / (∂U/∂dO).    (4.2.1)

[19] A utility function that can be written as U(d) = g(h(d)), where g(·) is a monotonically increasing function and h(d) is homogeneous of degree one, twice differentiable, and strictly concave.
Given the assumptions on the utility function and the prices PP and PO, demand is determined by the ratio of prices. That is, the demand functions satisfy
dP(PP, PO) / dO(PO, PP) = x(PP/PO).    (4.2.2)
This should also be true for any continuous utility function. The function x(e) is defined implicitly by MRS(x(e)) = e, so that x(e) is the ratio of peak demand to off-peak demand corresponding to a price ratio of e = PP/PO. Because peak demand is greater than off-peak demand when the two periods are priced the same, x(1) > 1. For any e such that x(e) ≥ 1, there is a unique set of equilibrium prices (PP, PO) and demands (dP, dO) that make peak demand equal to capacity.
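The homotheticity property can be illustrated with a Cobb-Douglas utility, one standard example of a homothetic function (the exponent below is arbitrary; this is our illustration, not Bergstrom and MacKie-Mason's specification):

```python
def demand_ratio(PP, PO, alpha=0.8):
    """Peak/off-peak demand ratio under Cobb-Douglas utility
    U = dP**alpha * dO**(1 - alpha), a homothetic example:
    dP/dO = (alpha / (1 - alpha)) * (PO / PP),
    which depends only on the price ratio PP/PO."""
    return (alpha / (1 - alpha)) * (PO / PP)

r1 = demand_ratio(0.10, 0.05)   # price ratio 2:1
r2 = demand_ratio(0.20, 0.10)   # same price ratio, both prices doubled
print(abs(r1 - r2) < 1e-12)     # True: only the price ratio matters
print(demand_ratio(0.10, 0.10) > 1)   # True: x(1) > 1, peak demand is larger
```

Scaling both prices by the same factor leaves the demand ratio unchanged, which is exactly the property that lets x(·) be written as a function of e = PP/PO alone.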
For example, assume that the electric utility is allowed a rate of return on capacity equal to ck, and that the utility function is specified as
U(dP, dO) = 2·dP + dO.    (4.2.3)
Equation (4.2.3) implies that the customer values electricity consumed during the peak period twice as much as electricity consumed off peak. Since peak consumption is a perfect substitute for off-peak consumption, if the price, p, is the same in both periods, the only demand for electricity will be during the peak period. This means that the entire cost of capacity must be paid by peak usage, so that p = ck. But peak load pricing would equalize peak and off-peak demand when PP = 2·PO. At these prices, the customer is indifferent between peak and off-peak consumption, and consumption in both periods will equal capacity. The profit constraint is satisfied when PP + PO = ck. Since PP = 2·PO, it must be that with peak load pricing PP = 2ck/3 and PO = ck/3. In this case, peak load prices are lower than the uniform price in both periods. Even though the intention of peak load pricing is to shift demand from the peak period to the off-peak period, lower prices in both periods can lead to higher overall demand for electricity.
In the above example, peak load pricing lowers prices in both periods, but this is not always the case. Other utility functions lead to peak prices that are higher than the uniform price. Consider the case of perfect complements, where at any price the customer always wants to consume exactly twice as much in the afternoon as in the morning. Let
U(dP, dO) = min(dP, 2·dO).    (4.2.4)
At any price, the customer chooses dP/dO = 2. No matter what price is set, the utility can use all of its capacity in the peak period and only half of its capacity in the off-peak period. Therefore, the profit constraint is satisfied for any pair of prices (PP, PO) when PP + PO/2 = ck. In this example, moving from uniform pricing to peak load pricing results in an increase in the peak price and a decrease in the off-peak price.
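Both examples can be checked numerically. The sketch below sets ck to an arbitrary value and verifies the break-even constraints stated above (the particular complements price pair is one admissible choice, chosen for illustration):

```python
c_k = 0.09   # allowed return on capacity, arbitrary units

# Perfect substitutes, U = 2*dP + dO (eq. 4.2.3):
# uniform pricing puts all demand on peak, so the uniform price is c_k;
# peak-load pricing requires PP = 2*PO with PP + PO = c_k.
PP, PO = 2 * c_k / 3, c_k / 3
print(abs(PP - 2 * PO) < 1e-12)       # True: customer indifferent between periods
print(abs(PP + PO - c_k) < 1e-12)     # True: break-even constraint holds
print(PP < c_k and PO < c_k)          # True: both prices below the uniform price

# Perfect complements, U = min(dP, 2*dO) (eq. 4.2.4):
# the customer always chooses dP = 2*dO, so break-even requires PP + PO/2 = c_k.
p_uni = 2 * c_k / 3                   # uniform price: p + p/2 = c_k
PP2, PO2 = 0.08, 0.02                 # one admissible peak-load price pair
print(abs(PP2 + PO2 / 2 - c_k) < 1e-12)   # True: break-even constraint holds
print(PP2 > p_uni)                    # True: here the peak price rises
```

The contrast between the two cases is the point of the passage: the direction in which peak-load pricing moves prices depends on the substitutability of peak and off-peak consumption.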

Solving for Prices by Welfare Maximization
One can also solve for peak and off-peak prices using a welfare maximization problem (Williamson, 1966). The appropriate objective function can be stated as
W = ∫ P1 dQ + ∫ P2 dQ − CO(Q1) − CP(Q2),    (4.2.5)
where the ∫ Pi dQ terms are the respective areas under the demand curves; CO is the off-peak operating cost, a function of Q1 (= h(Q1)); and CP is the on-peak cost, which has two components: the peak operating cost (= g(Q2)) and the cost of capacity (= k(Q2)).
Maximizing W leads to the following necessary conditions:
∂W/∂Q1 = P1(Q1) − h′(Q1) = 0
∂W/∂Q2 = P2(Q2) − g′(Q2) − k′(Q2) = 0
where h′(Q1) is the off-peak marginal operating cost; g′(Q2) is the peak marginal operating cost; and k′(Q2) is the peak marginal capacity cost.
Solving the first-order conditions yields the following price solutions:
P1 = h′(Q1)
P2 = g′(Q2) + k′(Q2)
That is, the off-peak price of electricity is set equal to the off-peak marginal operating cost, and the peak price is set equal to the peak marginal operating cost plus the peak marginal capacity cost.[20]
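These price rules can be verified numerically. The functional forms below are chosen purely for illustration (linear inverse demands and quadratic costs, not from the text); a crude grid search over (Q1, Q2) should recover quantities at which the first-order conditions hold:

```python
# Illustrative forms: inverse demands P1(Q) = 1 - Q (off peak), P2(Q) = 2 - Q (peak);
# costs h(Q1) = 0.2*Q1**2, g(Q2) = 0.1*Q2**2, k(Q2) = 0.3*Q2**2.
def welfare(q1, q2):
    gross = (q1 - q1**2 / 2) + (2 * q2 - q2**2 / 2)   # areas under demand curves
    cost = 0.2 * q1**2 + 0.1 * q2**2 + 0.3 * q2**2    # h + g + k
    return gross - cost

# grid search for the welfare-maximizing quantities (step 0.01)
best = max((welfare(i / 100, j / 100), i / 100, j / 100)
           for i in range(151) for j in range(201))
_, q1_star, q2_star = best

# At the optimum: off-peak price = h'(Q1), peak price = g'(Q2) + k'(Q2)
p_off, p_peak = 1 - q1_star, 2 - q2_star
print(abs(p_off - 0.4 * q1_star) < 0.02)    # True, up to grid resolution
print(abs(p_peak - 0.8 * q2_star) < 0.02)   # True, up to grid resolution
```

At the grid optimum the off-peak price approximately equals the off-peak marginal operating cost, and the peak price approximately equals marginal operating plus marginal capacity cost, matching the analytic solution.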

[20] The argument has been made that it is neither appropriate nor equitable to assign all of the capacity costs to the peak period. The thought is that the capacity costs of base-load plants should be assigned to all periods, the capacity costs of shoulder plants should be assigned to shoulder and peak periods, and the capacity costs of peaking plants should be assigned to peak periods. This distribution of costs would reflect the respective contribution of each plant to meeting demand. The argument falls apart, however, when discussed in terms of opportunity cost. The opportunity cost of anything is the value of what is given up when a resource is used for one purpose that precludes its use for other purposes. Thus, the opportunity cost of using generating capacity that would otherwise sit idle is zero. If off-peak consumption is made to pay a price greater than zero for capacity costs, the result will be under-consumption of off-peak electricity. In effect, off-peak users would be subsidizing peak users.
As shown in Figure 4-4, at a uniform price of PUniform, customers consume QuOP during off-peak periods at a total cost of (PUniform · QuOP). Under the uniform price, consumer surplus is equal to area (h) and producer surplus is equal to area (i + k). Thus, total social welfare equals (h + i + k). If peak load pricing is used and the off-peak price is set at POff-Peak, customers will consume Q*OP at a total cost of (POff-Peak · Q*OP). Under these conditions, welfare is measured by the sum of producer and consumer surplus. Consumer surplus is the area under the off-peak demand curve and above the price line POff-Peak, indicated in Figure 4-4 by the box labeled i and the triangles h and j.
Producer surplus is the area in Figure 4-4 above the supply curve and below the price line POff-Peak, as indicated by (k + n). Welfare is the sum of the producer and consumer surpluses, the area defined by (i + h + j + k + n). Thus, the social loss from using a uniform price instead of peak and off-peak prices is (j + n). This is referred to as deadweight loss.
The uniform price also leads to inefficiencies in the peak period because the price is below the marginal supply cost. Using electricity that costs more to produce than customers value it results in a deadweight loss to society. To see this, note that if customers are charged a single uniform price PUniform for both the peak and off-peak periods, during peak hours customers will consume QuP at a total cost of (PUniform · QuP), and consumer surplus is area (a + g + h). If peak load pricing is used and the peak-period price is set at PPeak, customers will consume Q*P at a total cost of (PPeak · Q*P). The measure of welfare is again given by the sum of consumer and producer surplus. Consumer surplus is the area to the left of the peak demand curve and above PPeak, the area (a). Producer surplus is the area above the supply curve, to the left of the peak demand curve, and below PPeak, the area (h + g + I + j + k + n + y). Again, welfare is the sum of producer and consumer surplus, or the area defined by (a + h + g + I + j + k + n + y).

Prices, Quantities, and Reliability
This section will use the Weitzman framework to compare the relative efficiency of using either price-based or quantity-based policies to maintain system reliability.
Maintaining system reliability is a public good that will be undersupplied in competitive markets. To see why this is, consider the interconnected nature of the grid. A grid works very well as a power distribution system because it allows sharing. If a power company needs to take a power plant offline for maintenance, other parts of the grid can compensate. This makes the grid redundant and reliable most of the time. However, there can be times, particularly at peak demand, when the interconnected nature of the grid makes the entire system vulnerable to collapse. For example, consider a hot summer afternoon when the grid is operating close to its maximum capacity. If something (lightning strikes, mechanical failures, sudden surges in demand, etc.) causes a power plant to suddenly trip offline, the other plants connected to it have to spin up to meet the demand. If all of the power plants are already operating near maximum capacity, they cannot handle the extra load. To prevent the plants from overloading and failing, they too will disconnect from the grid, and the overload will cascade through the grid. In nearly every major blackout, the situation is the same. Thus, the probability of such a failure is a decreasing function of the amount of capacity physically available to the system for dispatch and an increasing function of the demand on the system. Whenever generation capacity is added or peak demand is reduced, the probability of a system failure goes down. Since power outages impose widespread costs, reducing the probability of a failure creates benefits that accrue largely to parties other than the capacity-adder or demand-reducer. Thus, capacity additions create a classic positive externality by improving the reliability of the system. Economic theory tells us that the existence of this positive externality suggests that peaking capacity and demand reductions will, in equilibrium, be undersupplied by a competitive market.
The magnitude of this externality depends on the product of the social cost of an outage and the probability of an outage occurring.
The problem is exacerbated because the electricity delivery system is instantaneous and uses "pull" rather than "push" technology, so that customers ultimately determine the amount of electricity that travels through the wires, not the utility. The system is vulnerable to catastrophic failure if customers "pull" more electricity than the system can handle. Such failures impose widespread costs; any action that causes a reduction in the likelihood of such failures creates a positive externality.
This situation is illustrated in Figure 4-5. The expected short-run peak demand situation corresponds to the short-run demand curve labeled SRD1. Thus, at quantity equal to Q* and price equal to P*, the market is in both short-run and long-run equilibrium. If demand increases, however, we move up the short-run demand curve. At some point, as available capacity is fully utilized, this curve becomes vertical and further increases in demand cannot be accommodated. As shown in the figure, this situation is far more likely if the short-run demand curve is highly inelastic. The recognition of reliability as an externality problem leads to the question of whether public policy should seek to affect the structure of electricity markets in such a way as to "internalize" the externality and thereby move the competitive equilibrium closer to the socially optimal level of reliability. There is, of course, a large economic literature on policies designed to internalize externalities, developed primarily in the context of the negative externality created by environmental pollution. Some of the analysis from that literature can be adapted to explore possible policy responses to the reliability externality.

The standard analysis of externalities says that the optimal level of the externality-generating activity can be determined by finding the point at which the marginal social benefit associated with the externality is equated to the marginal cost of the activity that produces the benefit. In the present context, this means that the marginal value of electricity use (or, conversely, the value of reducing the probability of a system failure) should be equated to the marginal cost of producing electricity. To make the discussion concrete, and to allow for the analysis of both price- and quantity-based demand reductions, this discussion is couched in terms of maintaining system reliability by reducing peak demand.
The marginal cost of generating electricity for peak demand has two components: the marginal supply cost and the expected cost of a power failure, which is a decreasing function of total capacity and an increasing function of peak demand. Both marginal supply costs and the expected costs of a failure increase rapidly as demand approaches maximum system capacity. Supply costs increase because inefficient peaking plants that run on high-cost fuels are needed to generate enough electricity to serve peak demand. The expected cost of a system failure increases as demand threatens to exceed supply capability; indeed, one could argue that the probability of system failure approaches certainty as demand approaches maximum system capacity. The resulting marginal cost curve is the sum of the two components, and its slope is steeper than either curve taken alone. The marginal benefit curve can also be interpreted as the value of reducing the probability of system failure. As peak demand rises, the probability of a power outage of any magnitude rises as well. What matters for this analysis is the marginal effect on expected benefits of decreasing peak demand. This can be treated as a decreasing function of demand. That is, as peak demand falls, interruptions become less likely, and each successive decrement of peak demand has less effect in terms of making them rarer still.
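The shape of this combined marginal cost curve can be illustrated with stylized functions. Every parameter below (capacity, damage, the exponential failure probability) is a hypothetical assumption chosen only to display the qualitative behavior described above:

```python
import math

CAPACITY = 100.0   # maximum system capacity, arbitrary units

def marginal_supply_cost(q):
    # supply cost rises steeply as demand q approaches capacity
    return 0.05 + 0.5 / (CAPACITY - q)

def expected_outage_cost(q):
    # expected failure cost: outage damage times a stylized failure
    # probability that increases in demand and approaches 1 near capacity
    damage = 50.0
    prob = math.exp((q - CAPACITY) / 5.0)
    return damage * prob

def marginal_cost(q):
    # the combined curve is the sum of the two components
    return marginal_supply_cost(q) + expected_outage_cost(q)

for q in (50, 90, 99):
    print(q, round(marginal_cost(q), 3))
```

Evaluating the sum at a few demand levels shows it exploding near capacity, far faster than the supply-cost component alone, which is the qualitative feature the argument relies on.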
The socially optimal level of peak demand is where marginal benefit equals marginal cost, indicated by Q* in Figure 4-6. In contrast, if fixed retail rates are capped below P* at PF, customers will demand electricity at a level equal to QF. This means that traditional fixed-rate pricing structures result in a risk of system failure that is greater than the socially optimal level of risk.
In the simple situation described by Figure 4-6, regulators and policy makers can unambiguously improve market efficiency by dealing with the reliability externality.
Further, they can, in this simple situation, do this using two equivalent policy instruments: they can allow retail prices to adjust to reflect real-time or near real-time marginal supply costs or they can directly manipulate customers' electricity use to maintain Q*. In theory, these approaches are equivalent. When prices are used as planning tools, the basic operating rules from the regulator implicitly specify that customers will maximize their utility at the given prices. When direct demand management is used, the regulator explicitly limits electricity demand at a certain level. From a strictly theoretical point of view, the two methods are equivalent-no matter which method is fixed, there is always a corresponding way to set the other which achieves the same results when implemented.
A number of factors make this theoretical equivalence between price-based and quantity-based policies break down. The most important is that neither of the curves in Figure 4-6 is known with certainty; in Weitzman's framework, the relative slopes of the marginal benefit and marginal cost curves then determine which instrument performs better on average. In other words, if there is a "threshold" above which the marginal benefit of demand reductions is very low and below which the marginal benefit is very high, then the optimum must be near this threshold, regardless of where the cost function turns out to lie. Conversely, if the marginal benefit function is particularly flat in the region of the optimum, the correct Q* depends heavily on where the marginal cost curve lies; by setting price, the planner will "track" movements in that function and will be more efficient than the quantity approach on average.
The implication of this classic quantity-vs.-price issue in the present context warrants a closer look. Figure 4-7 illustrates the case in which the exact location of the marginal cost curve is unknown to the planner or regulator; the location and shape of the marginal benefit curve, however, are assumed to be known with certainty. In this case, the marginal cost curve could lie in three possible locations. The probability of each is one-third, and the height of MC1 is equal to the average of the heights of MC2 and MC3, making MC1 the mean location for the actual marginal cost curve.
Given the possible outcomes for the marginal cost curve, and knowing the location of the marginal benefit curve, the optimal level of peak demand is Q*. If the actual marginal cost curve, MC, is equal to MC1, then Q* can be maintained either by actively limiting customer demand to Q* or by setting price equal to marginal cost so that price equals P*. If the planner uses advanced technologies to directly hold customer demand at Q* and the actual marginal cost curve turns out to be MC2, the resulting level of peak demand will be too low compared to the socially optimal level, QQ2; triangle B measures the resulting inefficiency. Conversely, if the actual marginal cost curve turns out to be MC3, the resulting level of peak demand will be too high compared to the socially optimal level, QQ3; the deadweight loss is illustrated by triangle A.
The planner also has the choice to use a price-based policy and set price equal to the marginal supply cost. The planner's best guess is that MC = MC1. If the planner is wrong and MC = MC2, the actual level of peak demand, Q*, will be lower than the optimal peak demand, QP2; triangle D captures this inefficiency. Conversely, if MC = MC3, the actual level of peak demand will be higher than the optimal, QP3; the inefficiency is measured by area C. Proponents of real-time prices often claim that price mechanisms offer considerable efficiency gains over quantity-based controls for reducing peak demand and maintaining system reliability. This may be true if the damages from peak demand occur gradually: a slight increase in peak demand means slightly more damage. Under this assumption, RTPs make sense. Rather than attempting to limit peak demand to a fixed target at any cost, retail electricity prices should be adjusted to reflect real-time marginal supply costs, and the level of electricity consumption at any point in time will be determined by customers' price elasticity of demand. This remains true as long as the damage from peak demand is gradual. But if one assumes that the damages from peak demand, and the associated risk of system failure, rise dramatically beyond a particular threshold, intuition suggests that it will make sense to adopt quantity-based controls that ensure demand will not exceed a given margin of reliability. Extending Weitzman's analysis to account for this threshold effect confirms this intuition, as shown in Figure 4-7: if catastrophic damages occur once peak demand exceeds a specific threshold, quantity controls are indeed desirable.
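The intuition can be made concrete with a small computation in the spirit of Weitzman's comparison. With a linear marginal benefit curve of slope -B, a linear marginal cost curve of slope +C, and an additive shock to marginal cost with variance sigma2, the expected deadweight losses under each instrument take simple closed forms (the parameter values below are arbitrary illustrations):

```python
def expected_losses(B, C, sigma2=1.0):
    """Expected deadweight loss under a price instrument vs a quantity
    instrument when the marginal cost curve shifts randomly (variance
    sigma2), with linear MB (slope -B) and MC (slope +C).
    Their ratio is B**2 / C**2, so prices win when MB is flatter than MC."""
    dwl_quantity = sigma2 / (2 * (B + C))
    dwl_price = sigma2 * B**2 / (2 * C**2 * (B + C))
    return dwl_price, dwl_quantity

# Flat marginal benefit curve: the price instrument has the smaller loss.
p_loss, q_loss = expected_losses(B=0.5, C=2.0)
print(p_loss < q_loss)   # True

# Steep, threshold-like marginal benefit curve: quantities win.
p_loss, q_loss = expected_losses(B=5.0, C=2.0)
print(p_loss > q_loss)   # True
```

A threshold in damages corresponds to a very large B near the reliability limit, which is exactly the regime in which the quantity instrument dominates.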


Conclusion
The first part of Chapter 4 reviewed the economic literature on electricity pricing, particularly peak load pricing. As developed by Boiteux and Steiner, the peak load pricing model sets the price in the peak period equal to the sum of marginal capacity costs and marginal operating costs and sets the price in the off-peak period equal to just the marginal operating costs. Offering a lower price during off-peak hours creates an incentive for customers to reallocate some of their electricity demand from the peak period to the off-peak period. There is, however, one important drawback to using peak load pricing as a peak demand reduction tool. That is, peak load pricing depends on knowing when the off-peak and peak periods occur. Peak electricity demand, however, can only be predicted by the forecasted weather and can only be anticipated on relatively short notice. While peak load pricing is well-suited for smoothing demand for services or commodities that follow regular patterns (cell phone plans, airline tickets, hotel reservations), it may not be as effective in reducing episodic spikes in electricity demand.
The second part of Chapter 4 compared the relative efficiencies of using price-based or quantity-based policies to reduce peak demand. Using Weitzman's framework for comparing price- and quantity-based policies, the discussion showed that price-based policies are preferred when the marginal benefits curve for peak demand reductions is relatively flat, but as the marginal benefit curve becomes increasingly steep, it will eventually tip the scales in favor of quantity-based policies. But Weitzman's framework is a simplification that does not capture the range of issues involved with quantity-based policies. The next two chapters will examine these issues. Chapter 5 is designed to be an introduction to the challenges facing the estimation of peak demand reductions from the use of quantity-based policies. Chapter 6 describes the methods used to estimate the peak demand reductions achieved by the quantity-based SmartAC program and predict the effectiveness of such a program under various climate change scenarios.

Chapter 5. Introduction to Evaluating Peak Demand Reduction Programs
This chapter describes the important challenges facing the use of quantity-based policies for reducing peak electricity demand. The chapter concludes that there are several analytical and theoretical challenges associated with estimating customers' baseline electricity consumption. In the case where incentive payments are paid according to the magnitude of the demand reductions, these challenges may result in overpaying customers.
As discussed in Chapter 3, the SmartAC™ program reduces demand during peak periods by limiting participating customers' demand for electricity for air conditioning. An important issue for the SmartAC™ program, and others like it, is determining how much peak demand is reduced. Knowing the magnitude of the demand reduction is important to system planners who must be able to account for anticipated peak demand reductions in supply procurement planning, and evaluate the cost-effectiveness of such programs. In cases where incentive payments are paid according to the magnitude of the demand reduction, accurate estimates are required to properly compensate customers.
Determining the size of the demand reduction is a matter of subtracting a customer's actual electricity consumption during peak hours, when the customer's air conditioner thermostat is subject to re-set by the utility, from an estimate of what the customer would have otherwise consumed if not for the utility intervention. The former measure is collected by a data logger or meter on the air conditioner, but the latter measure must be estimated.
This estimate is referred to as the customer baseline. Conceptually, the customer baseline is the quantity of electricity the customer would have used in the absence of any action taken by the utility to reduce peak demand. It is important to note that the customer baseline is not directly observable and must be estimated through one of several different statistical methods. Most estimation methods can be categorized as either an average or a weather-based regression model.

Averaging Method
An averaging method simply estimates the customer's baseline for a particular day and time of day by calculating his average electricity consumption during each hour over the previous 10 to 12 days. For example, the baseline for the hour ending at 1 pm is the average over all the selected days of the demand on those days for the hour between 12 noon and 1 pm. The choice of exactly which days and how many days to average over is one significant issue with the averaging method. Most of the methods that use a version of averaging use 10 or 11 business days prior to the "curtailment day," or the day that the utility limited the customer's peak demand. Other common strategies include restricting the averaging to the 10 days with the highest average demand out of the last 11 or the 5 days with the highest average out of the last 10. Eliminating the lower demand days creates a baseline that reflects demand conditions that most approximate those conditions when demand reductions are likely to occur-i.e., when it is hot and peak demand is high.
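As an illustration of the "highest 5 of the last 10 days" variant described above, here is a minimal sketch with hypothetical hourly loads; an actual program would use metered interval data and business-day calendars.

```python
# "High 5 of 10" averaging baseline: for each hour, the baseline is the mean
# demand over the 5 prior days with the highest average demand.
def averaging_baseline(daily_loads, top_n=5):
    """daily_loads: one 24-element list of hourly kW per prior day."""
    # Rank prior days by their average demand and keep the top_n highest.
    ranked = sorted(daily_loads, key=lambda day: sum(day) / len(day), reverse=True)
    selected = ranked[:top_n]
    # Hour-by-hour mean over the selected days.
    return [sum(day[h] for day in selected) / top_n for h in range(24)]

# Ten prior days: five hot days at 2.0 kW every hour, five mild days at 0.5 kW.
history = [[2.0] * 24] * 5 + [[0.5] * 24] * 5
baseline = averaging_baseline(history)
print(baseline[13])  # 2.0 -- the mild days are dropped from the baseline
```

Dropping the low-demand days, as in this sketch, yields a baseline reflecting conditions closer to those on hot curtailment days.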
Often, an adjustment factor needs to be applied to the baseline estimate in order to address systematic day-specific effects (other than temperature) that may bias the baseline estimate. Day-specific effects could include wind, humidity, or cloud-cover that is out of the normal range. Non-weather related day-specific effects such as events in the news or holidays could also impact customers' electricity use. There are several approaches to this "same-day" adjustment. The first is an additive adjustment-a constant is added to the provisional baseline for each hour of the curtailment period. For a simple additive adjustment, the constant is calculated as the difference between the actual electricity demand and the provisional baseline estimate for some period prior to the curtailment.
The second approach is a scalar adjustment. The provisional baseline estimate for each hour of the curtailment period is multiplied by a fixed scalar, which is calculated as the ratio of the actual load to the provisional baseline for some period prior to the curtailment. The final option is referred to as a weather-based adjustment. A model of demand as a function of weather is fit to historical load data. The fitted model is used to estimate demand a) for the weather conditions of the days included in the provisional baseline estimate and, b) for the weather conditions of the curtailment day. The difference or ratio of these two estimates is calculated and applied to the provisional baseline as an additive or scalar adjustment.
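A minimal sketch of the additive and scalar same-day adjustments, using a hypothetical two-hour pre-curtailment window (all numbers are illustrative):

```python
# Additive adjustment: constant = mean(actual) - mean(provisional baseline)
# over a window before the event. Scalar adjustment: ratio of the two sums.
def additive_adjustment(actual_pre, baseline_pre):
    return sum(actual_pre) / len(actual_pre) - sum(baseline_pre) / len(baseline_pre)

def scalar_adjustment(actual_pre, baseline_pre):
    return sum(actual_pre) / sum(baseline_pre)

provisional = [2.0, 2.1, 2.2]   # provisional baseline for the event hours (kW)
actual_pre = [2.4, 2.6]         # observed load in the two hours before the event
baseline_pre = [2.0, 2.0]       # provisional baseline for that same window

adjusted_add = [b + additive_adjustment(actual_pre, baseline_pre) for b in provisional]
adjusted_scalar = [b * scalar_adjustment(actual_pre, baseline_pre) for b in provisional]
print(round(adjusted_add[0], 3), round(adjusted_scalar[0], 3))  # 2.5 2.5
```

The weather-based adjustment works the same way, except that the correction is derived from a fitted weather model rather than directly from the pre-event window.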

Weather-Based Regression Model
Weather-based regression models are an alternative to averaging. Regression models are used to develop a baseline demand curve based on dependent variables related to weather, building operations, or other factors. In the context of calculating peak demand reductions from customer baselines, the regression model uses electricity consumption data for a particular customer (or even a particular appliance) and weather data specific to the customer's location. The model estimates the relationship between electricity consumption and the outside temperature using all available data except data from curtailment days. The model is fit to those data and applied to the conditions during peak demand in order to estimate what that customer's demand would have been in the absence of the demand reduction. In most cases, the model is fit separately for each customer in order to control for differences in customers' homes, appliances, and personal preferences.
In these models, each observation corresponds to a particular day and hour (or finer time interval). There are many different approaches to regression modeling that vary with respect to the general method used (e.g. classical versus Bayesian), estimation algorithms (e.g. Ordinary Least Squares, Generalized Least Squares, Maximum Likelihood Estimation), functional specification (e.g. conditional demand analysis, change modeling, etc.), the use of control groups (e.g. participants versus non-participants), and the variables that are explicitly included in the model specification.
There are several differences among various specifications of weather-based regression models. The first is the type of weather variable or variables included in the model.
Typically, temperature, degree-days, and/or a temperature-humidity index (THI) are used, although some modelers include additional weather terms such as precipitation, cloud cover, sunshine, and wind speed. Temperature and humidity are the dominant drivers of cooling demand.
Cooling degree days (CDD) is a quantitative index designed to reflect the demand for energy needed to cool a building. More specifically, the number of cooling degrees in a day is defined as the difference between a chosen reference value and the daily average temperature. The reference value, often referred to as the "base temperature", is generally the lowest temperature below which no air conditioning is necessary. If the building in question is consistently in cooling mode across the span of the data used in the regression, degree-day variables offer no advantage over temperature variables. However, if the data include milder conditions when cooling (or heating) is not required, degree-day variables generally perform better. In effect, degree-days "count" temperature only when it is high enough to require cooling.
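In code, the degree-day definition above reduces to a truncated difference; the 70°F base below is an illustrative choice, not a value taken from the text.

```python
# Cooling degree days for one day: the daily average temperature above the
# base temperature, truncated at zero when no cooling is needed.
def cooling_degree_days(daily_avg_temp, base_temp=70.0):
    return max(daily_avg_temp - base_temp, 0.0)

print(cooling_degree_days(84.0))  # 14.0 degree-days on a hot day
print(cooling_degree_days(65.0))  # 0.0: mild days contribute nothing
```

The truncation at zero is exactly what lets degree-day variables "count" temperature only when it is high enough to require cooling.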
The second difference among regression models is whether a fixed degree-day base is used. If degree-day variables are used, the degree-day base may be fixed in advance or determined by the regression. As mentioned above, the base temperature for a building is the lowest temperature at which cooling is desired. Thus, the more accurate the base temperature is, the more closely degree-days will be correlated with electricity demand. If the base temperature used is too low, the model will tend to underestimate demand in hot weather and overstate demand in cooler weather. If the base temperature used is too high, the opposite occurs. The appropriate degree-day base varies considerably across homes and other buildings, depending on the home's insulation, shading, and other factors that affect the indoor temperature. A meaningful way to determine the best base temperature for a given building is by analyzing cooling demand data in relation to temperature data. Models that allow the degree-day base to vary across homes tend to have lower systematic error, but also are more complex and more time-consuming to fit.
Also, these models are less well determined if data are limited, which means that a relatively long history of electricity demand and weather data is necessary.
A third difference among models is whether lagged weather terms are included. Lag temperature or degree-day terms are used to account for heat build-up over time in a home. This is the effect of "thermal mass." One approach is to include the weighted average of degree-days for the past 48 hours, with the weights exponentially decreasing.
Simpler approaches put multiple temperature or degree-day variables into the model at different lags. Lagging humidity is not highly meaningful because buildings do not trap humidity in the way they trap heat.
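A sketch of the exponentially weighted 48-hour lag term mentioned above; the decay rate is an arbitrary illustration, since the text does not specify one.

```python
# Weighted average of hourly degree-hours over the past 48 hours, with
# exponentially decreasing weights (most recent hour first).
def weighted_lag(hourly_degree_hours, decay=0.9):
    weights = [decay ** k for k in range(len(hourly_degree_hours))]
    total = sum(w * x for w, x in zip(weights, hourly_degree_hours))
    return total / sum(weights)

# Recent heat contributes more to the lag term than heat two days ago,
# mimicking the slow heat build-up ("thermal mass") of a home.
recent_hot = [10.0] * 4 + [0.0] * 44
older_hot = [0.0] * 44 + [10.0] * 4
print(weighted_lag(recent_hot) > weighted_lag(older_hot))  # True
```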
A fourth difference is whether the explanatory variables are hourly or daily. Although the estimated coefficients are almost always allowed to vary by hour of the day, the explanatory variables may vary only daily. Because buildings store heat, the amount of electricity used for cooling does not respond instantaneously to the outside temperature, but depends on the temperature over the course of the day. Lagged weather variables can account for this effect, but modeling demand in a given hour as a function of the daily average temperature can often work as well without the added complexity. In part, this approach works because the variation in temperature over the course of the day is similar from day to day. Thus, the 10am coefficients of daily average temperature tend to be smaller than the 4pm coefficients. This is partly because there is less heat build-up in the home by 10am and partly because, for a given daily average, the actual outdoor temperature at 10am is lower than at 4pm.
Common models estimate weather-sensitive electricity demand as a function of daily temperature, hourly temperature, heating and cooling degree-days, heating and cooling degree-hours, both degree-days and lagged degree-days, or a temperature-humidity index.

Theoretical Issues with the Use of Customer Baselines
Programs that pay consumers for demand reduction below some baseline face several challenges that are inherent in defining the baseline. These challenges include moral hazard, adverse selection, and the double-payment problem.

Moral Hazard
Moral hazard occurs when the customer anticipates a peak demand "event." In one case of moral hazard, the customer might intentionally increase his electricity consumption in the hours leading up to the curtailment period in order to increase his baseline so as to receive a higher payment. For example, a customer with an on-site generator might turn off his generator temporarily to establish an artificially high level of consumption and then turn the generator back on to collect incentive payments for what is otherwise normal consumption behavior. In another case, a customer might pre-cool, or increase cooling in the hours prior to the curtailment event in order to retain his comfort level longer if air conditioning is being controlled during the curtailment. Other concerns, such as adjusting manufacturing processes in anticipation of the curtailment event, are more relevant to commercial and industrial customers. If gaming or pre-cooling occurs, savings estimates based on the two hours prior to the curtailment event will be overstated, whereas anticipatory behavior by customers, such as canceling production orders or encouraging employees to go home early, could lead to under-estimating demand savings.

Adverse Selection
Adverse selection could result in payments for demand reductions that would have occurred anyway, and which had nothing to do with the program incentives. The adverse selection problem arises from asymmetrical information about a customer's true baseline.
Since the baseline is not directly observable to regulators, customers usually have better information on their baseline consumption levels than the regulator and can use this information to their advantage in their decision to participate in a demand reduction program. Therefore, the program is likely to attract disproportionate participation from customers who anticipate lower consumption for reasons having nothing to do with the incentives paid by the program to reduce demand. For example, at the commercial or industrial level, if last year's or last season's consumption is used to estimate the customer baseline, firms whose electricity usage has been reduced since that time have a greater incentive to participate in the program. Therefore, payments could be made for load reductions that would have occurred anyway. At the same time, firms that are entering their high demand season, or have grown rapidly since last year, simply will not join the program.

Double Payment Problem
The use of a baseline as the basis for demand reduction payments is susceptible to paying excessive demand reduction incentives to customers (the double payment problem) and to causing customers to forego consumption whose value exceeds the cost of producing the energy 22. This can happen when the sum of the bill savings and the incentive payments, which are computed relative to the estimated baseline, exceeds the cost of producing the electricity. Such excessive compensation for demand reductions causes inefficient price formation in wholesale energy markets. That is, despite the availability of relatively inexpensive energy in the wholesale market, excessive incentives may cause customers to forego more valuable energy consumption or cause customers to substitute higher cost sources of energy.

22 The double payment gives participants an incentive to defer consumption when the value of consumption is greater than the LMP and/or to switch to alternative energy sources that cost more to operate than the LMP. Such a program design, therefore, results in an inefficient market outcome and an inefficient use of resources. For example, if the retail price of electricity is $80/MWh and the LMP is $90/MWh, a customer who is paid the LMP to reduce consumption would be able to earn an additional $20/MWh by using a $150/MWh on-site generator ($80 bill savings + $90 payment − $150 generator cost = $20 net gain) to reduce its net metered consumption. Thus, the program compensation design encourages the use of a more expensive $150/MWh resource even though a less expensive $90/MWh resource in the wholesale market was available to serve the customer's demand.
To see the double payment problem, consider a customer who usually consumes a baseline quantity of electricity, x (his baseline customer load, or BCL), at the retail price Px. When the wholesale price P is greater than Px, the customer wants to consume a quantity q that is less than his baseline. The demand reduction is DR = x − q.
If the difference between what the customer would have consumed and his actual consumption, DR, is called a "demand resource," it seems reasonable to say that the customer should be paid the wholesale market price P for his "demand resource," just as generators are paid for their supply resources. It also seems reasonable to say that no customer should have to pay for something he does not consume. Taken together, those statements imply that if q < x, the customer should pay P for the quantity, q, that he does consume and should be paid P for the amount of the demand reduction, DR. Thus, the net payment to the customer would be: Net Payment to Customer = (P × DR) − (P × q).
Since q = x − DR, this can be rewritten as: Net Payment to Customer = (2P × DR) − (P × x). This says that a customer who consumes less than his fixed BCL of x should buy x at the market price and then be paid a price of 2P for selling back "demand resources." There are several solutions to the double-payment problem. First, economic demand response programs can be eliminated in favor of real-time pricing programs. Second, a customer could be required to purchase his baseline. If he does not consume all of the electricity that he has paid for, he will be compensated for the unused energy. Lastly, the simplest solution is to only pay customers the real-time price for electricity that they do use.
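The double-payment arithmetic can be checked numerically. This sketch uses the prices from the footnote's generator example and an illustrative baseline and consumption level; all variable names are hypothetical.

```python
# Numeric check of the double-payment example: retail price $80/MWh,
# wholesale LMP $90/MWh, on-site generator cost $150/MWh.
retail, lmp, generator_cost = 80.0, 90.0, 150.0

# Per MWh shifted to the on-site generator: avoided retail bill plus the
# demand-response payment, minus the cost of running the generator.
net_gain = retail + lmp - generator_cost
print(net_gain)  # 20.0 -- a $20/MWh gain from using the costlier resource

# Algebraic check: P*DR - P*q equals 2*P*DR - P*x once q = x - DR.
P, x, q = 90.0, 10.0, 6.0   # illustrative wholesale price, baseline, actual use
DR = x - q
assert P * DR - P * q == 2 * P * DR - P * x
```

The positive net gain confirms the inefficiency: the compensation scheme makes a $150/MWh resource privately profitable even when $90/MWh wholesale energy is available.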

Conclusions
This chapter described the important analytical and theoretical challenges facing baseline approaches for estimating the impact of quantity-based policies that limit customer demand during peak hours. This chapter concludes that the weather-based regression method has several important advantages that make it preferred to the averaging method.
This chapter also describes the theoretical problems of moral hazard, adverse selection, and the double-payment problem. This chapter concludes that in cases where incentive payments are paid according to the magnitude of the demand reduction, these challenges can result in overpaying customers.
Considering these challenges, it is important to remember that price-based policies, such as real-time pricing, can reallocate electricity demand from peak periods to off-peak periods without these challenges.
Both quantity-based and price-based policies are an essential element of California's energy strategy, as articulated in the state's Energy Action Plan (EAP II), which directs the state's investor owned utilities to subscribe at least 5% of system peak demand into either price-based or quantity-based control programs. Currently, both types of programs are administered by California's 3 regulated investor-owned utilities: PG&E, SCE, and SDG&E. The utilities generally offer their large commercial and industrial (C&I) customers options to participate in both types of programs for reducing peak demand.
For example, PG&E offers a Critical Peak Pricing (CPP) program to its large C&I customers that provides lower energy rates on non-peak days in exchange for higher rates (up to five times the otherwise applicable rate) on peak days. Most of the programs available to residential and smaller commercial customers, however, are quantity-based programs 23 , such as the SmartAC program described in Chapter 3. Chapter 6 analyzes the effectiveness of the quantity-based SmartAC program in reducing peak demand and forecasts how such a program will help reduce peak demand under various climate change scenarios.

23 Both price-based and quantity-based programs for residential and small commercial customers will most likely grow as utilities' proposals for advanced metering infrastructure make their way through the regulatory approval and implementation process.

Chapter 6. Evaluating the Role of Residential Air Conditioning in Reducing Peak Demand

Introduction
This chapter will evaluate the role that direct quantity control policies will have in reducing peak demand in a future characterized by hotter summer temperatures and climate change. Doing this will take four main steps:
1. Model the effect of temperature on electricity demand for air-conditioning.
2. Estimate the magnitude of peak demand reductions from the quantity-based SmartAC™ program.
3. Forecast electricity demand for cooling over a range of possible climate change scenarios.
4. Examine the role for air conditioner thermostat re-set programs in reducing peak demand over a range of possible climate change scenarios.
The first step is carried out by modeling electricity demand from a particular air conditioner as a non-linear function of the outside temperature and the time of day. The second step involves determining how much PG&E reduced peak demand by raising the cooling setpoint on participating customers' air conditioners during the summer of 2007.
This also takes several steps. The first step is to use the non-linear model mentioned above to estimate the electricity demand for a particular air conditioner as a function of the outside temperature and the time of day, based on historical demand and weather data. Next, given specific peak demand-day conditions (daily average temperature, time of day), the amount of electricity the customer would have consumed absent the utility intervention is estimated. Lastly, the amount of electricity the customer's air conditioner actually consumed is subtracted from the estimate of what it would have consumed but for utility intervention. The difference is interpreted as the peak demand reduction.
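The subtraction in the final step can be sketched as follows, with hypothetical numbers; in the actual analysis the predicted values come from the fitted non-linear weather model described above.

```python
# Peak demand reduction per hour: model-predicted baseline demand minus the
# metered demand observed while the thermostat was re-set (hypothetical kW).
def demand_reduction(predicted_kw, actual_kw):
    return [round(p - a, 3) for p, a in zip(predicted_kw, actual_kw)]

predicted = [2.8, 3.0, 3.1]   # estimated demand absent utility intervention
actual = [1.9, 2.0, 2.2]      # metered demand during the curtailment hours
print(demand_reduction(predicted, actual))  # [0.9, 1.0, 0.9]
```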
The third and fourth main steps are to forecast future electricity demand for air conditioning and potential demand reductions across a range of possible climate change scenarios. Forecasting concerns extrapolating the findings from ex post evaluations to a set of conditions that differ from those that have occurred in the past. Most climate scientists agree that future climate conditions and patterns will be out of the range of historical weather. The challenge here is that the functional relationship between energy demand for cooling and temperature may differ under these extreme conditions from what it was under the observed conditions. A related challenge is to address not only the degree of uncertainty associated with the ex post evaluation parameters, which is largely tied to the accuracy and statistical precision of model parameters, but also the uncertainty associated with the climatic predictions that underlie the forecast. Everything is uncertain in the future, and providing point estimates based on specific values for key variables can significantly overstate the true confidence that underlies the estimates. To address this uncertainty, this chapter will report ranges of forecasted energy demand for air conditioning across several possible climatic futures.
To aid the reader, this chapter is divided into two main parts, with several sections and subsections within each part. Part I estimates peak demand reductions from the SmartAC program during the summer of 2007 based on historical weather and electricity consumption data. Part II forecasts future demand for electricity for cooling under different climate change scenarios.
The conclusion of the analysis is that direct load control programs that limit consumers' demand for electricity for air conditioning during critical hours are effective in reducing peak demand. This research also concludes that this type of direct control program has a smaller impact on peak demand at extremely hot daily average temperatures. This means that this type of direct control program will reduce peak demand more effectively if the impact of climate change on daily average temperatures is moderate. Likewise, this research concludes that this type of program may reduce peak demand more effectively in regions of the country with moderate temperatures and low humidity, such as northern California and the Pacific Northwest.

Electricity Consumption Data
This analysis uses two important sources of data. The first is the electricity consumption of each participating air conditioner and the second is the daily average temperature in Stockton, CA.
Data on each participating air conditioner's energy use were collected by data loggers installed on the customers' air conditioners; the data loggers recorded each air conditioner's energy use in either one-minute or fifteen-minute intervals. The data loggers used were the HOBO Energy Logger Pro™ (for one-minute data) and the DENT DATApro™ Data Logger (for fifteen-minute data). Both loggers used a current transformer, installed around a single leg of an air conditioner, to monitor the voltage of the electromagnetic field produced by an alternating current, and were programmed to convert that voltage reading into amps. Four different sized current transformers were used for this study: 20, 70, 100, or 150 amp. The one-minute or fifteen-minute interval data stored in the loggers included the date, time, and average amps during the interval. The DATApro™ loggers, which recorded information in fifteen-minute intervals, took instantaneous amp readings every minute and recorded the average of those readings at the end of every fifteen-minute interval. The HOBO data loggers recorded amp readings every minute. During the data cleaning process, the one-minute data were converted to a format consistent with that recorded by the fifteen-minute data loggers: the average amps over fifteen-minute intervals were calculated. Amps were converted to kW 24 using the measured voltage at each site. If the voltage could not be measured, the average measured voltage (220v) calculated across all air conditioners was substituted. There is little variation in voltage across all air conditioners.
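The cleaning steps just described can be sketched as follows; the readings are hypothetical, and the 220 V figure is the fallback voltage mentioned in the text.

```python
# Average one-minute amp readings into fifteen-minute intervals, then convert
# amps to kW using the site voltage.
def to_fifteen_minute(one_minute_amps):
    """Average consecutive groups of 15 one-minute readings."""
    return [sum(one_minute_amps[i:i + 15]) / 15
            for i in range(0, len(one_minute_amps), 15)]

def amps_to_kw(amps, voltage=220.0):
    return amps * voltage / 1000.0   # watts = voltage x amps

one_minute = [10.0] * 15 + [20.0] * 15      # half an hour of readings
intervals = to_fifteen_minute(one_minute)   # [10.0, 20.0]
print([amps_to_kw(a) for a in intervals])   # [2.2, 4.4]
```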
The data loggers were installed on a sample of 300 air conditioners. The sample was selected from the population of 2,917 participating customers at the start of the summer of 2007. The sample design and sampling was done by Kema, Inc. and will only be explained briefly here. The sample design had 6 strata, based on the type of controlling device, tons of air conditioning per household, and whether multiple central air conditioners were present in the home 25. The purpose of the stratification was to reflect changes in the composition of the population of participating customers as the program grows.

24 Watts = voltage × amps.
25 One ton of cooling is equal to 12,000 BTUs.
It is also important to know how much the sampled customers use their air conditioners. There can be no demand reduction from an air conditioner that is not operating because its electricity consumption is already zero. Air conditioners that are used frequently are more likely to contribute to the peak demand reduction than air conditioners that are rarely turned on. Figure 6-1 shows that 3% of the sample never operates their air conditioner, 59% only use air conditioning on the hottest days of the summer, and 15% use air conditioning every day of the summer. This sample make-up is useful for evaluating the impact of the thermostat re-set because almost all of the sample (96% to 97%) probably uses air conditioning on the very hottest days of the summer, when peak demand is most likely to occur and demand reductions are the most valuable.

Weather Data
The second important data source for this analysis is weather data for the Stockton area.
PG&E provided KEMA, Inc. with observations of dry bulb temperature and relative humidity in half-hour intervals for three weather stations in Stockton and its surrounding areas for the period from January 1 through December 31, 2007.
The relevant temperature variable for this analysis is the daily average temperature. This is because homes and other buildings heat up and cool down more slowly than the outside temperature-thus, the daily average temperature captures the range of temperatures that the home experiences over the course of the day and is a better gauge of how much energy will be needed for cooling than the day's maximum temperature. The daily average temperature is calculated as: DAT= (Maximum Temperature+ Minimum Temperature)/2.
In addition, PG&E provided KEMA, Inc. with historical weather data for Stockton that included observations of daily average temperature for the period from May 1 through October 31 for the years 1983-2006. This allows a comparison between the summer of 2007 and the previous 23 years. These data establish percentile cut-offs to identify the one, five, and ten percent hottest days across the 23 years. The first percentile contains days with daily average temperature above 87.5°F, the fifth percentile contains days with a daily average temperature above 83.1°F, and the tenth percentile contains days with a daily average temperature above 80°F. The first and fifth percentiles are particularly important for the SmartAC program because it is on hot days such as these when electricity demand for cooling skyrockets and peak demand reductions are especially important.
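The daily average temperature formula and the hottest-day cutoffs can be illustrated with a small sketch; the fifty-day series below is hypothetical, not the Stockton record.

```python
# Daily average temperature as defined in the text, and a simple hottest-days
# percentile cutoff of the kind used to flag the 1%, 5%, and 10% hottest days.
def daily_average_temp(t_max, t_min):
    return (t_max + t_min) / 2

def percentile_cutoff(daily_avgs, pct):
    """Temperature reached only on the hottest pct fraction of days."""
    ranked = sorted(daily_avgs, reverse=True)
    k = max(int(len(ranked) * pct) - 1, 0)
    return ranked[k]

print(daily_average_temp(100.0, 75.0))   # 87.5

days = list(range(50, 100))              # 50 hypothetical daily averages
print(percentile_cutoff(days, 0.10))     # 95: the top 10% are the 5 hottest days
```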

Analysis of PG&E SmartAC TM Peak Demand Reductions
This section describes the method used to estimate peak demand reductions under the quantity-based SmartAC program. This process proceeds in three steps. First, we estimate the parameters of the demand relationship at the individual level. Then, we compare estimated peak demand to observed peak demand on "curtailment days" in order to determine the reduction in peak demand resulting from the thermostat re-set. Finally we use the relationship between energy demand and temperature to forecast energy demand for cooling under different climate scenarios. This allows us to project potential peak demand reductions from quantity-based policies.
Now each of these steps will be described in detail.

Specify the demand relationship
The weather-based regression model used in this case estimates air conditioner electricity demand as a function of dry bulb temperature in the form of average daily cooling degree-days. Each of the 24 hourly demand indicator variables is regressed against an hour-specific intercept term and degree-day terms. The resulting parameters, though based on only a single daily temperature measure, provide an hourly estimate of demand as a function of weather. Akaike's Information Criterion (AIC) was used to compare the competing models. The AIC is computed as -2·ln(L) + 2k, where L is the likelihood function and k is the number of free parameters. For example, consider a customer whose cooling degree day base temperature is 70°F and whose breakpoint temperature is 82°F. If the daily average temperature for one observation is 84°F, the cooling term C would be equal to 14 degree days and the breakpoint term V would be equal to 2 degree days. However, consider a different customer for whom the most appropriate cooling degree day base temperature is also 70°F, but the best breakpoint temperature is 86°F. In this case, if the daily average temperature for one observation is 84°F, C is equal to 14 degree days but V is equal to zero. The model was also estimated with no breakpoint temperature parameters. The log likelihood of the model specification with the breakpoints is compared to that with no breakpoints, and the specification with the highest value for the likelihood function is chosen.
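The distinction between the cooling-base and breakpoint degree-day terms can be made concrete with a short sketch (the function name is illustrative; the C and V values follow the worked example in the text):

```python
def degree_day_terms(dat, base, breakpoint):
    """Split a day's cooling load drivers into two degree-day terms:
    C, cooling degree days above the base temperature, and
    V, additional degree days above the breakpoint temperature."""
    c = max(dat - base, 0.0)
    v = max(dat - breakpoint, 0.0)
    return c, v

# Worked example from the text: base 70°F, breakpoint 82°F, DAT 84°F.
c1, v1 = degree_day_terms(84.0, 70.0, 82.0)   # → (14.0, 2.0)
# Same base but breakpoint 86°F: the V term drops to zero.
c2, v2 = degree_day_terms(84.0, 70.0, 86.0)   # → (14.0, 0.0)
```

The V term only "switches on" above the breakpoint, letting the regression fit a steeper demand slope on the hottest days.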

Estimate the parameters of the demand relationship
The optimal model for each participating air conditioner includes a set of estimated parameters. Depending on the optimal model chosen, the model may or may not include "breakpoint" cooling parameters. The most common optimal model, including both the cooling base and breakpoint parameters, is provided in Equation (6.3.3).
Where the hat variables on the right-hand side represent estimated parameters from the regressions and L̂_ihd is the estimated demand for air conditioner i in hour h on day d.
The weather-based regression model estimates a base load as well as the cooling and breakpoint parameters. Where air conditioner demand is the only dependent variable being modeled, the expectation is that the base load is zero unless the air conditioner is a heat pump, or there is some ongoing low-level demand load used by the condenser. In instances where the model produced base load parameters that were, in aggregate, negative, the base load parameters were set to zero.
The following tables report the range of results from estimating the model across all participating air conditioners. Table 6-1 shows the range of estimates of α_ih, the base load parameter, across all hours of the day. Table 6-2 shows the range of estimates of β_Cih, the coefficient on the cooling degree days from the base temperature, and Table 6-3 shows the range of estimates of β_Vih, the coefficient on the cooling degree days from the breakpoint temperature. As expected, the estimated coefficients are larger for the daily peak hours between 3pm and 7pm and lower during the morning and night.

Calculating Peak Demand Reductions on "Curtailment Days"
Having established each customer's baseline demand for air conditioning over a range of daily average temperatures, the next step in the analysis is to calculate the peak demand reduction from each air conditioner when the thermostat temperature setting is raised.
The SmartAC program included 15 curtailment events during the summer of 2007. Of those, two were conducted for the entire program population and the remainder of the events were conducted for the sample only. The two population events occurred under relatively mild conditions, but several of the sample-only events occurred on the hottest days of the summer. Table 6-5 lists the curtailment events that summer, in order of descending observed daily average temperature. The start time is the beginning of the peak period when the thermostats' temperature settings were raised and the end time is when utility intervention ceased.
Accounting for "Steep" Re-set vs. "Gradual" Re-set

As mentioned in Chapter 3, the utility wanted to experiment with two different thermostat re-set strategies: a "steep" ramp and a "gradual" ramp. To test the impact of the alternative re-set strategies, the sample group was divided into groups A and B and the curtailment days alternated between "A" days and "B" days. On "A" days, group A was subject to the steep ramp (raising the temperature setting by 1 degree Fahrenheit per hour for four hours) and group B was subject to the gradual ramp (raising the temperature setting by 1 degree Fahrenheit at the beginning of the first, third, and fifth hours). On "B" days, group B was subject to the steep ramp and group A was subject to the gradual ramp. This experimental design was used because it helps isolate the impact of the ramping strategy from the effect of factors other than temperature. For example, suppose that customers are more likely to override the utility re-set on Wednesdays than on Fridays, but this is not known a priori. If the entire sample were subject to the steep ramp on Wednesday and the gradual ramp on Friday, incorrect conclusions might be reached based on observed behavior. By measuring the difference between the two thermostat re-set strategies, the analysis produces estimates of demand reduction for both strategies as well as overall. Table 6-5 indicates which curtailment days are "A" days and which are "B" days. On the two curtailment days that involved the entire participating population, only the steep ramping strategy was used.
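The two ramping schedules can be written out explicitly. A small sketch, assuming a five-hour curtailment event as implied by the gradual ramp description (the function name is illustrative):

```python
def reset_offsets(strategy, event_hours=5):
    """Thermostat offset (°F above the normal set-point) in each hour
    of a curtailment event, for the two re-set strategies described."""
    offsets = []
    bump = 0
    for hour in range(1, event_hours + 1):
        if strategy == "steep":
            # +1°F per hour for the first four hours, then hold.
            bump = min(hour, 4)
        elif strategy == "gradual":
            # +1°F at the start of the first, third, and fifth hours.
            if hour in (1, 3, 5):
                bump += 1
        else:
            raise ValueError("unknown strategy")
        offsets.append(bump)
    return offsets

steep = reset_offsets("steep")      # [1, 2, 3, 4, 4]
gradual = reset_offsets("gradual")  # [1, 1, 2, 2, 3]
```

The schedules show why the steep ramp front-loads savings while the gradual ramp reserves its largest offset for the final hour.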

Adjusting for Day-Specific Events
The weather-based regression model provides a basic, customer-specific estimate of program savings. However, these estimates need to be taken a step further to address systematic day-specific effects that may bias them. Day-specific effects could include wind, humidity, and cloud-cover that are out of the normal range. There are two ways to adjust the basic model estimates for systematic, day-specific effects. One adjusts each customer's estimated demand based on observed demand prior to the event. The other uses a comparison group, leaving half of the sample uncontrolled during each event.
Without using a comparison group, the simplest adjustment approach shifts or scales the provisional baseline to align it with the actual conditions of the curtailment day.
Effectively, the regression model provides a temperature-specific load shape for the customer and the adjustment shifts the modeled load shape up or down so that it matches the observed customer load during an earlier time interval on the day of the event. This approach maximizes the accuracy of the event savings estimate.
Here, the provisional baseline is adjusted to better reflect the customer's demand during the two hours prior to the curtailment event. The adjustment factor is calculated per Equation (6.3.5), where A_i is the adjustment factor, P is the two-hour period immediately prior to the re-set, and n_h is the number of 15-minute intervals in period P.
The adjusted baseline estimate is calculated per Equation (6.3.6), which yields the adjusted savings estimates. Compared to the comparison group approach, this adjustment method provides a larger sample size of re-set air conditioners, and a correspondingly smaller standard error. However, this method does not reflect systematic effects that occur only during the curtailment period. Additionally, if the re-set affects pre-curtailment usage (for example, pre-cooling in anticipation of a likely curtailment period later in the day), the adjustment can introduce other error into the estimate. The adjustment approach assumes that, on average, the two hours prior to the re-set period are representative of the demand during the re-set period. It also assumes that the adjustment should be applied additively to the intervals of the curtailment period, rather than multiplicatively as a proportion. This is a standard approach for estimating customers' baselines and is generally considered to be simpler and less prone to scaling errors than a scalar approach. As discussed above, the regression model estimated electricity demand on an hourly basis.
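Since Equations (6.3.5) and (6.3.6) are not reproduced here, the following sketch only illustrates the adjustment logic as described: the baseline is shifted additively by the average gap between observed and modeled demand over the eight 15-minute pre-event intervals. All numbers are hypothetical.

```python
def additive_adjustment(observed_pre, modeled_pre):
    """Average gap (kW) between observed and modeled demand over the
    15-minute intervals of the two-hour pre-event period."""
    n = len(observed_pre)
    return sum(o - m for o, m in zip(observed_pre, modeled_pre)) / n

def adjusted_baseline(modeled_event, adjustment):
    """Shift the modeled load shape additively by the pre-event gap."""
    return [m + adjustment for m in modeled_event]

# Hypothetical kW readings for the eight 15-minute pre-event intervals.
obs = [1.2, 1.3, 1.4, 1.4, 1.5, 1.6, 1.6, 1.6]
mod = [1.0, 1.1, 1.2, 1.2, 1.3, 1.4, 1.4, 1.4]
a = additive_adjustment(obs, mod)             # 0.2 kW
baseline = adjusted_baseline([1.5, 1.6], a)   # shifted event baseline
```

The additive shift preserves the shape of the modeled load curve, which is why it is less prone to scaling errors than a multiplicative (scalar) adjustment.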

Illustration of Peak Demand Savings
The meter data, however, were available on a quarter-hour basis. The demand reduction for each quarter-hour interval was calculated analogously to the hourly equations indicated above. For the quarter-hour estimates, the demand in each time increment was estimated using the regression coefficients for the hour that included that increment.
The first half-hour of each curtailment event was not included in the savings calculation because the participating air conditioners are re-set randomly throughout the first half-hour. However, the demand reductions are calculated through the official end of each curtailment event. There is the possibility of "snap-back" following the peak period.
"Snap-back" refers to higher-than-normal electricity consumption as the air conditioner tries to cool the room back down to its regular temperature setting. The impact of snap-back on electricity demand is not included in this analysis.

Results
Demand savings are estimated using the difference between site-level estimated demand and actual consumption. On mild days when there is little cooling there may be effectively no savings. In these cases, the per-air conditioner savings represents the model error relative to observed demand. When this is the case, small negative savings results are possible. Table 6-6 provides demand reduction estimates per participating air conditioner, in order of descending observed daily average temperature. The estimated demand reduction on the hottest curtailment day of the summer averaged 0.69 kW per participating air conditioner. With repetition, in 95% of cases, the average demand reduction per air conditioner will fall between the upper and lower confidence interval bounds. Table 6-7 provides the hourly savings estimates on the three hottest curtailment days of the summer. As expected, the largest demand reduction occurred between 6pm and 7pm on August 31, the last day of an extended heat wave. Savings reported at this time average 1.38 kW. Figure 6-11 shows the increase in savings across the days and the trend of savings across the hours on each of the three curtailment days. August 29th, the day of the system peak, had a daily average temperature of 89°F. The maximum temperature for that day was 103°F, between 4pm and 6pm. Air conditioner use and measured savings generally increase as heat waves extend to multiple days. This is clearly the case for this four-day period. This trend would point to a system peak reduction estimate falling between the estimates of the 28th and the 30th.

Comparing Peak Demand Savings: "Steep" Re-set vs. "Gradual" Re-set

As mentioned above, the analysis also looked at the effectiveness of the two different re-set strategies. Participants were randomly assigned to two groups, and remained in the same group throughout the summer.
The two thermostat control strategies were applied to the two groups alternately in order to control for differences in the subgroups. Table 6-8 provides the results for the two re-set strategies during the three hottest curtailment days. As expected, the steep strategy generates more savings at the beginning of the curtailment period, while the gradual strategy generates more savings towards the end of the curtailment. Figure 6-12 also focuses on the hourly savings for the different re-set strategies. The results suggest that per-unit savings are greatest at moderately hot, rather than extreme, daily average temperatures. This means that this type of direct control program will reduce peak demand more effectively if the impact of climate change on daily average temperatures is moderate. Likewise, Part II concludes that this type of program may reduce peak demand more effectively in regions of the country with moderate temperatures and low humidity, such as northern California and the Pacific Northwest.

Electricity Demand and Peak Demand Reductions under Future Climate Change Scenarios
California's electric power system is confronting several technical and regulatory challenges. Electricity demand is growing at a rate exceeding that at which new supplies are being added to the system. Transmission constraints are becoming increasingly costly.
And, following the electricity crisis of 2000 and 2001, the state's move toward deregulation was suspended and there is no consensus on the appropriate path forward.
These issues are being faced in the context of an effort to increase the share of renewable sources in the electricity system-the renewable portfolio standard. And these challenges are magnified by the need to address the potential consequences of climate change.
Developing strategies to reduce greenhouse gas emissions has, in recent years, emerged as a major public policy issue in California.
California is unique in its emphasis on demand-side policies and programs to meet the state's energy challenges. Thus, California is preparing for the potential effects of climate change on the operation of the state's power system, on both the demand and the supply sides. This is also one of the most difficult areas for research and policy planning, inasmuch as it involves the future interactions among the climatic system, a highly complex engineered electrical system, socio-economic trends and human behavior, all of which are difficult to predict. The objective of the analysis in following sections is to forecast the potential contribution of air conditioner control programs to reducing peak demand in a future characterized by rising temperatures and climate change.
This analysis uses recent projections of regional climate change affecting California to generate simple illustrative estimates of possible peak demand savings from residential air conditioner control programs. The analysis is carried out in several steps. First, climate change projections are used to generate a collection of possible climatic futures for the Stockton area of California. These future scenarios are based on assumptions of different levels of greenhouse gas emissions, and thus, different levels of future warming. Next, the weather-based regression model described earlier is used to forecast customers' baseline electricity demand for air conditioning under the future climate scenarios. The last part of the analysis estimates the magnitude of the peak demand reduction and total cost savings possible under different rates of program participation in San Joaquin County.

California Climate Change Scenarios Project
The scenarios were developed based on three levels of greenhouse gas emissions described in the IPCC Special Report on Emissions Scenarios. The "A2" emissions scenario assumes that greenhouse gas emissions continue to climb throughout the century, reaching almost 30 gigatons per year (Gt/year). In this scenario, CO2 concentrations reach more than triple their pre-industrial levels by the end of the twenty-first century. The "B1" scenario assumes that global CO2 emissions peak at approximately 10 Gt/year in the mid-twenty-first century before dropping below current levels by 2100. This yields a doubling of CO2 concentrations relative to the pre-industrial level by the end of the century, followed by a leveling of the concentrations. "Sensitivity" refers to the models' predictions of the change in mean global surface temperature from a doubling of atmospheric CO2 concentration above the pre-industrial level. The sensitivity of the PCM is approximately 3.2°F, the GFDL's sensitivity is approximately 5.4°F, and HadCM3's sensitivity is approximately 5.9°F. The IPCC has stated that the likely range for this quantity is 2.7 to 8.1°F. The downscaling relied on the Cooperative Observer station data set. This data set, developed at a spatial scale of 1/8° (approximately 7 miles (12 km)), was aggregated to a 2° latitude-longitude spatial resolution (approximately 137 mi x 137 mi (220 km x 220 km)).
Downscaling the GCMs showed potential warming that can be grouped into three ranges: a lower warming range (3 to 5.4°F), a medium warming range (5.5 to 7.9°F), and a higher warming range (8 to 10.4°F). The Scenarios Project did not attach probabilities to any of these outcomes. If greenhouse gas emissions trends follow the higher emissions scenarios (A1FI or A2), California can expect substantial impacts on its economy, ecosystems, and the health of its citizens. However, if global emissions follow the lower emissions trend (B1), temperatures would likely not rise above the lower warming range and many of the most severe impacts could be avoided. Even so, if the actual climate sensitivity to greenhouse gas emissions reaches the level of the more sensitive GCM models used, an even lower emissions path than the B1 scenario may be required to avoid the medium and higher warming ranges. Given appropriate input data, the ClimGen weather generator produces precipitation, daily maximum and minimum temperature, solar radiation, air humidity, and wind speed.

Weather Generator Simulations
For each GCM/emissions scenario combination, ClimGen was used to stochastically generate 300 weather scenarios based on the original California downscaled data. Each scenario includes 365 (366 for leap years) daily projections for minimum and maximum temperature. The daily projections for minimum and maximum temperature were averaged over all of the ClimGen-generated scenarios, yielding one scenario for each GCM/emissions combination. Thus, each of the final six scenarios includes daily minimum and maximum temperature from January 1, 2010 to December 31, 2100.
The figures below illustrate the climate impacts projected by the PCM and GFDL model simulations. Due to differences in the two models' parameterizations, sensitivities, and responses to atmospheric greenhouse gas levels, there are substantial differences between the projections by the two models. As mentioned above, PCM has a relatively low sensitivity of regional temperature to greenhouse gas levels and the GFDL model has a relatively high sensitivity. Northern California temperature warms significantly between 2010 and 2100, with mean temperature increases ranging from 2.5°F in the lower emissions B1 scenario within the less responsive PCM model to 8.14°F in the higher emissions A2 scenario within the more responsive GFDL model (see Table 6-9). While the warming slows under the PCM B1 scenario to 0.3°F per decade during the latter half of the century, under the GFDL A2 scenario it accelerates to 1.35°F per decade.

Measuring cooling degree days helps to put this warming into perspective (see Figure 6-14). Using a daily average temperature of 65°F as the cooling base temperature, the number of annual cooling degree days was calculated for 2010 to 2100. Under the PCM B1 scenario, between 2010 and 2019 there are projected to be about 1,261 cooling degree days in the Stockton area. By the middle of the century this number rises to 1,495, and by 2100 the region will experience about 1,642 cooling degree days. This is roughly equivalent to moving from Stockton to Fresno, CA or from Newark, NJ to Nashville, TN. Under the GFDL A2 scenario, cooling degree days increase from 1,526 in the early part of the century to 1,959 by mid-century, and to 2,798 cooling degree days by 2100.
This warming is roughly equivalent to a move from Raleigh, NC to New Orleans, LA.29 To put this in perspective, peak electricity demand at 68°F in Northern California is roughly 26,000 MW. At 86°F, or 18 cooling degree days, peak electricity demand in Northern California climbs to 37,000 MW (Barnett et al.).
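The cooling degree day tallies above follow directly from the daily average temperature formula given earlier in the chapter. A minimal sketch with the 65°F base (the three-day input series is purely illustrative, not Stockton data):

```python
def annual_cooling_degree_days(daily_max, daily_min, base=65.0):
    """Sum of (DAT - base) over all days whose daily average
    temperature exceeds the base, with DAT = (max + min) / 2."""
    total = 0.0
    for tmax, tmin in zip(daily_max, daily_min):
        dat = (tmax + tmin) / 2.0
        if dat > base:
            total += dat - base
    return total

# Hypothetical three-day illustration:
# DATs are 85°F, 70°F, and 55°F → contributions 20 + 5 + 0.
cdd = annual_cooling_degree_days([95.0, 80.0, 60.0], [75.0, 60.0, 50.0])
# → 25.0 degree days
```

Running the same tally over a full simulated year of ClimGen output would produce figures comparable to the 1,261-2,798 range cited above.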
In both models, beyond the first three decades of the century warming is greater under the higher emission A2 scenario than under the lower emission B1 scenario. The warming during the century is approximately linear in each simulation, although there are substantial year-to-year variations in temperature. Three of the simulations (all except PCM B1) yield more warming in the summer than in the winter. July daily average temperatures rise from 76.3°F to 78.8°F (+2.5°F) and from 77.4°F to 85.6°F (+8.2°F) in the PCM B1 and GFDL A2 scenarios, respectively. January daily average temperatures rise from 48.0°F to 49.4°F (+1.4°F) and from 46.6°F to 53.8°F (+7.2°F) in the PCM B1 and GFDL A2 scenarios, respectively. Recent research indicates that the accentuation of summer warming is common to all continental areas and may be affected by earlier and greater drying of continental land surfaces. If the projected summer amplification of warming occurs, it has important implications for impacts such as the occurrence of heat waves, energy demand, and peak electricity demand.
29. Annual cooling degree days: Newark, NJ: 1,220; Raleigh, NC: 1,521; Nashville, TN: 1,652; New Orleans, LA: 2,773. From the National Climatic Data Center, NOAA. Available from http://www.ncdc.noaa.gov/oa/climate/online/ccd/nrmcdd.txt
In the 30 years from 2010 to 2040, warming, even under the lower emissions scenario B1, ranges from 0.7°F in summer to as great as 2.2°F in the GFDL A2 scenario. Already, this near-term warming is sufficient to substantially increase the number of warm days in summer, effectively eliminating summers that fall into the cool third of the temperature distribution in the GFDL projections. The occurrence of extremely warm daily average temperatures, exceeding the 95th percentile of their historical distributions, tallied for the PCM and GFDL B1 and A2 simulations (see Figure 6-15), increases by 3 to 500 times from 2010 to 2100. Again, the projected increase in extremely hot days has important implications for the impact on peak electricity demand and the role of demand management strategies, since air conditioning use tends to increase on extremely hot days. The graphs illustrate that those extremely hot days will remain relatively rare until the very end of the century as predicted by the PCM model, but will become increasingly frequent under the GFDL model, approaching more than 600 days in the last decade of the century.

Forecasting Electricity Demand for Air Conditioning
One of the purposes of this evaluation is to establish predicted per-air conditioner electricity demand across a range of climate change temperature scenarios for the Stockton area of California, assuming static air conditioning technology. These per-unit projections multiplied by the population provide estimates of future electricity demand for cooling. The analysis presented in this section provides illustrations of forecasted electricity demand and shows how air conditioner re-set programs can reduce peak demand in the future.
Forecasting electricity demand requires a model that relates changes in electricity demand to changes in the exogenous variables that drive demand. Several issues are unique to forecasting electricity demand in a future characterized by climate change. First, the analysis requires developing estimates for values of key drivers that are outside the boundaries of historical experience. For example, in the future California will experience extremely hot days with daily average temperatures that have not occurred in the past. In this range, the relationship between electricity demand and daily average temperature may differ from the relationship that exists within a narrower range of temperatures. Second, ex ante estimates are subject not only to the uncertainty associated with ex post estimates, but also to the additional uncertainty associated with exogenous factors that drive demand, such as uncertainty in weather, customer characteristics, etc. Lastly, customer education and technological innovation might affect the effectiveness of air conditioner re-set programs. With forecasting, it is important to consider not only the degree of uncertainty associated with the ex post estimation parameters, which is largely tied to the accuracy and statistical precision of model parameters, but also the uncertainty associated with the drivers that underlie the forecasts. Everything is uncertain in the future, and providing point estimates based on specific values for key variables can significantly overstate the true confidence that underlies the estimates.

30. The threshold temperature above which most or all air conditioners will be running will vary depending on the typical unit sizing practices for a location. It may be that many air conditioners will still be cycling above 100 degrees in some locations but most will be on in other locations.
Incorporating uncertainty into forecasts of electricity demand is straightforward using Monte Carlo simulation methods or similar approaches. With Monte Carlo analysis, each variable that drives demand can be represented by a probability distribution defined by an explicit set of characteristics. Correlations among exogenous variables can also be accommodated. The researcher draws a value from each input distribution and predicts the demand associated with that set of input values. This process is repeated many times (1,000 draws from each distribution is relatively common) in order to simulate the distribution of impact estimates that reflects the uncertainty associated with the exogenous variables as well as the model parameters.
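The Monte Carlo process described above might be sketched as follows; the demand relation, the input distributions, and their parameters here are hypothetical stand-ins, not the study's estimates:

```python
import random

def monte_carlo_demand(n_draws=1000, seed=0):
    """Draw each uncertain driver from its distribution, predict demand
    for that set of inputs, and repeat to build up a distribution of
    demand estimates (all distribution parameters are hypothetical)."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        # Uncertain drivers: daily average temperature and a demand slope.
        dat = rng.gauss(85.0, 5.0)       # °F
        slope = rng.gauss(0.05, 0.01)    # kW per cooling degree day
        cdd = max(dat - 70.0, 0.0)
        draws.append(slope * cdd)        # predicted kW for this draw
    draws.sort()
    lo, hi = draws[int(0.05 * n_draws)], draws[int(0.95 * n_draws)]
    return sum(draws) / n_draws, (lo, hi)  # mean and a 90% interval

mean_kw, (lo_kw, hi_kw) = monte_carlo_demand()
```

Correlated inputs would require joint draws (for example from a multivariate normal), which is exactly the refinement the Krinsky-Robb procedure below supplies for the model parameters.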

Steps for defining uncertainty of forecast estimates
Since the parameters of the demand relationship are estimated, they are random variables, and the resulting estimate of electricity demand is also a random variable. Researchers are often interested in making inferences or constructing confidence intervals, and wish to incorporate the uncertainty relating to the parameters into those confidence intervals. However, demand is a non-linear function of the parameters and, depending on the relationship specified, may have a different or unknown distribution. In this case, the best approach is to simulate the confidence intervals. This can be done using the Krinsky-Robb procedure. Krinsky and Robb caution that researchers should be wary of using linear approximations to get estimates of elasticities that are non-linear functions of random variables. Instead, they suggest a simulation approach to generate confidence intervals when the parameters are treated as random variables (Haab & McConnell, 2003; Jeanty, 2007; Krinsky & Robb, 1986b; Krinsky & Robb, 1990).
Because parameters are correlated, one cannot generate confidence intervals simply using independent random draws of each of the random parameters. Rather, the simulation must use the estimated variance-covariance matrix of the regression coefficients to accurately estimate confidence intervals for the predicted value. The Krinsky-Robb procedure involves the following steps: 1. Obtain the regression output, recording the parameters and their respective standard errors; 2. Compute the Cholesky decomposition of the estimated variance-covariance matrix; 3. Draw N parameter vectors from the estimated multivariate normal distribution of the parameters; 4. Calculate electricity demand at each draw; and 5. Rank the resulting draws to construct confidence intervals around the mean. The following section describes the results of forecasting electricity demand under two global climate change models and two emissions scenarios, following this method.

Obtaining Regression Output from a Random Effects Model
Typically, electricity load analysis involves both a time-series and a cross-sectional dimension. This type of data is referred to by a variety of names, including panel data. The random effects model has a major drawback, however: it assumes that the random error associated with each cross-section unit is uncorrelated with the other regressors, something that is not likely to be the case. The result is bias in the coefficient estimates from the random effects model. This may explain why the slope estimates from the fixed and random effects models are often so different. As with any dataset that includes a large number of time-series observations, even trivial differences in results can be statistically significant when in fact the difference between the two models is very minimal.
As a result, the magnitude of the difference in results may be more important than statistical significance, i.e., is the magnitude of the difference meaningful? The random effects model estimated here can be viewed as a two-way design with covariates. If electricity use at a particular time was unusually high because it was unusually hot, one would want a model that explains electricity use by the fact that it was unusually hot at that time. Excluding the time-specific dummy variables allows the temperature variable to capture more of the explanatory power. Because the temperature variables interact with the hour of the day, they still capture the variation in electricity use across a 24-hour span.
The model specification depends on both the cross-section and the time-series to which each observation belongs; this is called a model with two-way effects. The specification for the two-way model is given in Equation (6.6.1), where ε_it is a classical error term with zero mean and a homoscedastic covariance matrix. The Krinsky-Robb procedure was executed here by drawing 500 observations on the parameter vector β from the estimated multivariate normal distribution of the parameters.32
At each draw, electricity demand was calculated, resulting in 500 draws from the empirical distribution. The resulting draws can be used to calculate the sample average electricity demand. By ranking the draws in ascending order, a 90% confidence interval around the mean electricity demand is found by dropping the top and bottom 5% of observations.
The typical confidence interval constructed this way is not symmetric, reaffirming the absence of normality for energy demand. The first step in carrying out the Krinsky-Robb procedure is getting the N parameter vector draws from the multivariate normal distribution.
Let V(β̂) represent the K x K estimated covariance matrix for the estimated parameter vector β̂ of column dimension K. Let x_k be a K-dimensional column vector of independent draws from a standard normal density function. Finally, let C be the K x K lower diagonal matrix square root of V(β̂) such that CC′ = V(β̂). The matrix C is sometimes referred to as the Cholesky decomposition matrix. A single K-vector draw from the estimated asymptotic distribution of the parameters is then β_d = β̂ + C·x_k.
32. N = 500. In most cases, N is at least equal to 1,000. Due to computational constraints, however, N was limited to 500.
Calculating the measure of electricity demand at each draw produces N observations from the asymptotic distribution of the demand function.
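Putting the steps together, the Krinsky-Robb simulation might be sketched as follows; the two-parameter demand relation and covariance matrix here are hypothetical stand-ins, not the estimates from Equation (6.6.1):

```python
import numpy as np

def krinsky_robb(beta_hat, vcov, demand_fn, n_draws=500, seed=42):
    """Draw parameter vectors from N(beta_hat, vcov) via the Cholesky
    factor of the covariance matrix, evaluate demand at each draw, and
    read a 90% interval off the ranked draws."""
    rng = np.random.default_rng(seed)
    C = np.linalg.cholesky(vcov)                # CC' = V(beta_hat)
    demands = []
    for _ in range(n_draws):
        x = rng.standard_normal(len(beta_hat))  # independent N(0,1) draws
        beta_d = beta_hat + C @ x               # one correlated draw
        demands.append(demand_fn(beta_d))
    demands = np.sort(demands)
    lo = demands[int(0.05 * n_draws)]
    hi = demands[int(0.95 * n_draws)]
    return demands.mean(), (lo, hi)

# Hypothetical two-parameter relation: demand = intercept + slope * CDD.
beta = np.array([0.1, 0.05])
V = np.array([[0.01, 0.001], [0.001, 0.0004]])
mean_kw, (lo_kw, hi_kw) = krinsky_robb(beta, V, lambda b: b[0] + b[1] * 14.0)
```

Each draw β_d = β̂ + C·x reproduces the correlation structure of the estimated parameters, which independent draws of each coefficient would not.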

Range of Per-Unit Electricity Demand Over Time
The Krinsky-Robb procedure was used to forecast average electricity demand per air conditioner across four future climate scenarios. As described earlier, the possible climate futures were generated from the GFDL and PCM global climate models under high and low greenhouse gas emissions rates. The average demand and the lower and upper bounds of a 90% confidence interval were forecasted using the vector of parameter estimates and variance-covariance matrix from Equation (6.6.1). The analysis was limited to forecasting hourly electricity demand and the associated confidence intervals for the hours between 12pm and 7pm on each day of July and August in the first year of each decade for the period between 2010 and 2099. The forecasting was limited to this period because the afternoon and early evening hours are the most likely to be "peak hours," and July and August are typically the hottest months of the year in California. The correlation between hours is accounted for by drawing from the complete variance-covariance matrix.
The projected demand estimates in the fixed scenario are derived from the unit-level kW models. For any hour and daily average temperature, the estimated demand is determined by Equation (6.6.2). If the daily average temperature is above the cooling degree day base, then electricity demand will be greater than zero. Projected demand was estimated for each hour of the day across a range of daily average temperatures from 67°F to 95°F for all air conditioners in the sample.

This section presents the forecasts of base load and peak demand at the per-air conditioner level. That is, the results are expressed as average per-unit base load demand and average per-unit peak demand. While the Krinsky-Robb procedure was used to forecast electricity demand as a function of time and temperature across the afternoon and evening hours in July and August between 12pm and 7pm, the results presented here are all for demand during the hour ending at 6pm. Table 6-12 shows the forecasted average per-unit base load and peak demand across the four climate and emissions scenarios. It is important to keep in mind that peak demand as expressed in the tables below is electricity demand for cooling at the probable time of system peak; this figure does not include electricity demand for all other customer loads that might be operating at the time of system peak. Recall from Chapter 2 that air conditioning accounts for 15% of residential end-use loads (see Figure 2-15, page 37) and for between 40% and 50% of the total peak load (Yoshimura, 2009). Table 6-12 shows the range of average per-unit baseload demand and peak demand during the beginning, middle, and later parts of the 21st century. As expected, the range of both baseload demand and peak demand increases over time, and demand is higher in the high greenhouse gas emissions scenario than in the low emissions scenario.
Table 6-13 below lists the average per-unit base load demand and peak demand every 10 years beginning with 2010. Peak demand is assumed to occur on the day with the highest daily average temperature of the summer. Base load demand is forecasted to increase much more than peak demand, up to five times more in some cases. The most likely explanation is the increasing frequency of very hot days (daily average temperature greater than or equal to 78°F) and the decreasing frequency of mild summer days, which pushes up average base load demand. Table 6-14 suggests that, in this region of California, electricity demand would actually become less "peaky" over time; if average per-unit base load demand increases more than average peak demand, the gap between base load and peak load should shrink. This means that the grid would need less total peak load generating capacity, and could also mean that peak load capacity would sit idle for a smaller fraction of the time.
This result is also interesting because it is so starkly different from Franco and Sanstad's (2008) findings. The unit-level models work by identifying, on average, the outdoor temperature above which the air conditioner starts to be used. Above that temperature, the model indicates the average load used for cooling at each temperature level. The demand reduction forecasts use a change in outdoor temperature as a proxy for an increase in the indoor thermostat set-point. This means that a five degree increase in the thermostat set-point is analogous to cooling the house at an outdoor temperature that is lower by five degrees. The analysis assumed that participating customers' AC thermostats would be turned up by 1° per hour for 5 hours, ending at 6pm. Hour-specific load differentials were then estimated for various temperature differentials. Table 6-15 and Figure 6-23 (a) through (d) report the average per-unit peak demand during the hour between 5pm and 6pm, the average per-unit peak demand with a 5° thermostat re-set, and the average per-unit percentage peak demand reduction. The average peak demand reduction ranges from 18% to 50% of per-unit peak demand, which is consistent with the estimated load reductions from 2007.
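The re-set proxy logic can be sketched as follows: each hour of the ramp is evaluated at an outdoor temperature lowered by the cumulative set-point increase, and the load differential is the estimated reduction. The linear demand function and its parameters are illustrative stand-ins for the fitted unit-level model.

```python
def reset_load_reduction(hour_temps_f, demand_fn, ramp_deg_per_hour=1.0):
    """Estimate hourly load reductions from a thermostat re-set, using a
    downward shift in outdoor temperature as a proxy for raising the
    set-point (the chapter's approach). The offset ramps up 1° per hour,
    so the final hour of a 5-hour ramp is evaluated 5° cooler.

    hour_temps_f: temperatures for the consecutive hours of the ramp.
    demand_fn: maps temperature (°F) to per-unit load (kW)."""
    reductions = []
    for i, t in enumerate(hour_temps_f, start=1):
        offset = ramp_deg_per_hour * i
        reductions.append(demand_fn(t) - demand_fn(t - offset))
    return reductions

# Illustrative linear demand above a 65°F base (not the estimated model)
demand = lambda t: max(0.0, 0.12 * (t - 65.0))
cuts = reset_load_reduction([88, 89, 90, 91, 92], demand)  # ramp ending at 6pm
```

By the final hour, the full 5° offset applies, so the last element of `cuts` is the peak-hour reduction.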

Peak Demand and Peak Demand Reductions at the Population Level
The next step in assessing potential savings from peak load reduction is to expand the per-unit level demand reduction projections to the population level. Doing so will provide regulators with an estimate for the magnitude of potential peak demand savings under various climatic conditions.

Total Population
The population under consideration consists of the total number of households in San Joaquin County, which is the county where Stockton is located.

Map 6-2. San Joaquin County, California
The forecasts show reductions on the order of 7% to 15% of peak demand. This is the expected result. As mentioned earlier, the relationship between the change in energy demand for cooling and a change in temperature is highly non-linear at the high end of the temperature range.
This means that a change in temperature from, say, 100 to 105 degrees may produce little change in air conditioning energy use: if most air conditioners are already running flat out at 100 degrees, energy consumption cannot increase as the temperature climbs. For the same reasons, air conditioner re-set programs might not be very effective at extremely hot temperatures, regardless of the magnitude of the incentive provided, since thermostat adjustments at these extremes will have little impact on energy use. Since the temperatures forecasted by the GFDL model are significantly greater than those forecasted by the PCM model, raising the temperature setting on participating air conditioners' thermostats by 5° is expected to have less impact in the GFDL model scenarios than in the PCM scenarios. Thus, one conclusion to draw from this analysis is that direct control air conditioning programs will be more effective in some locations than in others. For example, this type of program might not be particularly effective in extremely hot places like Phoenix, AZ (or regions forecasted to become extremely hot), but might be more effective in moderate climates, such as northern California.
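A toy saturating load curve makes this saturation effect concrete: once the unit hits its rated capacity, shifting the effective temperature down by a re-set changes nothing. The base, slope, and rated capacity below are assumptions for illustration only.

```python
def ac_load_kw(temp_f, base_f=65.0, slope=0.12, capacity_kw=3.0):
    """Cooling load that rises with temperature but saturates at the
    unit's rated capacity (all parameter values illustrative)."""
    return min(capacity_kw, max(0.0, slope * (temp_f - base_f)))

def reset_benefit(temp_f, reset_deg=5.0):
    """Load reduction from a re-set, modeled as evaluating the load
    curve at an outdoor temperature lower by the set-point increase."""
    return ac_load_kw(temp_f) - ac_load_kw(temp_f - reset_deg)

mild_benefit = reset_benefit(85)      # on the linear part of the curve
extreme_benefit = reset_benefit(105)  # unit already running flat out
```

On the linear segment the re-set delivers a full 5-degree reduction in load; past the saturation point the benefit falls to zero, which is why the hotter GFDL scenarios show smaller program impacts.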
Tables 6-17 and 6-18 show a similar pattern. Table 6-17 shows the results of the analysis of the second scenario considered. In this scenario, central air conditioning saturation increases by 1% annually through 2050, but this scenario also assumes that 50% of households in San Joaquin County with central AC will have smart thermostats and participate in the re-set program. In this case, peak demand reductions between 13% and 19% are expected in scenarios based on the GFDL model and peak demand reductions between 25% and 35% are expected in scenarios based on the PCM model. The cost of a simple cycle gas turbine is used as the basis of the cost for peaking capacity. Spees (2008) estimates that, with the recent increases in capacity cost, the price of a simple cycle turbine is $728/kW overnight, or $81/kW-y annually. The capacity needed to reliably serve the system is greater than the end-use load delivered because of system losses and the necessary reserve margin. The analysis here adopts the 8% transmission losses that ISO-NE assumes in its forecasting processes; this assumption for line losses is typical throughout the U.S. A required reserve margin of 15%, which was the requirement for the ISO-NE 2008/2009 capacity market auction, is also used.
Based on these values of T&D losses and required reserve margin, $81/kW-y in peak capacity cost translates into a value of $89/kW-y for peak load reductions if T&D losses are considered but the margin for reliability is not. If both reliability and T&D losses are considered, then the value is $94/kW-y (Spees, 2008). Highlighting the distinctions among the three numbers emphasizes that a kW of reduction in peak load is worth significantly more than the cost of a kW of additional new capacity.
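One common way to gross a capacity cost up for losses and reserves is multiplicative, as sketched below. This convention approximately reproduces the losses-only figure cited above, but it is an assumption: Spees (2008) may apply the adjustments with different conventions, so the combined result here should not be read as the source's exact calculation.

```python
def peak_reduction_value(capacity_cost_kw_yr, line_losses=0.08, reserve_margin=0.15):
    """Value of a 1 kW peak-load reduction, grossed up for the extra
    capacity that would otherwise cover T&D losses and the reserve
    margin. Multiplicative gross-up is one common convention, not
    necessarily the calculation behind the Spees (2008) figures."""
    return capacity_cost_kw_yr * (1 + reserve_margin) / (1 - line_losses)

losses_only = 81 / (1 - 0.08)           # about $88/kW-y, close to the $89 cited
with_reserve = peak_reduction_value(81)  # about $101/kW-y under this convention
```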
Since even 15% smart thermostat deployment is an ambitious goal for the near term, this cost savings analysis focuses on the first scenario. Table 6-19 shows the potential generation capacity cost savings calculated for the forecasted peak demand reductions.
Cost savings were calculated as the total capacity savings (including both reliability and T&D losses) multiplied by $94/kW-y. If similar savings could be achieved throughout California, the state could reduce its expenditures on electricity for residential end-uses by 1% to 2%. A reduction of this size is also roughly equivalent to the amount of peaking power produced by two to six 500-MW power plants.

Costs of Demand Reductions
The EIA-861 database contains historic data on utility demand-side management programs (Spees, 2008). Several hundred utilities reported costs related to demand management and energy efficiency, as well as total coincident peak load saved for residential, commercial, and industrial customers in 2006. Table 6-20 shows summary numbers for the residential sector. Clearly, achieving peak demand reductions could be a much cheaper means of satisfying peak demand than supplying more capacity and more electric energy, at least until reductions are scaled up to some percentage of load where the marginal costs of achieving further reductions are higher and the marginal benefits much lower. Peak demand reductions are currently being achieved at $26/kW-y, or just over one fourth of the $94/kW-y it costs to build new capacity. Table 6-21 shows the net savings that could be achieved if 15% of eligible households participated in a thermostat re-set program in San Joaquin County.
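The gap between the two per-kW figures implies a simple back-of-envelope net saving per kW of reduction. The sketch below ignores metering-infrastructure and other program costs, so it will not match Table 6-21's net figures; it only illustrates the spread between achievement cost and capacity value.

```python
def annual_net_savings(peak_reduction_kw,
                       capacity_value=94.0,   # $/kW-y, incl. losses and reserve (Spees, 2008)
                       program_cost=26.0):    # $/kW-y, EIA-861 residential average
    """Back-of-envelope net annual savings from a peak demand reduction:
    value of avoided peaking capacity minus the cost of achieving the
    reduction. Metering and other fixed program costs are omitted."""
    return peak_reduction_kw * (capacity_value - program_cost)

# 13 MW and 131 MW bracket the chapter's forecasted peak reductions
low = annual_net_savings(13_000)    # $884,000 per year
high = annual_net_savings(131_000)  # about $8.9 million per year
```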

Also noteworthy is that in most cases peak demand reductions are being achieved at a lower cost with larger customers, as expected. Peak reductions are achieved most inexpensively with industrial customers, followed by commercial and finally residential customers. With large industrial customers, the administrators of a demand-management program can address a large quantity of energy use all under one roof, rather than incurring the costs of interacting with many small residential customers to affect the same total load.

6.9 Conclusions
This chapter analyzed the direct load control SmartAC program to determine its past and future impact on reducing peak demand. The conclusion of the analysis is that direct load control programs that limit consumers' demand for electricity for air conditioning during critical hours are effective in reducing peak demand. This chapter also concludes that this type of direct control program has a smaller impact on peak demand at extremely hot daily average temperatures. This means that this type of direct control program will reduce peak demand more effectively if the impact of climate change on daily average temperatures is moderate. Likewise, this chapter concludes that this type of program may reduce peak demand more effectively in regions of the country with moderate temperatures and low humidity, such as northern California and the Pacific Northwest.
The results of the forecasting analysis show that if 15% of households with central air conditioning in San Joaquin County participate in a direct control air conditioning program, a 5°F thermostat re-set at the time of peak demand could reduce peak demand by 13 to 131 MW, depending on the climatic scenario considered. This translates into savings of approximately $1.3 million to $9.6 million in generation capacity costs. This would avoid between 8.3 and 83 tons of carbon dioxide emissions, between 4.2 and 42 pounds of sulfur dioxide emissions, and between 6.9 and 70 pounds of nitrogen oxide emissions. To put this estimate in perspective, if similar savings could be achieved throughout the state of California, it could reduce its expenditures on electricity for residential end-uses by between 1 and 2% and eliminate the need for between 2 and 6 peaking power plants by 2018. After considering the cost of installing the necessary advanced metering infrastructure, achievable net savings are between approximately $800,000 and $3.3 million per year.

Chapter 7 Recommendations and Conclusion

7.1 Peak Demand Policy Recommendations
Direct control programs such as SmartAC are generally unpopular among economists, who have at least a vague preference for reducing peak demand through real-time or peak load pricing to shift demand to off-peak periods (Baumol & Oates, 1988). However, there is room in a well-designed policy for direct controls. The reason is that peak demand problems do not develop smoothly and gradually. Instead, peak demand problems are characterized by infrequent but serious crises whose timing is largely unpredictable. Such contingencies may require rapid temporary changes in the rules of the control mechanism, and it is here that pricing measures appear to be subject to some severe practical limitations (Baumol & Oates, 1988). I recommend that the ideal peak demand policy package contain a mixture of tools, with real-time pricing, direct controls, and even moral suasion each used under certain conditions to reduce peak demand and maintain system reliability.
This section will first lay out the advantages of real-time pricing and show why real-time pricing is an important piece of any peak demand policy. Then, I will explain why the peak demand problem is unlike many other natural resource problems and why direct controls are a necessary complement to real-time pricing for maintaining system reliability.

Advantages of Real-time Pricing
Regulators at state public utility commissions should consider real-time pricing tariffs as an essential component of peak demand management programs, and should be driven by concerns about meeting peak demand at the lowest cost, enhancing system reliability, and creating equity among users.

Maximizing Consumer Surplus
Quantity-based programs, such as the SmartAC program analyzed here, do not provide customers with the ability to take into consideration the value that they place on particular end-uses when limiting their consumption. For example, the SmartAC program targets only one end-use because it is easy to control, not because of its low value to the customer. A customer might prefer, for example, to postpone using the clothes dryer or dishwasher during peak hours instead of reducing his air conditioning load. Real-time pricing allows customers to create their own "loading order" of end-uses with which to respond.

Creating Equity Among Users
Real-time prices reflect the long run cost of avoided generation, transmission, and distribution capacity, and the short run cost of energy. Under current conditions, customers with a flat or counter-cyclical load profile subsidize the high coincident peak loads of others. When faced with real-time prices, customers will either choose to shift electricity demand to low-cost hours or pay the full price of their load profile, rather than having it subsidized by the rest of the system. Further, for customers who place a high value on price stability, retailers could provide any combination of hedges or flat rates; these rates should charge a premium above the RTP rate reflecting the higher cost of service.

Avoiding Issues with Estimated Customer Baselines
Real-time pricing also circumvents the challenges facing the use of estimated customer baselines for compensating customers for demand reductions in quantity-based programs. Instead of compensating customers for energy not used, customers simply pay for the amount of energy they consume at prices adjusted to reflect the real-time marginal supply costs. This avoids gaming, moral hazard, and adverse selection issues from customers who try to benefit from artificially inflating their baselines. Real-time pricing also avoids the double-payment problem that results in paying excessive demand reduction incentives to customers and causes customers to forego consumption whose value exceeds the cost of producing energy.

Advantages of Direct Controls
Those who advocate for the sole use of real-time pricing for reducing peak demand omit an important consideration. Peak electricity demand falls into an important class of serious resource problems: the occasional crises that call for the imposition of emergency measures. Typically, these crises cannot be predicted much in advance or with any degree of certainty; we can, however, be certain that at some unforeseen time they will recur. An energy policy incapable of dealing with such contingencies is very limited.

203
Consider the following analogy: the polluting effects of a given discharge of effluent into a river will depend upon the condition of the waterway at that time, such as whether it has been replenished by rainfall or depleted by drought. The amount of water and the speed of its flow are critical determinants of the river's assimilative capacity. Similarly, problems within one utility service territory during peak hours can quickly cascade and escalate to affect millions of customers, as happened across the northeastern United States and Canada in the widespread blackout of August 2003.
The point of this analogy is that electricity demand levels that are acceptable and rather harmless under usual conditions can, under other circumstances, become catastrophic.
Moreover, these conditions depend on factors that are largely outside the control of system planners and often are not predictable in advance. Temperature, the largest driver of peak demand, for example, is only imperfectly foreseeable.
Despite its virtues, real-time pricing suffers from one serious drawback as a means for regulating peak demand: it cannot guarantee a sufficient demand reduction to avoid system failure. This is because the price elasticity of electricity demand is largely unknown, particularly at extreme temperatures. A one-time high hourly price may not be able to produce the necessary reduction in demand quickly (or predictably) enough to avoid system failure. This suggests one major attraction of direct controls: if the control is effective and can be deployed quickly enough, regulators can be assured of maintaining system integrity.
This is why a combination of real-time pricing and direct controls is the ideal peak demand policy: under normal conditions, real-time pricing improves the economic efficiency of the grid and maximizes consumer surplus, but during periods of severe stress direct controls give regulators the flexibility to achieve the necessary demand reduction and avoid catastrophic system failure. While the exact contribution of direct control to alleviate crisis conditions will vary from case to case, clearly peak demand reductions during such times can play significant roles in avoiding system failure. A strong lesson from California's experience, in fact, is that a variety of policies aimed at getting all types of customers to reduce their electricity demand can have a significant impact on maintaining system reliability. During the California energy crisis of 2001, the state averaged a 10% reduction in peak demand during the summer months (with a record reduction of 14% in June). No rolling blackouts occurred in 2001, despite rather dire forecasts that had been made prior to the onset of the spring and summer peak demand season.

Recommendation
Based on this research, my principal recommendation for policymakers, regulators, and utilities interested in furthering effective demand management policies is that it would be beneficial to thoroughly test policy and program design concepts that incorporate both direct peak demand control and real-time pricing objectives as an alternative to only offering distinct direct control or pricing programs. Such a policy might have three levels, as illustrated by Figure 7-1. For example, a direct control air conditioning program designed to reduce cooling demand during peak periods might also promote real-time pricing through enabling smart meters. Additionally, having more information on energy prices and consumption might lead consumers to upgrade to more efficient appliances.
Important considerations would be: 1) should real-time pricing and direct control programs be offered to all customer classes; 2) what level of participation is needed to meet peak demand reduction targets; and 3) how quickly should advanced metering infrastructure be phased in, and who should pay for it? While questions (1) and (3) are largely outside the scope of this research, this research does provide some direction with regard to question (2). This research suggests that enrolling 15% of all households with central air conditioning in a direct control program will achieve 4% to 7% reductions in peak demand in the near term and 6% to 15% reductions in peak demand by the middle of the century. If concentrations of greenhouse gases are high and temperatures rise rapidly, air conditioner control programs are likely to become less effective in reducing peak demand. This research also shows that direct control programs are likely to be more effective in areas that are moderately hot and less effective in regions that are extremely hot (e.g., a direct control air conditioning program will most likely be more effective in northern California than in Phoenix, AZ).

7.2 Recommendations for Areas of Further Research
One important area for future research is the relationship between direct control, pricing, and energy efficiency. All three of these measures affect customer demand for energy, but how exactly these primary objectives relate to each other is an unanswered question.
There is almost no published research on how direct control and pricing programs affect energy use during off-peak hours, overall building energy use, and energy efficiency. There is some mostly anecdotal evidence suggesting that certain types of technologies that enable direct control and RTP during peak demand periods can also encourage efficiency, but much more work is needed to develop an understanding of the relationship among direct control, RTP, and efficiency investments.
Yet understanding this relationship is vitally important because there are many potential synergies, as well as potential conflicts, between these types of programs. Potential synergies include:
• Energy efficiency can reduce demand permanently, at peak as well as off-peak times;
• Focusing on peak-demand reductions can help identify inefficient energy uses that could be improved at other times, resulting in broader energy and demand savings;
• Technologies that enable peak demand reductions can also be used to enable RTP;
• Customers who participate in demand reduction programs may be good candidates for participating in RTP and efficiency programs (or vice versa).
Perhaps the most important potential synergy is simply the fact that participating in a demand reduction program, particularly one that features advanced metering equipment, helps a customer better understand their energy use and associated costs.
Some proponents of combining peak demand reduction, RTP, and efficiency efforts into integrated policies cite improved energy efficiency as one of the benefits of peak demand reductions: "[Demand reduction programs] can also serve as a stimulus and platform for participating customers to undertake expanded and enhanced energy efficiency programs. By gaining access to information about their usage that was previously unavailable to them, and by gaining the means to act upon it, users can undertake energy management and efficiency practices that can provide embedded, more permanent benefits to the system as a whole" (York & Kushler, 2005). On the other hand, conflicts may arise between direct control programs and energy efficiency programs in terms of their funding. There may also be difficulties in trying to blend funds from different sources for the sake of pursuing combined energy efficiency and peak demand reduction objectives.
Answering some of the following questions is the key to understanding the relationship between direct control, RTP, and efficiency programs. In turn, such understanding is important in guiding policy and funding decisions.
• What effects, if any, do peak demand reduction programs have on overall customer energy use and energy efficiency?
• Are direct control and energy efficiency objectives necessarily complementary? Or can these programs have conflicting elements?
• Are there programs that have deliberately targeted both peak demand and energy efficiency? What has their experience shown?
• Does direct control program participation lead to broader energy savings? If yes, does it lead to actual energy efficiency measures or just energy savings from the use of the controls to shift demand in off-peak hours?
• If high peak prices encourage peak demand reductions, do the corresponding low off-peak prices result in less motivation to save energy during off-peak periods?
• Does providing greater information to customers on their energy use and market conditions result in more energy efficient behavior?
• Can direct control and energy efficiency programs sometimes work in opposition to their respective objectives? For example, does providing an incentive based on the amount of peak load reduction delivered from a facility's energy demand baseline create an indirect incentive to forego energy efficiency actions that would reduce that baseline, and thereby reduce the compensation for demand reductions that could be earned?

7.3 Conclusion
This research has focused on the relative advantages and disadvantages of using price-based and quantity-based controls for electricity markets. It also presents a detailed analysis of one specific approach to quantity-based controls: the SmartAC program implemented in Stockton, California. Finally, the research forecasts electricity demand under various climate scenarios, and estimates potential cost savings that could result from the SmartAC program over the next 50 years in each scenario.
Perhaps the most crucial feature that distinguishes electricity from other commodities is the need to balance supply and demand on virtually a minute-to-minute basis. Because electricity cannot be cost-effectively stored, supply must be kept constantly equal to demand. If more electricity is demanded than generated, brownouts or blackouts follow.
If more electricity is supplied than used, the heat from the extra energy can damage transmission and distribution lines.
A second critical feature of electricity markets is the large variability in electricity consumption over time, known as the peak load demand problem. In most areas of the country, electricity demand is greatest during summer heat waves, when electricity consumption can be almost double consumption during base load periods.
The traditional approach to dealing with these two issues is to invest in a large stock of excess capital that is rarely used, thereby greatly increasing production costs. Because this approach has proved so expensive, there has been a focus on identifying alternative approaches for dealing with these two key peak load demand problems.
The two primary approaches to dealing with peak load demand are price based approaches, such as real time pricing, and quantity based approaches, whereby the utility directly controls at least some elements of electricity used by consumers. Well-designed policies for reducing peak demand might include both price and quantity controls.
In theory, sufficiently high peak prices occurring during periods of peak demand and/or low supply can cause the quantity of electricity demanded to decline until demand is in balance with system capacity, potentially reducing the total amount of generation capacity needed to meet demand and helping meet electricity demand at the lowest cost. However, consumers need to be well informed about real-time prices for the pricing strategy to work as well as theory suggests. While this might be an appropriate assumption for large industrial and commercial users who have potentially large economic incentives, there is not yet enough research on whether households will fully understand and respond to real-time prices.
Thus, while real-time pricing can be an effective tool for addressing the peak load problems, pricing approaches are not well suited to ensure system reliability. Direct quantity controls are better suited for avoiding catastrophic failure that results when demand exceeds supply capacity.
There are additional advantages to real-time pricing. Unlike direct quantity controls, real-time pricing gives consumers the ability to create their own "loading order" based on the value that they place on different end-uses for electricity. For example, when prices are high, a given customer might choose to unplug his computer and turn off the dishwasher before turning up the temperature setting on his air conditioning system. Real-time prices also create equity among users. Under fixed retail rates, customers with a flat or counter-cyclical load profile subsidize customers with high coincident peak loads. When faced with real-time prices, customers must either shift electricity consumption to low-cost hours or pay the full price of their load profile, rather than having it subsidized by the rest of the system. Thus, pricing approaches have the advantage of allowing electricity consumers to choose how to reduce electricity demand, thereby potentially maximizing consumer surplus.
But consumer response to real-time prices is not reliable enough to protect against catastrophic system failure. The reason is the distinction between higher (but well-behaved) increases in marginal supply costs and outright system failure. Peak demand problems do not develop smoothly and gradually. Instead, peak demand problems are characterized by infrequent but serious crises whose timing is largely unpredictable. It is the potential for system failure that requires rapid temporary changes, and it is here that pricing measures appear to be subject to some severe practical limitations. Real-time pricing cannot guarantee a sufficient demand reduction to avoid system failure. The price elasticity of electricity demand is largely unknown, particularly at extreme temperatures. A one-time high hourly price may not be able to produce the necessary reduction in demand quickly or predictably enough to avoid catastrophe. This suggests one major advantage of direct quantity controls: if the control is effective and can be deployed quickly, regulators can be assured of avoiding system catastrophe. For these reasons, the ideal peak demand policy might contain a mixture of tools, with real-time pricing and direct load controls to reduce peak demand and maintain system reliability under different climate change scenarios.
There are important drawbacks to the use of direct quantity controls, however, including gamesmanship and the difficulty of estimating customer baselines. In cases where incentive payments are made according to the magnitude of the demand reduction, the program is subject to gaming, moral hazard, and adverse selection issues from customers who try to benefit from artificially inflating their baselines. Real-time pricing circumvents these challenges because customers simply pay for the amount of energy they consume at prices adjusted to reflect the real-time marginal supply costs. Real-time pricing also avoids the double-payment problem that results in paying excessive demand reduction incentives to customers and causes customers to forego consumption whose value exceeds the cost of producing energy.
This research analyzed the direct load control SmartAC program to determine its past and future impact on reducing peak demand. The conclusion of the analysis is that direct load control programs that limit consumers' demand for electricity for air conditioning during critical hours are effective in reducing peak demand. This research also concludes that this type of direct control program has a smaller impact on peak demand at extremely hot daily average temperatures. This means that this type of direct control program will reduce peak demand more effectively if the impact of climate change on daily average temperatures is moderate. Likewise, this research concludes that this type of program may reduce peak demand more effectively in regions of the country with moderate temperatures and low humidity, such as northern California and the Pacific Northwest.
The results of the forecasting analysis show that if 15% of households with central air conditioning in San Joaquin County participate in a direct control air conditioning program, a 5°F thermostat re-set at the time of peak demand could reduce peak demand by 13 to 131 MW, depending on the climatic scenario considered. This translates into savings of approximately $1.3 million to $9.6 million in generation capacity costs. This would avoid between 8.3 and 83 tons of carbon dioxide emissions, between 4.2 and 42 pounds of sulfur dioxide emissions, and between 6.9 and 70 pounds of nitrogen oxide emissions. To put this estimate in perspective, if similar savings could be achieved throughout the state of California, it could reduce its expenditures on electricity for residential end-uses by between 1 and 2% and eliminate the need for between 2 and 6 peaking power plants by 2018. After considering the cost of installing the necessary advanced metering infrastructure, achievable net savings are between approximately $800,000 and $3.3 million per year.