Lynne Kiesling
Mike beat me to it in commenting on David Cay Johnston’s NYT article from Wednesday about grid congestion. My thoughts are somewhat different from Mike’s, for what it’s worth.
Johnston hits on one of the most pressing problems in electricity restructuring, although I’m going to frame it differently from how he does: technological change and economic dynamism have changed the underlying environment in ways that make it more valuable/more efficient/less costly to move away from the vertically-integrated, government-granted service territory monopoly that we have had for a century. This is particularly true in generation and is seen in the estimates of value creation (new benefits, cost savings, etc.) from the liberalization of wholesale electricity markets. It is also true in retail provision of energy services to end-use customers, although we have less evidence of that because of the perpetuation of the bundling of the energy commodity sale with the wires rental from the point of view of the customer. The pesky network/”natural” monopoly (although I think it’s becoming quite unnatural) portion of the value chain is the wires. So as we liberalize generation and move toward liberalizing retail, the physical wires remain the part of the value chain that still has some economic justification for the continuation of regulation (although that justification will wither over time, as distributed generation and active demand response make those wires contestable).
Thus one part of this phase transition is that wires investment, particularly transmission investment, can get lost in the shuffle. This was particularly true toward the end of the 1990s, when the regulated rate of return on such investments was unattractive relative to average rates of return in other investments. Resources flow to their highest-valued use, and by having regulators determine the “reasonable” rate of return, that means relying on their estimate of the value of transmission investment instead of the interaction of many, many potential investors and owners and customers. In other words, regulator-determined rates of return on transmission investments are not adaptable and do not adjust to reflect changing opportunity costs over time (FERC has recently done some adjustments to this rate of return process, but it’s still inertial and non-adaptive). Regulatory determination of this rate of return also has a political economy problem, because of the effect of the political process on determining that rate of return; for example, regulators have to weigh the pros and cons of transmission investment against the political backlash that can arise from increases in rates to pay for such investments.
In brief, the process through which transmission investment is supposed to occur is currently a mess: too politicized, not sufficiently transparent, and not adaptive to changing conditions.
Johnston does offer the “official story” about transmission investment and how congestion charges have increased over the past decade as demand has increased and transmission capacity has not. Here are three other important points to consider as you read his analysis.
First, always remember that prices help resources flow to their highest valued use, and that congestion charges are prices that make the relative scarcity of transmission capacity more transparent to both consumers and transmission owners. So in the example he uses of Chambersburg, Pennsylvania, the increase in congestion charges is an important signal, communicating to those customers that something involved in the process of providing them with energy service is relatively more scarce than before. That is a signal to them to evaluate the value to them of using the amount of electricity that they do. If they are more sensitive to these prices, they will reduce their use to the extent they can. Similarly, these congestion charges are an investment signal (more on that in a second).
Second, transmission capacity is not congested to the same degree over the entire day; it’s more congested at some times and less congested at others. Unfortunately, though, that fact is not communicated transparently to end-use customers, because they pay a flat-rate transmission charge regardless of when they consume power. If the transmission prices that customers actually saw had more granularity, i.e., were able to reflect those changes in scarcity, then customers could see the real effect that their consumption choices had on the network as a whole, and they would have a real incentive to shift their use to non-congested periods.
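To make the granularity point concrete, here is a minimal sketch (plain Python, with made-up hourly loads and congestion charges that are purely illustrative, not PJM or Chambersburg data) comparing a customer’s bill under a flat, averaged transmission charge with a bill under hourly congestion-based charges. Under the flat rate, shifting use off-peak saves the customer nothing; under the hourly rate, the same shift shows up directly in the bill.

```python
# Illustrative only: invented hourly loads (kWh) and invented congestion-based
# transmission charges ($/kWh) for one day; these are not PJM or Chambersburg data.
loads      = [2, 2, 2, 2, 3, 4, 6, 8, 9, 9, 8, 8, 8, 9, 10, 10, 9, 8, 6, 5, 4, 3, 2, 2]
congestion = [0.01]*6 + [0.03]*4 + [0.06]*6 + [0.03]*4 + [0.01]*4   # midday hours cost the most

def bill(kwh, rates):
    return sum(k * r for k, r in zip(kwh, rates))

# A revenue-neutral flat charge recovers the same money but hides the time pattern.
flat_rate = bill(loads, congestion) / sum(loads)

# The customer shifts 1 kWh out of each of the six priciest hours to overnight hours.
shifted = loads[:]
for h in range(10, 16):
    shifted[h] -= 1
for h in range(0, 6):
    shifted[h] += 1

print("flat charge, before shift:  ", round(bill(loads,   [flat_rate]*24), 2))
print("flat charge, after shift:   ", round(bill(shifted, [flat_rate]*24), 2))  # identical: no reward
print("hourly charge, before shift:", round(bill(loads,   congestion), 2))
print("hourly charge, after shift: ", round(bill(shifted, congestion), 2))      # lower: a real incentive
```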
Third, the way investment occurs in most industries, and the way it has occurred in most markets throughout human history, is through entry. One of the biggest problems with the current structure of electricity transmission is that the transmission owners receive those congestion charges, but they are the only ones who are legally capable of building new transmission capacity. If they did that, the congestion charges they receive would diminish or disappear, and all they would get in return is the regulated rate of return on the new investment. But because of the government-granted monopoly treatment of the wires, new transmission entry is illegal. Thus the natural dynamic by which new suppliers can come in and compete away those congestion charges is stifled by regulation.
One final point on which I cannot really fault Mr. Johnston, but I can fault myself and all of us who are working in grid modernization: his article says nothing, and I mean NOTHING, about using digital technology and interoperability to modernize the grid, make it adaptive, responsive, self-healing, and more automated all the way from generator to customer. That is the most effective way to deal with the grid congestion challenges facing us, because bits are cheaper than iron, and digital technology is not prone to the same NIMBY challenges that new wires are. There are lots of us working on this challenge:
- GridWise Architecture Council, of which I am a member, focuses on interoperability principles
- IntelliGrid, EPRI’s initiative to model an integrated, self-healing, automated grid at a technical and engineering level
- The Modern Grid Initiative, which is focused more on creating a national vision of a modern grid and on creating and publicizing demonstrations of technologies and other ways of bringing it about
- Galvin Electricity Initiative, which is building the case for the value of transforming the electric power system using digital technology and strong commercial and political leadership and vision
Thus in part this post is a way to communicate these digital “grid of the future” principles to a broader audience. One good way to keep up with smart grid developments is the Smart Grid Newsletter, which is chock full of useful and informative articles on modernization and digital technology, and its value in transforming the grid for all parties, including consumers.
Thus I hope the next time someone like Mr. Johnston writes an article about the electric power grid for a general audience, that article will emphasize the transformative capabilities of digital technology.
Comments
I think the Chambersburg details, which I glossed over because they are so technical, raise doubt about the price signals argument above.
The town’s demand, as I reported, is only modestly changed. So is the flow of power. But the model PJM uses for measuring loop flows changed when it expanded its footprint.
The town and others argued that they should not be penalized because the model measures differently; they asserted that the tariff was not properly applied, and they argued that even if it was, the result was unjust and unreasonable.
Note the quotes in the article about how it costs this small town $1 million a year just to try to keep up with the fine technical details in all of the paperwork flooding them from PJM. They say they never grasped that, with their demand largely unchanged, they could lose half of their financial offset credits for congestion (credits that had made them indifferent under the previous policy) because PJM changed its boundaries, and that change altered how PJM measured the flow of electricity (the flows did not change, just the measurements of those flows in the model).
It may be that this is sending a price signal, but that is not how the town’s leaders and those in other towns I interviewed see it at all. The arguments and technical data are in the FERC edocket files.
Posted by: David Cay Johnston at December 15, 2006 4:34 PM
I note the following:
1. AC power does not always flow to its highest-valued use. It is exceedingly common to get the (economically) counterintuitive result of power flowing from a high-priced node to a low-priced node. This problem is exacerbated by the poor approximation to AC power flow that PJM uses to calculate its prices — they ignore losses, reactive power, and so forth. I do not mean to pick on PJM here — the other RTOs do not do much better.
2. Differences in locational prices in a power network cannot be interpreted analogously to price differences in, say, a transport network. Locational price differences in power grids tell you that there is congestion somewhere in the network, but they do not necessarily tell you where, and they do not necessarily tell you the best way of relieving the congestion. It is also possible that the price signal being received by the good people of Chambersburg PA could reflect something happening elsewhere in the network, and conservation or demand response on their part would do little good. Given their location in the PJM grid, this is quite likely. (A small numerical sketch after point 4 below illustrates how a binding line limit can push the price at a load bus above every generator’s offer.)
3. Thus, AC transmission investment is really a systems problem, and unlike generation is hard to decompose. The market-based transmission investment model in the U.S. has pretty much been a bust (partially for these reasons but also partially for political reasons). There is one merchant line in Long Island, but its cost has essentially been socialized. For the reasons noted above, free entry into the transmission grid is very problematic. Entry at the high-voltage level is not technically illegal, but there are significant barriers. Entry at the distribution level is illegal, but this may start to change as FERC warms up to DG and microgrids.
4. It is well known among network scientists that it is often very easy to improve the flow through a network by removing links in the network. Economically speaking, some technology that would allow for rapid free disposal of lines could have immense value. FACTS, for example, is one possibility based on power electronics. There are many others. So I agree that there ought to be a lot more attention paid to non-wires solutions.
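To illustrate points 1 and 2, here is a minimal numerical sketch (plain Python, a textbook-style three-bus loop with invented offers and an invented 80 MW line limit, not data from any real system). With equal reactances, an injection at bus 1 splits two-thirds onto line 1-3 and one-third around the loop, so a binding limit on that line forces redispatch, and the marginal price at the load bus ends up above both generators’ offers, a result with no analogue in a simple transport network.

```python
# Hypothetical three-bus loop, invented numbers: cheap generator at bus 1
# ($20/MWh), expensive generator at bus 2 ($40/MWh), 150 MW of load at bus 3.
# All three lines have equal reactance, so under the DC load-flow approximation
# an injection at bus 1 puts 2/3 of itself on line 1-3 and 1/3 on the path
# 1-2-3, while an injection at bus 2 puts 1/3 on line 1-3.
C1, C2 = 20.0, 40.0      # generator offers, $/MWh (assumed)
LIMIT_13 = 80.0          # line 1-3 limit, MW (assumed)

def dispatch(load):
    """Least-cost dispatch serving `load` MW at bus 3 without overloading line 1-3."""
    p1, p2 = load, 0.0                        # try the cheap unit alone
    if (2/3) * p1 + (1/3) * p2 > LIMIT_13:    # line 1-3 would be overloaded
        # Constraint binds: solve p1 + p2 = load and (2/3)p1 + (1/3)p2 = LIMIT_13.
        p1 = 3 * LIMIT_13 - load
        p2 = load - p1
    return p1, p2, C1 * p1 + C2 * p2

p1, p2, cost = dispatch(150.0)
_, _, cost_plus_one = dispatch(151.0)

print(f"dispatch: G1 = {p1:.0f} MW, G2 = {p2:.0f} MW, cost = ${cost:.0f}/h")
print(f"flow on line 1-3 = {(2/3)*p1 + (1/3)*p2:.0f} MW (right at its limit)")
print(f"LMP at the load bus = ${cost_plus_one - cost:.0f}/MWh, above both offers:")
print("serving one more MW means backing G1 down 1 MW and running G2 up 2 MW.")
```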
Posted by: Seth at December 16, 2006 8:12 AM
Chambersburg, PA… A victim of Braess’ Paradox. 😉
Posted by: D.O.U.G. at December 17, 2006 5:33 PM
David,
Precisely. The lack of transparency and the obfuscation in our transmission pricing makes it almost impossible to use these price changes as a meaningful signal. That’s the pitiful shame of all of this, and it’s one of the biggest reasons why customer use and owner investment aren’t better optimized.
Seth,
w.r.t. your #1, resources flow to their highest-valued use when market designs and rules do not impede them and when markets are not missing. Your own examples contradict your first claim; in particular, the lack of markets for reactive power and the incomplete design of congestion pricing are exactly such impediments.
To clarify D.O.U.G.’s reference to Braess’ Paradox, from Wikipedia:
http://en.wikipedia.org/wiki/Braess’_paradox
“adding extra capacity to a network, when the moving entities selfishly choose their route, can in some cases reduce overall performance. This is because the equilibrium of such a system is not necessarily optimal.”
I don’t think this reality contradicts the underlying economic truth that markets and prices are tools that enable resources to find their highest-valued use, even in networks. The problem is harder and more dynamic, but to my mind that just means that we should concentrate on contracting and market design that allow participants to come up with ways to “govern the commons”. I’m in the middle of formulating some ideas about this for some writing I’m working on, but nothing ready for prime time yet.
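For readers who have not seen the paradox worked out, here is a minimal sketch of the standard textbook example (plain Python; the 4,000 drivers and the travel-time functions are the usual illustrative numbers and have nothing to do with any real road or power network). Adding a free shortcut makes the shortcut route the individually rational choice, and the selfish equilibrium leaves everyone worse off.

```python
# The classic construction: 4,000 drivers travel from Start to End.
# Edge travel times (minutes): Start->A = n/100 (n = drivers on the edge),
# A->End = 45, Start->B = 45, B->End = n/100.  A free shortcut A->B is then added.
N = 4000

def route_times(n1, n2, n3):
    """Travel time on each route given the number of drivers using it.
    n1: Start-A-End, n2: Start-B-End, n3: Start-A-B-End (uses the shortcut)."""
    t_sa = (n1 + n3) / 100.0   # Start->A is shared by routes 1 and 3
    t_be = (n2 + n3) / 100.0   # B->End is shared by routes 2 and 3
    return (t_sa + 45.0, 45.0 + t_be, t_sa + 0.0 + t_be)

# Without the shortcut, the symmetric 2000/2000 split is the equilibrium.
t1, t2, _ = route_times(N // 2, N // 2, 0)
print("no shortcut, 2000/2000 split:", t1, "minutes per trip")      # 65.0

# With the shortcut, everyone piling onto Start-A-B-End is the equilibrium.
_, _, t3 = route_times(0, 0, N)
print("with shortcut, all on A-B:   ", t3, "minutes per trip")      # 80.0

# Check the equilibrium: a lone deviator back to route 1 would face an even
# longer trip, so no individual driver has an incentive to switch.
d1, _, _ = route_times(1, 0, N - 1)
print("a lone deviator to route 1:  ", d1, "minutes (worse, so nobody moves)")
```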
Posted by: Lynne at December 18, 2006 9:59 AM
Braess’ Paradox is also the academic fascination of a young man named Seth, with whom I’ve discussed these things before. En garde, Seth! 😉
I agree with you, Lynne, that “counterintuitive flow” does not invalidate the economic logic. It is much more of an electric systems issue, and it does suggest that the equilibrium is not optimal, at least in the moments when this occurs. When power flows on a line from a high-priced node to a lower-priced node, this indicates that the line has negative value, and the consumers on the system would be economically better off without it, at least on the whole. I don’t know whether Seth is correct that it happens quite often on real systems, but it is true that it is easy to create such a situation artificially. In practice, such facilities are often lower-voltage, low-capacity lines that have been overbuilt with high-voltage, high-capacity lines. The smaller lines become constraints for the larger lines, and the engineering response is often to simply operate the lower-capacity lines normally-open in the middle.
On highly networked systems you have to back off from the electrical detail level to see the larger economic flows of energy from producing to consuming areas. Such flows set up the conditions where such anomalies can occur, and redundant constrained facilities make them happen.
And now to Mr. Johnston…
The Chambersburg complaint is about a change in the muni’s allocation of Auction Revenue Rights, which you properly described above (but not in your article) as financial offset credits. Apparently Chambersburg was getting a full refund of congestion charges before, and now it isn’t. Without looking deep into the facts it is difficult to tell whether the new allocation is less reasonable or more reasonable than the old allocation. I’m sure that the change was a shock to the muni, though, and you’re correct that nothing much really changed on the physical side. It’s just not clear whether they are being penalized or they have lost a subsidy.
I don’t want to argue the merits of the Chambersburg case either way, but rather to point out that the existence of Auction Revenue Rights (ARRs) is one of the reasons that congestion costs are often overstated. Notionally there may be quite a bit of congestion “cost”, but much of it is returned to consumers through this financial crediting mechanism. The refund mechanism means that the true net cost of congestion may be much lower than we see reported. I suspect that the discrepancy you referred to in your article was a result of this “notional” versus “net” issue.
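As a toy illustration of the notional-versus-net distinction (plain Python, with invented numbers that are not Chambersburg’s or PJM’s figures): if a load pays congestion on every MWh at the locational price spread but holds ARRs or FTRs covering most of the congested path, the headline congestion figure and the cost actually borne diverge sharply.

```python
# Invented illustrative numbers only, not Chambersburg's or PJM's figures.
load_mwh          = 100_000   # MWh bought at the constrained (importing) node over a period
congestion_spread = 12.0      # average congestion component of the LMP paid, $/MWh (assumed)
credited_fraction = 0.80      # share of the congested path covered by ARR/FTR credits (assumed)

notional = load_mwh * congestion_spread
credits  = notional * credited_fraction
net      = notional - credits

print(f"notional congestion charges: ${notional:,.0f}")
print(f"ARR/FTR credits returned:    ${credits:,.0f}")
print(f"net congestion cost borne:   ${net:,.0f}")
# A headline figure built from the notional number would overstate the burden fivefold here.
```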
You are right that not many people understand Locational Marginal Pricing fully. You write, “When congestion charges force electric customers to pay more, many owners of power plants profit. They do so because, in many states, every power plant gets the same price as the highest bid accepted from any plant chosen to supply power. As utilities turn to less efficient, more expensive plants, the price paid to every other electricity producer rises.” This is a major oversimplification, to the point of being factually incorrect. The fact that congestion exists means that the generators are not all getting the same price. Rather, the ones located in the constrained importing region are getting a higher price than the generators located in the exporting region. This is proper because it is in the constrained importing region that the additional high-cost generators are being brought into service. The net congestion charges in the importing region are to pay for the high-cost generators. Somebody’s gotta pay’em. The notional congestion suggests that everybody in the importing region has to pay that high price, but much of the notional congestion is refunded through the financial crediting mechanism. The price signal is focused on those who aren’t covered by it, and the signal says “If you consume in these hours then we have to fire up something local and expensive, and you have to pay for it.”
You write, “The high-cost plants that must be fired up when there is a bottleneck operate for more hours than they would if the network were designed for the competitive market.” You’ve referred to a counterfactual condition: “if the network were designed for the competitive market.” Transmission providers often say such things, but it is not clear that a system optimally “designed for the competitive market” would be any different from the one we have. Nobody’s ever done such a thing. Transmission companies believe that we should build enough regulated transmission that no congestion remains, and they have a financial self-interest in that view. They point to notional congestion figures and blow them up as quite a big deal. But the “official story” on transmission is not universally agreed to in the industry, and it’s not just among people interested in their own profits. Some of us care about getting this right.
You write, “In some cases, expensive plants built just to meet peak demand, as on hot summer days, now run 40 percent of the year, Energy Department reports show… Most of these plants generate electricity from modified jet engines that burn natural gas, which is more expensive than electricity from coal or nuclear plants. But congestion has become so chronic that some century-old and very inefficient steam turbines must also be operated to avoid blackouts.” I have a strong suspicion that “these plants” you refer to are combined-cycle plants, which were not built just to meet peak demand. In fact, most of the country is now reliant on them to a surprising extent. There are exceedingly few simple-cycle combustion turbines running at high capacity factors for the reasons you say. Some of them burn oil, and they are in some notoriously bad coastal and/or peninsular situations. It is true that some older (but not century-old!) steam turbines are running a good bit in similar locations. The gas turbines at Astoria ran at an 8% capacity factor in ’05, but the oil-fired steam units ran at 30%. Why? The combustion turbines are actually designed for peaking duty and they run efficiently that way. The steam turbines aren’t designed for peaking duty at all. Rather, they were designed long ago to provide round-the-clock power to the city of NY. If you could flip them on and off like a combustion turbine, then today they’d probably run less often than the gas-fired turbines. As for avoiding blackouts, note that these units have to run for reliability because of local load growth and the lack of new generation in, or transmission to, that importing location. Congestion is descriptive of that infrastructure situation; it is not a cause. You didn’t say otherwise, but many people have the idea that congestion is some sort of manipulation that was brought upon us by markets. It is more fair to say that markets are now revealing these problems, whereas before only the utilities knew.
Posted by: D.O.U.G. at December 18, 2006 11:16 AM
Doug, thanks for the awesome reply to DCJ.
Just from an engineering point of view, can you give me a feel for what these peaker plants are? I know that they’ve got the power turbine (“jet engine”, as DCJ noted). Are they combined cycle? I thought that they were.
When DCJ talks about the “steam turbine”, is that just an old fashioned boiler? The combined cycle plants have steam turbines too, after all.
I’ve seen a lot of articles in engineering publications about upgrades to old boilers. Just because it’s old doesn’t mean that it hasn’t been upgraded. Steam turbine technology hasn’t changed much in recent times.
Finally, aren’t all the plants in NYC cogens? They use the waste steam for heating buildings, right? Or is that just in Manhattan?
Posted by: Buzzcut at December 19, 2006 9:02 AM
One of the main ways to address grid congestion is through the use of distributed generation. While solar and microturbines are not going to meet all of the electrical demand, they will go a long way to reducing problems in the grid.
Perhaps if building owners were compensated for the benefits of deferred infrastructure upgrades, we would see more distributed generation brought online.
Posted by: Co2action at December 19, 2006 9:48 AM
Peaking plants are usually simple-cycle combustion turbines, which are modified jet engines with a generator on the shaft. A combined-cycle plant is the same combustion turbine with a steam generator/turbine/generator piggy-backing on the jet engine exhaust heat. Combined-cycle plants are very efficient, but the fuel is now expensive. Nevertheless, the landscape is littered with them and the power system is using most of those. I really think DCJ was confused about that, because there just aren’t very many gas-fired simple-cycle units out there running at high capacity factors.
At the same time, there are many older steam generators out there that are powered by gas- or oil-fired boilers. They are mostly pretty old and not efficient, but they persist because they still provide capacity in important locations. [Many of them have been retired in recent years, such as in TX.] Some older boiler units have been repowered with simple-cycle turbines, creating an efficient combined-cycle configuration. That’s a good thing. But there are still some older boiler units out there on their last legs.
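To put rough numbers on those efficiency differences, here is a back-of-the-envelope sketch (plain Python; the heat rates and fuel prices are round, assumed values, not data on any particular plant or market):

```python
# Assumed, round-number heat rates (MMBtu of fuel per MWh of electricity) and
# fuel prices ($/MMBtu); real plants and real fuel markets vary a great deal.
plants = {
    "combined-cycle gas plant":  (7.0, 7.0),    # (heat rate, fuel price)
    "simple-cycle gas turbine":  (10.5, 7.0),
    "old oil-fired steam unit":  (10.5, 9.0),   # oil assumed pricier per MMBtu
    "coal-fired steam unit":     (10.0, 1.5),
}

for name, (heat_rate, fuel_price) in plants.items():
    fuel_cost = heat_rate * fuel_price   # $/MWh, fuel only (no O&M, no capital)
    print(f"{name:27s} ~${fuel_cost:5.1f}/MWh in fuel")
```

Even with crude numbers like these, you can see why so much combined-cycle capacity runs hard while the old oil-fired steamers survive only where their location makes them necessary.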
NY does (or did) have a lot of cogen, but I don’t know if much of that is in the city. And I mean that… I really don’t know how much there is in-city. But I know that the city is a constrained importing area with some really costly generation inside of it.
Posted by: D.O.U.G. at December 19, 2006 8:27 PM
Hooo boy, I’ve gotten behind 🙂
Re Braess’s Paradox: This phenomenon is actually much more general than the “selfish” networks in the Wikipedia article. In selfish networks (where each network user chooses her own route through the network) it really isn’t much of a paradox at all; it’s just a manifestation of the Prisoner’s Dilemma. The extent to which it exists in the U.S. grid is unknown, and if FERC would play nice and give me their Form 715 data, then I could get on with answering this question. I did some simulations with the standard IEEE test networks which suggest that the Paradox is actually very pervasive. The associated cost depends on what the load profile looks like. Lynne, I will let you chime in with the usual economist’s observation that evaluating externalities depends critically on identifying the relevant range of demand 🙂 Mathematically anyway, it is not hard to show that in a large enough network (other than D.O.U.G.’s overbuilt utility transmission networks), the probability of there existing a line whose removal would decrease congestion is pretty much 100%.
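Here is a minimal sketch of the kind of link-removal test I mean, run on the same hypothetical three-bus loop from my earlier sketch (plain Python; the $20 and $40 offers and the 80 MW limit are invented, and a real study would sweep every line and many load levels on a full network model). It is the link-removal effect in spirit rather than the selfish-routing construction exactly: with line 1-3 and its limit in service, Kirchhoff’s laws push part of the cheap unit’s output onto the limited line and the dispatch costs $4,200/h; open that line and the cheap unit serves the whole load radially for $3,000/h.

```python
# Same hypothetical three-bus loop as in my earlier sketch: cheap generator at
# bus 1 ($20/MWh), expensive generator at bus 2 ($40/MWh), 150 MW of load at
# bus 3, equal line reactances, line 1-3 limited to 80 MW.  All invented numbers.
C1, C2, LOAD, LIMIT_13 = 20.0, 40.0, 150.0, 80.0

def cost_with_line_13():
    # Loop in service: 2/3 of bus-1 output and 1/3 of bus-2 output land on
    # line 1-3, so its limit forces the expensive unit into the dispatch.
    p1 = 3 * LIMIT_13 - LOAD      # from p1 + p2 = LOAD and (2/3)p1 + (1/3)p2 = LIMIT_13
    p2 = LOAD - p1
    return C1 * p1 + C2 * p2

def cost_without_line_13(limit_12=200.0, limit_23=200.0):
    # Line 1-3 opened: the network is radial (1-2-3), so if the remaining lines
    # can carry the full load (200 MW limits assumed), the cheap unit serves all of it.
    assert LOAD <= min(limit_12, limit_23)
    return C1 * LOAD

print(f"dispatch cost with line 1-3 in service: ${cost_with_line_13():,.0f}/h")
print(f"dispatch cost with line 1-3 opened:     ${cost_without_line_13():,.0f}/h")
# Opening the limited line lowers total dispatch cost, which is the flavor of
# result a link-removal sweep looks for (and one reason fast switching has value).
```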
Re counter-intuitive flows: Assuming that you are solving something like an OPF, this represents a short-run (constrained) optimum; in the economic long run you might remove the line. But it is also possible that the line might figure differently in the cost-benefit calculus at different hours of the day or times of the year. So what you want is some kind of fast switch that can open and close the circuit according to system conditions. Back to Lynne’s plug for tech-geeky non-wires solutions, which I fully support. In any case, in realistic systems it is still true that LMP differences do not necessarily equate to congestion, and that they do not necessarily tell you the best (meaning social-welfare-maximizing) way to relieve whatever congestion exists. Since electric networks currently consist of a large number of coupled components, you can’t really think about the electric system effects in a different breath from the economic model.
Re engineering systems for regulation vs. competition: At least in PJM, I think there is an excellent argument that the system was not designed for the geographically wide-scale dispatch and the amount of long-distance transactions that are both happening today. The transmission constraints going west-to-east in PJM correspond pretty closely to old utility service territory boundaries. Western PA has massive excess capacity in generation but there is reasonably little transmission to get it anywhere in Eastern PA. Historically this is because the demand of the steel industry kept all that generation local; once all the mills shut down several decades back, Western PA didn’t need all that generation. If you or a market was designing PJM from the ground up today, you probably would not put several GW of generation within 40 miles of Pittsburgh, at least not without adequate transmission support.
There is an interesting analysis by a consulting outfit which shows that the westward expansion of PJM has lowered prices in some (but not all) congested areas and has increased prices in uncongested areas (like Western PA). To be fair, this analysis looks very favorably at PJM and calculates that the overall cost of power has decreased following the westward expansion. But the point is that nothing has changed other than the scope of the market. I am not sure I would say that there are any subsidies involved here, other than the normal batch that everyone has gotten since the beginning of time. The expanded PJM does have more west-to-east transmission capacity than the old PJM, so my guess is that what we have in Chambersburg and Pittsburgh is plain old Ricardian rent. So consumers are being “penalized” in the sense that there is a transfer of wealth (surplus) from them to generators and to consumers in eastern PJM. Whether this really amounts to a penalty all depends on which side of Kaldor-Hicks economics you sit 🙂
Re peakers: I am told that the owner of several peakers in Eastern PA has been begging to retire the units for years, but PJM keeps throwing huge out-of-market payments to them. CTs are on the margin an awful lot of the time in PJM, and part of that I’m told is for frequency regulation and other engineering reasons. Same thing apparently happens in the LA area.
Posted by: Seth at December 19, 2006 10:11 PM
Doug,
If your “peaker” plant is no longer functioning as a peaker (and if it is running 40% of the time, it isn’t a peaker any longer), why wouldn’t you go in and make it a combined cycle? That would be a straight-up capital expenditure, and, from the EPA’s perspective, you would be adding no pollution.
Seems like a business opportunity to me, but what do I know? I’m just an uncivilized engineer, I haven’t read Adam Smith like DCJ has.
I know that the peakers are only supposed to run occasionally, but I can’t believe that they just dump that high quality waste heat up a stack. Do they at least have a regenerator?
Posted by: Buzzcut at December 20, 2006 4:36 PM