Lynne Kiesling
The NIST smart grid interoperability roadmap workshop I attended last week has gotten me thinking about the similarities between system architecture (as the computer systems folks call it) and institutional design (as we political economy social scientists call it). Of course there’s quite a bit of similarity, as the work of the GridWise Architecture Council (to which I’ve been honored to contribute) demonstrates. And I’ve always argued for institutional design that allows for adaptation to unknown and changing conditions, applying institutions to what I think of as an evolutionary test.
One of the concepts that falls logically out of this way of thinking about systems and institutions is resilience. This article from the forthcoming issue of Foreign Policy (thanks to Jesse Walker at Reason) focuses on resilience in the context of the financial crisis of the past year:
Resilience, conversely, accepts that change is inevitable and in many cases out of our hands, focusing instead on the need to be able to withstand the unexpected.
The author contrasts this dynamic, adaptation-focused concept with sustainability, which he views as a more static, maintain-the-status-quo approach. I’m not sure I agree with that; I think it’s possible to have a dynamic concept of sustainability … but then it becomes resilient sustainability. Or at least that’s what I teach my MBA students, that true sustainability is in fact resilience.
One crucial aspect of resilience in a complex “system of systems”, whether it’s a network of computer systems or a network of individual economic agents interacting in a network of markets (or the intersection of the two!), is the extent to which the different parts of the systems are “coupled”. That’s what I’ve been mulling over for the past week — loosely-coupled systems. Loose coupling is a term of art in computer systems. Interestingly, its Wikipedia entry caught my eye for the precise reason I wanted to incorporate it into this post:
Loose coupling describes a resilient relationship between two or more systems or organizations with some kind of exchange relationship. Each end of the transaction makes its requirements explicit and makes few assumptions about the other end.
This definition highlights the aspects that are important, especially from a smart grid architecture perspective and an institutional design perspective: different systems or organizations with some kind of exchange relationship. Loose coupling means that entities that are engaged in exchange have to understand and exchange certain kinds of information to make those exchanges happen, but these requirements are explicit, and they are not exhaustive. When I buy milk at the grocery store, I don’t have to know the name of the cow whose milk I’m buying … but I do want to know some product features, such as its fat content, the sterility of its production environment (here, admittedly, aided by safety regulations), as well as its price. If my transaction relies on that specific cow, that’s a more tightly-coupled relationship, and if she dies and the transaction relies on it being her milk, then the transaction fails. A simple-minded example, but you get the idea.
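The milk example can be sketched in code. This is a minimal, hypothetical illustration (the types and function names are mine, not from any smart grid specification): a tightly coupled buyer depends on one specific supplier, while a loosely coupled buyer states only the explicit product requirements it cares about and makes no other assumptions about the counterparty.

```python
# A toy sketch of tight vs. loose coupling in an exchange relationship.
# All names here are illustrative, invented for this example.
from dataclasses import dataclass

@dataclass
class Milk:
    fat_content: float   # e.g. 0.02 for 2%
    pasteurized: bool
    price: float

def buy_tightly_coupled(herd: dict, cow_name: str) -> Milk:
    # Depends on one specific cow: if she is gone, the transaction
    # fails, because it rests on an implicit assumption about the seller.
    return herd[cow_name]

def buy_loosely_coupled(offers: list, max_fat: float, max_price: float) -> Milk:
    # Only explicit, minimal requirements are stated; any supplier that
    # meets them will do, so one supplier's failure does not break the exchange.
    for milk in offers:
        if milk.pasteurized and milk.fat_content <= max_fat and milk.price <= max_price:
            return milk
    raise LookupError("no acceptable offer")

offers = [Milk(0.04, True, 3.50), Milk(0.02, True, 3.20)]
print(buy_loosely_coupled(offers, max_fat=0.02, max_price=4.00).price)  # 3.2
```

The loosely coupled buyer survives the death of any particular cow; only the explicit interface (fat content, sterility, price) matters.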
Loose coupling is like having shock absorbers at the interfaces between different entities and different systems in a complex “system of systems”. Loose coupling can help prevent the negative consequences of unexpected actions from propagating through the network, and that’s how it contributes to resilience.
The electric power network today is very tightly coupled, both physically and economically, which reduces its resilience in the face of unknown and changing conditions. One of the most important arguments in favor of smart grid investments is that digital smart grid technology enables looser physical coupling and looser economic coupling, to the mutual benefit of both producers and consumers. At an architectural and institutional design level, that means developing architectural principles (including interoperability principles and standards) and legal and regulatory institutions that enable individual economic agents in the electric power network to create long-lasting, resilient value in these loosely-coupled systems.
I’m going to continue thinking about how these ideas relate; at this point I’m just thinking out loud, so please chime in to help me refine my ideas.
Very nice post.
I was struggling with the same issues after the workshop. Within the context of the workshops, the team has been struggling with work for the present and work for the future. The proper distinction is instead between endogenous and exogenous interfaces. Endogenous interfaces are appropriate for integrating well-known problems from within a defined problem space. Exogenous interfaces require standards to deal with issues that not only come from outside but are not yet known specifically. Exogenous interfaces, which could be loosely translated as "arm's-length negotiations," are what you are describing.
There are three goals of interoperability: performance, generativity, and end user integration. End user integration is a cross-cutting goal with good market effects. While interoperability efforts and implementations can support each of the first two goals, the two goals can be antagonistic to each other. Interoperability creates a level playing field, but we must remember which game we want to play.
When a field is well known and well understood, the primary goals of interoperability are to reduce maintenance costs and drive producers into commodity products. There are many producers of nearly identical products. Competition becomes driven by price and performance. This sort of interoperability is quite detailed, as it needs to specify all uses, all circumstances, and all values. Such work is detailed and long running.
When a field is new, and you need innovation, you cannot know all the details and use cases. You need to specify minimal information, but enough information. Too much information creates too many interdependencies, and makes innovation more difficult. Too much information adds complexity and limits the entry of new participants. Too much information assumes there are no changes of ownership and responsibility at the interface. Generative interfaces must be light, and the integration loose.
Generativity and resilience run side by side. Performance is somewhere on the other side of the road. Each is the appropriate primary focus in its place. The smart grid needs resilience and generativity.
Lynne,
I like your post and Toby’s comment. The last book I read (most of) was Brian Walker’s “Resilience Thinking — Sustaining ecosystems and people in a changing world.” Smart grid will be a standards ecosystem if there ever was one.
I discovered both you and Toby on the web in the weeks before the workshop, as I was getting up to speed. I was delighted to see that you were leading the Business and Policy track, and I was glad that the utility perspective was both well represented and well challenged in that track. Perhaps I will see you at the next workshop. I hope the thinking you are both doing will inspire policy.
Lance
The extent to which loose coupling can filter out 'black swans' depends on interconnections with the broader system. Loose coupling can exacerbate instability by increasing the asynchronicity of feedback. Whatever you do in a system there is going to be a cost, and loose coupling has plenty of costs.
According to discussion when the American Association of State Highway and Transportation Officials was developing the current bridge design code, designs should be robust, redundant, and resilient.
Robustness seems to be a product of redundancy and resilience, but it was used in a different way by the bridge code writers. Still, do consider redundancy as an element of resiliency.
Adaptability is another term that fits with the concept of resiliency. Loose coupling provides for adaptability. In your example, while your requirements were adaptable with respect to the cow, they appear rigid with respect to fat content. For example, if skim milk is not available, I will drink 1% milk. The transaction could still occur. Obviously, for complex systems the interface design is extremely important, and interface requirements may well limit adaptability in many respects.
re 4: Robustness – in the bridge case, and others – can be a way to represent design margin.
A bridge design could be redundant (extra piers or cables) and resilient (still works after rust or accident damage), but not robust enough to handle an unanticipated 2X maximum load or 150 mph winds.
Federalism (with most government functions provided at the state and local level) is a great example of the benefits of decentralized services, which tend to exhibit the fast-response-to-change facet of resilience, and what might be called "customization" in some business environments.
Another thing to think about is building in negative feedback loops. That way, the system can moderate itself without manual intervention, which often comes too late to help and can badly hurt.
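The self-moderating effect of a negative feedback loop is easy to see in a few lines of code. This is a minimal sketch with made-up numbers (the setpoint, gain, and starting state are all illustrative): each step, the correction opposes the error, so a disturbance dies out without any manual intervention.

```python
# A toy negative feedback loop: the correction always pushes the state
# back toward the setpoint, so the system moderates itself.
# All values here are illustrative.
def run_feedback(setpoint=60.0, state=100.0, gain=0.5, steps=20):
    history = [state]
    for _ in range(steps):
        error = setpoint - state
        state += gain * error      # correction opposes the error
        history.append(state)
    return history

traj = run_feedback()
print(round(traj[-1], 3))  # 60.0 -- the disturbance has decayed away
```

With a gain of 0.5, the deviation from the setpoint halves every step, so the initial disturbance of 40 shrinks geometrically toward zero.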
I got to the last (#6) comment before I saw what I was looking for. Resilience requires reasonably accurate feedback. In org theory I point out to my students that no matter how good your system is, it can't be perfect, and in the absence of accurate feedback and adjustment of the system, it will inevitably fail.
Accurate feedback is essential to resilience.
Poli-Sci types should take a hard look at the way Comp-Sci types handle change control. Every time you write a law, you’re making a change to the way society operates. Not all those changes turn out to be good ones.
Programmers face the same problem. Every time they change a piece of code, they're trying to improve it. But not all changes turn out to be improvements. Programmers have a remarkable array of "change-control systems" (RCS, CVS, Git, Subversion, etc.) to help them manage this problem. They've also learned to make changes that are self-contained and easily reversible, to the extent possible.
“Self-contained” and “easily reversible” should be goals for legislation, as well.
These are all nice terms with loose definitions–robust, resilient, sustainable, etc.. In a specific context one could imagine developing precise technical definitions. Two other points that should be kept in mind:
1) If all the local, real-time adjustments fail, it’s very handy to have a way to quickly reboot the system. If an airline gets into a cascade of late or cancelled flights where each miss causes another one, they can restart their schedule from scratch the next day.
2) If the capacity of a counterparty is high relative to the performance required by the transacting party, then the transacting party can afford to be relatively ignorant about the counterparty’s limitations. On the other hand, if the counterparty’s capability is challenged by the required performance level, then the transacting party has to thread the needle in choosing exactly what to request from the counterparty and so needs to have a good idea about the counterparty’s specific strengths and weaknesses.
For example, if you want to program a computer to invert a small matrix in a normal amount of time, you don't need to know much about the hardware's bottlenecks and limitations, but if you want to invert something gigantic you'd have to program with due regard to memory limitations and instruction speeds. As hardware has improved, the boundary where this change takes place has shifted to bigger and bigger matrices.
“One of the most important arguments in favor of smart grid investments is that digital smart grid technology enables looser physical coupling and looser economic coupling…”
I may just be picking at semantics here, but dynamic pricing and price response by demands and distributed generators don’t strike me as making the coupling looser. It seems to me that today’s power system is economically disconnected from most of the physical loads drawing energy from it. This disconnect causes dislocations both in the short and long term. Short term: What *should* the price have been in Cleveland at 3:30 p.m. on August 14, 2003? Long term: On-peak demand uninformed by adequate pricing leads to more peaking generation, the cost of which is blended into all hours, failing to inform continued growth of on-peak demand. Those are loose economic couplings that need to be tightened. The Cleveland example shows how too-loose economic coupling can lead to catastrophic breaking of the physical coupling.
Rather, dynamic pricing provides an important negative-feedback loop that currently doesn’t exist from generator to customer and back, and if that loop is tight/timely enough then it can greatly improve overall system stability, in both the economic and physical senses. (I really would love to know what a locational real-time price would have been in Cleveland, and whether loads in the city would have dropped themselves in time if they’d only known. MISO didn’t yet have LMP at the time.) As #3 points out above, the feedback does no good if its timing is too delayed. Delays can turn negative feedback positive, detracting from stability. So, closing communicative loops sounds like economic tightening to me, resulting in physical tightening… in a beneficial way.
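The point about delay turning negative feedback destabilizing can be demonstrated with a small simulation. This is a hedged sketch with made-up parameters, not grid data: the same corrective rule damps a disturbance when it acts on current information, but when it acts on stale information the corrections arrive out of phase and the swings grow.

```python
# A toy demonstration that delayed negative feedback can destabilize:
# the correction is the same, but it reacts to the state as it was
# `delay` steps ago. Gain and delay values are illustrative.
def run_delayed_feedback(delay, gain=0.9, setpoint=0.0, steps=40):
    history = [1.0]                      # start with a unit disturbance
    for _ in range(steps):
        # the feedback observes a possibly stale reading of the state
        observed = history[max(0, len(history) - 1 - delay)]
        history.append(history[-1] - gain * (observed - setpoint))
    return history

prompt = run_delayed_feedback(delay=0)
stale = run_delayed_feedback(delay=3)
print(abs(prompt[-1]))                   # tiny: timely feedback damps the disturbance
print(max(abs(x) for x in stale))        # large: stale feedback amplifies the swings
```

With no delay, each step shrinks the disturbance by a factor of ten; with a three-step delay, the identical rule overshoots repeatedly and the oscillation grows, which is the "negative feedback turned positive" effect.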
Would loosening help the power system? After the blackout some suggested that the Eastern Interconnect should be broken up into asynchronous islands interconnected with DC links, like ERCOT. That would break the synchronous bonds, and would force each island to be more self-reliant. That would be looser coupling between market regions. But I would hate to see that done before dynamic pricing is ubiquitous. I really suspect that with better pricing the system would be much more stable, and that it would have made August 14, 2003, just another regular day for most people.
If I may add, what the communicative/transactive grid accomplishes is distributing knowledge, providing proper incentives to many more agents to act in ways that are mutually beneficial. The system becomes more *effectively* adaptive through what I consider to be tighter/better economic coupling which should result in better physical coordination. The need to positively control the system top-down should be lessened, and in that sense it is looser. That is, the visible hand of the central controller should have less to do; it can loosen its grip. The added coordination that makes the physical system tighter is emergent.
I have to disagree that the “smart grid” concept will make the grid more resilient. The primary design goal of the smart grid is efficiency not resiliency. Efficiency and resiliency are at cross purposes as design goals. Resilient systems possess redundancies so that if one component fails the remaining components can take up the slack. Efficient systems eliminate redundancies in order to eliminate their associated cost.
The smart grid concept seeks to reduce the future need to build power plants by rapidly shuffling power from many different sources to many different areas. It is a computer-controlled juggling act designed expressly to eliminate any redundant capacity in the grid. It will eliminate compartmentalization in the grid in order to move power efficiently over long distances (such as from solar plants in Arizona to New York).
I think the smart grid concept is your classic top-down design intended to accomplish goals largely unrelated to the function of the core system. Like all such systems, it will fail when it comes in contact with the real world.