Should regulated utilities participate in the residential solar market?

I recently argued that the regulated utility is not likely to enter a “death spiral”, but that the regulated utility business model is indeed under pressure, and the conversation about the future of that business model is a valuable one.

One area of pressure on the regulated utility business model is the market for residential solar power. Even two years after its publication, this New York Times Magazine article on the residential solar market remains fresh and relevant, all the more so given the declining production costs of solar technologies: “Thanks to increased Chinese production of photovoltaic panels, innovative financing techniques, investment from large institutional investors and a patchwork of semi-effective public-policy efforts, residential solar power has never been more affordable.” In states like California, a combination of plentiful sun and state policies designed to induce more use of renewables brought growth in the residential solar market starting in the 1980s. This growth was also grounded in the federal PURPA legislation of 1978 (“conservation by decree”), which required regulated utilities to purchase energy from renewable and cogeneration providers at a price determined by the state public utility commission.

Since then, a small but growing independent solar industry has developed in California and elsewhere, and the NYT Magazine article ably summarizes that development, as well as the historical lack of interest among regulated utilities in getting involved in renewables themselves. Why generate using a fuel and enabling technology that is intermittent, for which economical storage does not exist, and that lacks the economies of scale that drive the economics of the regulated, vertically-integrated, cost-recovery-based business model? Why indeed.

Over the ensuing decades, though, policy priorities have changed, and environmental quality now joins energy security and the social objectives of utility regulation. Air quality and global warming concerns joined the mix, and at the margin shifted the policy balance, leading several states to adopt renewable portfolio standards (RPSs) and net metering regulations. California, always a pioneer, has a portfolio of residential renewables policies, including net metering, although it does not have a state RPS. Note, in particular, the recent changes in California policy regarding residential renewables:

The CPUC’s California Solar Initiative (CPUC ruling – R.04-03-017) moved the consumer renewable energy rebate program for existing homes from the Energy Commission to the utility companies under the direction of the CPUC. This incentive program also provides cash back for solar energy systems of less than one megawatt to existing and new commercial, industrial, government, nonprofit, and agricultural properties. The CSI has a budget of $2 billion over 10 years, and the goal is to reach 1,940 MW of installed solar capacity by 2016.

The CSI provides rebates to residential customers installing solar technologies who are retail customers of one of the state’s investor-owned utilities. Each IOU has a cap on the number of its residential customers who can receive these subsidies, and PG&E has already reached that cap.

Whether the policy is rebates to induce the renewables switch, allowing net metering, or a state RPS (or feed-in tariffs such as those used in Spain and Germany), these policies reflect a new objective in the portfolio of utility regulation, and at the margin they have changed the incentives of regulated utilities. As residential solar installations accelerated starting in 2012, regulated utilities stepped up their objections to solar power, both on reliability grounds and based on the inequities and existing cross-subsidization built into regulated retail rates (in a state like California, the smallest monthly users of electricity pay much less than their proportional share of the fixed costs of what they consume). My reading has also left me with the impression that if the regulated utilities are going to be subject to renewables mandates to achieve environmental objectives, they would prefer not to have to compete with the existing, and growing, independent producers operating in the residential solar market. The way a regulated monopolist benefits from environmental mandates is by owning assets to meet the mandates.

While this case requires much deeper analysis, as a first pass I want to step back and ask why the regulated distribution utility should be involved in the residential solar market at all. The growth of producers in the residential solar market (Sungevity, SunEdison, SolarCity, etc.) suggests that this is a competitive or potentially competitive market.

I remember asking that question back when this NYT Magazine article first came out, and I stand by my observation then:

Consider an alternative scenario in which regulated distribution monopolists like PG&E are precluded from offering retail services, including rooftop solar, and the competing firms that Himmelman profiled can compete both in how they structure the transactions (equipment purchase, lease, PPA, etc.) and in the prices they offer. One of Rubin’s complaints is that the regulated net metering rate reimburses the rooftop solar homeowner at the full regulated retail price per kilowatt hour, which over-compensates the homeowner for the market value of the electricity product. In a rivalrous market, competing solar services firms would experiment with different prices, perhaps, say, reimbursing the homeowner a fixed price based on a long-term contract, or a varying price based on the wholesale market spot price in the hours in which the homeowner puts power back into the grid. Then it’s up to the retailer to contract with the wires company for the wires charge for those customers — that’s the source of the regulated monopolist’s revenue stream, the wires charge, and it can and should be separated from the net metering transaction and contract.

The presence of the regulated monopolist in that retail market for rooftop solar services is a distortion in and of itself, in addition to the regulation-induced distortions that Rubin identified.
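To make the pricing point in that quoted passage concrete, here is a stylized sketch comparing the two reimbursement schemes: net metering at the full regulated retail rate versus reimbursement at the hourly wholesale spot price. All of the rates and export quantities below are hypothetical illustrations, not actual tariff or market data.

```python
# Stylized comparison of two net-metering reimbursement schemes.
# All numbers are illustrative assumptions, not actual tariff data.

retail_rate = 0.20          # $/kWh, flat regulated retail price
wholesale_spot = {          # $/kWh, hypothetical hourly spot prices
    "10:00": 0.04, "12:00": 0.06, "14:00": 0.09, "16:00": 0.07,
}
exports = {                 # kWh the rooftop system pushes to the grid
    "10:00": 1.5, "12:00": 2.0, "14:00": 2.0, "16:00": 1.0,
}

# Scheme 1: net metering credited at the full retail rate
retail_credit = sum(kwh * retail_rate for kwh in exports.values())

# Scheme 2: reimbursement at the hourly wholesale spot price
spot_credit = sum(kwh * wholesale_spot[h] for h, kwh in exports.items())

print(f"retail-rate credit:    ${retail_credit:.2f}")
print(f"wholesale-spot credit: ${spot_credit:.2f}")
print(f"over-compensation:     ${retail_credit - spot_credit:.2f}")
```

Whenever the retail rate exceeds the wholesale value of midday energy, the gap between the two credits is the over-compensation that rivalrous retailers would have an incentive to compete away.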

The regulated distribution utility’s main objective is, and should be, reliable delivery of energy. The existing regulatory structure gives regulated utilities incentives to increase their asset base in order to increase their rate base, and thus when a new environmental policy objective joins the existing ones, if regulated utilities can acquire new solar assets to meet that objective, they have an incentive to do so. Cost recovery and a guaranteed rate of return are powerful motivators. But why should they even be a participant in that market, given the demonstrable degree of competition that already exists?

“Grid defection” and the regulated utility business model

The conversations about the “utility death spiral” to which I alluded in my recent post have included discussion of the potential for “grid defection”. Grid defection is an important phenomenon in any network industry — what happens when you use scarce resources to build a network that provides value for consumers, and then, over time, innovation and dynamism let those consumers find alternative ways of capturing that value (and/or more or different value)? Whether it’s a public transportation network, a wired telecommunications network, a water and sewer network, or a wired electricity distribution network, consumers can and do exit when they perceive the alternatives available to them as more valuable than the network alternative. Of course, those four cases differ because of differences in transaction costs and regulatory institutions — making exit from a public transportation network illegal (i.e., making private transportation illegal) is much less likely, and less valuable, than making private water supply in a municipality illegal. But two common elements across these four infrastructure industries are interesting: the high-fixed-cost nature of the network infrastructure and the resulting economies of scale, and the potential for innovation and technological change to alter the relative value of the network.

The first common element in network industries is the high fixed costs associated with constructing and maintaining the network, and the associated economies of scale typically found in such industries. This cost structure has long been the justification for either economic regulation or municipal supply in the industry — the cheapest per-unit way to provide large quantities is to have one provider and not to build duplicate networks, and to stipulate product quality and degrees of infrastructure redundancy to provide reliable service at the lowest feasible cost.
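The scale-economies argument can be illustrated with a back-of-the-envelope calculation: with a large fixed network cost and a small marginal cost, one network serving all demand has a lower per-unit cost than two duplicate networks splitting the same demand. The cost function and figures below are hypothetical, chosen only to show the mechanic.

```python
# Stylized cost subadditivity: one network serving all demand is cheaper
# per unit than two duplicate networks. The cost function is hypothetical.

def network_cost(kwh):
    """Total annual cost: large fixed network cost plus small marginal cost."""
    fixed, marginal = 50_000_000.0, 0.02  # $ and $/kWh, illustrative
    return fixed + marginal * kwh

demand = 1_000_000_000.0  # kWh/year in the service territory

one_network = network_cost(demand)
two_networks = 2 * network_cost(demand / 2)

print(f"one network:  ${one_network / demand:.4f}/kWh")
print(f"two networks: ${two_networks / demand:.4f}/kWh")
```

Duplicating the wires doubles the fixed cost without adding any output, which is the classic static case for a single regulated provider.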

What does that entail? Cost-based regulation. Spreading those fixed costs out over as many consumers as possible to keep the product’s regulated price as low as feasible. If consumers can be categorized into different customer classes, and if for economic or political reasons the utility and/or the regulator have an incentive to keep prices low for one class (say, residential customers), then other types of consumers may bear a larger share of the fixed costs than they would if, for example, the fixed costs were allocated according to each class’s share of the volume of network use (this is called cross-subsidization). Cost-based regulation has been the typical regulatory approach in these industries, and cross-subsidization has been a characteristic of regulated rate structures. The classic reference for this analysis is Faulhaber’s 1975 article in the American Economic Review.
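The cross-subsidy mechanic is easy to see in a two-class example: compare what each class would pay under a volumetric allocation of fixed costs with what it pays under a politically shaped allocation. The numbers below are made up purely for illustration.

```python
# Illustrative fixed-cost allocation across two customer classes.
# All figures are hypothetical, chosen to show the cross-subsidy mechanic.

fixed_costs = 1_000_000.0   # $/year of network fixed costs to recover
usage = {"residential": 300_000.0, "commercial": 700_000.0}  # kWh/year
total_kwh = sum(usage.values())

# Volumetric allocation: each class pays in proportion to its usage
volumetric = {c: fixed_costs * kwh / total_kwh for c, kwh in usage.items()}

# Politically shaped allocation: residential is held to a smaller share
actual = {"residential": 0.15 * fixed_costs, "commercial": 0.85 * fixed_costs}

for c in usage:
    subsidy = volumetric[c] - actual[c]
    print(f"{c}: pays ${actual[c]:,.0f}, "
          f"cross-subsidy received ${subsidy:,.0f}")
```

The residential class pays less than its volumetric share and the commercial class pays more; the two gaps sum to zero, which is exactly what makes the arrangement fragile if the over-paying class can exit.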

Both in theory and in practice these institutions can work as long as the technological environment is static. But the technological environment is anything but static; it has had periods of stability, yet it is fundamentally dynamic, and that dynamism is the foundation of increased living standards over the past three centuries. Technological dynamism creates new alternatives to the existing network industry. We have seen this happen in the past two decades as mobile communications eroded the value of wired communications at a rapid rate, and that history animates the concern in electricity that distributed generation will make the distribution network less valuable and will disintermediate the regulated distribution utility, the wires owner, which relies on the distribution transaction for its revenue. The utility also traditionally relies on the ability to cross-subsidize across different types of customers, by charging different portions of those fixed costs to different customer classes — a pricing practice that mobile telephony also made obsolete in the communications market.

Alternatives to the network grid may have higher value to consumers in their estimation (never forget that value is subjective), and they may be willing to pay more to achieve that value. This is why most of us now pay more per month for communications services than we did pre-1984 in our monthly phone bill. As customers leave the traditional network to capture that value, though, those network fixed costs are now spread over fewer network customers. That’s the Achilles heel of cost-based regulation. And that’s a big part of what drives the “death spiral” concern — if customers increasingly self-generate and leave the network, who will pay the fixed costs? This question has traditionally been the justification for regulators approving utility standby charges, so that if a customer self-generates and has a failure, that customer can connect to the grid and get electricity. Set those rates too high, and distributed generation’s economic value falls; set those rates too low, and the distribution utility may not cover the incremental costs of serving that customer. That range can be large.
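The feedback loop behind the “death spiral” concern can be sketched in a few lines: fixed costs recovered from a shrinking customer base push rates up, which pushes more customers off the grid. Every parameter below is hypothetical, and the linear defection rule is a deliberate simplification, not a forecast.

```python
# Minimal sketch of the "death spiral" feedback loop. Fixed costs are
# recovered volumetrically from remaining customers; once the grid rate
# exceeds the all-in cost of self-generation, some customers defect each
# year, raising the rate for those who remain. Parameters are hypothetical.

fixed_costs = 1_200_000_000.0  # $/year to recover through rates
customers = 1_000_000
usage_per_customer = 6_000.0   # kWh/year per customer
self_gen_cost = 0.16           # $/kWh, all-in cost of leaving the grid
defection_rate = 0.25          # annual share of customers who leave
                               # when grid rates exceed self-generation

for year in range(1, 6):
    rate = fixed_costs / (customers * usage_per_customer)  # $/kWh
    if rate > self_gen_cost:
        customers = int(customers * (1 - defection_rate))
    print(f"year {year}: rate ${rate:.3f}/kWh, {customers:,} customers remain")
```

Once the rate crosses the self-generation threshold, each round of defection mechanically raises the rate for the remaining customers, which is the spiral in its starkest form; in practice, standby charges, fixed monthly charges, and rate redesign all blunt the loop.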

This is not a new conversation in the industry or among policy makers and academics. In fact, here’s a 2003 Electricity Journal article arguing against standby charges by friend-of-KP Sean Casten, who works in recycled energy and combined heat and power (CHP). In 2002 I presented a paper at the International Association for Energy Economics annual meetings in which I argued that distributed generation and storage would make the distribution network contestable, and after the Northeast blackout in 2003 Reason released a version of the paper as a policy study. One typical static argument for a single, regulated wires network is that it eliminates costly duplication of infrastructure in the presence of economies of scale. But my argument is dynamic: innovation and technological change that competes with the wires network need not be duplicative wires, and DG+storage is an example of innovation that makes a wires network contestable.

Another older conversation that is new again was the DISCO of the Future Forum, hosted over a year or so in 2001-2002 by the Center for the Advancement of Energy Markets. I participated in this forum, in which industry, regulators, and researchers worked together to “game out” different scenarios for the distribution company business model in the context of competitive wholesale and retail markets. This 2002 Electric Light & Power article summarizes the effort and the ultimate report; note in particular this description of the forum from Jamie Wimberly, then-CAEM president (and now CEO of EcoAlign):

“The primary purpose of the forum was to thoroughly examine the issues and challenges facing distribution companies and to make consensus-based recommendations that work to ensure healthy companies and happy customers in the future,” he said. “There is no question much more needs to be discussed and debated, particularly the role of the regulated utility in the provision of new product offerings and services.”

Technological dynamism is starting to make the distribution network contestable. Now what?

The “utility death spiral”: The utility as a regulatory creation

Unless you follow the electricity industry you may not be aware of the past year’s discussion of the impending “utility death spiral”, ably summarized in this Clean Energy Group post:

There have been several reports out recently predicting that solar + storage systems will soon reach cost parity with grid-purchased electricity, thus presenting the first serious challenge to the centralized utility model.  Customers, the theory goes, will soon be able to cut the cord that has bound them to traditional utilities, opting instead to self-generate using cheap PV, with batteries to regulate the intermittent output and carry them through cloudy spells.  The plummeting cost of solar panels, plus the imminent increased production and decreased cost of electric vehicle batteries that can be used in stationary applications, have combined to create a technological perfect storm. As grid power costs rise and self-generation costs fall, a tipping point will arrive – within a decade, some analysts are predicting – at which time, it will become economically advantageous for millions of Americans to generate their own power.  The “death spiral” for utilities occurs because the more people self-generate, the more utilities will be forced to seek rate increases on a shrinking rate base… thus driving even more customers off the grid.

A January 2013 analysis from the Edison Electric Institute, Disruptive Challenges: Financial Implications and Strategic Responses to a Changing Retail Electric Business, precipitated this conversation. Focusing on the financial market implications for regulated utilities of distributed energy resources (DER) and technology-enabled demand-side management (an archaic term that I dislike intensely), or DSM, the report notes that:

The financial risks created by disruptive challenges include declining utility revenues, increasing costs, and lower profitability potential, particularly over the long term. As DER and DSM programs continue to capture “market share,” for example, utility revenues will be reduced. Adding the higher costs to integrate DER, increasing subsidies for DSM and direct metering of DER will result in the potential for a squeeze on profitability and, thus, credit metrics. While the regulatory process is expected to allow for recovery of lost revenues in future rate cases, tariff structures in most states call for non-DER customers to pay for (or absorb) lost revenues. As DER penetration increases, this is a cost recovery structure that will lead to political pressure to undo these cross subsidies and may result in utility stranded cost exposure.

I think the apocalyptic “death spiral” rhetoric is overblown and exaggerated, but this is a worthwhile, and perhaps overdue, conversation to have. As it has unfolded over the past year, though, I do think that some of the more essential questions on the topic are not being asked. Over the next few weeks I’m going to explore some of those questions, as I dive into a related new research project.

The theoretical argument for the possibility of a death spiral is straightforward. The vertically-integrated, regulated distribution utility is a regulatory creation, intended to enable a financially sustainable business model for providing reliable basic electricity service to the largest possible number of customers at the least feasible cost, taking account of the economies of scale and scope resulting from the electro-mechanical generation and wires technologies implemented in the early 20th century. From a theoretical/benevolent social planner perspective, the objective is, given a market demand for a specific good/service, to minimize the total cost of providing that good/service subject to a zero economic profit constraint for the firm; this will lead to the highest feasible combination of output and total surplus (and the lowest deadweight loss) consistent with the financial sustainability of the firm.

The regulatory mechanism for implementing this model to achieve this objective is to erect a legal entry barrier into the market for that specific good/service, and to assure the regulated monopolist cost recovery, including its opportunity cost of capital, otherwise known as rate-of-return regulation. In return, the regulated monopolist commits to serve all customers reliably through its vertically-integrated generation, transmission, distribution, and retail functions. The monopolist’s costs and opportunity cost of capital determine its revenue requirement, out of which we can derive flat, averaged retail prices that forecasts suggest will enable the monopolist to earn that amount of revenue.
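The ratemaking arithmetic described above is simple enough to sketch directly: costs plus an allowed return on the rate base yield the revenue requirement, which is divided by forecast sales to produce a flat average price. All figures below are illustrative, not drawn from any actual rate case.

```python
# Sketch of rate-of-return ratemaking: a revenue requirement built from
# operating costs plus an allowed return on the rate base, converted into
# a flat averaged retail price over forecast sales. Figures are illustrative.

operating_costs = 400_000_000.0   # $/year: fuel, O&M, depreciation, taxes
rate_base = 2_000_000_000.0       # $ of undepreciated capital investment
allowed_return = 0.08             # regulator-approved rate of return

revenue_requirement = operating_costs + allowed_return * rate_base
forecast_sales_kwh = 5_000_000_000.0  # forecast annual retail sales

flat_rate = revenue_requirement / forecast_sales_kwh
print(f"revenue requirement: ${revenue_requirement:,.0f}")
print(f"flat retail rate:    ${flat_rate:.4f}/kWh")
```

Note the dependence on the sales forecast in the denominator: if actual sales fall short of the forecast (as under grid defection), realized revenue falls below the requirement, which is the arithmetic core of the death spiral argument.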

That’s the regulatory model + business model that has existed with little substantive evolution since the early 20th century, and it did achieve the social policy objectives of the 20th century — widespread electrification and low, stable prices, which have enabled follow-on economic growth and well-distributed increased living standards. It’s a regulatory+business model, though, that is premised on a few things:

  1. Defining a market by defining the characteristics of the product/service sold in that market, in this case electricity with a particular physical (volts, amps, hertz) definition and a particular reliability level (paraphrasing Fred Kahn …)
  2. The economies of scale (those big central generators and big wires) and economies of scope (lower total cost when producing two or more products compared to producing those products separately) that exist due to large-scale electro-mechanical technologies
  3. The architectural implications of connecting large-scale electro-mechanical technologies together in a network via a set of centralized control nodes — technology -> architecture -> market environment, and in this case large-scale electro-mechanical technologies -> distributed wires network with centralized control points rather than distributed control points throughout the network, including the edge of the network (paraphrasing Larry Lessig …)
  4. The financial implications of having invested so many resources in long-lived physical assets to create that network and its control nodes — if demand is growing at a stable rate, and regulators can assure cost recovery, then the regulated monopolist can arrange financing for investments at attractive interest rates, as long as this arrangement is likely to be stable for the 30-to-40-year life of the assets

As long as those conditions are stable, regulatory cost recovery will sustain this business model. And that’s precisely the effect of smart grid technologies, distributed generation technologies, microgrid technologies — they violate one or more of those four premises, and can make it not just feasible, but actually beneficial for customers to change their behavior in ways that reduce the regulation-supported revenue of the regulated monopolist.

Digital technologies that enable greater consumer control and more choice of products and services break down the regulatory market boundaries that are required to regulate product quality. Generation innovations, from the combined-cycle gas turbine of the 1980s to small-scale Stirling engines, reduce the economies of scale that have driven the regulation of and investment in the industry for over a century. Wires networks with centralized control built to capitalize on those large-scale technologies may have less value in an environment with smaller-scale generation and digital, automated detection, response, and control. But those generation and wires assets are long-lived, and in a cost-recovery-based business model, have to be paid for even if they become the destruction in creative destruction. We saw that happen in the restructuring that occurred in the 1990s, with the liberalization of wholesale power markets and the unbundling of generation from the vertically-integrated monopolists in those states; part of the political bargain in restructuring was to compensate them for the “stranded costs” associated with having made those investments based on a regulatory commitment that they would receive cost recovery on them.

Thus the death spiral rhetoric, and the concern that the existing utility business model will not survive. But if my framing of the situation is accurate, then what we should be examining in more detail is the regulatory model, since the utility business model is itself a regulatory creation. This relationship between digital innovation (encompassing smart grid, distributed resources, and microgrids) and regulation is what I’m exploring. How should the regulatory model and the associated utility business model change in light of digital innovation?

Building, and commercializing, a better nuclear reactor

A couple of years ago, I was transfixed by the research from Leslie Dewan and Mark Massie highlighted in their TEDx video on the future of nuclear power.


A recent IEEE Spectrum article highlights what Dewan and Massie have been up to since then, which is founding a startup called Transatomic Power in partnership with investor Russ Wilcox. The description of the reactor from the article indicates its potential benefits:

The design they came up with is a variant on the molten salt reactors first demonstrated in the 1950s. This type of reactor uses fuel dissolved in a liquid salt at a temperature of around 650 °C instead of the solid fuel rods found in today’s conventional reactors. Improving on the 1950s design, Dewan and Massie’s reactor could run on spent nuclear fuel, thus reducing the industry’s nuclear waste problem. What’s more, Dewan says, their reactor would be “walk-away safe,” a key selling point in a post-Fukushima world. “If you don’t have electric power, or if you don’t have any operators on site, the reactor will just coast to a stop, and the salt will freeze solid in the course of a few hours,” she says.

The article goes on to discuss raising funds for lab experiments and a subsequent demonstration project, and it ends on a skeptical note, with an indication that existing industrial nuclear manufacturers in the US and Europe are unlikely to be interested in commercializing such an advanced reactor technology. Perhaps the best prospects for such a technology are in Asia.

Another thing I found striking in reading this article, and that I find in general when reading about advanced nuclear reactor technology, is how dismissive some people are of such innovation — why not go for thorium, or why even bother with this at all when the “real” answer is to harness the sun’s fusion through solar power? Such criticisms of innovations like this are misguided, and show a misunderstanding of both the economics of innovation and the process of innovation itself. One of the clear benefits of this innovation is its use of a known, proven reactor technology in a novel way, using spent fuel rod waste as fuel. This incremental “killing two birds with one stone” approach may be an economical way to generate clean electricity, reduce waste, and fill a technology gap while more basic science research continues on other generation technologies.

Arguing that nuclear is a waste of time is the equivalent of a “swing for the fences” energy innovation strategy. Transatomic’s reactor represents a “get guys on base” energy innovation strategy. We certainly should do basic research and swing for the fences, but that’s no substitute for the incremental benefits of getting new technologies on base that create value in multiple energy and environmental dimensions.

Permissionless innovation in electricity: the benefits of experimentation

Last Monday I was scheduled to participate in the Utility Industry of the Future Symposium at the NYU Law School. Risk aversion about getting back for Tuesday classes in the face of a forecast 7″ snowfall in New York kept me from attending (and the snow never materialized, which makes the cost even more bitter!), so I missed out on the great talks and panels. But I’ve edited my remarks into the essay below, with helpful comments and critical readings from Mark Silberg and Jim Speta. Happy thinking!

If you look through the lens of an economist, especially an economic historian, the modern world looks marvelous – innovation enables us to live very different lives than even 20 years ago, lives that are richer in experience and value in many ways. We are surrounded by dynamism, by the change arising from creativity, experimentation, and new ideas. The benefits of such dynamism are cumulative and compound upon each other. Economic history teaches us that well-being emerges from the compounding of incremental changes over time, until two decades later you look at your old, say, computer and you wonder that you ever accomplished anything that way at all.

The digital technology that allows us to flourish in unanticipated ways, large and small, is an expression of human creativity in an environment in which experimentation is rife and entry barriers are low. That combination of experimentation and low entry barriers is what has made the Internet such a rich, interesting, useful platform for us to use to make ourselves better off, in the different ways and meanings we each have.

And yet, very little (if any) of this dynamism has originated in the electricity industry, and little of it has affected how most people transact in and engage with electricity. Digital technologies now exist that consumers could use to observe and manage their electricity consumption in a more timely way than after the fact, at the end of the month, and to transact for services they value – different pricing, different fuel sources, and automated consumption responses to changes in those. From the service convergence in telecom (“triple play”) we have experimented with and learned the value of bundling. Bundles of retail electricity service with home entertainment, home security, and the like are services that companies like ADT and Verizon are exploring, but they have been extremely slow to develop and have not yet been commercialized, due to the combination of regulatory entry barriers that restrict producers and reinforce customer inertia. All of these examples of technologies, of pricing, of bundling, are examples of stalled innovation, of foregone innovation in this space.

Although we do not observe it directly, the cost of foregone innovation is high. Today residential consumers still generally have low-cost, plain-vanilla commodity electricity service, with untapped potential to create new value beyond basic service. Producers earn guaranteed, regulation-constrained profits by providing these services, and the persistence of regulated “default service contracts” in nominally competitive states is an entry barrier facing producers that might otherwise experiment with new services, pricing, and bundles. If producers don’t experiment, consumers can’t experiment, and thus both parties suffer the cost of foregone innovation – consumers lose the opportunity to choose services they may value more, and producers lose the opportunity to profit by providing them. By (imperfect) analogy, think about what your life would be like if Apple had not been allowed to set up retail stores that enable consumers to engage in learning while shopping. It would be poorer (and that’s true even if you don’t own any Apple devices, because the experimentation, learning, and low entry barriers benefit you too, by encouraging new products and entry).

This process of producer and consumer experimentation and learning is the essence of how we create value through exchange and market processes. What Internet pioneer Vint Cerf calls permissionless innovation, what writer Matt Ridley calls ideas having sex — these are the processes by which we humans create, strive, learn, adapt, and thrive.

But regulation is a permission-based system, and regulation slows or stifles innovation in electricity by cutting off this permissionless innovation. Legal entry barriers, the bureaucratic procedures for cost recovery, the risk aversion of both regulator and regulated, all undermine precisely the processes that enable innovation to yield consumer benefits and producer profits. In this way regulation that dictates business models and entry barriers discourages activities that benefit society, that are in the public interest.

The question of public interest is of course central to any analysis of electricity regulation’s effects. Our current model of utility regulation has been built on the late 19th century idea that cost-based regulation and restricting entry would make reliable electric service ubiquitous and as cheap as is feasible. Up through the 1960s, while exploiting the economies of scale and scope in the conventional mechanical technologies, that concept of the public interest was generally beneficial. But by so doing, utility regulation entrenched “iron in the ground” technologies in the bureaucratic process. It also entrenched an attitude and a culture of prudential preference for those conventional technologies on the part of both regulator and regulated.

This entrenchment becomes a problem because the substance of what constitutes the public interest is not static. It has changed since the late 19th century, as has so much in our lives, and it has changed to incorporate the dimension of environmental quality as we have learned of the environmental effects of fossil fuel consumption. But the concept of the public interest (central generation and low prices) that is fossilized in regulatory rules does not reflect that change. I argue that the “Rube Goldberg machine” accretion of RPSs, tax credits, and energy efficiency mandates onto regulated utilities reflects just how poorly situated the traditional regulated environment is to adapt to the largely unforeseeable changes arising from the combination of dynamic economic and environmental considerations. Traditional regulation is not flexible enough to be adaptive.

The other entrenchment that we observe with regulation is the entrenchment of interests. Even if regulation were initiated as a mechanism for protecting consumer interests, in the administrative and legal process it creates entrenched interests in maintaining the legal and technological status quo. What we learn from public choice theory, and what we observe in regulated industries including electricity, is that regulation becomes industry-protecting regulation. Industry-protecting regulation cultivates constituency interests, and those constituency interests generally prefer to thwart innovation and retain entry barriers that restrict interconnection and third-party and consumer experimentation. This political economy dynamic contributes to the stifling of innovation.

As I’ve been thinking through this aloud with you, you’ve probably been thinking “but what about reliability and permissionless innovation – doesn’t the physical nature of our interconnected network necessitate permission to innovate?” In the centralized electro-mechanical T&D network that is more true, and in such an environment regulation provides stability of investments and returns. But again we see the cost of foregone innovation staring us in the face. Digital switches, open interconnection and interoperability standards (that haven’t been compromised by the NSA), and more economical small-scale generation are innovations that make high reliability more attainable in a resilient distributed system (for example, a “system of systems” of microgrids, rooftop solar, and EVs). Those are the types of conditions that hold on the Internet – digital switches, traffic rules, TCP/IP and other open data protocols – and as long as innovators abide by those physical rules, they can enter, enabling experimentation, trial and error, and learning.

Thus I conclude that for electricity policy to focus on facilitating what is socially beneficial, it should focus on clear, transparent, and just physical rules for the operation of the grid, on reducing entry barriers that prevent producer and consumer experimentation and learning, and on enabling a legal and technological environment in which consumers can use competition and technology to protect themselves.

Interpreting Google’s purchase of Nest

Were you surprised to hear of Google’s acquisition of Nest? Probably not; nor was I. Google has long been interested in energy monitoring technologies and the effect that access to energy information can have on individual consumption decisions. In 2009 they introduced PowerMeter, an energy monitoring and visualization tool; I wrote about it a few times, including it on my list of devices for creating intelligence at the edge of the electric power network. Google discontinued it in 2011 (and I think Martin LaMonica is right that its demise showed the difficulty of competition and innovation in residential retail electricity), but it pointed the way toward transactive energy and what we have come to know as the Internet of things.

In his usual trenchant manner, Alexis Madrigal at the Atlantic gets at what I think is the real value opportunity that Google sees in Nest: automation and machine-to-machine communication to carry out our desires. He couches it in terms of robotics:

Nest always thought of itself as a robotics company; the robot is just hidden inside this sleek Appleish case.

Look at who the company brought in as its VP of technology: Yoky Matsuoka, a roboticist and artificial intelligence expert from the University of Washington.

In an interview I did with her in 2012, Matsuoka explained why that made sense. She saw Nest positioned right in a place where it could help machine and human intelligence work together: “The intersection of neuroscience and robotics is about how the human brain learns to do things and how machine learning comes in to augment that.”

I agree that the acquisition expands Google’s capabilities in distributed sensing and automation. Thus far Nest’s concept of sensing has been behavioral — when do you use your space and how do you use it — and not transactive. Perhaps that can be a next step.
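Transactive sensing, loosely, means a device that responds autonomously to price signals rather than only to occupancy and habit. To make the distinction concrete, here is a minimal, purely illustrative sketch of what a price-responsive setpoint rule might look like; every name and threshold is an assumption for illustration, not Nest’s actual design or API:

```python
# Hypothetical sketch of a "transactive" thermostat rule: relax the cooling
# setpoint as the real-time electricity price rises above a reference price.
# All parameter names and values are illustrative assumptions.

def transactive_setpoint(base_setpoint_f: float,
                         price_per_kwh: float,
                         comfort_band_f: float = 3.0,
                         reference_price: float = 0.12) -> float:
    """Return a cooling setpoint (deg F) adjusted for the current price.

    comfort_band_f captures the household's willingness to trade comfort
    for savings: the maximum deviation from the preferred temperature.
    """
    if price_per_kwh <= reference_price:
        return base_setpoint_f  # cheap power: hold the preferred temperature
    # Scale the deviation with the price premium, capped at the comfort band.
    premium = (price_per_kwh - reference_price) / reference_price
    deviation = min(comfort_band_f, comfort_band_f * premium)
    return base_setpoint_f + deviation  # higher setpoint -> less cooling load

print(transactive_setpoint(72.0, 0.10))  # off-peak price: 72.0
print(transactive_setpoint(72.0, 0.36))  # peak price, capped: 75.0
```

The point of the sketch is the contrast: a behavioral thermostat learns when you are home; a transactive one also consumes a price signal and makes a small economic decision on your behalf.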

The Economist also writes this week about the acquisition, comparing Google’s acquisitions and evolution to GE’s in the 20th century. The article touches on the three most important aspects of the deal: the robotics that Alexis analyzed; the data generated by Nest’s devices and now accessible to Google for advertising purposes; and the design talent at Nest, which can contribute to the growing set of Internet-of-things technologies that make the connected home increasingly feasible and attractive to consumers (and that some of us have been waiting, and waiting, and waiting to see develop):

Packed with sensors and software that can, say, detect that the house is empty and turn down the heating, Nest’s connected thermostats generate plenty of data, which the firm captures. Tony Fadell, Nest’s boss, has often talked about how Nest is well-positioned to profit from “the internet of things”—a world in which all kinds of devices use a combination of software, sensors and wireless connectivity to talk to their owners and one another.

Other big technology firms are also joining the battle to dominate the connected home. This month Samsung announced a new smart-home computing platform that will let people control washing machines, televisions and other devices it makes from a single app. Microsoft, Apple and Amazon were also tipped to take a lead there, but Google was until now seen as something of a laggard. “I don’t think Google realised how fast the internet of things would develop,” says Tim Bajarin of Creative Strategies, a consultancy.

Buying Nest will allow it to leapfrog much of the opposition. It also brings Google some stellar talent. Mr Fadell, who led the team that created the iPod while at Apple, has a knack for breathing new life into stale products. His skills and those of fellow Apple alumni at Nest could be helpful in other Google hardware businesses, such as Motorola Mobility.

Are we finally about to enter a period of energy consumption automation and transactive energy? This acquisition is a step in that direction.

Economist debate on solar power

The Economist often runs debates on their website, and their current one will be of interest to the KP community: Can solar energy save the world?

The debate is structured in a traditional manner, with a moderator, a proposer, and a responder. Guest posts accompany the debate, and readers are invited to comment on each stage. The two debaters are Richard Swanson, founder of SunPower, and Benny Peiser of the Global Warming Policy Foundation. Geoff Carr, the Economist’s science editor, is moderating.

One common theme among the debaters, the moderator, and the commenters is the distortion introduced by decades of politicized energy policy, which means (among other things) that the complicated web of subsidies across all fuel sources is hard to disentangle. Given Mike’s thorough and valuable discussion of his recent analysis of wind power cost estimates, this solar debate is a good complement to that discussion.