Permissionless innovation in electricity: the benefits of experimentation

Last Monday I was scheduled to participate in the Utility Industry of the Future Symposium at the NYU Law School. Risk aversion about getting back for Tuesday classes in the face of a forecast 7″ snowfall in New York kept me from attending (and the snow never materialized, which makes the cost even more bitter!), so I missed out on the great talks and panels. But I’ve edited my remarks into the essay below, with helpful comments and critical readings from Mark Silberg and Jim Speta. Happy thinking!

If you look through the lens of an economist, especially an economic historian, the modern world looks marvelous – innovation enables us to live very different lives than even 20 years ago, lives that are richer in experience and value in many ways. We are surrounded by dynamism, by the change arising from creativity, experimentation, and new ideas. The benefits of such dynamism are cumulative and compound upon each other. Economic history teaches us that well-being emerges from the compounding of incremental changes over time, until two decades later you look at your old, say, computer and wonder how you ever accomplished anything that way at all.

The digital technology that allows us to flourish in unanticipated ways, large and small, is an expression of human creativity in an environment in which experimentation is rife and entry barriers are low. That combination of experimentation and low entry barriers is what has made the Internet such a rich, interesting, useful platform for us to use to make ourselves better off, in the different ways and meanings we each have.

And yet, very little (if any) of this dynamism has originated in the electricity industry, and little of this dynamism has affected how most people transact in and engage with electricity. Digital technologies now exist that consumers could use to observe and manage their electricity consumption in a more timely way than after the fact, at the end of the month, and to transact for services they value – different pricing, different fuel sources, and automated consumption responses to changes in those. From the service convergence in telecom (“triple play”), we have experimented with and learned the value of bundling. Companies like ADT and Verizon are exploring bundles of retail electricity service with home entertainment, home security, and the like, but those services have been extremely slow to develop and have not been commercialized yet, due to regulatory entry barriers that restrict producers and reinforce customer inertia. All of these examples, of technologies, of pricing, of bundling, are examples of stalled innovation, of foregone innovation in this space.

Although we do not observe it directly, the cost of foregone innovation is high. Today residential consumers still generally have low-cost, plain-vanilla commodity electricity service, with untapped potential to create new value beyond basic service. Producers earn guaranteed, regulation-constrained profits by providing these services, and the persistence of regulated “default service contracts” in nominally competitive states is an entry barrier facing producers that might otherwise experiment with new services, pricing, and bundles. If producers don’t experiment, consumers can’t experiment, and thus both parties suffer the cost of foregone innovation – consumers lose the opportunity to choose services they may value more, and producers lose the opportunity to profit by providing them. By (imperfect) analogy, think about what your life would be like if Apple had not been allowed to set up retail stores that enable consumers to engage in learning while shopping. It would be poorer, and that’s true even if you don’t own any Apple devices, because the experimentation, learning, and low entry barriers benefit you too by encouraging new products and entry.

This process of producer and consumer experimentation and learning is the essence of how we create value through exchange and market processes. What Internet pioneer Vint Cerf calls permissionless innovation, what writer Matt Ridley calls ideas having sex — these are the processes by which we humans create, strive, learn, adapt, and thrive.

But regulation is a permission-based system, and regulation slows or stifles innovation in electricity by cutting off this permissionless innovation. Legal entry barriers, the bureaucratic procedures for cost recovery, the risk aversion of both regulator and regulated, all undermine precisely the processes that enable innovation to yield consumer benefits and producer profits. In this way regulation that dictates business models and entry barriers discourages activities that benefit society, that are in the public interest.

The question of public interest is of course central to any analysis of electricity regulation’s effects. Our current model of utility regulation has been built on the late 19th century idea that cost-based regulation and restricting entry would make reliable electric service ubiquitous and as cheap as is feasible. Up through the 1960s, while exploiting the economies of scale and scope in the conventional mechanical technologies, that concept of the public interest was generally beneficial. But by so doing, utility regulation entrenched “iron in the ground” technologies in the bureaucratic process. It also entrenched an attitude and a culture of prudential preference for those conventional technologies on the part of both regulator and regulated.

This entrenchment becomes a problem because the substance of what constitutes the public interest is not static. It has changed since the late 19th century, as has so much in our lives, and it has changed to incorporate the dimension of environmental quality as we have learned of the environmental effects of fossil fuel consumption. But the concept of the public interest of central generation and low prices that is fossilized in regulatory rules does not reflect that change. I argue that the “Rube Goldberg” machine accretion of RPS, tax credits, and energy efficiency mandates to regulated utilities reflects just how poorly situated the traditional regulated environment is to adapting to the largely unforeseeable changes arising from the combination of dynamic economic and environmental considerations. Traditional regulation is not flexible enough to be adaptive.

The other entrenchment that we observe with regulation is the entrenchment of interests. Even if regulation was initiated as a mechanism for protecting consumer interests, in the administrative and legal process it creates entrenched interests in maintaining the legal and technological status quo. What we learn from public choice theory, and what we observe in regulated industries including electricity, is that regulation becomes industry-protecting regulation. Industry-protecting regulation cultivates constituency interests, and those constituency interests generally prefer to thwart innovation and retain entry barriers to restrict interconnection and third-party and consumer experimentation. This political economy dynamic contributes to the stifling of innovation.

As I’ve been thinking through this aloud with you, you’ve probably been thinking “but what about reliability and permissionless innovation – doesn’t the physical nature of our interconnected network necessitate permission to innovate?” In the centralized electro-mechanical T&D network that is more true, and in such an environment regulation provides stability of investments and returns. But again we see the cost of foregone innovation staring us in the face. Digital switches, open interconnection and interoperability standards (that haven’t been compromised by the NSA), and more economical small-scale generation are innovations that make high reliability in a resilient distributed system attainable (for example, a “system of systems” of microgrids and rooftop solar and EVs). Those are the types of conditions that hold in the Internet – digital switches, traffic rules, TCP-IP and other open data protocols — and as long as innovators abide by those physical rules, they can enter, enabling experimentation, trial and error, and learning.

Thus I conclude that for electricity policy to focus on facilitating what is socially beneficial, it should focus on clear, transparent, and just physical rules for the operation of the grid, on reducing entry barriers that prevent producer and consumer experimentation and learning, and on enabling a legal and technological environment in which consumers can use competition and technology to protect themselves.

Interpreting Google’s purchase of Nest

Were you surprised to hear of Google’s acquisition of Nest? Probably not; nor was I. Google has long been interested in energy monitoring technologies and the effect that access to energy information can have on individual consumption decisions. In 2009 they introduced PowerMeter, an energy monitoring and visualization tool; I wrote about it a few times, including it on my list of devices for creating intelligence at the edge of the electric power network. Google discontinued it in 2011 (and I think Martin LaMonica is right that its demise showed the difficulty of competition and innovation in residential retail electricity), but it pointed the way toward transactive energy and what we have come to know as the Internet of things.

In his usual trenchant manner, Alexis Madrigal at the Atlantic gets at what I think is the real value opportunity that Google sees in Nest: automation and machine-to-machine communication to carry out our desires. He couches it in terms of robotics:

Nest always thought of itself as a robotics company; the robot is just hidden inside this sleek Appleish case.

Look at who the company brought in as its VP of technology: Yoky Matsuoka, a roboticist and artificial intelligence expert from the University of Washington.

In an interview I did with her in 2012, Matsuoka explained why that made sense. She saw Nest positioned right in a place where it could help machine and human intelligence work together: “The intersection of neuroscience and robotics is about how the human brain learns to do things and how machine learning comes in to augment that.”

I agree that it is an acquisition to expand their capabilities to do distributed sensing and automation. Thus far Nest’s concept of sensing has been behavioral — when do you use your space and how do you use it — and not transactive. Perhaps that can be a next step.

The Economist also writes this week about the acquisition, and compares Google’s acquisitions and evolution to GE’s in the 20th century. The Economist article touches on the three most important aspects of this acquisition: the robotics that Alexis analyzed, the data generated and accessible to Google for advertising purposes, and the design talent at Nest, which can contribute to the growing interest in the Internet-of-things technologies that make the connected home increasingly feasible and attractive to consumers (and that some of us have been waiting, and waiting, and waiting to see develop):

Packed with sensors and software that can, say, detect that the house is empty and turn down the heating, Nest’s connected thermostats generate plenty of data, which the firm captures. Tony Fadell, Nest’s boss, has often talked about how Nest is well-positioned to profit from “the internet of things”—a world in which all kinds of devices use a combination of software, sensors and wireless connectivity to talk to their owners and one another.

Other big technology firms are also joining the battle to dominate the connected home. This month Samsung announced a new smart-home computing platform that will let people control washing machines, televisions and other devices it makes from a single app. Microsoft, Apple and Amazon were also tipped to take a lead there, but Google was until now seen as something of a laggard. “I don’t think Google realised how fast the internet of things would develop,” says Tim Bajarin of Creative Strategies, a consultancy.

Buying Nest will allow it to leapfrog much of the opposition. It also brings Google some stellar talent. Mr Fadell, who led the team that created the iPod while at Apple, has a knack for breathing new life into stale products. His skills and those of fellow Apple alumni at Nest could be helpful in other Google hardware businesses, such as Motorola Mobility.

Are we finally about to enter a period of energy consumption automation and transactive energy? This acquisition is a step in that direction.

Adapting to technological change: solar power and fire

Here’s an important tradeoff I never really considered until reading this article: rooftop solar panels can be hazardous for firefighters. It’s an interesting example of how wide and varied the adaptations are to innovation. In this case the potential lethal electrocution from the traditional means of venting a roof on a burning building (creating holes in the roof with an axe) has meant that both firefighters and the solar industry have had to think about fire risk and how solar installations change firefighting and the expected cost of a fire. I wonder how many benefit-cost analyses of solar take into account the higher expected cost of a fire, and the logical associated higher fire insurance premium.
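The expected-cost point above can be made concrete with a back-of-the-envelope calculation. This is only an illustrative sketch: the fire probability, damage figures, and premium loading below are hypothetical numbers I've made up for the example, not actuarial data.

```python
# Back-of-the-envelope sketch of the expected fire cost a benefit-cost
# analysis of rooftop solar should include. All numbers are hypothetical.

def expected_annual_fire_cost(p_fire, damage, premium_loading=1.0):
    """Probability-weighted annual fire cost, scaled by an insurance
    premium loading that prices the added risk."""
    return p_fire * damage * premium_loading

# Without solar: some small annual fire probability and damage level.
base = expected_annual_fire_cost(p_fire=0.003, damage=150_000)

# With solar: same fire probability, but delayed roof venting raises the
# expected damage, and insurers load the premium for electrocution risk.
with_solar = expected_annual_fire_cost(p_fire=0.003, damage=180_000,
                                       premium_loading=1.1)

# The difference is the extra expected annual cost that should enter the
# benefit-cost analysis alongside the solar panels' energy savings.
print(round(with_solar - base, 2))  # 144.0
```

Even if the per-year figure is small, it compounds over a 20-plus-year panel life, which is why omitting it biases the analysis toward installation.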

Joel Mokyr on growth, stagnation, and technological progress

My friend and colleague Joel Mokyr talked recently with Russ Roberts in an EconTalk podcast that I cannot recommend highly enough (and the links on the show notes are great too). The general topic is this back-and-forth that’s been going on over the past year involving Joel, Bob Gordon, Tyler Cowen, and Erik Brynjolfsson, among others, regarding diminishing returns to technological change and whether we’ve reached “the end of innovation”. Joel summarizes his argument in this Vox EU essay.

Joel is an optimist, and does not believe that technological dynamism is running out of steam (to make a 19th-century joke …). He argues that technological change and its ensuing economic growth are punctuated, and one reason for that is that conceptual breakthroughs are essential but unforeseeable. Economic growth also occurs because of the perpetual nature of innovation — the fact that others are innovating (here he uses country-level examples) means that everyone has to innovate as a form of running to stand still. I agree, and I think as long as the human mind, human creativity, and human striving to achieve and accomplish exist, there will be technological dynamism. A separate question is whether the institutional framework in which we interact in society is conducive to technological dynamism and to channeling our creativity and striving into such constructive application.

“If your toilet’s so smart, how come I can hack it?”

Thus reads the headline on David Meyer’s Gigaom post on news that the Satis toilet, manufactured by the Japanese firm Lixil, comes with a smartphone app that can be used to control any Satis toilet (see also this BBC news article). You may wonder why a toilet needs an app, which is a valid question; this one allows recording of one’s activity (if you so choose …), remote flushing, remote air freshener spray, and remote bidet operation. Subjective utility being what it is, I’ll consider Lixil as entrepreneurs responding to what they perceive as some undersatisfied preference in the market; the extent of their subsequent profits will indicate whether they were right …

Although the story is scatologically humorous, Meyer’s closing observation hits upon exactly the same point I made recently in my post about the hackability of home management systems:

Of course, it’s not like someone will be exploiting this vulnerability to prank someone a continent away — Bluetooth is a pretty short-range wireless technology. However, it’s the kind of thing that should be borne in mind by manufacturers who are starting to jazz up previously low-tech appliances with new-fangled connectivity.

Because when it comes to security, as Trustwave SpiderLabs and others have warned, the home is the last place you want to be caught with your pants down.

NSA surveillance imperils the Internet as an economic platform

Today’s new revelations from Edward Snowden’s whistleblowing show that the NSA can, and does, use a program that surveils our Internet behavior in a general, blanket way (much in the nature of the “general warrants” that were the whole reason the authors of the Bill of Rights put the Fourth Amendment in there in the first place!).

Make no mistake: this deep and broad US government surveillance diminishes trust not just in the federal government (as if there is any general trust in the federal government any more), but also in Internet companies — communications companies, ISPs, Apple, Google, Yahoo, Amazon, and so on. The economic implications of the deep and broad US government surveillance are profound. How much economic activity on the Internet will leave those companies? Will government surveillance be able to access substitutes for these companies in other countries, if substitutes come into being? Isn’t this going to cause the commercial Internet to shrink?

The federal government may not have intended to stifle the role of the Internet as an economic value-creating commercial platform, but that consequence is almost certain.

UPDATE, 1 August, 3:19 CDT: Welcome readers from reddit, and thanks for the link! Since some commenters wanted more original analysis of this issue than this post offered, I’ve written a follow-up post that provides a deeper evaluation of the potential effects on the Internet as a commercial platform.

Joel Mokyr: Technopessimism is bunk

My department is currently a focal point in the debates over the future of innovation and economic growth. Technopessimist arguments from my colleague Bob Gordon (as profiled in this New York Magazine article from the weekend) join those in Tyler Cowen’s The Great Stagnation to suggest that the increase in living standards and the growth rates experienced over the past 200 years may be anomalous and not repeatable.

In the PBS Newshour Business Desk, my colleague (and former dissertation adviser) Joel Mokyr offers a different, more optimistic perspective. Joel emphasizes the dynamic aspects of new idea generation and the ensuing technological change and its effects on people and societies. Technology is never static, humans and our curiosity and our efforts to strive are never static, and that means that there’s not likely to be an “end of innovation” along the lines of an “end of history”:

Technology has not finished its work; it has barely started. Some lessons from history may show why. For one thing, technological progress has an unusual dynamic: it solves problems, but in doing so it, more often than not, creates new ones as unintended side-effects of the previous breakthroughs, and these in turn have to be solved, and so on. …

As we know more, we can push back against the pushback. And so on. The history of technology is to a large extent the history of unintended consequences. …

What will a future generation think of our technological efforts? During the Middle Ages, nobody knew they were living in the Middle Ages (the term emerged centuries later), and they would have resented a notion that it was an age of unbridled barbarism (it was not). During the early stages of the Industrial Revolution in the 18th century, few had a notion that a new technological dawn was breaking. So it is hard for someone alive today to imagine what future generations will make of our age. But to judge from progress in the past decades, it seems that the Digital Age may become to the Analog Age what the Iron Age was to the Stone Age. It will not last as long, and there is no way of knowing what will come after. But experience suggests that the metaphor of low-hanging fruit is misleading. Technology creates taller and taller ladders, and the higher-hanging fruits are within reach and may be just as juicy.

None of this is guaranteed. Lots of things can go wrong. Human history is always the result of a combination of deep impersonal forces, accidents and contingencies. Unintended consequences, stupidity, fear and selfishness often get in the way of making life better for more and more people. Technology alone cannot provide material progress; it’s just that without it, all the other ways of economic progress soon tend to fizzle out. Technological progress is perhaps not the cure-all for all human ills, but it beats the alternative.

Joel’s essay is well worth reading in its entirety. His argument highlights the decentralized, curiosity-driven process of technological change that does not proceed linearly, but is impossible to quash. These processes contribute to economic well-being in societies with good institutional and cultural contexts that facilitate and reward innovation when it generates value for others.

Disruptive innovation and the regulated utility

Over the weekend the New York Times ran a good story about how rooftop solar and regulatory rules allowing net metering are putting pressure on the regulated distribution utility business model:

The struggle over the California incentives is only the most recent and visible dust-up as many utilities cling to their established business, and its centralized distribution of energy, until they can figure out a new way to make money. …

“Net metering right now is the only way for customers to get value for their rooftop solar systems,” said Adam Browning, executive director of the advocacy group Vote Solar.

Mr. Browning and other proponents say that solar customers deserve fair payment not only for the electricity they transmit but for the value that smaller, more dispersed power generators give to utilities. Making more power closer to where it is used, advocates say, can reduce stress on the grid and make it more reliable, as well as save utilities from having to build and maintain more infrastructure and large, centralized generators.

But utility executives say that when solar customers no longer pay for electricity, they also stop paying for the grid, shifting those costs to other customers. Utilities generally make their profits by making investments in infrastructure and designing customer rates to earn that money back with a guaranteed return, set on average at about 10 percent.

In a nutshell, what’s happening is that environmental and global warming policy initiatives are resulting in government subsidies and tax credits for consumer investments in rooftop solar, especially in states like California. As more consumers install rooftop solar, they make less use of the electricity distribution network to receive electricity, and they can put the excess power generated by their solar panels onto the distribution grid (called net metering). Under net metering they receive a per-kilowatt-hour payment that ranges between the averaged, regulated retail rate and the wholesale price of electricity at that time, depending on the net metering rules in operation in that state. From the regulated utility’s perspective, this move creates a double whammy. It reduces the amount of electricity sold and distributed over the wires network, which reduces both revenue and the utility’s ability to charge the customer for use of the wires. And because most of the network’s costs are fixed and the utility is guaranteed a particular rate of return on those assets, the revenue shortfall means higher rates for the other customers who have not installed solar.

Offsetting some of that revenue decrease/fixed cost dilemma is the fact that net metering means that the utility is purchasing power from rooftop solar owners at a price lower than the spot price they would have to pay to purchase power in the wholesale market in that hour (i.e., wholesale price as avoided cost) … except what happens when they have already entered long-term contracts for power and have to pay anyway? And in California, the net metering payment to the customer is the fully-loaded retail rate, not just the energy portion of the rate, so even though the customer is essentially still using the network (to sell excess power to other users via the regulated utility instead of buying it), the utility is not receiving the wires charge portion of the per-kilowatt-hour regulated rate.
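The arithmetic behind the California problem is worth making explicit. Here is a minimal sketch, with entirely hypothetical rates, of why crediting exports at the fully loaded retail rate rather than at wholesale (avoided cost) squeezes the wires business: the retail rate bundles an energy charge and a wires charge, but the exporting customer gets credited for both even though only energy is being supplied.

```python
# Illustrative net metering arithmetic. All rates are hypothetical.

RETAIL_RATE = 0.15      # $/kWh, fully loaded regulated retail rate
WIRES_PORTION = 0.05    # $/kWh portion of that rate recovering fixed network costs
WHOLESALE_PRICE = 0.06  # $/kWh spot price at the time of export

def utility_margin_on_export(kwh, credit_rate):
    """Utility's gain or loss on exported kWh, relative to buying the
    same energy in the wholesale market (avoided cost)."""
    return (WHOLESALE_PRICE - credit_rate) * kwh

# Credit at wholesale: the utility roughly breaks even on the energy.
print(round(utility_margin_on_export(100, WHOLESALE_PRICE), 2))  # 0.0

# Credit at the full retail rate (California-style): the utility pays a
# premium of (retail - wholesale) on every exported kWh, including the
# wires portion it never collects from the exporting customer.
print(round(utility_margin_on_export(100, RETAIL_RATE), 2))  # -9.0
```

That per-kWh loss, summed across all net-metered customers, is the fixed-cost recovery gap that gets shifted onto everyone else's rates.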

Sounds like a mess, right? It sure is. And, as Katie Fehrenbacher pointed out yesterday on Gigaom, the disruption of the regulated electric utility in the same way that Kodak, Blockbuster, and Borders have been disrupted out of existence is not a new idea. In fact, I made the same argument here at KP back in 2003, building on a paper I co-authored for the International Association of Energy Economics meetings in 2002 (and here are other KP posts that both Mike and I have made on net metering). I summarized that paper in this Reason Foundation column, in which I argued

Many technological and market innovations have reduced the natural monopoly rationale for traditional electric industry regulation. For example, consider distributed generation. Distributed generation (DG) is the use of an energy source (gas turbines, gas engines, fuel cells, for example) to generate electricity close to where it will be used. Technological change in the past decade and deregulation in the natural gas industry have made DG an economically viable alternative to buying electricity from a monopoly utility and receiving it over the utility’s transmission and distribution grid. The potential for this competition to discipline a transmission owner’s prices for transmission services is immense, but it still faces some obstacles. …

Technological change and market dynamics have made the natural monopoly model of electricity regulation obsolete. While technological changes and market innovations that shape the electricity industry’s evolution have received some attention, their roles in making natural monopoly regulation of transmission and distribution obsolete have not received systematic treatment. For that reason, the policy debate has focused on creating regional transmission organizations to rationalize grid construction, but has not dug more deeply into the possible benefits of dramatically rethinking the foundations of natural monopoly regulation.

I may have been a bit ahead of my time in making this argument, but the improvements in energy efficiency and production costs for solar technology and the shale gas revolution have made this point even more important.

Think a bit about how the regulated utilities and regulators have come to this point. They have come to this point by trying to retain much of the physical and legal structure of traditional regulation, and by trying to fold innovation into that structure. The top-down system-level imposition of requirements for the regulated utility to purchase excess solar-generated electricity and to pay a specific, fixed price for it. The attempts of regulated utilities to block such efforts, and to charge high “standby charges” to customers who install distributed generation but want to retain their grid interconnection as an insurance policy. The fact that regulation ensures cost recovery for the wires company and how that implies that a reduction in number of customers means a price increase to those customers staying on the wires network. And adding on top of that the subsidies and tax credits to induce residential customers to purchase and install rooftop solar. I don’t think we could design a worse process and set of institutions if we tried.

You may respond that there’s no real alternative, and I’d say you’re wrong. You can see the hint in my remarks above from 2003 — if these states had robust retail competition, then retailers could offer a variety of different contracts, products, and services associated with distributed generation. Wires companies could essentially charge standard per-unit transportation rates (assuming they would still be regulated). In that market design, much of the pressure on the business model of the wires company from distributed generation gets diluted. The wires company would still have to be forward-looking and think (with the regulators) about what increased penetration of distributed generation would mean for the required distribution capacity of the wires network and how to invest in it and recover the costs. But the wires company would be just that, a wires company, and not the party with the retail relationship with the residential customer, so all of these distortions arising from net metering would diminish. If I were a wires company I would certainly use digital meters and monitors to measure the amount of current flow and the direction of current flow, and I would charge a per-kilowatt-hour wires transportation charge regardless of direction of flow, whether the residential customer is consuming or producing. Digital technology makes that granular observation possible, which makes that revenue model possible.
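The direction-agnostic wires charge described above is simple enough to sketch in a few lines. This is a hypothetical illustration of the revenue model, not anyone's actual tariff; the rate and the interval data are made up.

```python
# Sketch of a direction-agnostic wires transportation charge: the wires
# company bills on the magnitude of metered flow, whether the customer is
# importing or exporting. Rate and data are hypothetical.

WIRES_CHARGE = 0.04  # $/kWh transportation rate

def wires_bill(interval_flows_kwh):
    """Bill computed from digitally metered interval flows.

    Positive values are consumption (import from the grid); negative
    values are excess rooftop solar pushed onto the grid (export).
    Both directions use the wires, so both are billed."""
    return sum(abs(kwh) for kwh in interval_flows_kwh) * WIRES_CHARGE

# A day with morning/evening imports and midday solar exports (hourly kWh):
flows = [1.2, 1.0, 0.8, -0.5, -1.1, -0.9, 0.6, 1.4]
print(round(wires_bill(flows), 2))  # 0.3
```

Because the charge depends only on use of the network, the wires company's revenue no longer erodes as customers shift from buying to selling, which is exactly why the net metering distortions diminish under this design.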

That’s why states like California have created such an entangled mess for themselves by retaining the traditional regulated utility structure for integrated distribution and retail and trying to both absorb and incentivize disruptive distributed generation innovation in that traditional structure. Not surprisingly, Texas with its more deregulated and dis-integrated structure has escaped this mess — the only regulated entity is the wires (transmission and distribution) company, and retailers are free to offer residential customers compensation for any excess generation from distributed renewable generation sources, at a price mutually agreed upon between the retailer and the customer in their contract. In fact, Green Mountain Energy offers such a contract to residential customers in Texas. See how much easier that is than what is happening in California?

Honey, someone hacked our smart home

Ever since the first “vision” meeting I attended at the Department of Energy in 2003 about the technologically advanced electric power grid of the future, digital network security in a smart grid has been a paramount concern. Much of the concern emphasizes hardening the electrical and communication networks against nefarious attempts to access control rooms or substations. Less attention goes to the security of the home automation system itself.

Here’s why privacy and security issues matter so much in customer-facing smart grid products and services: how likely is it that someone can hack into your home energy management system? The resourceful technology and privacy journalist Kashmir Hill gained access to eight homes, merely by doing an Internet search to see if any homes had their devices set to be discoverable by a search engine:

Googling a very simple phrase led me to a list of “smart homes” that had done something rather stupid. The homes all have an automation system from Insteon that allows remote control of their lights, hot tubs, fans, televisions, water pumps, garage doors, cameras, and other devices, so that their owners can turn these things on and off with a smartphone app or via the Web. The dumb thing? Their systems had been made crawl-able by search engines – meaning they show up in search results — and due to Insteon not requiring user names and passwords by default in a now-discontinued product, I was able to click on the links, giving me the ability to turn these people’s homes into haunted houses, energy-consumption nightmares, or even robbery targets. Opening a garage door could make a house ripe for actual physical intrusion.

In this instance, early adopters of a now-discontinued home automation system had not changed their default settings to implement security protocols. They had not followed the simple security protocols that we have become habituated to in our home wireless networks, which most of us now know to secure with at least a password. This security hurdle doesn’t seem very high, and it shouldn’t be; securing a home automation system separately with a username/password login is not difficult, and it can be made less difficult for the technologically challenged through helpful customer service.

She goes on in the story to relate her interactions with some of the people whose houses she was able to access, as well as her discussion with people at Insteon:

Insteon chief information officer Mike Nunes says the systems that I’m seeing online are from a product discontinued in the last year. He blamed user error for the appearance in search results, saying the older product was not originally intended for remote access, and to set this up required some savvy on the users’ part. The devices had come with an instruction manual telling users how to put the devices online which strongly advised them to add a username and password to the system. (But, really, who reads instruction manuals closely?)

“This would require the user to have chosen to publish a link (IP address) to the Internet AND for them to have not set a username and password,” says Nunes. I told Nunes that requiring a username/password by default is good security-by-design to protect people from making a mistake like this. “It did not require it by default, but it supported it and encouraged it,” he replied.

One of the interesting aspects of her story (and you get a much deeper sense of it reading the whole article) is the extent to which these early adopters/automation hobbyists identified some but not all of the potential security holes in the home automation system. These are eager, knowledgeable consumers, and even they did not realize that some ports on the router were left open and thus made the system discoverable externally.
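The open-ports problem those hobbyists missed is easy to check for. Below is a hedged sketch, again in standard-library Python, of the kind of connection test that would have revealed the exposure: it simply asks whether anything on a given host answers on a given TCP port. The address and port list are illustrative; a reader would substitute their own router's address.

```python
import socket

def port_is_open(host, port, timeout=1.0):
    """Attempt a TCP connection; True means something is listening."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Checking one's own machine (or, in practice, one's router) for
# commonly exposed web/management ports:
for port in (80, 443, 8080):
    status = "OPEN" if port_is_open("127.0.0.1", port) else "closed"
    print(f"port {port}: {status}")
```

An open port is not itself a breach, but any port that answers from outside the home network is a door that needs the kind of authentication discussed above in front of it.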

I think she’s right that for such technologies in such sensitive applications as home automation, default username/password authentication is good design. This is an application in which I think the behavioral economics arguments about setting defaults to overcome inertia bias are valid.

Insteon has since changed its default settings to require username/password authentication on the automation system, separate from the home wireless network’s authentication, and the rest of the article describes other companies working to close security holes in their home automation systems.

As we extend the smart grid into our homes and the “Internet of things” becomes more deeply embedded in our lives, it becomes more important to secure our privacy and to reduce the risk of unauthorized access to our homes and the devices and appliances in them. The security practices we apply to our financial transactions should guide our privacy and security decisions on our home networks too. That way we can enjoy the benefits of home automation and transactive energy that Hill lays out in her article while minimizing the risk of unauthorized access to our homes and our information.

Esther Dyson on the future of 3D printing

3D printing is incredible. Take, for example, recent Northwestern mechanical engineering graduate and softball player Lauren Tyndall, who designed and printed her own more ergonomic and comfortable cast for her broken pinkie finger. Or consider the cost and energy use benefits of 3D printing of metal airplane parts in titanium, rather than machining them out of aluminum (a topic that my mechanical engineering colleague Eric Masanet is researching). Its potential as a core general-purpose technology is profound.

In a Project Syndicate essay, Esther Dyson puts some meat on the futuristic bones that I enthused about above:

The Internet changed the balance of power between individuals and institutions. It enabled millions of people to have jobs without having bosses. Instead, they have agents – such as TaskRabbit or Amazon Web Services or Uber – who match providers and customers.

I think we will see a similar story with 3D printing, as it grows from a novelty into something useful and disruptive – and sufficiently cheap and widespread to be used for (relatively) frivolous endeavors as well. We will print not just children’s playthings, but also human prostheses – bones and even lungs and livers – and ultimately much machinery, including new 3D printers.

Dyson lays out some areas where she sees these disruptive changes occurring, and some of the economic and environmental impacts of, say, the reduction in the demand for freight transportation and the increased ability to recycle and reuse physical resources locally. Her conclusion is optimistic on both economic and environmental counts:

In the short run, this means greater efficiency and more and speedier recycling, happening locally rather than centrally. In the long run, 3D printing will allow more efficient use of physical resources and faster diffusion of the best designs, boosting living standards around the world.