The LIFX lightbulb: Bringing the Internet of Things to electricity

Lynne Kiesling

The LIFX lightbulb is one of the most exciting things I’ve seen in a while, even in a period of substantial innovation affecting many areas of our lives. It’s a Kickstarter project, not coming from an established company like GE or Philips, not coming from within the electricity industry. Go watch the intro video, and then come back … You back? So how cool is that? WiFi-enabled for automation and remote control from your smartphone. Automation of electricity consumption at the bulb level. You can set your nightstand bulb to dim and brighten according to your sleep cycle. It’s an LED bulb, so it can change colors, any combination in the Pantone scale, from your phone, anywhere. And, as an LED bulb, you get all of these automation and aesthetic features in a low-energy, low-carbon package.
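The sleep-cycle example can be sketched in a few lines of code. This is a hypothetical illustration, not the actual LIFX API: the `LifxBulb` class, the `brightness_for` schedule, and all of the times are invented for the sake of showing what bulb-level automation looks like.

```python
# Hypothetical sketch of bulb-level automation of the kind LIFX promises.
# The LifxBulb class and the schedule are invented for illustration.
from datetime import time

class LifxBulb:
    """Stand-in for a WiFi-connected LED bulb."""
    def __init__(self, name):
        self.name = name
        self.brightness = 0.0  # 0.0 (off) to 1.0 (full)

    def set_brightness(self, level):
        # Clamp to the valid range before applying.
        self.brightness = max(0.0, min(1.0, level))

def brightness_for(now, wake=time(6, 30), sleep=time(22, 0)):
    """Dim toward bedtime and brighten toward waking, per a fixed schedule."""
    if now >= sleep or now < wake:
        return 0.0           # night: off
    if now < time(8, 0):
        return 0.5           # gentle morning light
    if now >= time(21, 0):
        return 0.2           # wind-down dimming before bed
    return 1.0               # daytime: full brightness

nightstand = LifxBulb("nightstand")
nightstand.set_brightness(brightness_for(time(21, 30)))
print(nightstand.brightness)  # 0.2
```

The point is that once the bulb is addressable over the network, this kind of rule can run anywhere, on the phone, on the bulb, or on a home server.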

This discussion of their project provides insight into the entrepreneurial future of consumer-facing energy technology — it’s not about the hardware, it’s about the software:

The LIFX app is one of our favorite aspects of the entire project, and we’ve spent countless hours thinking about how you can interact with your lights. We have mapped out a very smooth configuration UX from the app to the LIFX master bulb. In essence you place your LIFX smartbulb into a light socket, turn the switch on and then launch the app. You will be guided through a process of choosing your home network from a list and then entering your password. The LIFX master bulb will then auto configure itself to your router and all the slave bulbs will auto connect to the master. If you add more slave bulbs down the track, these will also auto connect.

Regarding security: LIFX will be as secure as your WiFi network; e.g., without the WiFi network password you can’t control the smartbulbs.

We’re aware that while the hardware is the most visible and interesting part of this project our software is the soul.

This. This is the right approach, from both economic and environmental perspectives. And while I think Kevin Tofel at Greentech is right that there’s a network architecture issue here (separate control systems vs. a single server capturing and implementing your automation decisions throughout the house), a system like LIFX’s seems to me flexible enough to be incorporated into a whole-house energy management setup. And, given how enthusiastically consumers have adopted wireless mobile technologies, that seems a good place to start getting consumers comfortable with this degree of automation and functionality. Transactive capabilities and dynamic pricing are next! Unless our electricity network is transactive it’s not smart, and intelligent end-use devices (and the connectivity to network them for automation) create value for consumers from that intelligence.

Note also the implications of software like LIFX’s for having electricity enter the Internet of Things. As sensors and the connectivity among them become ubiquitous, we can automate our consumption decisions much more deeply, at a much more granular level (down to the bulb, here), in ways that do not inconvenience us. We can use the technology to make ourselves better off by automating our choices in response to variables we care about, which eventually will include variables like the retail price of electricity and the carbon content of the fuel used to generate it. The Internet of Things reflects Alfred North Whitehead’s observation that “civilization advances by extending the number of important operations which we can perform without thinking of them.”
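What would it mean for a bulb to automate consumption in response to price and carbon content? A minimal sketch, with all thresholds and data feeds invented for illustration (no such feed exists in the LIFX product as described):

```python
# Hypothetical sketch of price- and carbon-responsive automation.
# The thresholds and the idea of a live price/carbon feed are assumptions.

def choose_brightness(price_per_kwh, carbon_g_per_kwh,
                      max_price=0.15, max_carbon=500):
    """Dim automatically when electricity is expensive or carbon-intensive."""
    if price_per_kwh > max_price and carbon_g_per_kwh > max_carbon:
        return 0.3   # both thresholds breached: deep dimming
    if price_per_kwh > max_price or carbon_g_per_kwh > max_carbon:
        return 0.6   # one threshold breached: partial dimming
    return 1.0       # cheap, clean power: full brightness

print(choose_brightness(0.09, 300))  # 1.0 — off-peak, low-carbon
print(choose_brightness(0.22, 650))  # 0.3 — peak price, coal-heavy grid
```

A rule like this is the Whitehead point in miniature: the consumer sets thresholds once and never thinks about them again.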

The Internet of Things enables mass customization and the ability of each individual to choose a bundle, a set of features, a price contract that they expect to bring them the most net benefits. This is a dramatic technological and cultural break from the century-long custom and regulatory practice of uniform products, uniform quality, uniform pricing as a matter of social policy. The public interest ethic of uniformity ties us to mediocrity, to the extent that it constrains what features and pricing people can bundle and consume with technologies like these.

Another Internet of Things implication here is that, with each bulb having a unique sensor and identifier, we will generate very detailed, granular data about how the connected, sensing devices operate. Such “big data” can help us use less energy, save money, do more with less, and do lots of other things I can’t imagine but some other entrepreneur will, and will bring to market, provided regulation doesn’t stifle it and consumers have clear stipulations of privacy and property rights in their data.

You can also tell that this is an interesting topic when I am not the first economist to write about it! I love seeing my colleagues interested in electricity-related technologies. Mark Perry shares my enthusiasm about the application of human creativity to generate such a product. Josh Gans shares my enthusiasm for the networking, the interoperability, and the open architecture. And Felix Salmon offers a worthy note of caution about the ability of LIFX to deliver on its promised features and timeline, given the time delays experienced in other Kickstarter projects.

A fashion coda on wireless charging

Lynne Kiesling

As a coda to my previous post on wireless induction charging, there’s a Kickstarter out right now for Everpurse, a wireless charging purse for the iPhone 4. The battery pack in the purse has to be charged from an AC adapter, but it will charge a phone as long as the pack itself holds a charge.

Is wireless charging finally going to take off?

Lynne Kiesling

Since the pioneering research of Nikola Tesla (have you contributed to his museum yet?) we’ve dreamed of wireless transmission of electricity, including wireless charging of devices. Tesla’s magnetic induction experiments gave us proof of concept more than a century ago, so where are the wireless chargers? We were promised wireless charging!

Jessica Leber at Technology Review suggests that it’s not a lack of supply, but rather slow consumer adoption, that’s the reason why we don’t have ubiquitous wireless charging.

Why hasn’t cord-free charging—where a device gets charged when you place it on a charging surface—caught on? It’s not due to a shortage of products, nor from a shortage of companies that want to sell them. More than 125 businesses have joined the Wireless Power Consortium, formed in late 2008 to create a global charging standard. While the consortium hopes the technology will one day become as common as Bluetooth in most devices and, like Wi-Fi, available in many public spaces, wireless charging has been slow to take off.

I think a common, open standard, like we have for Bluetooth and USB, will reduce adoption hurdles, although toward the end of the article she does discuss the two competing standards currently in play. History tells us, though, from rail gauges to DVD formats, that convergence eventually occurs.

This article is full of valuable and interesting information about the technical hurdles of getting induction receivers into devices, investing in charging mats for public places, and so on, but it’s frustratingly oblique in answering the question in the quote above. Leber doesn’t offer any specific answers to why consumers have been slow to adopt wireless charging, although she implies two reasons: power requirement constraints that do not yet bind, and the coordination-complementarity bottleneck between devices and charging pads. As more consumers bump up against the constraints of battery capacity, wireless charging will become more attractive. The article also mentions the potential inclusion of charging mats in cars, and automobile manufacturers’ investments in wireless charging companies.

Note here the interaction of three different innovation processes — device, battery, and charging pad. Devices are becoming more functional, requiring more power, draining batteries faster. Batteries have become more robust and longer-lived, but have been outpaced by the power demands of the increased functionality of mobile devices. Layer that on top of the innovation process in charging pads and you have quite a moving target, technologically and economically.

Frontiers in dynamic pricing: spot advertising auctions

Lynne Kiesling

According to this Ars Technica story (and a linked Bloomberg article), Facebook is going to offer a new advertising model to its potential advertisers: a spot auction for real-time ads based on changes in current events or time-sensitive things like sporting event results.

The service, called Facebook Exchange, will use partnerships with other companies to track users as they visit other sites using tracking “cookies” placed on those sites, and allow advertisers to bid “in real time” to display ads based on the interests the browsing history represents. …

The real-time nature of the bidding system means that advertisers can target ads based on both recent behavior of Facebook users and real-world events. For example, people who have a web history related to following a specific Olympic event could get offers based on the outcome of that event.

For the moment, set aside the privacy issues associated with this use of cookies (although if it does present individuals with ads targeted to the sites they’ve visited, that targeting may benefit consumers, and we should not forget to take that into account).

Instead, think about this as a matching or a search problem. Producers want to identify high-value consumers, and whether a consumer is high-value or low-value is a function of their context of time and place. Here’s where the Hayekian diffuse knowledge point comes in — the “man on the spot” has private knowledge about how much value he places on, say, buying a Spain jersey to celebrate a victory in a Euro2012 game (yes, I am expecting them to beat Ireland this afternoon!), and that value is a function both of whether Spain wins the match and of the fan’s perception of the value he attaches to getting a Spain jersey right in that moment. Before digital technology and social media, producers could not identify those high-value consumers in the moments when they were truly high-value, so the technology opens up new business models and reduces those search and matching costs in a much more dynamic way. Similarly, from the consumer’s perspective, if I’m exuberant because I’ve just watched a brilliant soccer match and Spain totally dominated (thanks in large part to the outstanding field marshaling and traffic direction of holding midfielder Xabi Alonso), I’m going to be happy to have lower search costs of finding a Spain jersey because of the targeted advertising.

Models of dynamic pricing suggest what we should expect to see in Facebook’s ad pricing — lower prices for time-sensitive products and services at times that are more distant from the event, higher prices for ads closer to and during the events. This advertising price discrimination may also be a better revenue model for Facebook, for whom advertising revenue has not been reliable in the model they are currently pursuing.
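Real-time ad exchanges of this kind are typically run as sealed-bid, second-price auctions, in which the winner pays the second-highest bid. A minimal sketch of that mechanism, with all advertiser names and bid values invented (the source does not specify Facebook Exchange’s auction rules, so the second-price format is an assumption):

```python
# Hypothetical sketch of a real-time second-price ad auction.
# Advertiser names and bid values are invented for illustration.

def run_spot_auction(bids):
    """Award the ad slot to the highest bidder at the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    # With only one bidder, the winner pays its own bid.
    clearing_price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, clearing_price

# Willingness to pay for the same slot rises as the event approaches,
# consistent with dynamic-pricing models.
bids_day_before = {"jersey_shop": 0.40, "sports_bar": 0.25, "airline": 0.10}
bids_at_final_whistle = {"jersey_shop": 1.60, "sports_bar": 0.90, "airline": 0.10}

print(run_spot_auction(bids_day_before))       # ('jersey_shop', 0.25)
print(run_spot_auction(bids_at_final_whistle)) # ('jersey_shop', 0.9)
```

Note how the clearing price more than triples at the final whistle even though the same advertiser wins: the time-sensitivity shows up in everyone’s bids, not just the winner’s.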


Smart meter cybersecurity and moral panics

Lynne Kiesling

In March I wrote about Adam Thierer’s paper on technopanics — “a moral panic centered on societal fears about a particular contemporary technology” — and I argued that we should bear the moral panic phenomenon in mind when evaluating objections to smart grid technologies. In the past two weeks we’ve seen news articles on this topic: according to the FBI, smart meter cybersecurity is loose enough that attackers have been able to break into smart meters and steal electricity.

Chris King from eMeter has done some digging into this question, and writes at Earth2Tech suggesting that the problem is old-fashioned criminal human behavior, not any technology-specific security failure:

Upon a closer look, this situation is not so much about smart meters as it is about criminal human behavior. Former Washington Post reporter Brian Krebs explained that it was not actually the smart meters themselves which were “hacked.” The meters’ own security measures were not breached.

Instead, criminals accessed the smart meters by stealing meter passwords as well as some devices used to program the meters. This is more like stealing a key and opening a door, rather than breaking the lock on the door.

These criminals were former employees of the utility involved, and of the vendor who provided the smart meters. These people were paid (bribed) by customers to illegally reprogram the meters so that those meters would record less energy consumption than actually occurred. This is not fundamentally different from bribing human meter readers to under report consumption — which happens often in some developing countries.

Which brings us back to Adam’s original point: why are we so willing to accept the technopanic argument? Why are so many people so suspicious of new technology, and so willing to give up both the consequentialist potential benefits and the moral defense of individual liberty and impose controls and limits on technology?

How fear affects policy: Adam Thierer on technopanics

Lynne Kiesling

Fear is a strong motivating factor, having evolved over millennia as we have protected ourselves against predators. Fear supports self-preservation by making us risk-averse and cautious. But such a deep, visceral, evolved emotion does not always serve our long-term objectives of thriving; it leads to maximin outcomes, and it is often mismatched to the actual threats to our self-preservation. As our environments change around us, we can fear things we shouldn’t and fail to fear things that we should; we overthink everything and tend toward a “precautionary principle” approach.

I think such fear is a component in the persistence of regulation when it’s maladaptive to technological change, so I was happy to read Adam Thierer’s new Mercatus working paper, Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle. Adam lays out a framework for analyzing fear-based attitudes toward technology and technological change that’s informed by economics, sociology, psychology, and rhetoric. He tackles the question of why, and how, participants in public policy debates use appeals to fear to sway opinion toward anticipatory regulation and forms of censorship:

While cyberspace has its fair share of troubles and troublemakers, there is no evidence that the Internet is leading to greater problems for society than previous technologies did. That has not stopped some from suggesting there are reasons to be particularly fearful of the Internet and new digital technologies. There are various individual and institutional factors at work that perpetuate fear-based reasoning and tactics.

He analyzes the use of “appeal to fear” and “appeal to force” logic in the construction of arguments in favor of regulation and censorship, focusing on case studies of online child safety and violent media, and of online privacy and cybersecurity. In deconstructing these arguments he identifies four ways that fear can be a myth: it may be empirically unfounded and lacking evidence; other variables may matter more for behavior than the feared variable; not all individuals react the same way to the feared variable; and approaches other than regulation exist that can mitigate the consequences of the feared variable (pp. 5-6).

Adam introduces the phenomenon of the “technopanic”, which is “… a moral panic centered on societal fears about a particular contemporary technology” (p. 7). Because culture often evolves more slowly than technology, as we are adapting culturally to the new technology we can see these panic phenomena, which can result in demonizing the technology and can lead to calls to “do something”, typically some form of control-based anticipatory regulation or censorship. A crucial part of manipulating individual attitudes to tap into fear and create advocacy for and acceptance of such regulation is what Adam calls “threat inflation”:

Thus, fear appeals are facilitated by the use of threat inflation. Specifically, threat inflation involves the use of fear-inducing rhetoric to inflate artificially the potential harm a new development or technology poses to certain classes of the population, especially children, or to society or the economy at large. These rhetorical flourishes are empirically false or at least greatly blown out of proportion relative to the risk in question. (p. 9)

Allowing threat inflation and technopanics to drive policy outcomes is socially corrosive and wasteful; it diverts resources from their higher-valued uses in dealing with actual risks rather than inflated ones, and it creates an environment of suspicion and social control, particularly censorship and information control. After analyzing six factors that create conditions favorable for the development of threat inflation and technopanics regarding Internet technology (nostalgia, special interests, etc., well worth reading in detail), he proposes two categories of policy response that we should pursue instead of prohibition and anticipatory regulation: resiliency and adaptation. We build resiliency to threats through education, transparency, labeling, etc., and we adapt to living with risk through experimentation, trial-and-error, experience, and social norms. These two are complementary; information-sharing about best practices can shape social norms and get people to change their behavior without regulation. For example, I don’t sign my credit cards, but instead write “CHECK ID” in the signature line and present a photo ID when using them. Having store clerks and other shoppers witness my behavior to protect my identity may lead to their replication of it, and has led over time to a change in behavior (remember back in the 1990s when they used to write your phone number on the receipt? Yikes! But that behavior’s gone extinct.).

We cannot eliminate risk through resilience and adaptation, but we can’t eliminate it through regulation either. Better to have strong, flexible, adaptable institutions and practices that enable us to continue thriving in unknown and changing conditions, while we enjoy the substantial benefits of technological creativity. While I heartily recommend Adam’s paper to you all as a good and thought-provoking read, he also summarizes it in this recent Forbes column.

I would extend Adam’s argument to apply to two case studies. The first is smart grid technology. Fear-based arguments abound in electricity, usually grounded (pun intended!) in the physical reality that electricity is dangerous. But after a century of economic regulation to serve particular social policy objectives, fear-based arguments also show up against moving away from the status quo, both technologically and economically; in my experience these fear-based arguments are used most to advocate for the status quo on behalf of low-income consumers and the elderly, and for that reason I find their use heart-wrenching, because when they succeed they deprive vulnerable populations of the benefits of innovation. Another current example is the argument that digital meters, which transmit data using radio frequency wireless networks and thus emit low-level electromagnetic fields, are making people sick. Despite the absence of any scientific evidence consistent with this hypothesis, California and Maine are using these fear-based claims as a basis for allowing customers to opt out of having a digital meter installed (I have other analyses of this phenomenon, but that’s for another time …).

The second case is threat inflation and the exaggeration of fear to extend the security state. Each of Adam’s six factors contributing to threat inflation is applicable to the growth of the security state — nostalgia, pessimistic bias, “bad news sells”, the political power of the military-security-industrial complex, and so on. The persistence of threat inflation enables these special interests to use fear-based arguments to perpetuate the false belief that we are under constant, persistent threat beyond the actual threat level; this false belief creates the incentives in politicians to “do something” so that they don’t appear “soft on terror” and therefore risk not getting reelected; that political incentive enables security and defense companies to lobby politicians to buy their cutting-edge technologies at very great taxpayer expense to demonstrate to voters that they are “doing something” (even though the technologies have high false positive rates, can be fooled easily, and are more for symbolic security theater than for addressing the most relevant risks that we actually do face).

In both cases, a resiliency-oriented public policy approach would be a substantial improvement on the control-oriented regulation that is not focused on the most meaningful or relevant threats, be they health threats, economic threats, or security threats, from technological dynamism.

SOPA/PIPA protests and the economics of content market power

Lynne Kiesling

I found some things striking in yesterday’s SOPA/PIPA protests. One was Jim Harper’s clear and cogent statement that the Internet is not a thing, it’s a set of protocols stipulating how computers communicate with each other. That set of protocols is a platform, and those protocols are not the government’s to regulate.

Jim’s Cato colleague, the ever-reliable Julian Sanchez, points out that if you estimate the profits/surplus at stake from piracy relative to the lost value of all the other Internet activities that would be stifled under SOPA/PIPA, the cost of piracy is just not that large. Sure, it’s concentrated in the hands of politically-powerful entertainment content companies, but relative to the rest of the vibrant, dynamic value creation that would “be disappeared” it’s small. Moreover, domestic and international legal institutions already exist to deal with piracy; like any other human institution they are imperfect, but as a consequence of them the losses from piracy are small relative to what would be lost if Congress imposed SOPA/PIPA. Here’s a good, short video from Julian covering some of the basics:

At Digitopoly, Joshua Gans makes an analogy near and dear to my heart: consider how SOPA/PIPA would make the Internet more like the arbitrary, intrusive, Constitution-free zone that is our airports:

But the notion that enforcement and prevention matters will be put in place that create massive harm to the lives of innocent individuals while being unlikely to actually lead to less of the activity targeted is not unprecedented. You can think about this every time you go through a US airport and think about who is winning there. …

So the scenario that US people should be concerned about is if publishing on the Internet becomes like airport security. That is, if copyright enforcers are able to automate enforcement without due process. That will raise the costs of publishing and will deter many. As is often the case with over-reaching laws, the problem is that it creates too few incentives for enforcers to enforce discriminately rather than indiscriminately.

These contributions to the discussion have all been outstanding, but the most useful one in my estimation is this TED video posted yesterday from Clay Shirky on the issues at stake in the SOPA/PIPA debate:

It really is a must-watch video, well worth 10 minutes of your time. Shirky describes the technological issues clearly for non-techies and delves helpfully into the legal history of copyright in media, but then makes the crucial economic point when he says “Time Warner wants us all back on the couch and not creating our own content”. In all of the justifiable furor about censorship, this is the economic point that gets a bit lost. For the past 70 years the entertainment companies have had a lot of market power, because entertainment was essentially an oligopoly. They profited handsomely from their market power over content. But with the decentralization and edge content generation now possible due to technology, and with the way that their content provides an input into that edge creation, we now have many more substitutes for their content. They are using the piracy red herring (which is not as large as they claim it is, as Julian points out above) to try to retain the viability of their decades-old business model and market power over content. That’s the real economic issue here — they want us back on the couch and in the movie theater.

This is a fight that is not new with SOPA/PIPA and the Internet, nor will it end with the Congressional retreat from these ill-designed pieces of proposed legislation. Yesterday raised a lot of awareness of the issues, but it’s going to have to happen over and over and over …

I’m going to give the last word to my friend Sarah, who analyzes language and its use in the context of both SOPA/PIPA and the recently enacted National Defense Authorization Act, complete with its provisions that allow extralegal detention of American citizens without due process on suspicion of terrorist activity. Sarah examines Orwellian Newspeak and identifies disturbing parallels with our current environment:

It struck me today that the combination of SOPA/PIPA and the NDAA move us terrifyingly close to an Orwellian world where people, language, history, and information can disappear at any time. Forever. As if they never were. And worse than that, our primary way to discuss/protest/remedy that disappearance–the Web–will be taken from us as well. …

Newspeak as a language, then, mirrors the political system that creates it, and serves to support it and perpetuate it by creating an agreed upon reality where meanings are strictly limited, the possibility for unorthodox thought is all but eliminated, and an agreed upon “reality” allows Ingsoc to have been always in control. Winston’s friend Syme is correct that “Newspeak is Ingsoc and Ingsoc is Newspeak.”

I leave further connections to the contemporary political situation as an exercise for the reader.