Frontiers in dynamic pricing: spot advertising auctions

Lynne Kiesling

According to this Ars Technica story (and a linked Bloomberg article), Facebook is going to offer a new advertising model to its potential advertisers: a spot auction for real-time ads based on changes in current events or time-sensitive things like sporting event results.

The service, called Facebook Exchange, will use partnerships with other companies to track users as they visit other sites using tracking “cookies” placed on those sites, and allow advertisers to bid “in real time” to display ads based on the interests the browsing history represents. …

The real-time nature of the bidding system means that advertisers can target ads based on both recent behavior of Facebook users and real-world events. For example, people who have a web history related to following a specific Olympic event could get offers based on the outcome of that event.

For the moment, set aside the corresponding privacy issues associated with this use of cookies (although if it does present individuals with ads targeted to sites they’ve visited, that targeting may benefit consumers, and we should not forget to take that into account).

Instead, think about this as a matching or search problem. Producers want to identify high-value consumers, and whether a consumer is high-value or low-value depends on the context of time and place. Here’s where the Hayekian diffuse knowledge point comes in — the “man on the spot” has private knowledge about how much value he places on, say, buying a Spain jersey to celebrate a victory in a Euro2012 game (yes, I am expecting them to beat Ireland this afternoon!), and that value is itself a function both of whether Spain wins the match and of the value the fan attaches to getting a Spain jersey right in that moment.

Before digital technology and social media, producers could not identify those high-value consumers in the moments when they were truly high-value consumers, so the technology opens up new business models and reduces those search and matching costs in a much more dynamic way. Similarly, from the consumer’s perspective: if I’m exuberant because I’ve just watched a brilliant soccer match in which Spain totally dominated (thanks in large part to the outstanding field marshaling and traffic direction of holding midfielder Xabi Alonso), I’m going to be happy that targeted advertising lowers my search costs of finding a Spain jersey.

Models of dynamic pricing suggest what we should expect to see in Facebook’s ad pricing — lower prices for time-sensitive products and services at times more distant from the event, and higher prices closer to and during the event. This advertising price discrimination may also be a better revenue model for Facebook, whose advertising revenue has not been reliable under its current approach.
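Real-time ad exchanges typically run something like a second-price auction for each impression, with bids rising as the event approaches. Here is a minimal sketch of that mechanic; the bidder names and dollar figures are illustrative assumptions, not details of how Facebook Exchange actually prices ads:

```python
# Minimal sketch of a real-time ad spot auction using second-price rules.
# The bidders and valuations below are hypothetical; Facebook Exchange's
# actual mechanism is not public in this detail.

def run_spot_auction(bids):
    """Second-price auction: highest bidder wins, pays the runner-up's bid."""
    if len(bids) < 2:
        raise ValueError("need at least two bids")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]  # winner pays the second-highest bid
    return winner, price

# Bids for an ad slot shown right after a (hypothetical) Spain victory:
# the jersey seller's time-sensitive valuation pushes its bid up near the event.
bids_during_match = {"jersey_seller": 2.40, "sports_bar": 1.10, "airline": 0.75}
winner, price = run_spot_auction(bids_during_match)
print(winner, price)  # jersey_seller wins and pays 1.1
```

The second-price rule is worth sketching because it makes truthful bidding a dominant strategy, which is one reason real-time exchanges favor it over pay-your-bid designs.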

 

Smart meter cybersecurity and moral panics

Lynne Kiesling

In March I wrote about Adam Thierer’s paper on technopanics — “a moral panic centered on societal fears about a particular contemporary technology” — and I argued that we should bear the moral panic phenomenon in mind when evaluating objections to smart grid technologies. In the past two weeks we’ve seen news articles on this topic: according to the FBI, smart meter security is lax enough that attackers have been able to compromise the meters and steal electricity.

Chris King from eMeter has done some digging into this question, and writes at Earth2Tech suggesting that the problem is old-fashioned criminal human behavior, not any technology-specific security failure:

Upon a closer look, this situation is not so much about smart meters as it is about criminal human behavior. Former Washington Post reporter Brian Krebs explained that it was not actually the smart meters themselves which were “hacked.” The meters’ own security measures were not breached.

Instead, criminals accessed the smart meters by stealing meter passwords as well as some devices used to program the meters. This is more like stealing a key and opening a door, rather than breaking the lock on the door.

These criminals were former employees of the utility involved, and of the vendor who provided the smart meters. These people were paid (bribed) by customers to illegally reprogram the meters so that those meters would record less energy consumption than actually occurred. This is not fundamentally different from bribing human meter readers to under-report consumption — which happens often in some developing countries.

Which brings us back to Adam’s original point: why are we so willing to accept the technopanic argument? Why are so many people so suspicious of new technology, and so willing to give up both the consequentialist potential benefits and the moral defense of individual liberty and impose controls and limits on technology?

How fear affects policy: Adam Thierer on technopanics

Lynne Kiesling

Fear is a strong motivating factor, having evolved over millennia as we have protected ourselves against predators. Fear supports self-preservation by making us risk-averse and cautious. But such a deep, visceral, evolved emotion does not always serve our long-term objective of thriving; it leads to maximin outcomes, and it is often mismatched to the actual threats to our self-preservation. As our environments change around us, we can fear things we shouldn’t and fail to fear things we should, overthinking everything and tending toward a “precautionary principle” approach.

I think such fear is a component in the persistence of regulation when it’s maladaptive to technological change, so I was happy to read Adam Thierer’s new Mercatus working paper, Technopanics, Threat Inflation, and the Danger of an Information Technology Precautionary Principle. Adam lays out a framework for analyzing fear-based attitudes toward technology and technological change that’s informed by economics, sociology, psychology, and rhetoric. He tackles the question of why, and how, participants in public policy debates use appeals to fear to sway opinion toward anticipatory regulation and forms of censorship:

While cyberspace has its fair share of troubles and troublemakers, there is no evidence that the Internet is leading to greater problems for society than previous technologies did. That has not stopped some from suggesting there are reasons to be particularly fearful of the Internet and new digital technologies. There are various individual and institutional factors at work that perpetuate fear-based reasoning and tactics.

He analyzes the use of “appeal to fear” and “appeal to force” logic in the construction of arguments in favor of regulation and censorship, focusing on case studies of online child safety and violent media, and of online privacy and cybersecurity. In deconstructing these arguments he identifies four ways that fear can be a myth: it may be empirically unfounded and lacking evidence; other variables may matter more for behavior than the feared variable; not all individuals react to the feared variable in the same way; and approaches other than regulation exist that can mitigate its consequences (pp. 5-6).

Adam introduces the phenomenon of the “technopanic”, which is “… a moral panic centered on societal fears about a particular contemporary technology” (p. 7). Because culture often evolves more slowly than technology, as we are adapting culturally to the new technology we can see these panic phenomena, which can result in demonizing the technology and can lead to calls to “do something”, typically some form of control-based anticipatory regulation or censorship. A crucial part of manipulating individual attitudes to tap into fear and create advocacy for and acceptance of such regulation is what Adam calls “threat inflation”:

Thus, fear appeals are facilitated by the use of threat inflation. Specifically, threat inflation involves the use of fear-inducing rhetoric to inflate artificially the potential harm a new development or technology poses to certain classes of the population, especially children, or to society or the economy at large. These rhetorical flourishes are empirically false or at least greatly blown out of proportion relative to the risk in question. (p. 9)

Allowing threat inflation and technopanics to drive policy outcomes is socially corrosive and wasteful; it diverts resources from their higher-valued uses in dealing with actual risks rather than inflated ones, and it creates an environment of suspicion and social control, particularly censorship and information control. After analyzing six factors that create conditions favorable for the development of threat inflation and technopanics regarding Internet technology (nostalgia, special interests, etc. — well worth reading in detail), he proposes two categories of policy response that we should pursue instead of prohibition and anticipatory regulation: resiliency and adaptation. We build resiliency to threats through education, transparency, labeling, etc., and we adapt to living with risk through experimentation, trial-and-error, experience, and social norms. These two are complementary; information-sharing about best practices can shape social norms and get people to change their behavior without regulation. For example, I don’t sign my credit cards, but instead write “CHECK ID” in the signature line and present a photo ID when using them. Having store clerks and other shoppers witness this identity-protecting behavior may lead them to replicate it, and such norms have changed over time (remember back in the 1990s when clerks used to write your phone number on the receipt? Yikes! That behavior has gone extinct.).

We cannot eliminate risk through resilience and adaptation, but we can’t eliminate it through regulation either. Better to have strong, flexible, adaptable institutions and practices that enable us to continue thriving in unknown and changing conditions, while we enjoy the substantial benefits of technological creativity. While I heartily recommend Adam’s paper to you all as a good and thought-provoking read, he also summarizes it in this recent Forbes column.

I would extend Adam’s argument to apply to two case studies. The first is smart grid technology. Fear-based arguments abound in electricity, usually grounded (pun intended!) in the physical reality that electricity is dangerous. But after a century of economic regulation serving particular social policy objectives, fear-based arguments also show up against any move away from the status quo, technological or economic; in my experience they are used most often to advocate for the status quo on behalf of low-income consumers and the elderly, and for that reason I find them heart-wrenching, because when they succeed they deprive vulnerable populations of the benefits of innovation. Another current example is the argument that digital meters, which transmit data using radio frequency wireless networks and thus emit low-level electromagnetic fields, are making people sick. Despite the absence of any scientific evidence consistent with this hypothesis, California and Maine are using these fear-based claims as a basis for allowing customers to opt out of having a digital meter installed (I have other analyses of this phenomenon, but that’s for another time …).

The second case is threat inflation and the exaggeration of fear to extend the security state. Each of Adam’s six factors contributing to threat inflation is applicable to the growth of the security state — nostalgia, pessimistic bias, “bad news sells”, the political power of the military-security-industrial complex, and so on. The persistence of threat inflation enables these special interests to use fear-based arguments to perpetuate the false belief that we are under constant, persistent threat beyond the actual threat level; this false belief creates the incentives in politicians to “do something” so that they don’t appear “soft on terror” and therefore risk not getting reelected; that political incentive enables security and defense companies to lobby politicians to buy their cutting-edge technologies at very great taxpayer expense to demonstrate to voters that they are “doing something” (even though the technologies have high false positive rates, can be fooled easily, and are more for symbolic security theater than for addressing the most relevant risks that we actually do face).

In both cases, a resiliency-oriented public policy approach would be a substantial improvement on the control-oriented regulation that is not focused on the most meaningful or relevant threats, be they health threats, economic threats, or security threats, from technological dynamism.

SOPA/PIPA protests and the economics of content market power

Lynne Kiesling

I found some things striking in yesterday’s SOPA/PIPA protests. One was Jim Harper’s clear and cogent statement that the Internet is not a thing, it’s a set of protocols stipulating how computers communicate with each other. That set of protocols is a platform, and those protocols are not the government’s to regulate.

Jim’s Cato colleague, the ever-reliable Julian Sanchez, points out that if you estimate the profits/surplus at stake from piracy relative to the lost value of all the other Internet activities that would be stifled under SOPA/PIPA, the cost of piracy is just not that large. Sure, it’s concentrated in the hands of politically-powerful entertainment content companies, but relative to the rest of the vibrant, dynamic value creation that would “be disappeared” it’s small. Moreover, domestic and international legal institutions already exist to deal with piracy; like any other human institution they are imperfect, but as a consequence of them the losses from piracy are small relative to what would be lost if Congress imposed SOPA/PIPA. Here’s a good, short video from Julian covering some of the basics:

At Digitopoly, Joshua Gans makes an analogy near and dear to my heart: consider how SOPA/PIPA would make the Internet more like the arbitrary, intrusive, Constitution-free zone that is our airports:

But the notion that enforcement and prevention matters will be put in place that create massive harm to the lives of innocent individuals while being unlikely to really actually lead to less of the activity targeted is not unprecedented. You can think about this every time you go through a US airport and think about who is winning there. …

So the scenario that US people should be concerned about is if publishing on the Internet becomes like airport security. That is, if copyright enforcers are able to automate enforcement without due process. That will raise the costs of publishing and will deter many. As is often the case with over-reaching laws, the problem is that it creates too few incentives for enforcers to enforce discriminately rather than indiscriminately.

These contributions to the discussion have all been outstanding, but the most useful one in my estimation is this TED video posted yesterday from Clay Shirky on the issues at stake in the SOPA/PIPA debate:

It really is a must-watch video, well worth 10 minutes of your time. Shirky describes the technological issues clearly for non-techies and delves helpfully into the legal history of copyright in media, but then makes the crucial economic point when he says “Time Warner wants us all back on the couch and not creating our own content”. In all of the justifiable furor about censorship, this is the economic point that gets a bit lost. For the past 70 years the entertainment companies have had a lot of market power, because entertainment was essentially an oligopoly. They profited handsomely from their market power over content. But with the decentralization and edge content generation now possible due to technology, and with the way that their content provides an input into that edge creation, we now have many more substitutes for their content. They are using the piracy red herring (which is not as large as they claim it is, as Julian points out above) to try to retain the viability of their decades-old business model and market power over content. That’s the real economic issue here — they want us back on the couch and in the movie theater.

This is a fight that is not new with SOPA/PIPA and the Internet, nor will it end with the Congressional retreat from these ill-designed pieces of proposed legislation. Yesterday raised a lot of awareness of the issues, but it’s going to have to happen over and over and over …

I’m going to give the last word to my friend Sarah, who makes a useful analysis of language and its use in the context of both SOPA/PIPA and the recently signed into law National Defense Authorization Act, complete with its provisions that allow extralegal detention of American citizens without due process on suspicion of terrorist activity. Sarah offers an analysis of Orwellian Newspeak language, and identifies disturbing parallels with our current environment:

It struck me today that the combination of SOPA/PIPA and the NDAA move us terrifyingly close to an Orwellian world where people, language, history, and information can disappear at any time. Forever. As if they never were. And worse than that, our primary way to discuss/protest/remedy that disappearance–the Web–will be taken from us as well. …

Newspeak as a language, then, mirrors the political system that creates it, and serves to support it and perpetuate it by creating an agreed upon reality where meanings are strictly limited, the possibility for unorthodox thought is all but eliminated, and an agreed upon “reality” allows Ingsoc to have been always in control. Winston’s friend Syme is correct that “Newspeak is Ingsoc and Ingsoc is Newspeak.”

I leave further connections to the contemporary political situation as an exercise for the reader.

Cost savings and value creation are different

Lynne Kiesling

The cost saving-focused mindset has prevailed in regulated industries for over a century, slowing innovation in the process. In electricity, regulation that bases firms’ profits on cost recovery erects market barriers by recognizing only a business model that involves providing a specified product (110v power to the home) transported over a monopoly network. Even in 2011, well into the third decade of the digital revolution, this narrow focus and cost-saving mindset persists, and it fetters smart grid-enabled economic growth by emphasizing cost recovery and ignoring value creation.

In fact, that focus on cost recovery is one of the main reasons why smart grid investments face regulatory and political opposition. I think this Greentech Media article gets the story right: the ways that smart grid investments can lead to cost savings are limited. We’ve discussed this idea here at KP quite a bit — a limitation on the benefits of transactive technologies and dynamic pricing is that for most people, electricity bills are not a large share of annual expenses, so even saving 15% on the electricity bill may not be a salient enough benefit to induce many people to make technology investments. In other words, smart grid may or may not lead to cost savings for a lot of residential customers.
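To put a rough number on that salience point (the monthly bill figure here is an illustrative assumption, not from the article), even a healthy percentage saving on a typical residential bill is a small absolute amount per year:

```python
# Back-of-the-envelope salience check. The $110/month bill is an assumed
# illustrative figure; actual residential bills vary widely by region.
monthly_bill = 110.00   # assumed average residential electric bill
savings_rate = 0.15     # the 15% saving discussed in the post
annual_savings = monthly_bill * 12 * savings_rate
print(f"${annual_savings:.2f} per year")  # $198.00 per year
```

Under these assumed numbers, a 15% saving is under $200 a year — real money, but plausibly too small to motivate many households to buy and learn new technology.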

But is that the right metric by which to evaluate smart grid investments? Of course not. The Greentech Media article linked above starts with a telecom metaphor that I use frequently. In nominal terms, most of us pay much more for our communication services today than we did when all we had was a single land line (and leased Western Electric phone!) back in the 1980s, and even in real terms we probably still pay more than we did then. But look at how much more value we get — mobility, Internet, automation, all of the services that have been created at the edge of the network. We are much richer and better off because of the change in communication technologies and services since the 1980s, even taking into account that we pay more for them. Apply this metaphor to the regulatory calculus today, and the mismatch of its cost recovery focus and the benefits arising from new value creation is apparent. Innovation in telecommunications didn’t occur and thrive and expand because of cost savings and cost recovery, but instead because of new value creation.

Those who argue that the business model for customer-facing smart grid investments has to be grounded only in cost savings are incorrect, and are looking too narrowly at consumer value propositions. This debate came up in the post I wrote in October about the new Nest thermostat, a gorgeous and beautifully designed piece of consumer-focused in-home technology from a group of former Apple engineers, and in other articles about Nest around the same time. Observers from this traditional cost savings mindset dismissed the Nest thermostat because of its $250 price tag, saying that even with dynamic pricing consumers would not save enough money for the payback period to make sense. This criticism overlooks the additional features and capabilities of such a device — motion sensing, serving as a hub to integrate and manage and automate in-home digital devices, learning algorithms, extensibility to be able to bundle with other digital services in the home, and so on. It also overlooks the persistent pattern in the history of new technology adoption, from the Roman baths onward: there will always be consumers with strong “first adopter” preferences, who are willing to pay more to be the first ones to have the novelty, and in the case of digital devices, incur that cost fully aware that prices will fall in the future as the technology matures. They guinea-pig new technologies for the rest of us.
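The payback criticism can be made concrete. Under assumed, illustrative numbers (the annual-savings figure below is my assumption, not a measured Nest result), cost savings alone do take years to recoup the device’s price — which is exactly why the extra features and early-adopter value matter to the business case:

```python
# Simple payback-period sketch for a smart thermostat. The annual-savings
# figure is an assumption for illustration; actual savings vary by household.
device_price = 250.00           # Nest's launch price, from the post
annual_energy_savings = 60.00   # assumed heating/cooling savings per year
payback_years = device_price / annual_energy_savings
print(f"{payback_years:.1f} years")  # 4.2 years on cost savings alone
```

A four-plus-year payback looks weak if cost savings are the whole story; it looks fine if the buyer also values the automation, learning, and hub features.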

Those two aspects — additional features and first adopter preferences — mean that a lot of the value proposition in consumer-facing smart grid technologies is new value creation, not cost savings. This means that the regulatory calculus and the traditional electricity cost-focused mindset misses the real action, the real opportunity, the real potential that the investments could unleash.

One data point supporting my claim is that, only one week after its commercial release, the Nest thermostat was sold out and is now only available on backorder. Such innovation is about value creation more than cost savings, and ignoring and stifling that process holds back the contribution of the electricity industry to economic growth and well-being.

Exelon’s John Rowe and Google’s Eric Schmidt: Truth to power?

Lynne Kiesling

Here’s an interesting juxtaposition of two prominent executives performing sound public choice analyses, and I think they complement each other, at least in my work! This weekend’s Wall Street Journal featured an interview with Exelon’s John Rowe, A Life in Energy and (Therefore) Politics. Exelon is the third largest investor-owned utility/generation owner in the country, with one of the largest nuclear generation fleets outside of France. Between the growth of Exelon through mergers and the provenance of Commonwealth Edison (a substantial chunk of Exelon) in Samuel Insull’s pioneering of the electricity industry, Rowe has experienced many of the crucial business and policy aspects that have characterized this industry for the past century.

And for my part he pretty much nails the public choice analysis. In discussing the politics of electricity in general, and in particular Exelon’s support of active federal carbon policy:

In a visit to The Wall Street Journal’s offices recently, Mr. Rowe was eager to strip the altar of green jobs—and the many other political pieties that distort the energy industry, even a few that he says belong to the Journal editorial page.

“The utility business is a funny business and almost no one in any political authority in either party really believes in orderly markets in electricity,” Mr. Rowe says. …

The reason for this seeming contradiction—between simultaneously supporting free markets and interventions like an economy-wide CO2-reduction plan—is that “we’re always being asked to do things that are in our view bleeding crazy,” as he’ll go on to explain.

For starters, the anti-market demands made on Mr. Rowe are bipartisan.

He discusses the cost differential between, in this instance, wind power and other lower-carbon means of generation, such as natural gas, and the bipartisan political support for wind despite the reality that we get more carbon reduction “bang for the buck” from natural gas. Interestingly, in this interview he does not tout nuclear as the be-all-end-all carbon-free energy approach, given construction costs; he also dismisses clean coal.

Rowe is also an enthusiastic amateur historian, so is very well-versed in the origins of the politicization of the electricity industry:

This political economy is an artifact of the historical electricity market—which, through most of the 20th century, was not really a market at all. Until recently, almost all consumers bought electricity from a monopoly supplier at rates set by the government, with a guaranteed return for utilities. That model eroded amid deregulation in the 1980s and ’90s, and the rise of more efficient wholesale electricity markets and independent generators. Commercial and now even some residential consumers are no longer captive, but the political habits persist. …

Mr. Rowe continues that “Somebody else wants clean coal; it’s a non sequitur and it’s not economic either. Somebody else wants wind or solar, and meanwhile . . . the market says the only thing that makes sense for a decade, maybe two, is for new generation to be gas-fired. Natural gas is cheaper than everything else,” thanks to domestic shale finds via fracking and other factors. “It’s likely to stay that way for a long time—but it isn’t what politicians want.”

I encourage you to read the entire interview; I’ve omitted a very interesting discussion of the debate over the extent to which EPA rules would shut down sufficient coal-fired generation to cause reliability problems, which has been asserted in, for example, Texas (but I don’t see how that makes sense, given the loooooong portion of the generation supply stack that is natural gas).

Rowe’s public choice analysis of his industry is complementary to one offered by Eric Schmidt of Google in this Washington Post interview in early October, on the heels of his first-ever Senate testimony experience (Gordon Crovitz analyzes Schmidt’s interview in today’s WSJ, Google Speaks Truth to Power). It’s an absolute must-read in its entirety, but here’s one piece of sound public choice analysis:

Washington—having spent a lot of time there, I grew up there and have spent a lot of time there recently—is largely defined by detailed analytical views and policy choices that are not very good. You know, each policy choice has a winner and a loser, right? Somebody’s ox is getting gored. They’re complex arguments: They’re economic and political and social, and everyone has an opinion on those. Here, the arguments are, how do we make something that affects a million people? How do we change the economics of an industry?

And one of the consequences of regulation is regulation prohibits real innovation, because the regulation essentially defines a path to follow—which by definition has a bias to the current outcome, because it’s a path for the current outcome. …

Come on. Give me a break. The press is so young, they don’t understand the history here. We’re still a small component of what a whole bunch of other companies have done, and certainly most other industries. So I reject all such charges [about the magnitude of Google's lobbying]. And I’m very clear on that because people can’t do math. Take the numbers of the amounts of money that go into the regulated industries of all sorts—and then compare high tech, and compare Google in specific, and it’s miniscule.

And privately the politicians will say, “Look, you need to participate in our system. You need to participate at a personal level, you need to participate at a corporate level.” We, after some debate, set up a PAC, as other companies have. And it’s basically in the interest of our customers to do this, because the government can make mistakes. And for every one of these Internet-savvy senators, there’s another senator who doesn’t get it at all. And it’s not a partisan issue. It’s true in both parties.

This excerpt highlights two timely insights. First, note the “you need to participate in our system” dynamic that defines the corporatist political system. Companies like Google feel compelled to engage in lobbying to rectify what they see as ill-informed political decisions (a reasonable stance, given the lack of technological sophistication in Congress) that would impair their ability to create value for consumers and profit from doing so. Add this incentive to the more cynical and craven one of manipulating the political process and ensuing legislation to favor your company, and you have a range of high-powered and low-powered incentives that drive toward increasing corporatism in politics.

Second, note his observation that “… one of the consequences of regulation is regulation prohibits real innovation, because the regulation essentially defines a path to follow—which by definition has a bias to the current outcome, because it’s a path for the current outcome.” This is the clearest articulation I’ve seen of a hypothesis that I’m currently working on with respect to electricity regulation (here’s the complementarity between the two analyses). Regardless of industry, regulation does specify a path to follow, and it’s a backward-looking definition. Combine Schmidt’s observation with the summary of the history of electricity regulation from the Rowe article, and you get a potent combination leading to technological inertia … which, when you’re talking about an industry that enables and is the driving force of a lot of our productivity and lifestyle, is a costly impediment to economic growth.

Combining these two interviews shows the breadth and depth, and costliness, of today’s corporatist regulatory and political environment.

Sanchez on Netflix

Lynne Kiesling

You probably received the same apologetic email from Reed Hastings of Netflix that I did on Monday, announcing the impending decision to split Netflix’s streaming business and its DVD subscription business. Foresightful move or bad business decision, PR nightmare, or all of the above? The best analysis of its likely drivers and impacts is from Julian Sanchez:

Just as many people spend hours “watching TV” rather than watching any particular show, people often just want to “watch a movie”—de dicto, rather than de re, as the philosophers say—or rather have the option to watch any one of a number of movies, more than they want to see any particular one.

Julian goes through a good analysis comparing buying a DVD, getting a DVD from Netflix, or streaming a movie, both from the consumer’s perspective and from the studio’s perspective, and how all of these affect Netflix’s business model. Although he doesn’t couch it in economic terms specifically, his analysis hinges on the differences in option value of the various alternatives to consumers, and how both the studios and Netflix/Quikster can profit from these different option values and changes in them. Very thoughtful analysis.

My Kellogg colleague Sandeep Baliga also comments on the decision from a more operational and strategy perspective.

Smart appliances and the innovation cycle

Lynne Kiesling

Appliance and consumer electronics manufacturers are starting to incorporate digital technology with energy-related applications into their products … but as with most new technologies, the first commercial stage of the innovation cycle takes the form of “because we can” product differentiation rather than use-specific innovation. Take the example that Technology Review highlighted this week: Samsung’s new refrigerator with an Android LCD display panel on the door. This high-end, gorgeous, stainless-steel refrigerator has an Android touch screen, which I think is pretty neat even if it does not read from the scripture of “the Internet of Things” that Christopher Mims wants it to — if I store my recipes in Evernote or in an Epicurious app, I can pull up a recipe on the door as I’m cooking, or make a shopping list as I’m looking in the fridge to see what I need. Mims’ tone is almost condescending when he observes that

A really smart fridge, part of the Internet of Things, would know when you put that lettuce in the crisper, so it could alert you when it was about to become inedible. It would tweet its current temperature so you know when your kid failed to close the door all the way. A really smart fridge probably doesn’t even have a display — far better to control it from any other internet-connected device.

The cynical view of Samsung’s move to embed a tiny tablet in its fridge is that these devices have become so cheap that sticking one in a fridge hardly makes any difference to their margins. It’s just one of those features — like a pop-up spoiler on the back of a luxury sports car — that makes a buyer feel like they got their splurge’s worth. If that’s the case, we can all look forward to Android-powered microwave ovens and clothes washers.

No. First of all, while I agree that automated monitoring features like produce spoilage detection and door-ajar detection are desirable and user-friendly, Mims’ hyperbole about “Oh noes! We’re doomed to useless technology kitchen candy because of this!” shows a strong misunderstanding of the economics of consumer product innovation life cycles. The “version 1.0, mass-market, user-friendly right out of the gate” model is an outlier, a great exception to the typical evolution of technology; in fact, I’m having trouble thinking of a consumer technology product other than the iPod that comes close to that description. The “because we can” product differentiation puts the technology in the hands of early adopters, who are eager to kick the tires and are willing to spend their income to do so. These customers guinea-pig the technology for the rest of us, and provide companies like Samsung with feedback, which I’m sure will include comments like “it would be great if this technology enabled me to detect produce spoilage” and “this screen is pretty useless if all I can do is get to the Internet and not monitor my food”. Those experiences get incorporated into the evolution of the technology. Starting with the “because we can” technology is not necessarily going to lead to missed opportunities, as Mims argues, as long as companies like Samsung combine their engineering and business knowledge of what’s possible with the feedback they receive throughout the new-technology adoption process.

Second, I think his definition of a “really smart fridge” is too limited. A really smart fridge would be transactive. A really smart fridge would enable its owner to program in price triggers to change settings on the chiller during expensive hours, saving the owner (an admittedly fairly small amount of) money and reducing energy use (good if the owner cares about conservation) and reducing peak demand on the distribution infrastructure (good for the wires company) — all without changing the quality of the refrigeration that the owner experiences, thanks to the beauty that is thermal mass. A transactive fridge would enable its owner to choose to cycle the chiller down if there isn’t much green power available, up to the point where the temperature change impairs the refrigeration, if the owner has a preference for green power. A transactive fridge is empowering for consumers.
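What would that transactive logic look like in practice? Here is a minimal sketch, assuming a hypothetical real-time price feed and green-power signal; the function name, thresholds, and setpoints are all illustrative assumptions, not any vendor’s actual API:

```python
# Hypothetical transactive-fridge controller sketch. All names,
# thresholds, and setpoints below are illustrative assumptions.

def choose_setpoint(price_per_kwh, green_fraction,
                    normal_c=3.0, eco_c=5.0,
                    price_trigger=0.25, green_floor=0.3,
                    max_safe_c=5.0):
    """Pick a chiller setpoint (degrees C) from the current electricity
    price and the share of green generation on the grid.

    During expensive or low-green hours the fridge coasts on its
    thermal mass at a warmer (but still safe) setpoint, so food
    quality is unaffected while peak demand falls.
    """
    setpoint = normal_c
    if price_per_kwh >= price_trigger or green_fraction < green_floor:
        # Cycle the chiller down; thermal mass carries the load.
        setpoint = eco_c
    # Never exceed the owner's food-safety limit.
    return min(setpoint, max_safe_c)

# Cheap, green hour: run normally.
print(choose_setpoint(0.08, 0.6))   # 3.0
# Peak-price hour: coast at the warmest safe setting.
print(choose_setpoint(0.40, 0.6))   # 5.0
```

The design choice worth noticing is that the owner sets the triggers once, and the appliance then responds to prices automatically; the consumer never has to watch the market in real time.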

A better article on the same topic comes from Greentech Media from earlier this summer (and has been sitting open in my browser to be blogged for too long!). In it Katherine Tweed argues that the current, first generation of smart appliances is oversold relative to its features. Without saying it explicitly, she makes the point that these first-generation smart appliances are expensive and likely to appeal to early adopters — buyers more at the Viking end of the product line than the bottom of the Whirlpool line. She also, correctly, points out that if the value proposition to the consumer is saving money by reducing energy use, the smart appliance features do not contribute much at the margin beyond the EnergyStar appliance standard; so if you are buying to save money by saving energy, you aren’t going to get much bang for the buck at the margin by choosing a smart fridge over an EnergyStar fridge. But as I remarked earlier, that’s not the only value proposition, because consumers care about other features.

It’s going to take some time to get the technology integration across the value chain to create all of these features, from spoilage detection to transactive automated response to dynamic pricing to preferentially choosing green power to sending the beer order to the store when my supply is low. It’s also going to take some choice in terms of electricity pricing for residential customers, and (surprise surprise) monopoly utilities and regulators are dragging their feet on that front. But don’t dismiss smart appliances today simply because V1.0 isn’t perfect. V1.0 never is.

ETA: I also recommend reading the comment thread on the Greentech Media article; it has a good back-and-forth about dynamic pricing.

Free the electricity consumer!

Lynne Kiesling

In late July I spoke at Cato University, which was great; I met so many interesting and thoughtful people, and learned a lot from my fellow participants and speakers. I’m also happy that Cato has made the presentation notes and recordings of the presentations available on their website, so you can see and hear them too!

One of my talks was called “The Economics of Intervention”, which is a large topic … so I focused on the interplay of technological change and regulation, ranging from Schumpeterian disruptive innovation to the history of the electricity industry and its regulation to current smart grid issues. You can also listen to a recording of my talk. If you are a regular KP reader you will recognize the themes and connections that I drew in the talk — innovation makes monopolies temporary, regulation that purports to “stand in for competition” cannot do so, and unless smart grid includes transactive technology and transactive market options, it’s not smart. The best way to deliver these potential benefits, and to avoid the distrust and Orwellian concerns attached to having such technology at the behest of government-granted monopolies and regulators, is to open up retail electricity markets, reduce entry barriers, and enable innovators and entrepreneurs to transition electricity from a commodity product to a service that can be differentiated, bundled with other services, etc.

While I was there I also talked with Caleb Brown about the potential value creation from smart grid technologies and customer-focused business models, and he has posted our conversation as a podcast. I like his framing of the issue: free the electricity consumer!

Schumpeterian tablet competition

Lynne Kiesling

If you want good examples of Schumpeterian competition, it doesn’t get much better than this: Amazon to take on Apple this summer with a Samsung-built tablet? The Engadget folks make

… a very reasoned argument that paints Amazon, not Samsung or the rest of the traditional consumer electronics industry, as Apple’s chief competition in the near-term tablet space. An idea that’ll be tough to argue against if Amazon — with its combined music (downloadable and streaming), video, book, and app ecosystem — can actually launch a dirt-cheap, highly-customized, 7-inch Android tablet this summer as Pete predicts.

This evolution is Schumpeterian in several ways, the most obvious of which is the process of creative destruction that disrupts equilibration by entrepreneurs creating a new product that will make some old products less valuable and ultimately obsolete. Note, interestingly, that one of the products likely to be made obsolete is Amazon’s own Kindle.

But the essential product, the tablet computer, is not actually new, which gets to the second, and in some ways more meaningful, Schumpeterian aspect of this evolution: this is a good example of competition for the platform. This is not just about coming up with some new gadget that consumers might like; this is about integrating the various applications and services that might create value for consumers into an elegant platform. Given Apple’s announcement this week of iCloud and Amazon’s existing cloud services, this Amazon tablet is part of that platform competition.