No population bomb

From the op-ed pages of the New York Times, Erle C. Ellis explains, “Overpopulation Is Not the Problem”:

MANY scientists believe that by transforming the earth’s natural landscapes, we are undermining the very life support systems that sustain us. Like bacteria in a petri dish, our exploding numbers are reaching the limits of a finite planet, with dire consequences. Disaster looms as humans exceed the earth’s natural carrying capacity. Clearly, this could not be sustainable.

This is nonsense. Even today, I hear some of my scientific colleagues repeat these and similar claims — often unchallenged. And once, I too believed them. Yet these claims demonstrate a profound misunderstanding of the ecology of human systems. The conditions that sustain humanity are not natural and never have been. Since prehistory, human populations have used technologies and engineered ecosystems to sustain populations well beyond the capabilities of unaltered “natural” ecosystems.

All in all it seems a little better grounded than Paul Ehrlich’s approach in 1968, flatly declaring, “The battle to feed all of humanity is over. In the 1970s hundreds of millions of people will starve to death in spite of any crash programs embarked upon….” Unfortunately for Professor Ehrlich, hundreds of millions of people did not starve to death in the 1970s. His neo-Malthusian pessimism was popular in the Carter administration, but even Carter’s policies at the end tilted a bit more toward markets and optimism and away from bureaucracy and pessimism.

How cool is that nickel-iron battery?

It’s been too long since I’ve done a “how cool is that?” expression of awe and wonder at a piece of ingenious creativity. You may recall that early automobiles were battery-powered — the origins of the electric car are deep and over a century old. One battery technology, courtesy of (you guessed it) Thomas Edison, was nickel-iron; Edison was a proponent of electric vehicles.

Stanford researchers have been working on improving the performance of Edison’s nickel-iron battery, contributing to the portfolio of battery technologies that can improve electricity storage, which is the Holy Grail of electricity technology. As described by Stanford News,

“The Edison battery is very durable, but it has a number of drawbacks,” said Hongjie Dai, professor of chemistry. “A typical battery can take hours to charge, and the rate of discharge is also very slow.”

Now, Dai and his colleagues have dramatically improved the performance of this century-old technology. The Stanford team has created an ultrafast nickel-iron battery that can be fully charged in about 2 minutes and discharged in less than 30 seconds. …

Carbon has long been used to enhance electrical conductivity in electrodes. To improve the Edison battery’s performance, the Stanford team used graphene – nanosized sheets of carbon that are only 1-atom thick – and multi-walled carbon nanotubes, each consisting of about 10 concentric graphene sheets rolled together. …

“Our battery probably won’t be able to power an electric car by itself because the energy density is not ideal,” Wang said. “But it could assist lithium-ion batteries by giving them a real power boost for faster acceleration and regenerative braking.”

This approach to energy storage, using strongly coupled nanomaterials, has a lot of promise for battery researchers working to improve efficiency and energy density and to reduce charge decay over time.

A call for controlled experimentation in California’s energy efficiency programs

Michael Giberson

UC-Berkeley economist Catherine Wolfram has an op-ed in the Sacramento Bee advocating that the state use controlled experimentation to discover which energy efficiency programs work best. As she explains, retailers are increasingly using experimentation and advanced data analysis to discover how to increase sales. Surely, she suggests, when planning to spend nearly half a billion tax dollars annually on energy efficiency, California ought to devote a bit of effort to separating programs that sound good and work well from those that merely sound good.

[HT to Elizabeth M. Bailey at Energy Economics Exchange.]

Doing what seems like it should work: Experiments, tests, and social progress

Michael Giberson

My title is a little grand, at least the “and social progress,” but maybe it will be justified in some later, more carefully worked out version of the ideas clashing about in my head. As this is a blog, I’m sharing the more immediate, less carefully worked out version. ;-)

I’ve been reading Redirect: The Surprising New Science of Psychological Change. My wife brought it home from the library and then recommended it to me. (Thanks!) The book makes some surprisingly strong claims for personal improvements from what the author calls “story editing,” a bundle of techniques that subtly (or sometimes not so subtly) get people to revise their self narratives. (More from the Scientific American blog, more from Monitor on Psychology.)

A counterpart to that focus of the book is an emphasis on testing social and psychological interventions to discover what actually works. Author Timothy Wilson details numerous self-help and social change projects, some of which capture millions or even billions of dollars in public support, that seem like they should work but show no evidence of success when subjected to careful evaluation. In fact, some very expensive programs actually seem to worsen the problem they were designed to fix: programs to fight teenage smoking that lead to higher smoking rates, programs to discourage teenage pregnancy that lead to higher pregnancy rates, efforts to discourage littering – or cheating – on campus that have the opposite effects. Wilson advocates a strong preference for testing social interventions with randomized control experiments when possible (and ethical). When randomized control tests are not possible, other attempts at measurement and replication are important, even though they are difficult to do well.

Whether or not “story editing” is key to successful personal and social change – Wilson makes a strong case, but he could be cherry picking his evidence and I’m sure he has his professional critics – the emphasis on experimentation and testing interventions is an important one.

Lynne’s posts last week on experimentation in social contexts are related: Economic experimentation, economic growth, and regulation and Experimentation, Jim Manzi, and regulation/deregulation. I’m most of the way through Russ Roberts’s EconTalk interview with Jim Manzi that Lynne mentioned in the second-listed link (recommended); Manzi makes related arguments in favor of well-designed experiments where possible, and for trial-and-error experimentation where controlled experimentation is not possible.

In both Wilson’s book and the Manzi interview (and apparently in Manzi’s book Uncontrolled, which I haven’t read yet), the limits of multivariate analysis of naturally generated data – i.e., almost all econometric analysis – are examined and found wanting. As Manzi explains, “omitted variable bias” is massive when examining data on human systems; the systems are simply too complex to produce reliable, non-obvious predictions via multivariate analysis because you cannot control for all of the possible effects and interactions influencing the data. He suggests that while 90 percent of studies relying on well-designed randomized control experiments are subsequently replicated, that figure drops to 20 percent or so for studies relying primarily on well-designed multivariate analysis.
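Manzi’s point about omitted variable bias can be made concrete with a small simulation (my own illustration, not an example from his book): when an unobserved confounder drives both the “treatment” and the outcome, a naive regression finds a large effect that isn’t there, while randomly assigning the treatment recovers the truth.

```python
import random

random.seed(0)

def ols_slope(xs, ys):
    """One-variable OLS slope: cov(x, y) / var(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    var = sum((x - mx) ** 2 for x in xs) / n
    return cov / var

n = 50_000

# Observational data: an unobserved confounder z drives both the
# "treatment" x and the outcome y; the true effect of x on y is zero.
z = [random.gauss(0, 1) for _ in range(n)]
x_obs = [zi + random.gauss(0, 1) for zi in z]
y_obs = [2 * zi + random.gauss(0, 1) for zi in z]

# Experimental data: x is assigned at random, breaking its link with z,
# so the regression no longer picks up z's influence.
x_rct = [random.gauss(0, 1) for _ in range(n)]
y_rct = [2 * zi + random.gauss(0, 1) for zi in z]

print(f"naive estimate:      {ols_slope(x_obs, y_obs):.2f}")  # near 1.0, spurious
print(f"randomized estimate: {ols_slope(x_rct, y_rct):.2f}")  # near 0.0, the truth
```

Controlling for z in the first regression would remove the bias, but that is exactly the problem Manzi identifies: in real human systems the analyst can never be sure of observing, let alone measuring, all of the z’s.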

In a post on the deterrence effect of the death penalty, Timothy Taylor provides an example of the difficulties of using multivariate analysis to examine social policy. Taylor draws on a recent National Research Council study on the topic, which like a similar study published in 1978 has concluded “available studies provide no useful evidence on the deterrent effect of capital punishment.” Taylor then explains several reasons why it has been hard to draw firm conclusions from the data. While he doesn’t use the term “omitted variable bias,” it is among the problems that the NRC study finds hampering results in this area.

The views of both Wilson and Manzi, and the case study on the effects of the death penalty, all point to a certain humility concerning our claims to understand how the world works. But humility isn’t the end of the story, and it isn’t an argument to stop; it is an argument to hold our beliefs about the social world less firmly and to trust them selectively: trust knowledge derived from replicated randomized-control experiments most, trust knowledge from replicated multivariate analysis much less, and trust knowledge based on trial-and-error learning less as well.

These ideas will, once better worked out in my head, probably also mention Vernon Smith’s work on constructivist and ecological rationality. Of course, V. Smith is known to be a fan of experimental approaches to understanding social phenomena as well.

The constructivist way forward: experimentation! testing! social progress!

Adam Smith and mirror neurons paper published

Lynne Kiesling

I mentioned a while ago my working paper on the neuroscience research on mirror neurons and its relevance for Adam Smith’s theory of sympathy developed in The Theory of Moral Sentiments (1759). After revision and some extremely helpful referee guidance, the paper has been published in The Review of Austrian Economics:

Mirror neuron research and Adam Smith’s concept of sympathy: Three points of correspondence

In The Theory of Moral Sentiments, Adam Smith asserts that humans have an innate interest in the fortunes of other people and desire for sympathy with others. In Smith’s theory, sympathy is an imperfectly reflected combination of emotion and judgment when one observes someone (the agent) in a particular situation, and imagines being that person in that situation. That imagination produces a degree of interconnectedness among individuals. Recent neuroscience research on mirror neurons provides evidence consistent with Smith’s assertion, suggesting that humans have an innate capability to understand the mental states of others at a neural level. A mirror neuron fires both when an agent acts and when an agent observes that action being performed by another; the name derives from the “mirroring” of the action in the brain of the observer. This neural network and the capabilities arising from it have three points of correspondence with important aspects of the Smithian sympathetic process: an agent’s situation as a stimulus or connection between two similar but separate agents, an external perspective on the actions of others, and an innate imaginative capacity that enables an observer to imagine herself as the agent, in the agent’s situation. Both this sympathetic process and the mirror neuron system predispose individuals toward coordination of the expression of their emotions and of their actions. In Smith’s model this decentralized coordination leads to the emergence of social order, bolstered and reinforced by the emergence and evolution of informal and formal institutions grounded in the sympathetic process. Social order grounded in this sympathetic process relies on a sense of interconnectedness and on shared meanings of actions, and the mirror neuron system predisposes humans toward such interconnection.

If you are not a subscriber and would like to read the paper, the manuscript version is available on my SSRN page.

Monsters of Grok t-shirts

Lynne Kiesling

Here’s some outstanding geek attire! Monsters of Grok is a line of t-shirts that use rock band t-shirt logo designs, but the names are instead famous scientists and intellectuals such as Ada Lovelace (done as a Ladytron logo), Isaac Newton (as Iron Maiden), and Benjamin Franklin (as Black Flag). I fell over laughing when I first saw these, literally hyperventilating and weeping. Guess that makes me a geek rocker …

Today, to make myself feel better for having such a nasty ear infection (with gratitude to those of you who have sent get well wishes!), I finally broke down and purchased two of them. The first one’s easy to guess if you’re a regular KP reader; the second is a little trickier, as there were several contenders. If you guess them both you get a gold star!

Economics of power market design compared unfavorably to climate science

Michael Giberson

From the Harvard Electricity Policy Group meeting in February 2011. By convention the meetings are off the record, so the speaker is not identified in the summary:

I think the most important distinction between the fields of climate science and economics for me is the question of evidence. Science is characterized by a subtle interplay between conceptual models and the evidence that supports or contradicts them. There’s a rigorous process of analyzing and evaluating evidence and improving or discarding the conceptual models as the evidence dictates. In economics, evidence can often be harder to come by and more ambiguous in nature. This instance is a strong case in point. There is no real precedent. The markets are brand new. And with a few exceptions, the RTO regions have been basically in capacity surplus since the markets came into being for reasons having nothing to do with the capacity markets themselves.

Where evidence is lacking, theorists can find themselves somewhat less constrained. Under these circumstances, whichever side has the loudest voices or the most money or the most impressive resumes can dominate the conversation. This should never be mistaken as proof that their positions are correct.

[...]

I’m aware that many will argue, and have argued, that a focus on market efficiency will in the long run lead to the greatest consumer benefit. This may be true in a nonexistent, two-sided perfect market with no barriers to entry. But it is a tenuous article of faith when applied to real electricity markets. And given the untold billions in costs to get to that uncertain future, it’s no wonder that consumer advocates basically unanimously are not eager to take that bet.

The implementation of capacity markets based on these unproven theories has already led, predictably, to the transfer of tens of billions of dollars of ratepayer wealth to generation owners. I say predictably because this outcome was clearly anticipated by all parties and articulated by many. The whole point was to raise costs. On the other hand, there’s not a shred of hard evidence that this process has led to new generation where it is most needed, or to avoided retirements of needed capacity or to cost-saving transmission investments. These are the ostensible purposes of the construct. There is no reason to believe that it would. It’s just too good an arrangement for existing generation owners as it is.

The speaker observes that capacity markets have also spurred development of demand-side resources, but this “positive benefit … has come at an astronomical cost.”

As an alternative to capacity markets, the speaker suggests a combination of state-sponsored investments, long-term contracts, and short-term spot markets. He presents no evidence that this approach would work better for consumers; it just seems good to him. I wonder, scientifically speaking, why not just examine the existing evidence on prices and investments in “energy only” power markets in Texas, Alberta, and Australia?