Comments on economics, mystery fiction, drama, and art.

Friday, November 19, 2004

The Secret Sins of Economics

Deirdre McCloskey's pamphlet published by Prickly Paradigm Press, LLC--The Secret Sins of Economics--is a treat. It takes on what many consider to be the failings of economics, debunks them, and then concludes with what she feels are the two true sins--the failure of economic theory to move (much) beyond qualitative arguments on the theorizing hand and the fetish of statistical (rather than quantitative, economic) significance on the empirical hand.

While a few conclusions drawn from economic theory (Purchasing Power Parity) have strong quantitative implications, mostly our conclusions are qualitative--in general, rising incomes cause people to purchase more of most goods and services, for example. Or, other things equal, rising prices induce people to buy less. The problem is that there is no way around this. I can think of no purely theoretical analysis which would let us say more than that higher prices are associated with smaller purchases. There are any number of reasons for this, but the primary one is that economics is not rooted in the physical laws of the universe, but in the psychology of individual decision-makers, and there's no reason to expect that the quantitative aspects of those decisions will always and forever be the same.

There is, for example, substantial evidence that the income elasticity of demand for food declines as income rises. And, in fact, this is not a surprise, and has been predicted from solely theoretical arguments. However, the precise quantitative nature of that relationship is almost certainly contingent, not determined or unchanging.

So we are dependent on empirical work to determine by how much people change their behavior when their circumstances change. And I have no problem with that. And I have no problem with the reality that how much people's behavior changes in response to (say) a change in income differs for different goods and services, and was different 20 or 30 or 40 years ago from what it is now. So the first of McCloskey's two sins strikes me as an aspect of the world economists (and other social scientists) have to deal with, and not at all as a sin.

What this does, however, is make the nature of the empirical work even more crucial to our enterprise. McCloskey's argument against over-reliance on significance testing is rather simple to make, but rather complex to understand. She argues that our analysis does give us estimates of the magnitude of the impact of changes in some X on some Y, and that in every case those magnitudes are accompanied by noise. Her argument is that statistical significance testing elevates an assessment of the magnitude of the noise to the primary position, relegating the estimated magnitude of the effect to secondary importance.
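A minimal sketch of the mechanics McCloskey is objecting to (the numbers here are hypothetical, not from the pamphlet): the conventional t-test divides an estimated magnitude by its noise, so an economically large but noisy effect can be declared "insignificant" while an economically trivial but precisely measured one passes the test.

```python
# Hypothetical estimates, in whatever units the effect is measured:
big_effect, big_noise = 5.0, 3.0        # large effect, large standard error
tiny_effect, tiny_noise = 0.01, 0.004   # trivial effect, tiny standard error

# Significance testing ranks them by t = estimate / standard error,
# ignoring whether the magnitude itself matters.
t_big = big_effect / big_noise    # about 1.67 -- "insignificant" at the 5% level
t_tiny = tiny_effect / tiny_noise # 2.5 -- "significant" at the 5% level

print(t_big < 1.96, t_tiny > 1.96)  # prints: True True
```

The test, in other words, answers "how sure are we the effect isn't zero?"--not "how big is the effect, and does it matter?"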

Her primary example here is drawn from analyses of the effect of mammograms on early detection and treatment of breast cancer (rather than from economics), but the point is clear. The studies show that regular mammograms (beginning at age 40) are associated with a greater chance of early detection and treatment of breast cancer, thus saving lives. However, the magnitude of this result is not statistically significant (studies of mammograms for women over 50 yield similar, but larger and statistically significant results). Failure to use the results from studies of 40 - 50 year olds, she argues, is tantamount to murder: "Are you telling me, Mr. Medical Statistician, that even though there is a life-saving effect of early mammograms in the data on average, you are uncomfortable about claiming it? I thought the purpose of medical research was to save lives. Your comfort is not, as I understand it, what we are chiefly concerned about...The over-50 people [those who recommend regular mammograms only for women over 50] are killing patients. Maybe only slightly more than zero patients. But more than zero is murder" (pp. 50 - 51).

My problem is that the issue is more complicated than that. Let's suppose that the data show that the point estimate of the effect of early mammograms is to reduce death rates, and that the standard error of the estimate is such that we can be 70% certain that the true effect is to reduce death rates. That means there is a 30% chance that the true effect is either to leave death rates unchanged--or to raise them. Why might early mammograms raise death rates? Because treatment based on "false positives" might kill people. My (admittedly made-up) numbers, then, suggest odds of 7 - 3 that early mammograms save lives--and 3 - 7 that they might lead to additional deaths. That's what significance testing tells us. Are we comfortable with 7 - 3 odds? Or do we want something better?
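The made-up 70% figure can be backed out directly: under the usual normal approximation, the confidence that the true effect is in the right direction is just the normal CDF evaluated at the point estimate divided by its standard error. A sketch, with illustrative numbers chosen to reproduce the 70% in the text:

```python
from math import erf, sqrt

def prob_effect_positive(estimate, std_error):
    """P(true effect > 0) under a normal approximation:
    the standard normal CDF evaluated at estimate / std_error."""
    z = estimate / std_error
    return 0.5 * (1 + erf(z / sqrt(2)))

# Illustrative numbers only: a point estimate about 0.52 standard
# errors above zero yields roughly the 70% confidence in the text.
p = prob_effect_positive(0.524, 1.0)
print(round(p, 2))  # prints: 0.7
```

The conventional 5% significance threshold corresponds to demanding roughly 39-to-1 odds rather than 7-to-3; the question in the text is whether that much caution is warranted when lives are on the other side of the ledger.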

Her aspirin example makes the same point. By early in the study it was apparent that the odds were much better than even that aspirin reduced repeat heart attacks.

It's true that we tend to emphasize statistical significance, and I wouldn't deny that we, in some cases, over-emphasize it. But sometimes it reflects our reluctance to take too large a chance of doing something wrong.

How many of us, for example, would be willing to take these odds?

If the Federal Reserve raises interest rates (in the face of inflation), there's a 7-in-10 chance of resolving inflation without a depression--and a 3-in-10 chance of cratering the economy.

I don't know the answer to that, either. But significance testing is implied in our answer.
