Deterrence of Homicide
Frank Warner has posted about an article that purports to demonstrate the effectiveness of the death penalty at reducing the murder rate. He doubts it; he worries that the source study is "rigged". In response, I have tried to trace the underlying analysis back to a specific study. The general impression I get from the article, though, is that there is an accumulation of results, mostly coming out of a certain school of thought, one which advocates an economic approach to such issues. I was able to identify a specific study, Mocan and Gittings (2001), which I thought was an appropriate exemplar. It estimates that 5 to 6 murders are prevented for every execution and that 1.5 murders are encouraged by each pardon. The study, available as a PDF through SSRN, can be requested for delivery to an e-mail address.
I’m not willing to say that the study was rigged, but I will go so far as to say that I have my doubts about the specific result and much greater doubts about the moral issues involved. Let me preface this discussion by saying that I am unsure of my own position on the issue of capital punishment. With some people, such as Saddam, I believe we have no choice, but it is generally not such a forced decision. The process is expensive and distasteful. It’s morally questionable. If we acknowledge that murder is forbidden to individuals, then how can we allow it to the state?
I can’t decide whether capital punishment is desirable in principle, but I do think our bias ought to be against it. The question is, can it be justified on a rational utilitarian basis?
This paper uses a data set that consists of the entire history of 6,143 death sentences between 1977 and 1997 in the United States to investigate the impact of capital punishment on homicide. This data set is merged with state panels that include crime and deterrence measures as well as state characteristics to analyze the impact of executions and governors' pardons on criminal activity. Because the exact month and year of each execution and pardon can be identified, they are matched with criminal activity in the relevant time frame. Controlling for a variety of state characteristics, the paper investigates the impact of the execution rate, pardon rate, homicide arrest rate, the imprisonment rate and the prison death rate on the rate of homicide. The models are estimated in a number of different forms, controlling for state fixed effects, common time trends, and state-specific time trends. Each additional execution decreases homicides by 5 to 6, while three additional pardons generate one to 1.5 additional homicides. These results are robust to model specifications and measurement of the variables.
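The state-by-month matching the abstract describes can be pictured with a toy merge. This is a minimal sketch with invented numbers and column names, not the paper's actual data or code:

```python
import pandas as pd

# Hypothetical execution events, keyed by state and exact month/year
executions = pd.DataFrame({
    "state": ["TX", "TX", "FL"],
    "year":  [1995, 1996, 1995],
    "month": [3, 7, 11],
    "executions": [1, 2, 1],
})

# Hypothetical state panel of criminal activity for the same periods
crime_panel = pd.DataFrame({
    "state": ["TX", "TX", "FL", "FL"],
    "year":  [1995, 1996, 1995, 1996],
    "month": [3, 7, 11, 1],
    "homicides": [120, 115, 90, 88],
})

# Match each execution to criminal activity in the relevant state-month;
# state-months with no execution get a count of zero
merged = crime_panel.merge(executions, on=["state", "year", "month"], how="left")
merged["executions"] = merged["executions"].fillna(0)
print(merged)
```

The fixed-effects regressions the abstract mentions would then be run on a panel like `merged`, with state dummies and time trends added as controls.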
… So you have here a study that suggests -- suggests, mind you -- that multiple lives will be saved across a dispersed population in exchange for the lives of a certain number of convicted murderers. The number of lives to be saved is a statistical estimate presumably based on the analysis of reliable social and criminal datasets along with various supporting variables that the researchers have determined to be relevant. The presumed mechanism for this process is the dissemination of information concerning actual punishment of one or more actual murderers. Potential murderers will, upon learning of someone recently executed for murder, be at least somewhat less likely to consummate the act. Since the author describes the study as an "economic" analysis, the result is to be interpreted as the marginal rate of return for each additional execution. How many lives are saved per execution? Executions are touted in this study as having a statistically significant suppressive effect on the number of murders.
The moral issues involved include the following:
- Uncertain Models and Consequentialism – How certain are we that the statistical estimate is actually non-zero? In fact there is some probability that it could be negative. What if it turned out that it actually cost extra lives to execute a prisoner? Would we stop immediately?
- Consequentialism and Consistency – If capital punishment did not exist today, would we use this evidence to institute it?
- Opportunity Cost – How much additional cost, in terms of dollars, would we be willing to tolerate in order to accomplish these executions? We know it costs much more to impose the death penalty than to maintain a prisoner for life. Money, believe it or not, is convertible into lives. The difference in cost could be used to build hospitals, buy body armor for soldiers and police, pay more police salaries, or even fund more of this university research into the effects of the death penalty. Think about that! These investments might actually save more lives for the same money.
- Untried Strategies – Are there any other ways to accomplish the same putative benefit? Could we cut down on murders, as Mayor Giuliani did, by picking up the trash and addressing petty crime more vigorously? How about enforcing court orders concerning wife-beaters? Can we honestly say that we have picked all the low-hanging fruit in this effort to reduce murders?
- Unacknowledged Benefits – Is there a hidden "real" reason why people might approve of capital punishment? For instance, they might approve of capital punishment for reasons of biblical interpretation. It might just be tradition or retribution or even sadistic satisfaction. Can any of these or other reasons be justified on their own, rendering the study results irrelevant? What is the cost/benefit value of these alternative reasons?
- Overlooked Costs – Likewise, have we measured the cost of vicarious suffering? Are there reasons to eliminate capital punishment based on the impact that it has on non-murderers? Perhaps sensitive people suffer a psychological impact. Perhaps it perpetuates elements of our culture that are vengeance oriented and discourages forgiveness. Perhaps it prevents us all from attaining satori.
- Hidden Agendas – Might some people be using the study results for polemical reasons only, disguising their true motivations? Would that matter?
- Missed Opportunities – If we are using these deterrence numbers to justify capital punishment, are we sure that we are getting the maximum effect? Are there means to magnify and target the impact? For instance, we could distribute pictures of the process in the dead convict’s former neighborhood. Or post it on a special 3-D web-site. We could send 3-D glasses to anyone who requests them. (Keep the names and addresses.)
- Unexplored Alternatives – Do we really need to use real people? Maybe we could disseminate disinformation explaining how capital prisoners are sometimes renditioned to Saudi Arabia. Perhaps an annual caning would be more effective. Convicts can be given the choice annually. You don’t even have to pursue the sentencing phase as long as they accept the caning. Videos available upon request. (Keep the names and addresses.)
- Reliability of the Conclusions – Once again, how willing are we to base important public policy on numbers that are currently disputable? Sometimes there is no choice but to go with what you’ve got, but in this case we seem to have a choice.
- Minority Discrimination – There is still, in spite of major social and judicial efforts, evidence that arrests and convictions are influenced by stereotypes and racial prejudice.
- Reliability of the Convictions – And most importantly, IMO, how sure are we, in specific cases, that the conviction was fair and accurate? There have been at least 200 exonerations in the US based on new DNA analysis of old evidence. There have been many further exonerations based on other evidential weaknesses as well. Authorities have not been notably willing to have their judgment questioned, and the process of correcting these decisions has been hampered, as well, by a number of unavoidable evidence losses over time. But these reversals have damaged the credibility of the legal process and of important techniques that had previously been relied on, such as witness identification. Justice, it turns out, is not just blind, but prone to hallucination and obsessive false certainty. Yes, we usually get it right, but we’re not doing nearly as well as we used to think.
I have looked specifically at the 2001 paper by H. Naci Mocan and R. Kaj Gittings, both of the University of Colorado at Denver. It looks pretty professional and (unlike the Iraq Mortality Study for instance) exhibits signs that a tremendous amount of effort was expended – perhaps too much effort. Some of the scientific and statistical aspects that I see as problematic are:
- Researcher Bias – Steve Levitt is acknowledged for his input. Levitt is a highly regarded economist and author of Freakonomics, a widely discussed, thought-provoking, and controversial popularization of his economic ideas. I admire him, myself, but there is a problem. He is the proponent of a number of unorthodox and even extravagant theories. The relevant one here is his conviction that even very small incentives have measurable effects on behavior in the aggregate. The authors of the study may well share this bias, and may even have a very small incentive to identify a measurable effect where there is none. As Levitt himself might say, the incentive is there, and, while not determinative in any specific case, incentives have effects.
- Overspecification – Modern statistical packages provide the opportunity to explore any data set to the point of exhaustion, applying multiple statistical measures to all the possible combinations of large numbers of candidate variables. With so many comparisons, some combinations are going to produce apparent effects merely by chance. The study also complicates the issue with a number of subtle variations on the model, some of which, IMO, include excessive numbers of interaction variables. Some variables will be identified as important by accident.
- Possible Filtering of Cases – I’m not clear on how the observations were allocated and why the dataset has apparently been reduced from 6,143 to less than 1,000. Researchers can sample or eliminate cases for all sorts of justifiable reasons, some of which may be influenced by subconscious expectations. It might be that the researchers have appropriately divvied up the cases for separately testing the distinctively different species of models. That would be a very admirable practice, but it opens the possibility that the appearance of certainty is amplified by separate reporting of similar models. Possibly cases were lost in the process of matching multiple datasets. In that event, there is more than a little chance that the matched cases were different from the unmatched cases.
- Multicollinearity – There are too many variables. I can’t tell if they’re all being used or if final conclusions are based only on the significant variables. I would be inclined to do a preliminary factor analysis to help eliminate redundant variables and to explore the underlying structure of the data. It is silly to imagine that there are more than, say, seven important sources of variance. And if there are that many, it is bold indeed to assume that the effect of one of the residual variables can be measured with any reasonable degree of precision. There would be too much play and inaccuracy in the major sources. I’m guessing that the authors don’t believe that the death penalty has a major impact on the murder rate, and non-major effects will be lost in the noise.
- Sources of Variance – Judging from the first line in the tables, three things, location, time and trend effects, explain a huge proportion of the variance in homicide rates. I’m willing to bet that better granularity of these measures would be more important than any of the variables that the authors are trying to isolate. I’m suggesting that they look at specific cities, at neighborhoods within cities, and at seasonal effects. (One idiosyncratic effect that they did remove was the 1995 terrorist attack in Oklahoma City.) Removing all these effects, they could first study the residual for non-randomness before they proceed to more subtle analyses.
- Unnecessary Time-series Assumptions – A component of the model is time-series analysis. This implies a certain pattern that relates deterrence effects over time with composite deterrence. As well as adding a lot of new lagged variables and further complexity, it ignores the most likely nature of the relationship between the fact of execution and the potential perpetrator (pp). That impact is zero/one: does capital punishment exist, and does it actually occur in the pp’s domain? Time-series analysis assumes that an exponentially diminishing effect persists from previous time periods. I’m inclined to think that the opposite is true. What matters is: when did the pp learn of this punishment? The imprinting of this social message would have occurred much earlier – probably around the age of eight.
- Insufficient Significance – The reported level of significance for the execution rate is worse than the ninety-five percent usually expected. The primary model identifies it as better than ninety percent, but less than ninety-five. The impact of the pardon rate is apparently much smaller, but much more significant. Why should that be? Could it be that the pardon rate reflects the frequency of errors committed by the legal system in the past? Could a generally incompetent legal system be a contributor to the murder rate?
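The overspecification worry above can be illustrated with a small simulation: regress pure noise on dozens of irrelevant candidate predictors, and a few will usually clear the conventional significance bar anyway. Everything here is synthetic; nothing is drawn from the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 500, 50           # 500 observations, 50 candidate predictors
X = rng.normal(size=(n, k))
y = rng.normal(size=n)   # the outcome is pure noise: no real effects exist

# Ordinary least squares with an intercept
Xd = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
resid = y - Xd @ beta
sigma2 = resid @ resid / (n - k - 1)
cov = sigma2 * np.linalg.inv(Xd.T @ Xd)
t = beta / np.sqrt(np.diag(cov))

# With ~450 degrees of freedom, |t| > 1.96 approximates "significant at 95%"
false_hits = int(np.sum(np.abs(t[1:]) > 1.96))
print(false_hits)  # typically around 0.05 * 50, i.e. a few spurious "effects"
```

None of these "significant" predictors means anything; that is exactly what a 5% false-positive rate buys when fifty variables are screened at once.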
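Likewise, the factor-analysis suggestion can be sketched with principal components: if many observed covariates are driven by a handful of latent sources, the eigenvalues of their correlation matrix reveal it. The data below are purely synthetic, generated from five hypothetical factors:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical: 40 observed covariates driven by only 5 latent factors plus noise
n, n_factors, n_vars = 300, 5, 40
latent = rng.normal(size=(n, n_factors))
loadings = rng.normal(size=(n_factors, n_vars))
X = latent @ loadings + 0.3 * rng.normal(size=(n, n_vars))

# Eigenvalues of the correlation matrix show how many independent
# sources of variance the 40 variables really carry
Xs = (X - X.mean(0)) / X.std(0)
eigvals = np.linalg.eigvalsh(np.corrcoef(Xs.T))[::-1]   # descending order
explained = eigvals / eigvals.sum()
print(np.round(explained[:8], 3))  # the top ~5 components dominate
```

A scree plot of `eigvals` would make the point visually: a sharp elbow after the fifth component, with the remaining 35 variables adding mostly redundancy.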
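And the time-series point can be made concrete by contrasting the two assumptions about how an execution's deterrent signal evolves. The monthly counts and the decay rate below are invented for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical monthly execution counts for one state (illustrative only)
executions = np.array([0, 1, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0])

# Assumption A (a typical distributed-lag model): deterrence
# decays geometrically as the execution recedes into the past
decay = 0.5
signal_decay = np.zeros_like(executions, dtype=float)
for t in range(len(executions)):
    lags = executions[: t + 1][::-1]          # most recent month first
    weights = decay ** np.arange(len(lags))   # 1, 0.5, 0.25, ...
    signal_decay[t] = lags @ weights

# Assumption B (the alternative suggested above): a zero/one indicator --
# has capital punishment visibly occurred at all in the pp's domain?
signal_indicator = (np.cumsum(executions) > 0).astype(float)

print(signal_decay)
print(signal_indicator)
```

Under assumption A the signal spikes and fades; under assumption B it switches on once and stays on. A regression built on the first form cannot recover an effect that actually behaves like the second.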
I know this analysis seems very critical. I’m doing my best on that score, because this kind of attitude is the method of science. I disagree with the methods and the results of the study, but I actually approve of the authors’ efforts to a great extent. The authors are not claiming certainty, nor are they suggesting that their result trumps any other analysis that might determine social policy. They are merely asking us to consider the possibility that such an effect might exist, implying that the effect is relevant, and they are giving an initial estimate of that effect. It needs to be confirmed, I think, and I’m not at all sure it can be. I personally believe that decisions on the death penalty will remain controversial and cannot be resolved in the realm of social science research.