Rachel's Precaution Reporter #103, August 15, 2007


[Rachel's introduction: Adam Finkel Responds to Peter Montague: "The biggest challenge I have for you is a simple one: explain to me why 'bad precaution' doesn't invalidate the precautionary principle, but why for 25 years you've been trashing risk assessment based on bad risk assessments!"]

By Adam Finkel

Dear Peter,

Whether we agree more than we disagree comes down to whether means or ends are more important. To the extent you share my view (and given your influence on me early in my career, I should probably say, "I share your view...") that we have a long way to go to provide a safe, healthy, and sustainable environment for the general and (especially) the occupational populations, our remaining differences are only those of strategy. Outcomes "producing fewer harms" for nature and public health are, I agree, the goal, and I assume you agree that we could also use fewer decisions for which more harm is the likely -- perhaps even the desired -- outcome of those in power.

But being on the same side with respect to our goals makes our differences about methods all the more important, because knowing where to go but not how to get there may ultimately be little better than consciously choosing to go in the wrong direction.

Your long-standing concern about quantitative risk assessment haunts me, if there's such a thing as being "haunted in a constructive way." I tell my students at the University of Medicine and Dentistry of New Jersey and at Princeton during the first class of every semester that I literally haven't gone a week in the past 10 years without wondering, thanks to you, if I am in fact "helping the state answer immoral questions" about acceptable risk and in so doing, am "essentially keeping the death camp trains running on time" (quote from Rachel's #519, November 7, 1996). I don't consider this analogy to be name-calling, because I have such respect for its source, so I hope you won't take offense if I point out that everyone who professes to care about maximizing life expectancy, human health, and the natural functioning of the planet's ecosystems ought to ask the same question of themselves. I do worry about quantitative risk assessment and its mediocre practitioners, as I will try to explain below, but I also wish that advocates of the precautionary principle would occasionally ask themselves whether more or fewer people will climb unwittingly aboard those death-camp trains if they run on a schedule dictated by "precaution."

And if the outcomes we value flourish more after an action based on quantitative risk assessment than they do after an action motivated by precaution, then a preference for the latter implies that noble means matter more than tangible ends -- which I appreciate in theory, but wonder what would be so noble about a strategy that does less good or causes more harm.

I distrust some versions of the precautionary principle for one basic reason.[1] If I re-express your first three-part definition in one sentence (on the grounds that in my experience, the fact of scientific uncertainty goes without saying), I get "if you reasonably suspect harm, you have a duty to act to avert harm, though the kind of action is up to you." Because I believe that either inaction or action can be unacceptably harmful, depending on circumstances, I worry that a principle that says "act upon suspicion of harm" can be used to justify anything. This was my point about the Iraq war, which I agree is despicable, but not only because the suspicion of harm was concocted (at least, inflated) but because the consequences of the remedy were so obviously glossed over.

Whatever principle guides decision makers, we need to ask how harmful the threat really is, and also what will or may happen if we act against it in a particular way. Otherwise, the principle degenerates into "eliminate what we oppose, and damn the consequences." I'm not suggesting that in practice, the precautionary principle does no better than this, just as I trust you wouldn't suggest that quantitative risk assessment is doomed to be no better than "human sacrifice, Version 2.0." Because I agree strongly with you (your "verbose version, point 5") that when health and dollars clash, we should err on the side of protecting the former rather than the latter, I reject some risk-versus-risk arguments, especially the ones from OMB [Office of Management and Budget] and elsewhere that regulation can kill more people than it saves by impoverishing them (see, for example, my 1995 article "A Second Opinion on an Environmental Misdiagnosis: The Risky Prescriptions of Breaking the Vicious Circle [by Judge Stephen Breyer]", NYU Environmental Law Journal, vol. 3, pp. 295-381, especially pp. 322-327). But the asbestos and Iraq examples show the direct trade-offs that can ruin the outcomes of decisions made in a rush to prevent. Newton's laws don't quite apply to social decision-making: for every action, there may be an unequal and not-quite-opposite reaction. "Benign" options along one dimension may not be so benign when viewed holistically. When I was in charge of health regulation at OSHA, I tried to regulate perchloroethylene (the most common solvent used in dry-cleaning laundry). I had to be concerned about driving dry-cleaners into more toxic substitutes (as almost happened in another setting when we regulated methylene chloride, only to learn of an attempt -- which we ultimately helped avert -- by some chemical manufacturers to encourage customers to switch to an untested, but probably much more dangerous, brominated solvent).
But encouraging or mandating "good old-fashioned wet cleaning" was not the answer either (even if it turns out to be as efficient as dry cleaning), once you consider that wet clothes are non-toxic but quite heavy -- and the ergonomic hazards of thousands of workers moving industrial-size loads from washers to dryers are the kind of "risk of action" that only very sophisticated analyses of precaution would even identify.

This is why I advocated "dampening the enthusiasm for prevention" -- meaning prevention of exposures, not prevention of disease, which I agree is the central goal of public health. That was a poor choice of words on my part, as I agree that when the link between disease and exposure is clear, preventing exposure is far preferable to treating the disease; the problem comes when exposures are eliminated but their causal connection to disease is unfounded.

To the extent that the precautionary principle -- or quantitative risk assessment, for that matter -- goes after threats that are not in fact as dire as worst-case fears suggest, or does so in a way that increases other risks disproportionately, or is blind to larger threats that can and should be addressed first, it is disappointing and dangerous. You can say that asbestos removal was not "good precaution" because private interests profited from it, and because the remediation was often done poorly, not because it was a bad idea in the first place. Similarly, you can say that ousting Saddam Hussein was not "good precaution" because the threat was overblown and it (he) could have been "reduced" (by the military equivalent of a pump-and-treat system?) rather than "banned" (with extreme prejudice). Despite the fact that in this case the invasion was justified by an explicit reference to the precautionary principle ("we have every reason to assume the worst and we have an urgent duty to prevent the worst from occurring"), I suppose you can argue further that not all actions that invoke the precautionary principle are in fact precautionary -- just as not all actions that claim to be risk-based are in fact so. But who can say whether President Bush believed, however misguidedly, that there were some signals of early warning emerging from Iraq? Your version of the precautionary principle doesn't say that "reasonable suspicion" goes away if you also happen to have a grudge against the source of the harm.

Again, in both asbestos removal and Iraq I agree that thoughtful advocates of precaution could have done much better. But how are these examples really any different from the reasonable suspicion that electromagnetic fields or irradiated food can cause cancer? Those hazards, as well as the ones Hussein may have posed, are/were largely gauged by anecdotal rather than empirical information, and as such are/were all subject to false positive bias. We could, as you suggest, seek controls that contain the hazard (reversibly) rather than eliminating it (irrevocably), while monitoring and re-evaluating, but that sounds like "minimizing" harm rather than "averting" it, and isn't that exactly the impulse you scorn as on the slippery slope to genocide when it comes from a risk assessor? And how, by the way, are we supposed to fine-tune a decision by figuring out whether our actions are making "things go badly," other than by asking "immoral questions" about whose exposures have decreased or increased, and by how much?

We could also, as you suggest, "really engage the people who will be affected," and reject alternatives that the democratic process ranks low. I agree that more participation is desirable as an end in itself, but believe we shouldn't be too sanguine about the results. I've been told, for example, that there exist people in the U.S. -- perhaps a majority, perhaps a vocal affected minority -- who believe that giving homosexual couples the civil rights conferred by marriage poses an "unacceptable risk" to the fabric of society. They apparently believe we should "avert" that harm. If I disagree, and seek to stymie their agenda, does that make me "anti-precautionary" (or immoral, if I use risk assessment to try to convince them that they have mis-estimated the likelihood or amount of harm)?

So I'm not sure that asbestos and Iraq are atypical examples of what happens when you follow the precautionary impulse to a logical degree, and I wonder if those debacles might even have been worse had those responsible followed your procedural advice for making them more true to the principle. But let's agree that they are examples of "bad precaution." The biggest challenge I have for you is a simple one: explain to me why "bad precaution" doesn't invalidate the precautionary principle, but why for 25 years you've been trashing risk assessment based on bad risk assessments! Of course there is a crevasse separating what either quantitative risk assessment or precaution could be from what they are, and it's unfair to reject either one based on their respective poor track records. You've sketched out a very attractive vision of what the precautionary principle could be; now let me answer some of your seven concerns about what quantitative risk assessment is.

(1) (quantitative risk assessment doesn't work for unproven hazards) I hate to be cryptic, but "please stay tuned." A group of risk assessors is about to make detailed recommendations to address the problem of treating incomplete data on risk as tantamount to zero risk. In the meantime, any "precautionary" action that exacerbates any of these "real-world stresses" will also be presumed incorrectly to do no harm...

(2) (quantitative risk assessment is ill-equipped to deal with vulnerable periods in the human life cycle) It's clearly the dose, the timing, and the susceptibility of the individual that act and interact to create risk. Quantitative risk assessment depends on simplifying assumptions that overestimate risk when the timing and susceptibility are favorable, and underestimate it in the converse circumstances. The track record of risk assessment has been one of slow but consistent improvement toward acknowledging the particularly vulnerable life stages and individuals (of whatever age) who are most susceptible, so that to the extent the new assumptions are wrong, they tend to over-predict. This is exactly what a system that interprets the science in a precautionary way ought to do -- and the alternative would be to say "we don't know enough about the timing of exposures, so all exposures we suspect could be a problem ought to be eliminated." This ends up either being feel-good rhetoric or leading to sweeping actions that may, by chance, do more good than harm.

(3) (quantitative risk assessment leaves out hard-to-quantify benefits) Here, as in the earlier paragraph about "pros and cons," you have confused what must be omitted with "what we let them omit sometimes." I acknowledge that most practitioners of cost-benefit analysis choose not to quantify cultural values, or to aggregate individual costs and benefits so that equitable distributions of either are given special weight. But when some of us risk assessors say "the benefits outweigh the costs" we consciously and prominently include justice, individual preferences, and "non-use values" such as the existence of natural systems on the benefits side of the ledger, and we consider salutary economic effects of controls as offsetting their net costs. Again, "good precaution" may beat "bad cost-benefit analysis" every time, but we'd see a lot more "good cost-benefit analysis" if its opponents would help it along rather than pretending it can't incorporate things that matter.

(4) (quantitative risk assessment is hard to understand) The same could be said about almost any important societal activity where the precise facts matter. I don't fully understand how the Fed sets interest rates, but I expect them to do so based on quantitative evaluation of their effect on consumption and savings, and to be able to answer intelligent questions about uncertainties in their analyses. "Examining the pros and cons of every reasonable approach," which we both endorse, also requires complicated interpretation of data on exposures, health effects, control efficiencies, costs, etc., even if the ruling principle is to "avert harm." So if precaution beats quantitative risk assessment along this dimension, I worry that it does so by replacing unambiguous descriptions ("100 deaths are fewer than 1000") with subjective ones ("Option A is 'softer' than Option B").

(5) (Decision-makers can orchestrate answers they most want to hear) "Politics" also enters into defining "early warnings," setting goals, and evaluating alternatives -- this is otherwise known as the democratic process. Removing the numbers from an analysis of a problem or of alternative solutions simply shifts the "torturing" of the numbers into a place where it can't be recognized as such.

(6) (quantitative risk assessment is based on unscientific assumptions) It sounds here as if you're channeling the knee-jerk deregulators at the American Council for Science and Health, who regularly bash risk assessment to try to exonerate threats they deem "unproven." Quantitative risk assessment does rely on assumptions, most of which are grounded in substantial theory and evidence; the alternative would be to contradict your point #1 and wait for proof which will never come. The 1991 European Commission study you reference involved estimating the probability of an industrial accident, which is indeed a relatively uncertain area within risk assessment, but one that precautionary decision-making has to confront as well. The 1991 NAS study was a research agenda for environmental epidemiology, and as such favored analyses based on human data, which suffer from a different set of precarious assumptions and are notoriously prone to not finding effects that are in fact present.

(7) (quantitative risk assessment over-emphasizes those most exposed to each source of pollution) This is a fascinating indictment of quantitative risk assessment that I think is based on a non sequitur. Yes, multiple and overlapping sources of pollution can lead to unacceptably high risks (and to global burdens of contamination), which is precisely why EPA has begun to adopt recommendations from academia to conduct "cumulative risk analyses" rather than regulating source by source. The impulse to protect the "maximally exposed individual" (MEI) is not to blame for this problem, however; if anything, the more stringently we protect the MEI, the less likely it is that anyone's cumulative risk will be unacceptably high, and the more equitable the distribution of risk will be. Once more, this is a problem that precautionary risk assessment can allow us to recognize and solve; precaution alone can at its most ambitious illuminate one hazard at a time, but it has no special talent for making risks (as opposed to particular exposures) go away.

I note that in many ways, your list may actually be too kind given what mainstream risk assessment has achieved to date. These seven possible deficiencies pale by comparison to the systemic problems with many quantitative risk assessments, which I have written about at length (see, for example, the 1997 article "Disconnect Brain and Repeat After Me: 'Risk Assessment is Too Conservative.'" In Preventive Strategies for Living in a Chemical World, E. Bingham and D. P. Rall, eds., Annals of the New York Academy of Sciences, 837, 397-417). Risk assessment has brought us fallacious comparisons, meaningless "best estimates" that average real risks away, and arrogant pronouncements about what "rational" people should and should not fear. But these abuses indict the practitioners -- a suspicious proportion of whom profess to be trained in risk assessment but never were -- not the method itself, just as the half-baked actions taken in precaution's name should not be generalized to indict that method.

So in the end, you seem to make room for a version of the precautionary principle in which risk assessment provides crucial raw material for quantifying the pros and cons of different alternative actions. Meanwhile, I have always advocated for a version of quantitative risk assessment that emphasizes precautionary responses to uncertainty (and to human interindividual variability), so that we can take actions where the health and environmental benefits may not even exceed the expected costs of control (in other words, "give the benefit of the doubt to nature and to public health"). The reason the precautionary principle and quantitative risk assessment seem to be at odds is that despite the death-camp remark, you are more tolerant of risk assessment than the center of gravity of precaution, while I am more tolerant of precaution than the center of gravity of my field. If the majorities don't move any closer together than they are now, and continue to be hijacked by the bad apples in each camp, I guess you and I will have to agree to disagree about whether mediocre precaution or mediocre quantitative risk assessment is preferable. But I'll continue to try to convince my colleagues that since risk assessment under uncertainty must either choose to be precautionary with health, or else choose to pretend that errors that waste lives and errors that waste dollars are morally equivalent, we should embrace the first bias rather than the second. I hope you will try to convince your colleagues (and readers) that precaution without analysis is like the "revelation" to invade Iraq -- it offers no justification but sorely needs one.

[1] As you admit, there are countless variations on the basic theme of precaution. I was careful to say in my review of Sunstein's book that I prefer quantitative risk assessment to "a precautionary principle that eschews analysis," and did not mean to suggest that most or all current versions of it fit the description.