::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Rachel's Precaution Reporter #103

"Foresight and Precaution, in the News and in the World"

Wednesday, August 15, 2007
www.rachel.org
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::: 

Table of Contents...

Two Friends Debate Risk Assessment and Precaution
  In this issue of the Precaution Reporter, Adam Finkel, a risk
  assessor, and Peter Montague, an advocate for precaution, engage in a
  dialog about risk and precaution.
Adam Finkel Reviews Cass Sunstein's Book, Risk and Reason
  This book review by Adam Finkel started the dialog: "I believe that
  at its best, QRA [quantitative risk assessment] can serve us better
  than a 'precautionary principle' that eschews analysis in favor of
  crusades against particular hazards that we somehow know are
  needlessly harmful and can be eliminated at little or no economic or
  human cost."
A Letter To My Friend Who Is a Risk Assessor
  In response to Adam Finkel's review of Cass Sunstein's book,
  Peter Montague wrote this letter explaining why the nation's asbestos-
  removal programs for schools, and President Bush's invasion of Iraq,
  are not examples of precaution -- and listing some major problems with
  decisions based narrowly on quantitative risk assessment.
Risk Assessment and Precaution: Common Strengths and Flaws
  "The biggest challenge I have for you is a simple one: explain to
  me why 'bad precaution' doesn't invalidate the precautionary
  principle, but why for 25 years you've been trashing risk assessment
  based on bad risk assessments!"
Two Rules for Decisions: Trust in Economic Growth vs. Precaution
  Why do we need precaution? Because there is growing evidence that
  the entire planet and all its inhabitants are imperiled by the total
  size of the human enterprise. As a result, the precautionary principle
  has arisen as a new way to balance our priorities. Two overarching
  decision rules are competing for supremacy -- "trust in economic
  growth" vs. "precaution." Europe is edging toward precaution. The U.S.
  is, so far, sticking with "trust in economic growth."

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

From: Rachel's Precaution Reporter #103, Aug. 15, 2007

TWO FRIENDS DEBATE RISK ASSESSMENT AND PRECAUTION

By Peter Montague

Recently my friend Adam Finkel -- a skilled and principled risk
assessor -- won two important victories over the Occupational Safety
and Health Administration (OSHA).

In 2000, Adam was appointed Regional Administrator for OSHA in charge
of the Denver office. When he began to suspect that OSHA inspectors
were being exposed to dangerous levels of the highly toxic metal
beryllium, he took precautionary action -- urging OSHA to test
beryllium levels in OSHA inspectors. It cost him his job.

The agency did not even want to tell its inspectors they were being
exposed to beryllium, much less test them. So Adam felt he had no
choice -- in 2002, he blew the whistle and took his concerns public.
OSHA immediately relieved him of his duties as Regional
Administrator, and moved him to Washington where they changed his
title to "senior advisor" and assigned him to the National Safety
Council -- a place where "they send people they don't like," he would
later tell a reporter.

Adam sued OSHA under the federal whistleblower protection statute and
eventually won two years' back pay, plus a substantial lump sum
settlement, but he didn't stop there. In 2005, he lodged a Freedom of
Information Act lawsuit against the agency, asking for all
monitoring data on toxic exposures of all OSHA inspectors.
Meanwhile, OSHA began testing its inspectors for beryllium, finding
exactly what Adam had suspected they would find -- dangerously high
levels of the toxic metal in some inspectors.

Adam is now a professor in the Department of Environmental and
Occupational Health, School of Public Health, University of Medicine
and Dentistry of New Jersey (UMDNJ); and a visiting scholar at the
Woodrow Wilson School, Princeton University. At UMDNJ, he teaches
"Environmental Risk Assessment."

Earlier this summer, Adam won his FOIA lawsuit. A judge ruled that
OSHA has to hand over 2 million lab tests on 75,000 employees going
back to 1979. It was a stunning victory over an entrenched
bureaucracy.

Meanwhile in 2006 the American Public Health Association selected
Adam to receive the prestigious David Rall Award for advocacy in
public health. You can read his acceptance speech here.

When Adam's FOIA victory was announced early in July, I sent him a
note of congratulations. He sent back a note, attaching a couple of
articles, one of which was a book review he had published recently
of Cass Sunstein's book, Risk and Reason. Sunstein doesn't "get" the
precautionary principle -- evidently, he simply sees no need for it.
Of course, the reason we need precaution is that the cumulative
impacts of the human economy are now threatening to wreck our only
home -- as Joe Guth explained last week in Rachel's News (reprinted
in this issue of the Precaution Reporter).

In any event, I responded to Adam's book review by writing "A
letter to my friend, who is a risk assessor," and I invited Adam to
respond, which he was kind enough to do.

So there you have it.

Do any readers want to respond to either of us? Please send responses
to peter@rachel.org.


::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

From: Journal of Industrial Ecology (pg. 243), Oct. 1, 2005

ADAM FINKEL REVIEWS CASS SUNSTEIN'S BOOK, RISK AND REASON

A review of: Risk and Reason: Safety, Law, and the Environment, by
Cass R. Sunstein. Cambridge, UK: Cambridge University Press, 2002, 342
pp., ISBN 0521791995, $25.00 (Also in paperback: ISBN 0521016258
$22.99).

By Adam M. Finkel

As someone dedicated to the notion that society needs quantitative
risk assessment (QRA) now more than ever to help make decisions about
health, safety, and the environment, I confess that I dread the
arrival of each new book that touts QRA or cost-benefit analysis as a
"simple tool to promote sensible regulation." Although risk analysis
has enemies aplenty, from both ends of the ideological spectrum, with
"friends" such as John Graham (Harnessing Science for Environmental
Regulation, 1991), Justice Stephen Breyer (Breaking the Vicious
Circle, 1994), and now Cass Sunstein, practitioners have their hands
full.

I believe that at its best, QRA can serve us better than a
"precautionary principle" that eschews analysis in favor of crusades
against particular hazards that we somehow know are needlessly harmful
and can be eliminated at little or no economic or human cost. After
all, this orientation has brought us increased asbestos exposure for
schoolchildren and remediation workers in the name of prevention, and
also justified an ongoing war with as pure a statement of the
precautionary principle as we are likely to find ("we have every
reason to assume the worst, and we have an urgent duty to prevent the
worst from occurring," said President Bush in October 2002 about
weapons of mass destruction in Iraq). More attention to benefits and
costs might occasionally dampen the consistent enthusiasm of the
environmental movement for prevention, and might even moderate the on-
again, off-again role precaution plays in current U.S. economic and
foreign policies. But "at its best" is often a distant, and even a
receding, target -- for in Risk and Reason, Sunstein has managed to
sketch out a brand of QRA that may actually be less scientific, and
more divisive, than no analysis at all.

To set environmental standards, to set priorities among competing
claims for environmental protection, or to evaluate the results of
private or public actions to protect the environment, we need reliable
estimates of the magnitude of the harms we hope to avert as well as of
the costs of control. The very notion of eco-efficiency presupposes
the ability to quantify risk and cost, lest companies either waste
resources chasing "phantom risks" or declare victory while needless
harms continue unabated. In a cogent chapter (Ch. 8) on the role of
the U.S. judiciary in promoting analysis, Sunstein argues persuasively
that regulatory agencies should at least try to make the case that the
benefits of their efforts outweigh the costs, but he appears to
recognize that courts are often ill-equipped to substitute their
judgments for the agencies' about precisely how to quantify and to
balance. He also offers a useful chapter (Ch. 10) on some creative
ways agencies can transcend a traditional regulatory enforcer role,
help polluters solve specific problems, and even enlist them in the
cause of improving eco-efficiency up and down the supply chain. (I
tried hard to innovate in these ways as director of rulemaking and as
a regional administrator at the U.S. Occupational Safety and Health
Administration, with, at best, benign neglect from political
appointees of both parties, so Sunstein may be too sanguine about the
practical appeal of these innovations.)

Would that Sunstein had started (or stopped) with this paean to
analysis as a means to an end -- perhaps to be an open door inviting
citizens, experts, those who would benefit from regulation, and those
reined in by it to "reason together." Instead, he joins a chorus of
voices promoting analysis as a way to justify conclusions already
ordained, adding his own discordant note. Sunstein clearly sees QRA as
a sort of national antipsychotic drug, which we need piped into our
homes and offices to dispel "mass delusions" about risk. He refers to
this as "educating" the public, and therein lies the most
disconcerting aspect of Risk and Reason: he posits a great divide
between ordinary citizens and "experts," and one that can only be
reconciled by the utter submission of the former to the latter. "When
ordinary people disagree with experts, it is often because ordinary
people are confused," he asserts (p. 56) -- not only confused about
the facts, in his view, but not even smart enough to exhibit a
rational form of herd behavior! For according to Sunstein, "millions
of people come to accept a certain belief [about risk] simply because
of what they think other people believe" (p. 37, emphasis added).

If I thought Sunstein was trying by this to aggrandize my fellow
travelers -- scientists trained in the biology of dose-response
relationships and the chemistry and physics of substances in the
environment, the ones who actually produce risk assessments -- I
suppose I would feel inwardly flattered, if outwardly sheepish, about
this unsolicited elevation above the unwashed masses. But the reader
will have to look long and hard to find citations to the work of
practicing risk assessors or scientists who helped pioneer these
methods. Instead, when Sunstein observes that "precisely because they
are experts, they are more likely to be right than ordinary people...
brain surgeons make mistakes, but they know more than the rest of us
about brain surgery" (p. 77), he has in my view a quaint idea of where
to find the "brain surgeons" of environmental risk analysis.

He introduces the book with three epigrams, which I would oversimplify
thus: (1) the general public neglects certain large risks worthy of
fear, instead exhibiting "paranoia" about trivial risks; (2) we
maintain these skewed priorities in order to avoid taking
responsibility for the (larger) risks we run voluntarily; and (3)
defenders of these skewed priorities are narcissists who do not care
if their policies would do more harm than good. The authors of these
epigrams have something in common beyond their worldviews: they are
all economists. Does expertise in how markets work (and that
concession would ignore the growing literature on the poor track
record of economists in estimating compliance costs in the regulatory
arena) make one a "brain surgeon" qualified to bash those with
different views about, say, epidemiology or chemical carcinogenesis?

To illustrate the effects of Sunstein's continued reliance on one or
two particular subspecies of "expert" throughout the rest of his book,
I offer a brief analysis of Sunstein's five short paragraphs (pp.
82-83) pronouncing that the 1989 public outcry over Alar involved
masses of "people [who] were much more frightened than they should
have been."1 Sunstein's readers learn the following "facts" in this
example:

** Alar was a "pesticide" (actually, it regulated the growth of apples
so that they would ripen at a desired time).

** "About 1 percent of Alar is composed of UDMH [unsymmetrical
dimethylhydrazine], a carcinogen" (actually, this is roughly the
proportion found in raw apples -- but when they are processed into
apple juice, about five times this amount of UDMH is produced).

** The Natural Resources Defense Council (NRDC) performed a risk
assessment claiming that "about one out of every 4,200 [preschool
children] exposed to Alar will develop cancer by age six" (actually,
NRDC estimated that exposures prior to age six could cause cancer with
this probability sometime during the 70-year lifetimes of these
children -- a huge distinction, with Sunstein's revision making NRDC
appear unfamiliar with basic assumptions about cancer latency
periods).

** The EPA's current risk assessment is "lower than that of the NRDC
by a factor of 600" (actually, the 1/250,000 figure Sunstein cites as
EPA's differs from NRDC's 1/4,200 figure by only a factor of 60
(250,000 ÷ 4,200 ≈ 60). Besides, EPA never calculated the risk at one
in 250,000. After Alar's manufacturer (Uniroyal) finished a state-of-
the-art study of the carcinogenicity of UDMH in laboratory animals,
EPA (Federal Register, Vol. 57, October 8, 1992, pp. 46,436-46,445)
recalculated the lifetime excess risk to humans at 2.6 x 10^-5, or 1
in 38,000. And, acting on recommendations from the U.S. National
Academy of Sciences, EPA has subsequently stated that it will consider
an additional tenfold safety factor to account for the increased
susceptibility of children under age 2, and a threefold factor for
children aged 2 to 16 -- which, had they been applied to UDMH, would
have made the EPA estimate almost equal to the NRDC estimate that made
people "more frightened than they should have been"; the arithmetic is
checked in the sketch following this list).

** "A United Nations panel... found that Alar does not cause cancer in
mice, and it is not dangerous to people" (true enough, except that
Sunstein does not mention that this panel invoked a threshold model of
carcinogenesis that no U.S. agency would have relied on without more
and different scientific evidence: the U.N. panel simply ignored the
large number of tumors at the two highest doses in Uniroyal's UDMH
study and declared the third-highest dose to be "safe" because that
dose produced tumors, but at a rate not significantly higher than the
background rate).

** A 60 Minutes broadcast "instigated a public outcry... without the
most simple checks on its reliability or documentation" (readers might
be interested, however, that both a federal district court and a
federal appeals court summarily dismissed the lawsuit over this
broadcast, finding that the plaintiffs "failed to raise a genuine
issue of material fact regarding the falsity of statements made during
the broadcast").

** The demand for apples "plummeted" during 1989 (true enough, but
Sunstein neglects to mention that within five years after the
withdrawal of Alar the apple industry's revenues doubled versus the
level before the controversy started).
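
For readers who want to check this arithmetic, the corrections in the
EPA bullet above can be verified in a few lines. The following sketch
(in Python) is mine, not Finkel's; the only inputs are the figures
quoted in the review, and applying the tenfold child-protection factor
at the end is purely illustrative:

    # Check the arithmetic behind the EPA bullet above.
    nrdc_risk = 1 / 4_200      # NRDC: lifetime cancer risk from preschool exposure
    epa_cited = 1 / 250_000    # the figure Sunstein attributes to EPA

    # Sunstein claims a factor-of-600 gap; the actual ratio is about 60.
    print(nrdc_risk / epa_cited)    # -> 59.52..., a factor of 60, not 600

    # EPA's 1992 recalculation from the Uniroyal UDMH study.
    epa_1992 = 2.6e-5
    print(1 / epa_1992)             # -> 38461.5..., i.e., roughly 1 in 38,000

    # Applying the later tenfold safety factor for children under age 2
    # (illustrative only; EPA said it would "consider" such a factor)
    # brings EPA's number close to NRDC's original estimate.
    print(1 / (epa_1992 * 10))      # -> 3846.1..., about 1 in 3,800 vs. 1 in 4,200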

Sunstein's entire source material for these scientific and other
conclusions? Four footnotes from a book by political scientist Aaron
Wildavsky and one quotation from an editorial in Science magazine
(although the incorrect division of 250,000 by 4,200 and the mangling
of the NRDC risk assessment appear to be Sunstein's own
contributions). One reason the general public annoys Sunstein by
disagreeing with the "experts," therefore, is that he has a very
narrow view of where one might look for a gold standard against which
to judge the merits of competing conclusions. Perhaps Sunstein himself
has come to certain beliefs about Alar and other risks "simply because
of what [he thinks] other people believe," and comforts himself that
the people he agrees with must be "experts."

Similarly, Sunstein makes some insightful points about the public's
tendency to assume that the risks are higher for items whose benefits
they perceive as small, but he fails to notice the mountains of
evidence that his preferred brand of experts tend to impute high
economic costs to regulations that they perceive as having low risk-
reduction benefits. He accepts as "unquestionably correct" the
conclusion of Tengs and colleagues (1995) that government badly
misallocates risk-reduction resources, for example, without
acknowledging Heinzerling's (2002) finding that in 79 of the 90
environmental interventions Tengs and colleagues accused of most
severely wasting the public's money, the agency involved never
required that a dime be spent to control those hazards, probably
because analysis showed such expenditures to be of questionable cost-
effectiveness.

Finally, Sunstein fails to acknowledge the degree to which experts can
agree with the public on broad issues, and can also disagree among
themselves on the details. He cites approvingly studies by Slovic and
colleagues suggesting that laypeople perform "intuitive toxicology" to
shore up their beliefs, but fails to mention that in the most recent
of their studies (1995), toxicologists and the general public both
placed 9 of the same 10 risks at the top of 38 risks surveyed, and
agreed on 6 of the 10 risks among the lowest 10 ranked. Yet when
toxicologists alone were given information on the carcinogenic effects
of "Chemical B" (data on bromoethane, with its identity concealed) in
male and female mice and rats, only 6% of them matched the conclusions
of the experts at the U.S. National Toxicology Program that there was
"clear evidence" of carcinogenicity in one test group (female mice),
"equivocal evidence" in two others, and "some evidence" in the fourth.
"What are ordinary people thinking?" (p. 36) when they disagree with
the plurality of toxicologists, Sunstein asks, without wondering what
these toxicologists must have been thinking to disagree so much with
each other. One simple answer is that perhaps both toxicologists and
the general public, more so than others whose training leads them
elsewhere, appreciate the uncertainties in the raw numbers and the
room for honest divergence of opinion even when the uncertainties are
small. These uncertainties become even more influential when multiple
risks must be combined and compared -- as in most life-cycle
assessments and most efforts to identify and promote more eco-
efficient pathways -- so Sunstein's reliance on a style of expertise
that regards uncertainty as an annoyance we can downplay or "average
away" is particularly ill-suited to broader policy applications.

I actually do understand Sunstein's frustration with the center of
gravity of public opinion in some of these areas. Having worked on
health hazards in the general environment and in the nation's
workplaces, I devoutly wish that more laypeople (and more experts)
could muster more concern about parts per thousand in the latter arena
than parts per billion of the same substances in the former. But I
worry that condescension is at best a poor strategy to begin a
dialogue about risk management, and hope that expertise would aspire
to more than proclaiming the "right" perspective and badgering people
into accepting it. Instead, emphasizing the variations in expertise
and orientation among experts could actually advance Sunstein's stated
goal of promoting a "cost-benefit state," as it would force those who
denounce all risk and cost-benefit analysis to focus their sweeping
indictments where they belong. But until those of us who believe in a
humble, humane brand of risk assessment can convince the public that
substandard analyses indict the assessor(s) involved, not the entire
field, I suppose we deserve to have our methods hijacked by experts
outside our field or supplanted by an intuitive brand of "precaution."

==============

Adam M. Finkel, School of Public Health, University of Medicine and
Dentistry of New Jersey, Piscataway, New Jersey, USA

Note

[1] This is admittedly not a disinterested choice, as I was an expert
witness for CBS News in its successful attempts to have the courts
summarily dismiss the product disparagement suit brought against it
for its 1989 broadcast about Alar. But Sunstein's summaries of other
hazards (e.g., toxic waste dumps, arsenic, airborne particulate
matter) could illustrate the same general point.

References

Heinzerling, L. 2002. Five hundred life-saving interventions and their
misuse in the debate over regulatory reform. Risk: Health, Safety and
Environment 13(Spring): 151-175.

Slovic, P., T. Malmfors, D. Krewski, C. K. Mertz, N. Neil, and S.
Bartlett. 1995. Intuitive toxicology, Part II: Expert and lay
judgments of chemical risks in Canada. Risk Analysis 15(6): 661-675.

Tengs, T. O., M. E. Adams, J. S. Pliskin, D. G. Safran, J. E. Siegel,
M. C. Weinstein, and J. D. Graham. 1995. Five hundred life saving
interventions and their cost-effectiveness. Risk Analysis 15(3): 369-
390.


::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

From: Rachel's Precaution Reporter #103, Aug. 15, 2007

A LETTER TO MY FRIEND WHO IS A RISK ASSESSOR

Dear Adam,

It was really good to hear from you. I'm delighted that you're now
working in New Jersey, with joint appointments at Princeton University
and at the School of Public Health at the University of Medicine and
Dentistry of New Jersey (UMDNJ). Everyone in New Jersey is taking a
bath in low levels of toxic chemicals, so we need all the help we can
get, and you're exactly the kind of person we most need help from --
honest, knowledgeable, committed, and principled. Your knowledge of
toxicology and risk assessment -- and the courage you demonstrated as
a government whistle-blower within the Occupational Safety and
Health Administration -- are sorely needed in the Garden State, as
they are everywhere in America.

I read with real interest your review of Cass Sunstein's book, Risk
and Reason. I thought you did an excellent job of showing that
Sunstein "has managed to sketch out a brand of QRA [quantitative risk
assessment] that may actually be less scientific, and more divisive,
than no analysis at all." To me, it seems that Sunstein is more
interested in picking a fight with his fellow liberals than in helping
people make better decisions.

What I want to discuss in this note to you is your attack on the
precautionary principle. In the second paragraph of your review, you
wrote, "I believe that at its best, QRA [quantitative risk assessment]
can serve us better than a 'precautionary principle' that eschews
analysis in favor of crusades against particular hazards that we
somehow know are needlessly harmful and can be eliminated at little or
no economic or human cost. After all, this orientation has brought us
increased asbestos exposure for schoolchildren and remediation workers
in the name of prevention, and also justified an ongoing war with as
pure a statement of the precautionary principle as we are likely to
find ("we have every reason to assume the worst, and we have an urgent
duty to prevent the worst from occurring," said President Bush in
October 2002 about weapons of mass destruction in Iraq)."

The two examples you give -- asbestos removal, and Mr. Bush's pre-
emptive war in Iraq -- really aren't good examples of the
precautionary principle. Let's look at the details.

All versions of the precautionary principle have three basic parts:

1) If you have reasonable suspicion of harm

2) and you have scientific uncertainty

3) then you have a duty to take action to avert harm (though the kind
of action to take is not spelled out in the precautionary principle).

Those are the three basic elements of the precautionary principle,
found in every definition of the principle.

And here is a slightly more verbose version of the same thing:

1. Pay attention and heed early warnings. In a highly technological
society, we need to keep paying attention because so many things can
go wrong, with serious consequences.

2. When we develop reasonable suspicion that harm is occurring or is
about to occur, we have a duty to take action to avert harm. The
action to take is not spelled out by the precautionary principle but
proponents of the principle have come up with some suggestions for
action:

3. We can set goals -- and in doing this, we can make serious efforts
to engage the people who will be affected by the decisions.

4. We can examine all reasonable alternatives for achieving the
goal(s), again REALLY engaging the people who will be affected.

5. We can give the benefit of the doubt to nature and to public
health.

6. After the decision, we can monitor, to see what happens, and again
heed early warnings.

7. And we can favor decisions that are reversible, in case our
monitoring reveals that things are going badly.
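
To make this procedural skeleton concrete, here is a minimal sketch in
Python of how steps 3 through 7 might drive a single choice among
alternatives. It is an illustration of the list above, not a formal
statement of the principle, and the asbestos options and harm scores
are invented:

    from dataclasses import dataclass

    @dataclass
    class Option:
        name: str
        worst_case_harm: float   # rough score; invented for this example
        reversible: bool

    def choose(options, suspicion_of_harm=True, uncertainty=True):
        """One pass through steps 3-7 above (illustrative only)."""
        if not (suspicion_of_harm and uncertainty):
            return None          # the principle is not triggered; no duty to act
        # Step 5: benefit of the doubt -- rank by worst-case harm, not best guess.
        # Step 7: prefer reversible options whenever any exist.
        reversible = [o for o in options if o.reversible]
        pool = reversible if reversible else options
        return min(pool, key=lambda o: o.worst_case_harm)

    # Hypothetical alternatives for an asbestos-in-schools goal:
    options = [Option("remove all asbestos", worst_case_harm=8.0, reversible=False),
               Option("encapsulate in place", worst_case_harm=3.0, reversible=True),
               Option("do nothing", worst_case_harm=6.0, reversible=True)]
    print(choose(options).name)  # -> "encapsulate in place"
    # Steps 1, 2 and 6 (pay attention, heed early warnings, monitor) live
    # outside this one-shot choice: the decision is re-run as data arrive.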

Now, let's look at the examples you gave to see if they really
represent failures of the precautionary principle.

The asbestos removal craze (1985-2000) was essentially started by one
man, James J. Florio, former Congressman from New Jersey (and later
Governor of New Jersey).

As we learn from this 1985 article from the Philadelphia Inquirer,
gubernatorial candidate Florio was fond of bashing his opponent for
ignoring the "asbestos crisis" and the 'garbage crisis.' In Mr.
Florio's campaign for governor of New Jersey, the "asbestos crisis"
was a political ploy -- and it worked. He got elected by promising to
fix the asbestos crisis and the garbage crisis.

As governor, Mr. Florio's approach to the "garbage crisis" was to site
a new garbage incinerator in each of New Jersey's 21 counties. At $300
to $600 million per machine, this would have set loose an
unprecedented quantity of public monies sloshing around in the
political system. Later it turned out that Mr. Florio's chief of staff
had close ties to the incinerator industry.

I don't know whether Mr. Florio or his political cronies profited
directly from the asbestos removal industry that he created almost
single-handedly, but the decision-making for asbestos was similar to
his approach to the "garbage crisis." He did not engage the affected
public in setting goals and then examine all reasonable alternatives
for achieving those goals. He did not take a precautionary approach.

In the case of garbage, Mr. Florio and his cronies decided behind
closed doors that New Jersey needed to build 21 incinerators. He and
his cronies then justified those incinerators using quantitative risk
assessments.

Mr. Florio's approach to asbestos was the same: without asking the
affected public, he decided that removal of asbestos from 100,000 of
the nation's schools was the correct goal, and thus creating a new
"asbestos removal" industry was the only reasonable alternative. You
can read about Mr. Florio's "Asbestos Hazard Emergency Response Act of
1986" here.

So, the goal ("asbestos removal") was not set through consultation
with affected parties. Perhaps if the goal had been to "Eliminate
exposures of students and staff to asbestos in schools," a different
set of alternatives would have seemed reasonable. Asbestos removal
might not have been judged the least-harmful approach.

Even after the questionable decision was made to remove all asbestos
from 100,000 school buildings nationwide, systematic monitoring was
not done, or not done properly. Furthermore, the decision to remove
asbestos was not easily reversible once removal had been undertaken
and new hazards had been created.

So Mr. Florio's law created a new asbestos removal industry overnight:
companies without asbestos removal experience rushed to make a killing
on public contracts, worker training was in some cases poor, removals
were carried out sloppily, monitoring was casual or non-existent, and
so new hazards were created.

This was not an example of a precautionary approach. It was missing
almost all the elements of a precautionary approach -- from goal
setting to alternatives assessment, to monitoring, heeding early
warnings, and making decisions that are reversible.

Now let's examine President Bush's pre-emptive strike against Iraq.

True enough, Mr. Bush claimed that he had strong evidence of an
imminent attack on the U.S. -- perhaps even a nuclear attack. He
claimed to know that Saddam Hussein was within a year of having a
nuclear weapon. But we now know that this was all bogus. This was
not an example of heeding an early warning, because no threat existed
-- it was an example of manufacturing an excuse to carry out plans
that had been made even before 9/11.

Here is the lead paragraph from a front-page story in the Washington
Post April 28, 2007:

"White House and Pentagon officials, and particularly Vice President
Cheney, were determined to attack Iraq from the first days of the Bush
administration, long before the Sept. 11, 2001, attacks, and
repeatedly stretched available intelligence to build support for the
war, according to a new book by former CIA director George J. Tenet."

Time magazine reported on March 24, 2003, that fully a year before the
war began, President Bush stuck his head in the door of a White House
meeting between National Security Advisor Condoleezza Rice and three
U.S. Senators discussing how to deal with Saddam Hussein through the
United Nations or perhaps in a coalition with the U.S.'s Middle East
partners. "Bush wasn't interested," reported Time. "He waved his hand
dismissively, recalls a participant, and neatly summed up his Iraq
policy: 'F--- Saddam. We're taking him out.'" This President was not
weighing alternatives in search of the least harmful one.

The CIA, the State Department, members of Congress, and the National
Security staff wanted to examine alternatives to war, but Mr. Bush was
not interested. With Mr. Cheney, he had set the goal of war, without
wide consultation. He then manufactured evidence to support his
decision to "take out" Saddam without much thought for the
consequences. Later he revealed that God had told him to strike
Saddam. Mr. Bush believed he was a man on a mission from God.
Assessing alternatives was not part of God's plan, as Mr. Bush saw it.

This was not an example of the precautionary approach. Goals were not
set through a democratic process. Early warnings were not heeded --
instead, fraudulent scenarios were manufactured to justify a policy
previously set (if you can call "God told me to f--- Saddam" a
policy).
As George Tenet's book makes clear again and again, alternatives were
not thoroughly examined, with the aim of selecting the least harmful.
For a long time, no monitoring was done (for example, no one has been
systematically counting Iraqi civilian casualties), and the decision
was not reversible, as is now so painfully clear.

This was definitely not an example of the precautionary approach.

========================================================

So I believe your gratuitous attack on the precautionary principle is
misplaced because you bring forward examples that don't have anything
to do with precautionary decision-making.

Now I want to acknowledge that the precautionary principle can lead to
mistakes being made -- it is not a silver bullet that can minimize all
harms. However, if we take to heart its advice to monitor and heed
early warnings, combined with favoring decisions that are reversible,
it gains a self-correcting aspect that can definitely reduce harms as
time passes.

I am even more worried by the next paragraph of your review of
Sunstein's book. Here you seem to disparage the central goal of public
health, which is primary prevention. You write,

"More attention to benefits and costs might occasionally dampen the
consistent enthusiasm of the environmental movement for prevention,
and might even moderate the on-again, off-again role precaution plays
in current U.S. economic and foreign policies." I'm not sure what you
mean by the "on-again, off-again role precaution plays in current U.S.
economic and foreign policies." I presume you're referring here to
asbestos removal and pre-emptive war in Iraq, but I believe I've shown
that neither of these was an example of precautionary decision-making.

Surely you don't REALLY want environmentalists or public health
practitioners to "dampen their consistent enthusiasm for prevention."
Primary prevention should always be our goal, shouldn't it? But we
must ask: what are we trying to prevent? And how can we best achieve
the goal(s)? These are always the central questions, and here I would
agree with you: examining the pros and cons of every reasonable
approach is the best way to go. (I don't want to use your phrase
"costs and benefits" because in common parlance this phrase implies
quantitative assessment of costs and benefits, usually in dollar
terms. I am interested in a richer and fuller discussion of pros and
cons, which can include elements and considerations that are not
strictly quantifiable but are nevertheless important human
considerations, like local culture, history, fairness, justice,
community resilience and beauty.)

So this brings me to my fundamental criticisms of quantitative risk
assessment, which I prefer to call "numerical risk assessment."

1. We are all exposed to multiple stressors all of the time and
numerical risk assessment has no consistent way to evaluate this
complexity. In actual practice, risk assessors assign a value of zero
to most of the real-world stresses, and thus create a mathematical
model of an artificial world that does not exist. They then use that
make-believe model to drive decisions about the real world that 
does exist.

2. The timing of exposures can be critical. Indeed, a group of 200
physicians and scientists recently said they believe the main adage
of toxicology -- "the dose makes the poison" -- needs to be changed
to, "The timing makes the poison." Numerical risk assessment is,
today, poorly prepared to deal with the timing of exposures.

3. By definition, numerical risk assessment only takes into
consideration things that can be assigned a number. This means that
many perspectives that people care about must be omitted from
decisions based on numerical risk assessments -- things like
historical knowledge, local preferences, ethical perspectives of right
and wrong, and justice or injustice.

4. Numerical risk assessment is difficult for many (probably most)
people to understand. Such obscure decision-making techniques run
counter to the principles of an open society.

5. Politics can and do enter into numerical risk assessments. William
Ruckelshaus, the first administrator of the U.S. Environmental
Protection Agency, said in 1984, "We should remember that risk
assessment data
can be like the captured spy: If you torture it long enough, it will
tell you anything you want to know."

6. The results of numerical risk assessment are not reproducible from
laboratory to laboratory, so this decision-technique does not meet
the basic criterion for being considered "science" or "scientific."

As the National Academy of Sciences said in 1991, "Risk assessment
techniques are highly speculative, and almost all rely on multiple
assumptions of fact -- some of which are entirely untestable." (Quoted
in Anthony B. Miller and others, Environmental Epidemiology, Volume
1: Public Health and Hazardous Wastes [Washington, DC: National
Academy of Sciences, 1991], pg. 45.)

7. By focusing attention on the "most exposed individual," numerical
risk assessments have given a green light to hundreds of thousands or
millions of "safe" or "acceptable" or "insignificant" discharges that
have had the cumulative effect of contaminating the entire planet with
industrial poisons. See Travis and Hester, 1991 and Rachel's News
#831.
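
A back-of-the-envelope illustration of point 7 (with invented numbers)
shows how discharges that are each "insignificant" at a typical
regulatory de minimis level can still add up to a large population
burden:

    # Illustration of point 7, with invented numbers: each source is judged
    # "insignificant" in isolation, yet many such sources add up.
    acceptable_individual_risk = 1e-6   # a common "de minimis" lifetime risk level
    n_permitted_sources = 100_000       # hypothetical count of separate discharges
    exposed_per_source = 10_000         # hypothetical people exposed per source

    # Expected lifetime cases if every source sits right at the "safe" line
    # and risks combine roughly additively (a simplifying assumption):
    cases = acceptable_individual_risk * n_permitted_sources * exposed_per_source
    print(cases)                        # -> 1000.0 expected cases

    # And a person living near ten such "safe" sources carries ten times
    # the individually "acceptable" risk:
    print(10 * acceptable_individual_risk)   # -> 1e-05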

All this is not to say that numerical risk assessment has no place in
decision-making. Using a precautionary approach, as we set goals and,
later, as we examine the full range of alternatives, numerical risk
assessments might be one technique for generating information to be
used by interested parties in their deliberations. Other techniques
for generating useful information might be citizen juries, a Delphi
approach, or consensus conferences. (I've discussed these techniques
briefly elsewhere.)

It isn't so much numerical risk assessment itself that creates
problems -- it's reliance almost solely on numerical risk assessment
as the basis for decisions that has gotten us into the mess we're in.

Used within a precautionary framework for decision-making, numerical
risk assessment of the available alternatives in many cases may be
able to give us useful new information that can contribute to better
decisions. And that of course is the goal: better decisions producing
fewer harms.

========================================================


::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

From: Rachel's Precaution Reporter #103, Aug. 15, 2007

RISK ASSESSMENT AND PRECAUTION: COMMON STRENGTHS AND FLAWS

Adam Finkel Responds to Peter Montague

By Adam Finkel

Dear Peter,

Whether we agree more than we disagree comes down to whether means or
ends are more important. To the extent you share my view (and given
your influence on me early in my career, I should probably say, "I
share your view...") that we have a long way to go to provide a safe,
healthy, and sustainable environment for the general and (especially)
the occupational populations, our remaining differences are only those
of strategy. Outcomes "producing fewer harms" for nature and public
health are, I agree, the goal, and I assume you agree that we could
also use fewer decisions for which more harm is the likely -- perhaps
even the desired -- outcome of those in power.

But being on the same side with respect to our goals makes our
differences about methods all the more important, because knowing
where to go but not how to get there may ultimately be little better
than consciously choosing to go in the wrong direction.

Your long-standing concern about quantitative risk assessment haunts
me, if there's such a thing as being "haunted in a constructive way."
I tell my students at the University of Medicine and Dentistry of New
Jersey and at Princeton during the first class of every semester that
I literally haven't gone a week in the past 10 years without
wondering, thanks to you, if I am in fact "helping the state answer
immoral questions" about acceptable risk and in so doing, am
"essentially keeping the death camp trains running on time" (quote
from Rachel's #519, November 7, 1996). I don't consider this analogy
to be name-calling, because I have such respect for its source, so I
hope you won't take offense if I point out that everyone who
professes to care about maximizing life expectancy, human health, and
the natural functioning of the planet's ecosystems ought to ask the
same question of themselves. I do worry about quantitative risk
assessment and its mediocre practitioners, as I will try to explain
below, but I also wish that advocates of the precautionary
principle would occasionally ask themselves whether more or fewer
people will climb unwittingly aboard those death-camp trains if they
run on a schedule dictated by "precaution."

And if the outcomes we value flourish more after an action based on
quantitative risk assessment than they do after an action motivated by
precaution, then a preference for the latter implies that noble means
matter more than tangible ends -- which I appreciate in theory, but
wonder what would be so noble about a strategy that does less good or
causes more harm.

I distrust some versions of the precautionary principle for one basic
reason.[1] If I re-express your first three-part definition in one
sentence (on the grounds that in my experience, the fact of scientific
uncertainty goes without saying), I get "if you reasonably suspect
harm, you have a duty to act to avert harm, though the kind of action
is up to you." Because I believe that either inaction or action can be
unacceptably harmful, depending on circumstances, I worry that a
principle that says "act upon suspicion of harm" can be used to
justify anything. This was my point about the Iraq war, which I
agree is despicable, but not only because the suspicion of harm was
concocted (at least, inflated) but because the consequences of the
remedy were so obviously glossed over.

Whatever principle guides decision makers, we need to ask how harmful
the threat really is, and also what will or may happen if we act
against it in a particular way. Otherwise, the principle degenerates
into "eliminate what we oppose, and damn the consequences." I'm not
suggesting that in practice, the precautionary principle does no
better than this, just as I trust you wouldn't suggest that
quantitative risk assessment is doomed to be no better than "human
sacrifice, Version 2.0." Because I agree strongly with you (your
"verbose version, point 5") that when health and dollars clash, we
should err on the side of protecting the former rather than the
latter, I reject some risk-versus-risk arguments, especially the ones
from OMB [Office of Management and Budget] and elsewhere that
regulation can kill more people than it saves by impoverishing them
(see, for example, my 1995 article "A Second Opinion on an
Environmental Misdiagnosis: The Risky Prescriptions of Breaking the
Vicious Circle [by Judge Stephen Breyer]", NYU Environmental Law
Journal, vol. 3, pp. 295-381, especially pp. 322-327). But the
asbestos and Iraq examples show the direct trade-offs that can ruin
outcomes made in a rush to prevent. Newton's laws don't quite apply to
social decision-making: for every action, there may be an unequal and
not-quite-opposite reaction. "Benign" options along one dimension may
not be so benign when viewed holistically. When I was in charge of
health regulation at OSHA, I tried to regulate perchloroethylene (the
most common solvent used in dry-cleaning laundry). I had to be
concerned about driving dry-cleaners into more toxic substitutes (as
almost happened in another setting when we regulated methylene
chloride, only to learn of an attempt -- which we ultimately helped
avert -- by some chemical manufacturers to encourage customers to
switch to an untested, but probably much more dangerous, brominated
solvent). But encouraging or mandating "good old-fashioned wet
cleaning" was not the answer either (even if it turns out to be as
efficient as dry cleaning), once you consider that wet clothes are
non-toxic but quite heavy -- and the ergonomic hazards of thousands of
workers moving industrial-size loads from washers to dryers are the
kind of "risk of action" that only very sophisticated analyses of
precaution would even identify.

This is why I advocated "dampening the enthusiasm for prevention" --
meaning prevention of exposures, not prevention of disease,
which I agree is the central goal of public health. That was a poor
choice of words on my part, as I agree that when the link between
disease and exposure is clear, preventing exposure is far preferable
to treating the disease; the problem comes when exposures are
eliminated but their causal connection to disease is unfounded.

To the extent that the precautionary principle -- or quantitative risk
assessment, for that matter -- goes after threats that are not in fact
as dire as worst-case fears suggest, or does so in a way that
increases other risks disproportionately, or is blind to larger
threats that can and should be addressed first, it is disappointing
and dangerous. You can say that asbestos removal was not "good
precaution" because private interests profited from it, and because
the remediation was often done poorly, not because it was a bad idea
in the first place. Similarly, you can say that ousting Saddam Hussein
was not "good precaution" because the threat was overblown and it (he)
could have been "reduced" (by the military equivalent of a
pump-and-treat system?) rather than "banned" (with extreme prejudice).
Despite the fact that in this case the invasion was justified by an
explicit reference to the precautionary principle ("we have every
reason to assume the worst and we have an urgent duty to prevent the
worst from occurring"), I suppose you can argue further that not all
actions that invoke the precautionary principle are in fact
precautionary -- just as not all actions that claim to be risk-based
are in fact so. But who can say whether President Bush believed,
however misguidedly, that there were some signals of early warning
emerging from Iraq? Your version of the precautionary principle
doesn't say that "reasonable suspicion" goes away if you also happen
to have a grudge against the source of the harm.

Again, in both asbestos removal and Iraq I agree that thoughtful
advocates of precaution could have done much better. But how are these
examples really any different from the reasonable suspicion that
electromagnetic fields or irradiated food can cause cancer? Those
hazards, as well as the ones Hussein may have posed, are/were largely
gauged by anecdotal rather than empirical information, and as such
are/were all subject to false positive bias. We could, as you suggest,
seek controls that contain the hazard (reversibly) rather than
eliminating it (irrevocably), while monitoring and re-evaluating, but
that sounds like "minimizing" harm rather than "averting" it, and
isn't that exactly the impulse you scorn as on the slippery slope to
genocide when it comes from a risk assessor? And how, by the way, are
we supposed to fine-tune a decision by figuring out whether our
actions are making "things go badly," other than by asking "immoral
questions" about whose exposures have decreased or increased, and by
how much?

We could also, as you suggest, "really engage the people who will be
affected," and reject alternatives that the democratic process ranks
low. I agree that more participation is desirable as an end in itself,
but believe we shouldn't be too sanguine about the results. I've been
told, for example, that there exist people in the U.S. -- perhaps a
majority, perhaps a vocal affected minority -- who believe that giving
homosexual couples the civil rights conferred by marriage poses an
"unacceptable risk" to the fabric of society. They apparently believe
we should "avert" that harm. If I disagree, and seek to stymie their
agenda, does that make me "anti-precautionary" (or immoral, if I use
risk assessment to try to convince them that they have mis-estimated
the likelihood or amount of harm)?

So I'm not sure that asbestos and Iraq are atypical examples of what
happens when you follow the precautionary impulse to a logical degree,
and I wonder if those debacles might even have been worse had
those responsible followed your procedural advice for making them more
true to the principle. But let's agree that they are examples of "bad
precaution." The biggest challenge I have for you is a simple one:
explain to me why "bad precaution" doesn't invalidate the
precautionary principle, but why for 25 years you've been trashing
risk assessment based on bad risk assessments! Of course
there is a crevasse separating what either quantitative risk
assessment or precaution could be from what they are, and it's unfair
to reject either one based on their respective poor track records.
You've sketched out a very attractive vision of what the precautionary
principle could be; now let me answer some of your seven concerns
about what quantitative risk assessment is.

(1) (quantitative risk assessment doesn't work for unproven hazards) I
hate to be cryptic, but "please stay tuned." A group of risk assessors
is about to make detailed recommendations to address the problem of
treating incomplete data on risk as tantamount to zero risk. In the
meantime, any "precautionary" action that exacerbates any of these
"real-world stresses" will also be presumed incorrectly to do no
harm...

(2) (quantitative risk assessment is ill-equipped to deal with
vulnerable periods in the human life cycle) It's clearly the dose, the
timing, and the susceptibility of the individual that act and
interact to create risk. Quantitative risk assessment depends on
simplifying assumptions that overestimate risk when the timing and
susceptibility are favorable, and underestimate it in the converse
circumstances. The track record of risk assessment has been one of
slow but consistent improvement toward acknowledging the particularly
vulnerable life stages and individuals (of whatever age) who are most
susceptible, so that to the extent the new assumptions are wrong, they
tend to over-predict. This is exactly what a system that
interprets the science in a precautionary way ought to do -- and the
alternative would be to say "we don't know enough about the timing of
exposures, so all exposures we suspect could be a problem ought to be
eliminated." This ends up either being feel-good rhetoric or leading
to sweeping actions that may, by chance, do more good than
harm.

(3) (quantitative risk assessment leaves out hard-to-quantify
benefits) Here, as in the earlier paragraph about "pros and cons," you
have confused what must be omitted with "what we let them omit
sometimes." I acknowledge that most practitioners of cost-benefit
analysis choose not to quantify cultural values, or to aggregate
individual costs and benefits in ways that give special weight to the
equitable distribution of either. But when some of us risk assessors
say "the benefits outweigh the costs" we consciously and prominently
include justice, individual preferences, and "non-use values" such as
the existence of natural systems on the benefits side of the ledger,
and we consider salutary economic effects of controls as offsetting
their net costs. Again, "good precaution" may beat "bad cost-benefit
analysis" every time, but we'd see a lot more "good cost-benefit
analysis" if its opponents would help it along rather than pretending
it can't incorporate things that matter.

(4) (quantitative risk assessment is hard to understand) The same
could be said about almost any important societal activity where the
precise facts matter. I don't fully understand how the Fed sets
interest rates, but I expect them to do so based on quantitative
evaluation of their effect on consumption and savings, and to be able
to answer intelligent questions about uncertainties in their analyses.
"Examining the pros and cons of every reasonable approach," which we
both endorse, also requires complicated interpretation of data on
exposures, health effects, control efficiencies, costs, etc., even if
the ruling principle is to "avert harm." So if precaution beats
quantitative risk assessment along this dimension, I worry that it
does so by replacing unambiguous descriptions ("100 deaths are fewer
than 1000") with subjective ones ("Option A is 'softer' than Option
B").

(5) (Decision-makers can orchestrate answers they most want to hear)
"Politics" also enters into defining "early warnings," setting goals,
and evaluating alternatives -- this is otherwise known as the
democratic process. Removing the numbers from an analysis of a problem
or of alternative solutions simply shifts the "torturing" of the
number into a place where it can't be recognized as such.

(6) (quantitative risk assessment is based on unscientific
assumptions) It sounds here as if you're channeling the knee-jerk
deregulators at the American Council for Science and Health, who
regularly bash risk assessment to try to exonerate threats they deem
"unproven." quantitative risk assessment does rely on assumptions,
most of which are grounded in substantial theory and evidence; the
alternative would be to contradict your point #1 and wait for proof
which will never come. The 1991 European Commission study you
reference involved estimating the probability of an industrial
accident, which is indeed a relatively uncertain area within risk
assessment, but one that precautionary decision-making has to confront
as well. The 1991 NAS study was a research agenda for environmental
epidemiology, and as such favored analyses based on human data, which
suffer from a different set of precarious assumptions and are
notoriously prone to not finding effects that are in fact present.

(7) (quantitative risk assessment over-emphasizes those most exposed
to each source of pollution) This is a fascinating indictment of
quantitative risk assessment that I think is based on a non
sequitur. Yes, multiple and overlapping sources of pollution can
lead to unacceptably high risks (and to global burdens of
contamination), which is precisely why EPA has begun to adopt
recommendations from academia to conduct "cumulative risk analyses"
rather than regulating source by source. The impulse to protect the
"maximally exposed individual" (MEI) is not to blame for this problem,
however; if anything, the more stringently we protect the MEI, the
less likely it is that anyone's cumulative risk will be unacceptably
high, and the more equitable the distribution of risk will
be. Once more, this is a problem that precautionary risk
assessment can allow us to recognize and solve; precaution alone
can at its most ambitious illuminate one hazard at a time, but it has
no special talent for making risks (as opposed to particular
exposures) go away.

I note that in many ways, your list may actually be too kind given
what mainstream risk assessment has achieved to date. These seven
possible deficiencies pale by comparison to the systemic problems with
many quantitative risk assessments, which I have written about at
length (see, for example, the 1997 article "Disconnect Brain and
Repeat After Me: 'Risk Assessment is Too Conservative.'" In Preventive
Strategies for Living in a Chemical World, E. Bingham and D. P. Rall,
eds., Annals of the New York Academy of Sciences, 837, 397-417). Risk
assessment has brought us fallacious comparisons, meaningless "best
estimates" that average real risks away, and arrogant pronouncements
about what "rational" people should and should not fear. But these
abuses indict the practitioners -- a suspicious proportion of whom
profess to be trained in risk assessment but never were -- not the
method itself, just as the half-baked actions taken in precaution's
name should not be generalized to indict that method.

So in the end, you seem to make room for a version of the
precautionary principle in which risk assessment provides crucial raw
material for quantifying the pros and cons of different alternative
actions. Meanwhile, I have always advocated for a version of
quantitative risk assessment that emphasizes precautionary responses
to uncertainty (and to human interindividual variability), so that we
can take actions even where the health and environmental benefits may
not exceed the expected costs of control (in other words, "give the
benefit of the doubt to nature and to public health"). The reason the
precautionary principle and quantitative risk assessment seem to be at
odds is that despite the death-camp remark, you are more tolerant of
risk assessment than the center of gravity of precaution, while I am
more tolerant of precaution than the center of gravity of my field. If
the majorities don't move any closer together than they are now, and
continue to be hijacked by the bad apples in each camp, I guess you
and I will have to agree to disagree about whether mediocre precaution
or mediocre quantitative risk assessment is preferable. But I'll
continue to try to convince my colleagues that since risk assessment
under uncertainty must either choose to be precautionary with health,
or else choose to pretend that errors that waste lives and errors that
waste dollars are morally equivalent, we should embrace the first bias
rather than the second. I hope you will try to convince your
colleagues (and readers) that precaution without analysis is like the
"revelation" to invade Iraq -- it offers no justification but sorely
needs one.
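
One concrete way to read Finkel's closing point: when the risk
estimate itself is uncertain, an assessment that is "precautionary
with health" acts on an upper confidence bound rather than on a
central estimate. The sketch below is an illustration of that choice,
not Finkel's own method; the lognormal uncertainty distribution and
its parameters are invented:

    import math

    # Invented example: an uncertain lifetime-risk estimate described by a
    # lognormal distribution with the median and spread below.
    median_risk = 1e-6   # central estimate of lifetime risk
    gsd = 5.0            # geometric standard deviation expressing uncertainty

    # For a lognormal, the 95th percentile is median * gsd**1.645, and the
    # mean is median * exp(0.5 * ln(gsd)**2).
    upper_95 = median_risk * gsd ** 1.645
    mean_risk = median_risk * math.exp(0.5 * math.log(gsd) ** 2)

    print(f"median: {median_risk:.1e}")   # -> 1.0e-06
    print(f"mean:   {mean_risk:.1e}")     # -> ~3.7e-06
    print(f"95th:   {upper_95:.1e}")      # -> ~1.4e-05

    # Regulating to the 95th percentile rather than the mean treats errors
    # that waste lives as worse than errors that waste dollars -- the bias
    # Finkel says he would embrace.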

[1] As you admit, there are countless variations on the basic theme of
precaution. I was careful to say in my review of Sunstein's book that
I prefer quantitative risk assessment to "a precautionary
principle that eschews analysis," and did not mean to suggest that most
or all current versions of it fit the description.

Return to Table of Contents

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

From: Rachel's Democracy & Health News #919, Aug. 9, 2007
[Printer-friendly version]

TWO RULES FOR DECISIONS: TRUST IN ECONOMIC GROWTH VS. PRECAUTION

By Joseph H. Guth

[Joseph H. Guth, JD, PhD, is Legal Director of the Science and
Environmental Health Network. He holds a PhD in biochemistry from
the University of Wisconsin (Madison) and a law degree from New York
University. His work has appeared previously in Rachel's #846,
#861, #892, #901 and #905.]

Everyone knows the role of law is to control and guide the economy.
From law, not economics, springs freedom from slavery, child labor and
unreasonable working conditions. Law, reflecting the values we hold
dear, governs our economy's infliction of damage to the environment.

Our law contains what might be called an overarching environmental
decision rule that implements our social choices. The structure of
this decision rule is an intensely political issue, for the people of
our democracy must support its far-reaching consequences. Today we
(all of us) are rethinking our current environmental decision rule,
which our society adopted in the course of the Industrial
Revolution.

The "trust in economic growth" environmental decision rule

Our overarching environmental decision rule (which is also prevalent
in much of the rest of the world) constitutes a rarely-stated balance
of social values that is hard to discern even though it pervades every
aspect of our society.

This decision rule relies extensively on cost-benefit analysis and
risk assessment, but the decision rule itself is even broader in
scope. The foundation of the rule is the assumption that economic
activity usually provides a net benefit to society, even when it
causes some damage to human health and the environment. (This
conception of "net benefit" refers to the effect on society as a
whole, and does not trouble itself too much with the unequal
distribution of costs and benefits.)

From this assumption, it follows that we should allow all economic
activities, except those for which someone can prove the costs
outweigh the benefits. This, then, is the prevailing environmental
decision rule of our entire legal system: economic activity is
presumed to provide a net social benefit even if it causes some
environmental damage, and government may regulate (or a plaintiff may
sue) only if it can carry its burden of proof to demonstrate that the
costs outweigh the benefits in a particular case. Let's call this the
"trust in economic growth" decision rule.

The "precautionary principle" environmental decision rule

The "precautionary principle" is equal in scope to the "trust in
economic growth" decision rule, but incorporates a profoundly
different judgment about how to balance environment and economic
activity when they come into conflict. Under this principle, damage
to the environment should be avoided, even if scientific uncertainties
remain. This rule implements a judgment that we should presumptively
avoid environmental damage, rather than presumptively accept it as we
do under the "trust in economic growth" rule.
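
One way to see the structural difference is to render the two rules as
defaults plus a burden of proof. The following Python sketch is purely
illustrative -- the function names and the stark either/or outcomes are
simplifications invented here, not anything found in the law:

  # Two decision rules as defaults plus a burden of proof (a toy model).

  def trust_in_economic_growth(proof_costs_exceed_benefits):
      # Presume the activity yields a net benefit; regulate only if
      # the challenger proves its costs outweigh its benefits.
      return "regulate" if proof_costs_exceed_benefits else "allow"

  def precautionary_principle(proof_of_safety):
      # Presume environmental damage should be avoided; allow only if
      # the activity is shown to be safe (or the least harmful option).
      return "allow" if proof_of_safety else "avoid"

  # With the science still uncertain, neither side can carry its
  # burden, and the two defaults diverge:
  print(trust_in_economic_growth(False))   # allow
  print(precautionary_principle(False))    # avoid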

The role of "cost-benefit" analysis

"Cost-benefit" analysis is a subsidiary tool of both decision rules.
However, it is used in very different contexts under the two rules. It
can sometimes be employed under the "precautionary" decision rule as a
way to compare and decide among alternatives. But under the "trust in
economic growth" decision rule, cost-benefit analysis appears as the
only major issue and often masquerades as the decision rule itself.
This is because most of our laws implicitly incorporate the unstated
presumption that economic activities should be allowed even if they
cause environmental damage. By and large, our laws take that
presumption as given and start out at the point of instructing
government to develop only regulations that pass a cost-benefit test.

Thus, the foundational presumption of the "trust in economic growth"
decision rule is simply accepted as a received truth and is rarely
examined or even identified as supplying our law's overarching context
for cost-benefit calculations. Almost all economists probably agree
with it (except those few who are concerned with the global human
footprint and are trying to do full cost accounting for the economy as
a whole).

The role of "sound science"

How does science, particularly "sound science," relate to all this?
Science supplies a critical factual input used by governments and
courts in implementing environmental decision rules. Science is
employed differently by the two decision rules, but science does not
constitute or supply a decision rule itself. Like cost-benefit
analysis, science is a subsidiary tool of the decision rules and so
cannot properly be placed in "opposition" to either decision rule. To
claim that the precautionary principle, as an overarching
environmental decision rule implementing a complex balancing of social
values, stands in "opposition" to science is therefore senseless.

The phrase "sound science" represents the proposition that a
scientific fact should not be accepted by the legal system unless
there is a high degree of scientific certainty about it. It is a term
used by industry in regulatory and legal contexts and is not commonly
used by scientists while doing scientific research. However, it
resonates within much of the scientific community because it is a call
to be careful and rigorous.

"Sound science" also represents a brake on the legal system's
acceptance of emerging science, of science that cuts across
disciplines, and of science that diverges from the established
traditions and methodologies that characterize many specific
disciplines of science. "Sound science" encourages scientists who are
concerned about the world to remain in their silos, to avoid looking
at the world from a holistic viewpoint, and to avoid professional
risks.

But why does it work for industry? The call for government and law
to rely only on "sound science" when they regulate is a call for them
to narrow the universe of scientific findings that they will consider
to those that have a high degree of certainty.

This serves industrial interests under our prevailing "trust in
economic growth" decision rule because it restricts the harms to human
health and the environment that can be considered by government and
law to those that are sufficiently well established to constitute
"sound science."

Because the burden of proof is on government, requiring government to
rely only on facts established by "sound science" reduces the scope of
possible regulatory activity by making it harder for government to
carry its burden to show that the benefits of regulation (avoidance of
damage to health and environment) outweigh the costs to industry.
Exactly the same dynamic is at play when plaintiffs try to carry their
burden of proof to establish legal liability for environmental damage.

Shifting the burden of proof would shift the effect of "sound
science"

"Sound science" can help industrial interests under a precautionary
decision rule, but it also contains a seed of disaster for them.

Precaution is triggered when a threat to the environment is
identified, so that the more evidence required to identify a threat as
such, the fewer triggers will be pulled. While the precautionary
principle is designed to encourage environmental protection even in
the face of uncertainty, those opposed to environmental protection
urge that the threshold for identification of threats should require
as much certainty as possible, and preferably be based only on "sound
science."

The seed of disaster for industrial interests is this: the burden of
proof can be switched under the precautionary principle (so that when
a threat to the environment is identified the proponent of an activity
must prove it is safe -- just as a pharmaceutical company must prove
that a drug is safe and effective before it can be marketed). When
that happens, a call for "sound science" would actually cut against
such proponents rather than for them. This is because proponents of an
activity would have to provide proof of safety under a "sound science"
standard. In other words, the call for "sound science" creates higher
burdens on those bearing the burden of proof. In fact, while debates
about "sound science" masquerade as debates about the quality of
science, the positions that different actors take are actually driven
entirely by the underlying legal assignment of the burden of proof.
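
A small sketch can make this dynamic explicit. In the toy model below,
the "evidence strength" values and the "sound science" threshold are
invented stand-in numbers; the point is only that raising the
threshold always favors whichever party does not bear the burden:

  # Toy model: the burdened party loses unless its evidence clears
  # the "sound science" threshold; the other side wins by default.

  def decision(evidence_strength, threshold, burden_on):
      met = evidence_strength >= threshold
      if burden_on == "government":          # "trust in growth" rule
          return "regulate" if met else "allow"
      else:                                  # burden on the proponent
          return "allow" if met else "avoid"

  for threshold in (0.5, 0.95):
      print(threshold,
            decision(0.7, threshold, "government"),  # evidence of harm
            decision(0.7, threshold, "proponent"))   # evidence of safety
  # 0.5 -> regulate / allow; 0.95 -> allow / avoid. Raising the
  # threshold flips each outcome toward the unburdened party's default.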

Why precaution? Because of cumulative impacts.

One of the reasons for adopting the precautionary principle, rather
than the "trust in economic growth" decision rule, is "cumulative
impacts."

The foundational assumption of the "trust in economic growth" rule
(that economic activity generally provides a net benefit to society,
even if it causes environmental damage) is taken to hold
no matter how large our economy becomes. To implement the "trust in
economic growth" rule, all we do is eliminate any activity without a
net benefit, and in doing this we examine each activity independently.
The surviving economic activities, and the accompanying cost-benefit-
justified damage to the environment, are both thought to be able to
grow forever.

Not only is there no limit to how large our economy can become, there
is no limit to how large justified environmental damage can become
either. The "trust in economic growth" decision rule contains no
independent constraint on the total damage we do to Earth -- indeed
the core structure of this decision rule assumes that we do not need
any such constraint. People who think this way see no need for the
precautionary principle precisely because they see no need for the
preferential avoidance of damage to the environment that it embodies.

But, as we now know, there is in fact a need to limit the damage
we do to the Earth. Unfortunately, the human enterprise has now grown so
large that we are running up against the limits of the Earth -- if we
are not careful, we can destroy our only home. (Examples abound:
global warming, thinning of Earth's ozone shield, depletion of ocean
fisheries, shortages of fresh water, accelerated loss of species, and
so on.)

And it is the cumulative impact of all we are doing that creates this
problem. One can liken it to the famous "straw that broke the camel's
back." At some point "the last straw" is added to the camel's load,
its carrying capacity exceeded. Just as it would miss the larger
picture to assume that since one or a few straws do not hurt the
camel, straw after straw can be piled on without concern, so the
"trust in economic growth" decision rule misses the larger picture by
assuming that cost-benefit-justified environmental damage can grow
forever.
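
The arithmetic of the camel's back is simple enough to sketch; the
numbers in the Python fragment below are invented solely to show the
pattern, not to model any real economy:

  # Each activity passes its own cost-benefit test, yet total damage
  # eventually exceeds a fixed ecological limit. All numbers invented.
  CARRYING_CAPACITY = 100.0               # assumed total damage limit

  activities = [{"benefit": 10.0, "damage": 4.0}] * 30  # 30 like projects

  total_damage = 0.0
  for i, a in enumerate(activities, start=1):
      assert a["benefit"] > a["damage"]   # passes cost-benefit alone
      total_damage += a["damage"]
      if total_damage > CARRYING_CAPACITY:
          print("limit exceeded at activity", i)   # prints 26
          break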

Thus, it is the total size of our cumulative impacts that is prompting
us to revisit our prevailing decision rule. This is why we now need a
decision rule that leads us to contain the damage we do. It is why we
now must work preferentially to avoid damage to the Earth, even if we
forego some activities that would provide a net benefit if we lived in
an "open" or "empty" world whose limits were not being exceeded. We
can still develop economically, but we must live within the
constraints imposed by Earth itself.

Ultimately, the conclusion that we must learn to live within the
capacity of a fragile Earth to provide for us, painful as it is, is
thrust upon us by the best science that we have -- the science that
looks at the whole biosphere, senses the deep interconnections between
all its parts, places us as an element of its ecology, recognizes the
time scale involved in its creation and our own evolution within it,
and reveals, forever incompletely, the manifold and mounting impacts
that we are having upon it and ourselves.

Return to Table of Contents

:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

  Rachel's Precaution Reporter offers news, views and practical
  examples of the Precautionary Principle, or Foresight Principle, in
  action. The Precautionary Principle is a modern way of making
  decisions, to minimize harm. Rachel's Precaution Reporter tries to
  answer such questions as, Why do we need the precautionary
  principle? Who is using precaution? Who is opposing precaution?

  We often include attacks on the precautionary principle because we  
  believe it is essential for advocates of precaution to know what
  their adversaries are saying, just as abolitionists in 1830 needed
  to know the arguments used by slaveholders.

  Rachel's Precaution Reporter is published as often as necessary to
  provide readers with up-to-date coverage of the subject.

  As you come across stories that illustrate the precautionary 
  principle -- or the need for the precautionary principle -- 
  please Email them to us at rpr@rachel.org.

  Editors:
  Peter Montague - peter@rachel.org
  Tim Montague   -   tim@rachel.org
  
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

  To start your own free Email subscription to Rachel's Precaution
  Reporter send any Email to one of these addresses:

  Full HTML edition: rpr-subscribe@pplist.net
  Table of Contents (TOC) edition: rpr-toc-subscribe@pplist.net

  In response, you will receive an Email asking you to confirm that
  you want to subscribe.

  To unsubscribe, send any email to rpr-unsubscribe@pplist.net
  or to rpr-toc-unsubscribe@pplist.net, as appropriate.

:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::
Environmental Research Foundation
P.O. Box 160, New Brunswick, N.J. 08903
rpr@rachel.org
:::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::