Institute of Science in Society (I-SIS)
July 13, 2000

USE AND ABUSE OF THE PRECAUTIONARY PRINCIPLE

I-SIS submission to US Advisory Committee on International Economic
Policy (ACIEP) Biotech Working Group, July 13, 2000

[Rachel's introduction: "The precautionary principle is actually part
and parcel of sound science.... Scientific evidence is always
incomplete and uncertain. The responsible use of scientific evidence,
therefore, is to err on the side of precaution. This is all the more
important for technologies, such as genetic engineering, which can
neither be controlled nor recalled."]

By Peter T. Saunders

Introduction

The precautionary principle is accepted as the basis of the Cartagena
Biosafety Protocol, agreed in Montreal in January 2000 and already
signed by 68 nations that attended the Convention on Biological
Diversity Conference in Nairobi in May 2000. The principle is to be
applied to all GMOs, whether used as food or as seeds for
environmental release.

The precautionary principle states that when there is reasonable
suspicion of harm, lack of scientific certainty or consensus must not
be used to postpone preventative action. There is indeed sufficient
direct and indirect scientific evidence to suggest that GMOs are
unsafe for use as food or for release into the environment. And that
is why more than 300 scientists from 38 countries are demanding a
moratorium on all releases of GMOs (World Scientists' Statement and
Open Letter to All Governments).

The precautionary principle is actually part and parcel of sound
science. Science is an active knowledge system in which new
discoveries are made almost every day. Scientific evidence is always
incomplete and uncertain. The responsible use of scientific evidence,
therefore, is to err on the side of precaution. This is all the more
important for technologies, such as genetic engineering, which can
neither be controlled nor recalled.

Use and Abuse of the Precautionary Principle

There has been a lot written and said about the precautionary
principle recently, much of it misleading. Some have stated that if
the principle were applied it would put an end to technological
advance. Others claim to be applying the principle when they are not.
From all the confusion, it is easy to mistake it for some deep
philosophical idea that is inordinately difficult to grasp (1).

In fact, the precautionary principle is very simple. All it actually
amounts to is this: if one is embarking on something new, one should
think very carefully about whether it is safe or not, and should not
go ahead until reasonably convinced it is. It is just common sense.

Too many of those who fail to understand or to accept the
precautionary principle are pushing forward with untested,
inadequately researched technologies, and insisting that it is up to
the rest of us to prove them dangerous before they can be stopped. The
perpetrators also refuse to accept liability; so if the technologies
turn out to be hazardous, as in many cases they have, someone else
will have to pay the penalty.

The precautionary principle hinges on the concept of the burden of proof,
which ordinary people have been expected to understand and accept in
the law for many years. It is also the same reasoning that is used in
most statistical testing. Indeed, as a lot of work in biology depends
on statistics, misuse of the precautionary principle often rests on
misunderstanding and abuse of statistics. Both the accepted practice
in law and the proper use of statistics are in accord with the
common-sense idea that it is incumbent on those introducing a new
technology to prove it safe, and not for the rest of us to prove it
harmful.

The Burden of Proof

The precautionary principle states that if there are reasonable
scientific grounds for believing that a new process or product may not
be safe, it should not be introduced until we have convincing evidence
that the risks are small and are outweighed by the benefits.

It can also be applied to existing technologies when new evidence
appears suggesting that they are more dangerous than we had thought
(as in the case of cigarettes, CFCs, greenhouse gases and now GMOs).
Then, it requires that we undertake research to better assess the risk
and that in the meantime, we should not expand our use of the
technology and should put in train measures to reduce our dependence
on it. If the dangers are considered serious enough, then the
principle may require us to withdraw the products or impose a ban or a
moratorium on further use.

The principle does not, as some critics claim, require industry to
provide absolute proof that something new is safe. That would be an
impossible demand and would indeed stop technology dead in its tracks,
but I do not know of anyone who is actually demanding it. The
precautionary principle does not deal with absolute certainty. On the
contrary, it is specifically intended for circumstances where there is
no absolute certainty.

What the precautionary principle does is to put the burden of proof
onto the innovator or perpetrator, but not in an unreasonable or
impossible way. It is up to the perpetrator to demonstrate beyond
reasonable doubt that the innovation is safe, and not for the rest of
society to prove that it is not.

No one should have any difficulty understanding this, because
precisely the same sort of argument is used in the criminal law. The
prosecution
and the defence are not equal in the courtroom. The members of the
jury are not asked to decide whether they think it is more or less
likely that the defendant has committed the crime he or she is charged
with. Instead, the prosecution is supposed to prove beyond reasonable
doubt that the defendant is guilty. Members of the jury do not have to
be absolutely certain that the defendant is guilty before they
convict, but they do have to be confident they are right.

There is a good reason for adopting a burden of proof that assumes
innocence until proven guilty. The defendant may be guilty or not, and
may be found guilty or not. If the defendant is guilty and convicted,
justice has been done, as is the case if innocent and found not
guilty. But suppose the jury reaches the wrong verdict, what then?

That depends on which of the two possible errors was made. If the
defendant actually committed the crime but is found not guilty, then a
crime goes unpunished. The other possibility is that the defendant is
wrongly convicted of a crime, in which case an innocent life is
ruined. Neither of these outcomes is satisfactory, but society has
decided that the second is so much worse than the first that we should
do as much as we reasonably can to avoid it. It is better, so the
saying goes, that "a hundred guilty men should go free than that one
innocent man be convicted". In any situation in which there is
uncertainty, mistakes will be made. Our aim is to minimise the damage
that results when mistakes are made.

Just as society does not require the defendant to prove innocence, so
it should not require objectors to prove that a technology is harmful.
It is for those who want to introduce something new to prove, not with
certainty, but beyond reasonable doubt, that it is safe. Society
balances the trial in favour of the defendant because we believe that
convicting an innocent person is far worse than failing to convict
someone who is guilty. In the same way, we should balance the decision
on hazards and risks in favour of safety, especially in those cases
where the damage, should it occur, is serious and irredeemable.

The objectors must bring forward evidence that stands up to scrutiny,
but they do not have to prove that there are serious dangers. It is
for the innovators to establish beyond reasonable doubt that what they
are proposing is safe. The burden of proof is on them.

The Misuse of Statistics

You have an antique coin that you want to use for deciding who will go
first at a game, but you are worried it might be biased in favour of
heads. You toss it three times, and it comes down heads all three
times. Naturally, that does not do anything to reassure you, until
someone who claims to know something about statistics comes along, and
informs you that as the "p-value" is 0.125, you have nothing to worry
about. The coin is not biased.
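
For readers who want to check the arithmetic behind that claim, here
is a minimal sketch in Python. The only inputs are the numbers in the
story (a fair-coin null hypothesis and three tosses); everything else
is purely illustrative.

    # Minimal sketch of the 'expert's' calculation: the p-value for three
    # heads in three tosses, assuming the null hypothesis that the coin is
    # fair.
    p_heads_if_fair = 0.5
    n_tosses = 3

    # Probability of getting heads on every toss purely by chance under
    # the null hypothesis
    p_value = p_heads_if_fair ** n_tosses
    print(f"p-value = {p_value}")  # 0.125

    # 0.125 is greater than the conventional 0.05 cut-off, so the test
    # cannot reject the null hypothesis -- a failure to demonstrate bias,
    # not a demonstration that the coin is fair.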

Does that not sound like arrant nonsense? Surely if a coin comes down
heads three times in a row, that cannot prove it is unbiased. No, of
course it cannot. But this sort of reasoning is all too often used
to prove that GM technology is safe.

The fallacy, and it is a fallacy, comes about either through a
misunderstanding of statistics or a total neglect of the precautionary
principle -- or, more likely, both. In brief, people are claiming that
they have proven that something is safe, when what they have actually
done is to fail to prove that it is unsafe. It's the mathematical way
of claiming that absence of evidence is the same as evidence of
absence.

To see how this comes about, we have to appreciate the difference
between biological and other kinds of scientific evidence. Most
experiments in physics and chemistry are relatively clear cut. If you
want to know what will happen if you mix, say, zinc and sulphuric
acid, you really only have to try it once. If you want to be sure, you
will repeat the experiment, but you expect to get the same result,
even to the amount of hydrogen that is produced from a given amount of
zinc and acid.

In biology, however, we are dealing with organisms which vary a lot
and never behave in predictable, mechanical ways. If we spread
fertiliser on a field, not every plant will increase in size by the
same amount, and if you cross two lines of corn not all the resulting
seeds will be the same. So we almost always have to use some
statistical argument to tell us whether what we observe is merely due
to chance or reflects some real effect.

The details of the argument will vary depending upon exactly what it
is we want to establish, but the standard ones follow a similar
pattern. Suppose that plant breeders have come up with a new strain
of maize, and we want to know if it gives a better yield than the old
one. We plant each of them in a field, and in August, we harvest more
from the new than from the old. That is encouraging, but it might
simply be a chance fluctuation. After all, even if we had planted both
fields with the old strain, we would not expect to have obtained
exactly the same yield in both fields.

So what we do is the following. We suppose that the new strain is the
same as the old one. (This is called the "null hypothesis", because we
assume that nothing has changed.) We then work out the probability
that, purely by chance, the new strain would have yielded at least as
well as it did. We call this probability the "p-value". Clearly the smaller
the p-value, the more likely it is that the new strain really is
better -- though we can never be absolutely certain. What counts as
'small' is arbitrary, but over the years, statisticians have adopted
the convention that if the p-value is less than 5% we should reject
the null hypothesis, i.e. we can infer that the new strain really is
better. Another way of saying the same thing is that the difference in
yields is 'significant'.
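
As an illustration only, here is a sketch of such a test in Python
using SciPy's two-sample t-test. The plot yields are invented for the
example, and the choice of test is an assumption for the sketch, not
something taken from any real trial.

    # Hypothetical sketch: is the new maize strain's yield 'significantly'
    # higher? The yields below (tonnes per hectare) are invented.
    from scipy import stats

    old_strain = [5.1, 4.8, 5.3, 4.9, 5.0, 5.2, 4.7, 5.1]
    new_strain = [5.4, 5.6, 5.2, 5.5, 5.3, 5.7, 5.1, 5.5]

    # Null hypothesis: the new strain yields no more than the old one.
    # The one-sided t-test gives the probability (p-value) of a difference
    # at least this large arising by chance if the null hypothesis were true.
    result = stats.ttest_ind(new_strain, old_strain, alternative="greater")
    print(f"p-value = {result.pvalue:.4f}")

    # Convention: a p-value below 0.05 lets us reject the null hypothesis
    # and call the improvement 'significant'; anything larger means only
    # that we have failed to demonstrate an improvement.
    if result.pvalue < 0.05:
        print("Reject the null: the data support a real improvement.")
    else:
        print("Cannot reject the null: no improvement demonstrated.")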

Note that the p-value is neither the probability that the new strain
is better nor the probability that it is not. When we say that the
increase is significant, what we are saying is that if the new strain
were no better than the old, the probability of such a large increase
happening by chance would be less than 5%. Consequently, we are
willing to accept that the new strain is better.

Why have statisticians fastened on such a small value? Wouldn't it
seem reasonable that if there is less than a 50-50 chance of such a
large increase we should infer that the new strain is better, whereas
if the chance is greater than 50-50 -- in racing terms, if it is "odds
on" -- then we should infer that it is not?

No, and the reason why not is simple: it's a question of the burden of
proof. Remember that statistics is about taking decisions in the face
of uncertainty. It is a serious business to recommend that a company
change the variety of seed it produces and that farmers switch to
planting the new one. There could be a lot of money to be lost if
we are wrong. We want to be sure beyond reasonable doubt, and that's
usually taken to mean a p-value of .05 or less.

Suppose that we obtain a p-value greater than .05, what then? We have
failed to prove that the new strain is better. We have not, however,
proved that it is no better, any more than by finding a defendant not
guilty we have proved him innocent.
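
To make that concrete, here is a small hypothetical sketch (invented
numbers again, using the same SciPy test as above): the new strain in
this toy data set does yield more on average, yet with only three
plots the test cannot establish it, and it certainly does not follow
that the two strains are equivalent.

    # Hypothetical sketch: a real average difference that a tiny trial
    # fails to establish as significant.
    from scipy import stats

    old_strain = [5.0, 4.6, 5.4]   # invented yields, tonnes per hectare
    new_strain = [5.3, 5.8, 4.9]   # higher on average, but only three plots

    result = stats.ttest_ind(new_strain, old_strain, alternative="greater")
    print(f"p-value = {result.pvalue:.2f}")  # well above 0.05 for data like these

    # We have failed to show that the new strain is better. We have NOT
    # shown that it is no better: with so few plots the test had little
    # power to detect even a genuine improvement.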

In the example of the antique coin coming up three heads in a row, the
null hypothesis was that the coin was fair. If so, then the
probability of a head on any one toss would be 1/2, so the probability
of three in a row would be (1/2)³ = 0.125. This is greater than .05, so
we cannot reject the null hypothesis, i.e. we cannot claim that our
experiment has shown the coin to be biased. Up to that point, the
reasoning was correct. Where it went wrong was in claiming that the
experiment has shown the coin to be fair.

Yet that is precisely the sort of argument we see in scientific papers
defending genetic engineering. A recent report, "Absence of toxicity
of Bacillus thuringiensis pollen to black swallowtails under field
conditions" (2) is claiming by its title to have shown that there is
no harmful effect. Only in the discussion, however, do they state
correctly that there is "no significant weight differences among
larvae as a function of distance from the corn field or pollen level".

A second paper claims to show that transgenes in wheat are stably
inherited (3). The evidence offered is that "transmission ratios were
shown to be Mendelian in 8 out of 12 lines". In the accompanying
table, however, six of the p-values are less than 0.5 and one of them
is 0.1. That is not sufficient to prove that the genes are unstable,
or inherited in a non-Mendelian way. But neither does it prove that
they are stably inherited, which is what is claimed.

The way to decide if the antique coin is biased is to toss it more
times and record the outcome; and in the case of the safety and
stability of GM crops, more and better experiments should be done.
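
A sketch of that remedy, with purely illustrative numbers: three
tosses can never reject the fair-coin hypothesis at the 5% level,
however biased the coin, whereas a hundred tosses of a coin that
actually lands heads about 70% of the time give overwhelming evidence
of bias.

    # Illustrative sketch: why more tosses settle the question.
    from math import comb

    def p_value_at_least(heads, tosses, p_fair=0.5):
        """One-sided p-value: probability of at least `heads` heads in
        `tosses` tosses of a coin assumed fair (the null hypothesis)."""
        return sum(
            comb(tosses, k) * p_fair**k * (1 - p_fair) ** (tosses - k)
            for k in range(heads, tosses + 1)
        )

    # Three tosses, all heads: p = 0.125, so a 5% test can never reject
    # fairness, no matter how biased the coin really is.
    print(p_value_at_least(3, 3))     # 0.125

    # A coin biased 70:30 would typically give about 70 heads in 100
    # tosses; under the fair-coin null that is extremely improbable.
    print(p_value_at_least(70, 100))  # roughly 4e-5 -- ample grounds to reject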

The Anti-Precautionary Principle

The precautionary principle is such good common sense that one would
expect it to be universally adopted. Naturally, there can be
disagreement on how big a risk we are prepared to tolerate and on how
great the benefits are likely to be, especially when those who stand
to gain and those who will bear the costs if things go wrong are not
the same. It is significant that the corporations are rejecting
proposals that they should be held liable for any damage caused by the
products of GM technology. They are demanding a one-way bet: they
pocket any gains and someone else pays for any losses. It's also an
indication of exactly how confident they are that the technology is
really safe.

What is baffling is why our regulators have failed, and continue to
fail, to act on the precautionary principle. They rely instead on what
we might call the anti-precautionary principle: when a new technology
is proposed, it must be permitted unless it can be shown beyond
reasonable doubt to be dangerous. The burden of proof is not on the
innovator; it is on the rest of us.

The most enthusiastic supporter of the anti-precautionary principle is
the World Trade Organisation (WTO), the international body whose task
it is to prevent countries from setting up artificial barriers to
trade. A country that wants to restrict or prohibit imports on grounds
of safety has to provide definitive proof of hazard, or else be
accused of erecting false barriers to free trade. A recent example is
the WTO's judgement that the EU ban on US beef produced with growth
hormones is illegal.

Politicians should constantly be reminded of the effects of applying
the anti-precautionary principle over the past fifty years, and should
consider their responsibility for allowing corporations to inflict
damage on our health and the environment that could have been
prevented. To mention just a few examples: mad cow disease and new
variant CJD, the tens of millions dead from cigarette smoking, and
intolerable levels of toxic and radioactive wastes in the environment,
including hormone disrupters, carcinogens and mutagens.

Conclusion

There is nothing difficult or arcane about the precautionary
principle. It is the same sort of reasoning that is used in the courts
and in statistics. More than that, it is just common sense. If we have
genuine doubts about whether something is safe, then we should not use
it until we are convinced it is all right. And how convinced we have
to be depends on how much we need it.

As far as GM crops are concerned, the situation is straightforward.
The world is not short of food; where people are going hungry, it is
because of poverty. There is both direct and indirect evidence to
indicate that the technology may not be safe for health and
biodiversity, while the benefits of GM agriculture remain illusory and
hypothetical. We can easily afford a five-year moratorium to support
further research into how to improve the safety of the technology and
into better methods of sustainable, organic farming, which do not
carry the same unknown and possibly serious risks.

Notes and references

1. See, for example, Holm & Harris (Nature, 29 July 1999).

2. Wraight, A.R. et al. (2000). Proceedings of the National Academy of
Sciences (early edition). Quite apart from the use of statistics, it
generally requires considerable skill and experience to design and
carry out an experiment that will be sufficiently informative. It is
all too easy to fail to find something even when it is there. Our
failure to observe it may simply reflect a poor experiment or
insufficient data or both.

3. Cannell, M.E. et al. (1999). Theoretical and Applied Genetics 99:
772-784.

* Dr. Peter Saunders, Professor of Applied Mathematics at King's
College London, is a co-founder of I-SIS.
