WorldChanging.com
March 6, 2006

THE OPEN FUTURE: THE REVERSIBILITY PRINCIPLE

By Jamais Cascio{1}

Two philosophies dominate the broad debates about the development of potentially-worldchanging technologies. The Precautionary Principle tells us that we should err on the side of caution when it comes to developments with uncertain or potentially negative repercussions, even when those developments have demonstrable benefits, too. The Proactionary Principle, conversely, tells us that we should err on the side of action in those same circumstances, unless the potential for harm can be clearly demonstrated and is clearly worse than the benefits of the action. In recent months, however, I've been thinking about a third approach. Not a middle-of-the-road compromise, but a useful alternative: the Reversibility Principle.

It's very much a work-in-progress, but read on to see what this could entail...

The Precautionary Principle, first articulated in 1988{2}, argues that uncertainty should be a trigger for caution when it comes to technological advances. The most widely-accepted version of the principle comes from the Wingspread Statement{3}:

"When an activity raises threats of harm to human health or the environment, precautionary measures should be taken even if some cause and effect relationships are not fully established scientifically. In this context the proponent of an activity, rather than the public, should bear the burden of proof. The process of applying the Precautionary Principle must be open, informed and democratic and must include potentially affected parties."

Transhumanist advocates Max More and Natasha Vita-More created the Proactionary Principle{4} in 2004 as a direct counter to the Precautionary Principle. This concept argues that only probable and serious negative outcomes should be enough to block the development of potentially-useful technologies. The current version of the statement can be found on Max More's website{5}:

"People's freedom to innovate technologically is highly valuable, even critical, to humanity. This implies a range of responsibilities for those considering whether and how to develop, deploy, or restrict new technologies. Assess risks and opportunities using an objective, open, and comprehensive, yet simple decision process based on science rather than collective emotional reactions. Account for the costs of restrictions and lost opportunities as fully as direct effects. Favor measures that are proportionate to the probability and magnitude of impacts, and that have the highest payoff relative to their costs. Give a high priority to people's freedom to learn, innovate, and advance."

There's room for debate in each of these philosophies, of course. Many worldchangers and WorldChanging allies subscribe to a version of the Precautionary Principle{6} that focuses on taking responsibility for possible negative outcomes rather than simply avoiding any action that might lead to problems; our friends at the Center for Responsible Nanotechnology characterize this as the "active" form of the Precautionary Principle{7}. The Proactionary Principle doesn't yet have multiple strongly-articulated versions, but the principle's authors have modified its wording in response to ongoing discussion; it's currently at version 1.2, although an earlier phrasing can be found in the Wikipedia article{8}.

Critics of the Precautionary Principle claim that it focuses too much on worst-case scenarios, and gives insufficient weight to likely benefits of disputed technologies. Critics of the Proactionary Principle claim that it focuses too much on simple cause-and-effect logic, and ignores both complex results arising from interactions with other developments, and the potential for significant-but-not-inevitable problems. In my view, both of these arguments are largely correct.

We live in a world of rapid technological advances and tremendous global problems. Ideally, the first can help ameliorate the second; unfortunately, given the power of many of these advances, we run a strong risk that the first could make the second even worse. A binary "do it"/"don't do it" argument isn't well-suited to the degree of uncertainty that accompanies technological advances, nor the combinatorial, mutually-reinforcing aspects of global problems (such as climate disruption making conditions of poverty worse in the developing world, driving people towards survival strategies that degrade the environment). I propose, instead, that we think not in terms of "caution" or "action," but in terms of "reversibility."

A word of warning: this idea isn't yet fully-baked, and I hope to see serious critiques coming from both precautionary and proactionary advocates. I welcome the criticism, as it will help me work out the details of the argument.

The Reversibility Principle

This is my first effort to articulate the Reversibility Principle:

"When considering the development or deployment of beneficial technologies with uncertain, but potentially significant, negative results, any decision should be made with a strong bias towards the ability to step back and reverse the decision should harmful outcomes become more likely. The determination of possible harmful results must be grounded in science but recognize the potential for people to use the technology in unintended ways, must include a consideration of benefits lost by choosing not to move forward with the technology, and must address the possibility of serious problems coming from the interaction of the new technology with existing systems and conditions. This consideration of reversibility should not cease upon the initial decision to go forward to hold back, but should be revisited as additional relevant information emerges."

Let's look at this in more detail.

"... development or deployment..." Ideally, the Reversibility approach would take hold in the early stages of the research and development process. The goal isn't necessarily to shut down research the moment potential problems are discovered, but to make certain to design the technology or process with reversibility in mind. We can assume that responsible technological development includes a desire to avoid harm; the Reversibility Principle would add to that a desire to include an "off switch" if harm is later identified.

"...technologies..." By this I mean any human-constructed tool, whether mechanical, biological or social.

"...uncertain, but potentially significant, negative results..." This encompasses two key issues: the negative results need not be guaranteed or inevitable; they should, however, be demonstrably serious. How "significant" is defined is likely to be a point of debate, but to start, I would look at the possibility of death, the difficulty of mitigation or amelioration, and the potential to make other, existing problems worse.

"...strong bias..." The potential for reversibility should be a critical issue as to whether to develop or deploy a technology, but shouldn't be the sole determinant. Other issues, such as the need to avert an even greater problem, will always come into play.

"...reverse the decision..." This is the cornerstone of the principle. Ideally, we would be able to recall the technology and undo the damage it has done should an unexpected negative result emerge. This will not necessarily be easy or even possible -- but the difficulty of reversing the effects of an action arises, in part, from not taking reversibility into account during the design process.

"...grounded in science..." Misunderstandings, rumors or myths -- even popular ones -- should not be sufficient to cause a decision to hold off the development or deployment of useful technologies. At the same time, we must recognize that all science is contingent upon better information, and the inherent uncertainties of scientific study should not be cause to dismiss concerns as not "grounded in science."

"...the potential for people to use the technology in unintended ways..." Saying that something is safe if used correctly isn't the same as it being safe. If "the street finds its own uses for things," those uses will often be contrary to the manufacturer's instructions. In short, consideration of possible harmful results must include possible misuses and abuses of the technology.

"...consideration of benefits lost..." The strongest argument against the strict form of the Precautionary Principle is that it fails to account for the harm that could result from the lack of the new technology in the same way as it accounts for the harm that could result from its deployment. In a world of large-scale problems requiring innovative solutions, this is dangerously short-sighted. The potential for irreversible negative results coming from the use of the technology must be weighed against the irreversible negative results coming from its relinquishment.

"...interaction of the new technology with existing systems and conditions..." This will be the most difficult to measure part of the Reversibility Principle. New technologies do not exist in a vacuum. When deployed, they immediately become part of a larger technological ecosystem, and effects that, in isolation, may be essentially harmless can, in combination with other parts of the ecosystem, lead to serious problems. An example would be a biofuel plan that leads many food farmers to shift to fuel crops, at the expense of the availability of food for poverty-stricken regions.

"...should not cease..." Once a decision has been made to deploy or not to deploy a given technology, questions about the technology should not be forgotten. New discoveries and analysis may change the balance of issues around the decision, and what was once the right choice may in time become the wrong one. In short, the decision as to whether a technology is sufficiently reversible should itself be reversible.

Why Reversibility?

Reversibility is something that would be useful for everyone to think about as they decide whether or not to adopt a particular tool or system, but the concept is particularly important for designers and planners.

From the design perspective, reversibility is something that should be part of the overall design process, much like sustainability. Just as it's easier to undertake a sustainable or "cradle-to-cradle" project by including the concept from the beginning, technology deployments are more likely to be reversible if the concept is inherent to the design, not simply an afterthought. For designers, then, the Reversibility Principle would advocate the question "how can we make this technology in a way that gives us the best ability to shut it off and undo any harm it might cause?" There may not be a perfect answer to the question, but it's almost inevitable that designs that take this issue into account will be more reversible than those that do not.

For planners, reversibility becomes an issue to take into account as technology development turns into deployment. By "planners," I mean anyone with responsibility for how a technological system gets into common use. For manufacturers, Reversibility Principle planning could be a hedge against lawsuits; for governments, Reversibility Principle planning could be a part of both economic and political strategy. If the reversibility concept were to take hold, I would imagine that insurance companies would be among its most strident advocates.

So how would the Reversibility Principle play out in practice?

One obvious candidate for reversibility analysis is biotechnology. A Precautionary approach says that we don't know the long-term effects of introducing genetically modified organisms into the ecosystem, as they are self-replicating technologies subject to evolutionary pressures; we should, therefore, avoid their deployment. Proactionary advocates argue that the benefits of the use of GMOs can be substantial, particularly in parts of the world that (for political or environmental reasons) are unable to grow enough food for local populations; we should, therefore, encourage their development. As before, both of these positions are, in my view, more or less correct.

A Reversibility Principle approach to biotechnology in general would argue that GMOs should be engineered in a way to make it possible to remove them from the environment if unexpected or low-probability problems emerge. Issues of human consumption of GMOs would be handled on a case-by-case basis, with a bias towards holding off on products that demonstrate a possibility of serious or irreversible problems.

Another candidate for the reversibility approach is the response to global warming. The Precautionary Principle and the Proactionary Principle could each be used to justify both rapid action to reduce carbon and a "wait for better methods" approach. From a Reversibility Principle perspective, however, the choice is clear. The potential problems arising from immediate action to cut carbon emissions are largely economic, and while in the worst case scenario they are serious, they are more easily mitigated than those that would come from a slow response, which in even a moderate-case scenario would harm hundreds of millions of people in irreversible ways.

The Reversibility Principle would also apply in the case of geo-engineering{9} or "terraforming Earth{10}" projects to stop globally catastrophic climate outcomes. It's likely that, should we be forced to consider such global-scale engineering to respond to climate disaster, few of the options will be reversible. The question then becomes which option -- including the option of doing nothing -- would in the worst reasonable scenarios result in the least amount of death and destruction, and which would give us the greatest opportunity for gradual mitigation of harm. Underlying the choices will be the need to make the options as reversible as possible, even if full reversibility isn't plausible.

There are two major questions that come to mind about the Reversibility Principle.

To be blunt, the first is whether "reversibility" is even possible. From a purely physical perspective, it's not; even the act of stepping back and brushing over one's footprints still shifts the sand. But there's a difference between being unable to return the world exactly to how it once was and being unable to head off a disaster before it becomes inevitable. Some of the difference arises from how soon we decide that a choice needs to be reversed; even gradual changes can become irreversible if given enough time to accumulate.

We should see "reversibility," then, not as an attempt to go back to precisely how the world once looked, but as an attempt to eliminate further harm at its source, and to ameliorate the harm that has already occurred.

But the bigger issue for the Reversibility Principle perspective is just how readily we can predict the various possible outcomes, both good and bad. The quick answer is that we can't, at least not fully, but that hasn't stopped us from planning for the future before; we often need to act in situations of limited information. This doesn't mean our choices must be ill-informed.

This is a situation where Scenario Planning methodology could be of value. The scenario approach intentionally avoids coming up with a single "most likely" future. Instead, scenario planners come up with multiple contingent futures, with none of them meant to be a prediction. Rather, the collection of scenarios functions as a set of environments in which to test plans -- strategic wind tunnels, if you will. In Reversibility analysis, planners would come up with multiple contingent futures in which to think about outcomes if the given technology is or is not deployed.
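
To make this concrete, here's a minimal sketch of such a "strategic wind tunnel" in Python. The scenario names, the harm and benefit scores, and the reversal costs are all invented for illustration; the point is only the shape of the exercise: score each deployment option in every contingent future, then compare options by how badly they can go rather than by how well they do in a single predicted future.

    # A minimal scenario "wind tunnel": test deployment options against several
    # contingent futures rather than a single predicted one. All names, scores,
    # and costs below are placeholder values for illustration only.

    SCENARIOS = {
        "smooth adoption": {"harm": 1, "benefit": 8, "undo_cost": 1},
        "unexpected misuse": {"harm": 6, "benefit": 5, "undo_cost": 4},
        "ecosystem interaction": {"harm": 9, "benefit": 5, "undo_cost": 8},
    }

    OPTIONS = ["deploy as designed", "deploy with built-in off switch", "hold back"]

    def evaluate(option, future):
        """Score one option in one contingent future; higher is better."""
        if option == "hold back":
            # The benefits lost by choosing not to move forward.
            return -future["benefit"]
        score = future["benefit"] - future["harm"]
        if option == "deploy with built-in off switch":
            score -= 1                           # overhead of designing in the off switch
            score -= 0.3 * future["undo_cost"]   # reversal is cheaper when designed in
        else:
            score -= future["undo_cost"]         # full cost of undoing any harm
        return score

    if __name__ == "__main__":
        for option in OPTIONS:
            results = {name: evaluate(option, f) for name, f in SCENARIOS.items()}
            print(f"{option:33s} worst case: {min(results.values()):6.1f}   {results}")

Ranking options by their worst reasonable future, rather than by their best or most likely one, is where the bias towards reversibility shows up in the numbers.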

There's also the possibility of increasingly sophisticated models and simulations. I have enough experience in the use of computer models for political and social analysis to know that simulations should stick to physical systems, but it may be possible in time to develop decision-making aids using computer models that help human decision-makers to better understand both the physical and social dynamics at work. In situations where harmful outcomes are highly contingent but potentially very serious, good simulations could help answer the "what happens if..." questions in ways that can better be applied to questions of reversibility.
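
In the same hedged spirit, here's a small Monte Carlo sketch in Python of the "what happens if..." exercise. The probabilities are placeholder assumptions rather than estimates for any real technology; the exercise only shows how sampling many contingent outcomes can make the value of designed-in reversibility visible before a deployment decision is made.

    # A minimal Monte Carlo sketch: sample uncertain outcomes many times and ask
    # how often harm, once it appears, can still be undone. The probabilities
    # below are placeholder assumptions, not estimates of any real technology.
    import random

    def simulate_once(p_harm, p_reversible):
        """One trial: does harm occur at all, and if so, can it still be undone?"""
        if random.random() >= p_harm:
            return "no harm"
        return "harm, reversible" if random.random() < p_reversible else "harm, irreversible"

    def run(label, p_harm, p_reversible, trials=100_000):
        counts = {"no harm": 0, "harm, reversible": 0, "harm, irreversible": 0}
        for _ in range(trials):
            counts[simulate_once(p_harm, p_reversible)] += 1
        shares = "  ".join(f"{k}: {v / trials:.1%}" for k, v in counts.items())
        print(f"{label:33s} {shares}")

    if __name__ == "__main__":
        random.seed(2006)
        # Identical odds of something going wrong; what differs is how often
        # the harm can be undone once it appears.
        run("designed for reversibility", p_harm=0.15, p_reversible=0.80)
        run("reversibility as an afterthought", p_harm=0.15, p_reversible=0.25)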

Reversibility and the Open Future

A cornerstone of the open future{11} concept is that we should be striving towards a world that maximizes our flexibility in response to challenges. We will never have perfectly free choices when problems arise, but we are more likely to come up with good solutions under less-constrained conditions than we would if we were limited to a handful of options. The choice to pull back and say "let's try something different" is an option that we should strive to maintain.

Ultimately, the Reversibility Principle should be a heuristic, a prism through which we look at the world and make our decisions. We may not always choose the path with the simplest way back -- it may not always be the right choice -- but it would encourage us to consider the issue for all of our options. Asking ourselves, "if we do this, how readily can it be undone if we discover problems?" forces us to think in terms of more than immediate gratification, and to consider how the choice connects to other choices we and the people around us have made and will make. In the end, it may even be a good first-order approximation of wisdom.

{1} http://www.worldchanging.com/jamais_bio.html

{2} http://en.wikipedia.org/wiki/Precautionary_Principle

{3} http://www.biotech-info.net/rachels_586.html

{4} http://en.wikipedia.org/wiki/Proactionary_Principle

{5} http://www.maxmore.com/proactionary.htm

{6} http://www.worldchanging.com/archives/000375.html

{7} http://www.worldchanging.com/archives/001565.html

{8} http://en.wikipedia.org/wiki/Proactionary_Principle

{9} http://www.worldchanging.com/archives/004013.html

{10} http://www.worldchanging.com/archives/004137.html

{11} http://www.worldchanging.com/archives/004122.html