Volume 1, Issue 2 
2nd Quarter, 2006

Proactionary Nano-Policy: Managing Massive Decisions for Tiny Technologies

Max More, Ph.D.

This article was adapted from a lecture given by Max More, Ph.D. at the 1st Annual Workshop on Geoethical Nanotechnology on July 20, 2005 at the Terasem Retreat in Lincoln, VT.

More is the Chairman of the Extropy Institute, a philosophical and cultural non-profit organization with the mission “…[T]o serve its members by ensuring a reputable, open environment for discussing the impact of emerging technologies and for collaborating with diversely-skilled experts in exploring the future of humanity.” More urges a proactionary approach in developing policy that addresses the wonders and risks of nanotechnology. The basic point of the proactionary principle is that we need to protect the freedom to innovate because it is critical to our future survival and well-being. Of course, new technology needs to be regulated as it develops, but More argues for an approach to regulation that allows it to flourish, rather than an overcautious, precautionary approach that may end up limiting its potential.

Christmas-time for Nanotechnology
In a way, we could say that this is “Christmas-time for nanotechnology”. It is a festive time: everyone is paying attention to the field and enjoying all the funding that is being thrown its way. Yet, as at Christmas-time, we have to ask whether nanotechnology has been naughty or nice. The answer is both: Nanotechnology is naughty and nice.

We know about all the possible risks of nanotechnology, from the minor things to the huge world-eating problems. Yet we also need to make sure that people understand the benefits, which can be substantial. Nanotechnology is naughty when one wonders about the possibilities of dangerous nanoparticles, targeted nano-weapons, and cancerous self-replication. Nanotechnology is nice when we imagine possible consequences such as abundance, health, super-longevity, and environmental restoration.

In imagining the potential of nanotechnology, we must consider both the good and the bad. We do not want to err too far on the side of caution, but we also do not want to get carried away. How do we find the right balance? The thrust of this article is to ask: what is the problem with constraining nanotechnology? How far do we want to constrain it, and what is the right way to do so?

Unfit Brains
We start off with the problem that we are equipped with brains that are wonderful on one level, but not very fit, in an evolutionary sense, for modern society. They are not well equipped to deal with the types of problems and risks that we now face. They are designed to handle repeated attacks and to learn from specific events. Our brains are not very good at handling highly complex subjects, abstractions, and what are often referred to as "black swans": extremely unlikely events that happen once in a million years.

Our brains also come with a bestiary of biases that lead us to agree or disagree with one another for other than rational reasons.

In addition to our cognitive limitations, we have institutional limitations. Therefore, any discussion about how we will constrain or regulate nanotechnology (or any other technology) must recognize that regulators are not optimizers.

Regulators Are Not Optimizers
We might think that regulators exist to optimize results, but we must take the institutional context into account. Regulators are there to do their jobs as they see them. Their main goals might be to build up their departments, power, and control. Regulators tend to over-regulate, even if they are not doing so consciously. They tend to over-emphasize risks and dangers and to discount the benefits of new technologies. This may simply be because many of the benefits of a new technology are not directly visible, while the dangers are more easily seen.
