Volume 1, Issue 4
4th Quarter, 2006


Indirect Mind Uploading:
Using AI to Avoid Staying Dead

Paul Almond

page 4 of 13

Searle also uses the term weak AI for the position that computers are capable of modeling the behavior of conscious entities. Weak AI differs from strong AI in that it does not regard such computers as necessarily conscious merely because they appear, to an external observer, to behave in the same way as systems that are conscious. There are objections to the strong AI idea, some of them made by Searle himself.

One objection, often associated with religion, is the idea that consciousness cannot be caused by arranging matter in various ways and that it involves something beyond the physical world or beyond human understanding in some sense. The most obvious form of this objection is in the claim that a machine "cannot have a soul".

Roger Penrose, a British mathematician, objects to the strong AI idea by suggesting that there must exist laws of physics that are non-computable in nature, and that no computer can simulate a system governed by such laws [1, 2]. In taking this position, Penrose also rejects the premise of weak AI.

Another objection to strong AI is the Chinese room argument [3, 4], proposed by Searle, who rejects strong AI. It seeks to show that consciousness and understanding cannot be regarded as existing in a system merely because the system executes an algorithm that makes it appear to behave in the right way. Like Penrose's work, this argument attracts a lot of criticism. Searle accepts that human brains are machines capable of consciousness, but argues that it does not follow that any machine able to behave in the same way as a human brain experiences consciousness. Even if we were to decide that Searle is correct (and I do not think that he is), there would still be a last resort: ensuring that the copied human mind was uploaded into a machine whose internal workings met Searle's standards for being able to experience consciousness. For example, we might use molecular nanotechnology, or another technology with similar capabilities, to construct something physically similar to a human brain that is consistent with the information obtained by scanning the original brain.

The Identity Issue
Even if strong AI is a reasonable idea, it could still be suggested that a computer program running a copy of your mind is not really your mind, but merely a copy that lacks the status of the original. If this pessimistic outlook were correct, mind uploading would be futile: you would still die and be replaced by a completely different individual who merely had a personality and memories similar to yours.

It is easy to see why such an objection arises: if your brain is destroyed, then anything else made to contain information from your brain is clearly not your brain; it is less obvious, however, that it is therefore not you. The whole issue here is what we should regard as constituting a person: the brain itself, or the information contained in it.

Modern medicine has long regarded the brain as the seat of consciousness, and the brain itself is not regarded as something that can be sacrificed to help a person survive. Preserving the brain is a doctor's primary concern; other body parts may be removed or replaced to allow the brain to continue functioning. Should the brain itself really be that important to us, or would it make more sense to preserve a description of the mind somehow? This gives us two views of what is aimed at in "survival": in one view, trying to survive means trying to keep your brain intact; in the other, it means trying to ensure that the information in your brain survives.


Footnotes
1. Penrose, R. (1989). The Emperor's New Mind: Concerning Computers, Minds and the Laws of Physics. Oxford: Oxford University Press.

2. Penrose, R. (1994). Shadows of the Mind: A Search for the Missing Science of Consciousness. Oxford: Oxford University Press.

3. Searle, J. R. (1997). The Mystery of Consciousness. New York: The New York Review of Books.

4. Hauser, L. (2001). The Chinese Room Argument. The Internet Encyclopedia of Philosophy. Retrieved June 22, 2003 from http://www.utm.edu/research/iep/c/chineser.htm

