Volume 1, Issue 2
2nd Quarter, 2006


Creating a New Intelligent Species: Choices and Responsibilities for AI Designers

Eliezer Yudkowsky

Evolution seems to have programmed us to believe that wealth will make us happy, but not programmed us to actually become happy. In hindsight, this is not surprising; rarely is the evolutionarily optimal strategy to be content with what you have. The more you have, the more you want. Happiness is the carrot dangled in front of us; it keeps moving forward after we take a few bites.

Is this a good AI mind design? Is it right to create a child who, even if she wins the lottery, will be very happy at first and then six months later go back to where she started? Is it right to make a mind that has as much trouble as a human in achieving long-term happiness? Is that how you would design a mind, if you were creating one from scratch?

I do not say that it is good to be satisfied with what you have. There is something noble in having dreams that are open-ended and having aspirations that soar higher and higher without limit. But the human brain represents happiness using an analog scale; there literally are not enough neurotransmitters in the human brain for a billionaire to be a thousand times happier than a millionaire. Open-ended aspirations should be matched by open-ended happiness, and then there would be no need to deceive people about how happy achievement will make them.

A subtlety within evolutionary biology is that conditional responses require more genetic complexity than unconditional responses. It takes a more sophisticated adaptation to grow a fur coat in response to cold weather than to grow a fur coat regardless. For the fur coat to apparently depend on nurture instead of nature, you must evolve cold-weather sensors. Similarly, conditional happiness is more complex than unconditional happiness. Not that unconditional happiness would be a good thing; the point is that the conditions themselves are specified in advance. A human parent can choose how to raise a child, but natural selection has already decided the range of options and programmed the matrix from environments to outcomes. No matter how you raise a human child, she will not grow up to be a fish. A maker of Artificial Intelligence has enormously more power than a human parent.

A programmer does not just stand in loco parentis to an Artificial Intelligence, but both in loco parentis and in loco evolutionis. A programmer is responsible for both nature and nurture. The choices and options are not analogous to a human parent raising a child, but more like creating a new and intelligent species. You wish to ensure that Artificial Intelligences are treated kindly, that they are not hurt, enslaved, or murdered without the protection of law. This wish does you credit, but there is an anthropomorphism at the heart of it: the assumption that the AI has the capacity to be hurt; that the AI does not wish to be enslaved; that the AI will be unhappy if placed in constant jeopardy of its life; and that the AI will exhibit a conditional response of happiness depending on how society treats it. The programmers could build an AI that was anthropomorphic in that way, if they possessed the technical art to do what they wanted. But if you are concerned for the AI's quality of life, or, for that matter, about the AI's ability and desire to fulfill its obligations as a citizen, then the way the programmers build the AI is more important than how society treats the AI.
