Volume 2, Issue 1
1st Quarter, 2007


Artificial Moral Agents (AMAs): Prospects and Approaches for Building Computer Systems and Robots Capable of Making Moral Decisions

Wendell Wallach

The bottom-up approaches are inspired by the theory of evolution, developmental psychology, learning theories, and even the engineer's simple act of fine-tuning a system in order to achieve some specified goal. With regard to evolution, game theorists, evolutionary psychologists, and sociobiologists contend that at least some of our moral propensities are inherent: they are in some sense hardwired by evolution, and we may be able to re-evolve them in our artificial life forms. It is difficult, however, to evolve any form of complex agent within a computer environment. Evolution-inspired technologies such as genetic algorithms and evolutionary robotics have had some degree of success, though it is hard to conceive of how you might specify the goal such systems would strive for. For example, how would you describe the survival of the ‘most moral’, the survival of the ‘most good’, or the survival of the just?
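
To make the fitness-function problem concrete, consider the following minimal genetic-algorithm sketch in Python. It is purely illustrative: the genome encoding, the parameters, and especially the moral_fitness function are hypothetical placeholders, not anything from an actual system. The stub score stands in for exactly what we do not know how to specify, a measure of the ‘most moral’.

```python
import random

# A minimal genetic-algorithm sketch. The hard part, as noted above, is
# the fitness function: moral_fitness below is a hypothetical placeholder
# that would have to score an agent's behavior morally -- exactly the
# thing we do not know how to specify.

GENOME_LENGTH = 16      # an agent's behavior encoded as 16 numeric traits
POPULATION_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.05

def moral_fitness(genome):
    # Placeholder: a real system would need to measure how "moral" the
    # behavior produced by this genome is. This stub has no moral content.
    return sum(genome)

def mutate(genome):
    # Perturb each trait with small Gaussian noise at the mutation rate.
    return [g + random.gauss(0, 0.1) if random.random() < MUTATION_RATE else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, GENOME_LENGTH)
    return a[:point] + b[point:]

population = [[random.random() for _ in range(GENOME_LENGTH)]
              for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    # Rank agents by the (placeholder) fitness and keep the top half.
    population.sort(key=moral_fitness, reverse=True)
    survivors = population[:POPULATION_SIZE // 2]
    # Refill the population by recombining and mutating the survivors.
    children = [mutate(crossover(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(POPULATION_SIZE - len(survivors))]
    population = survivors + children

print("best placeholder score:", moral_fitness(population[0]))
```

The mechanics above are routine; everything difficult about evolving a moral agent is hidden inside that one stubbed-out function.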

So evolution does give us some real tools to work with in artificial intelligence, though it's hard to know at this stage how far we will get with those tools in developing systems with moral decision-making faculties.

A great deal of attention has been given to machine learning, though it has not been applied much to questions of moral development. We do have work in human psychology by Piaget[1], Lawrence Kohlberg[2], Carol Gilligan[3], and their students suggesting stages children progress through in developing an acumen for moral decision making. We may be able to appropriate some of those theories, presuming we have an artificially intelligent platform able to progress developmentally.
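
If one did have such a developmentally capable platform, a staged training loop is one way that appropriation might look. The sketch below is a hypothetical illustration only: the stage names loosely echo Kohlberg's levels, and the evaluate and train_one_round functions are stubs, not an established method or API.

```python
# A hypothetical sketch of staged moral training, loosely modeled on the
# idea of developmental stages (Piaget, Kohlberg). Stage names and the
# training interface are illustrative assumptions.

STAGES = ["obedience", "reciprocity", "social-conformity", "principled"]
PASS_THRESHOLD = 0.9

def evaluate(agent, stage):
    # Placeholder: score the agent on dilemmas appropriate to this stage.
    return agent.get(stage, 0.0)

def train_one_round(agent, stage):
    # Placeholder training step: nudge the agent's stage score upward.
    agent[stage] = min(1.0, agent.get(stage, 0.0) + 0.1)

def develop(agent):
    # Advance through the stages only after mastering the current one,
    # mirroring how children are said to progress developmentally.
    for stage in STAGES:
        while evaluate(agent, stage) < PASS_THRESHOLD:
            train_one_round(agent, stage)
        print(f"stage '{stage}' passed")

agent = {}   # the "agent" here is just a dict of stage scores
develop(agent)
```

The point of the sketch is the gating structure: later stages are never trained until earlier ones are mastered, which is the developmental claim these theories make about children.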

In all of this we really have to keep in mind the fundamental distinctions between humans and the computer platforms we're working with. (I'm tabling the discussion of cyborgs in this particular article.) But we humans are essentially, at least from a scientific viewpoint, biochemical, instinctual, and emotional platforms out of which our higher-order faculties emerged.

Emotions and instinct may be foundational to the kinds of higher-order faculties that we have, whereas computers are essentially logical platforms.

Image 3: Distinction

Now, there may be possible advantages to being a computer, at least from an ethical viewpoint. Herb Simon[4], one of the fathers of AI, who received the Nobel Prize for his theory of bounded rationality, argued that we aren't rational agents in the traditional sense of considering all possible options; rather, we're very limited in the options we can consider, and we tend to select the option that is most satisfying, or the first option with which we feel comfortable. But presumably a computer platform might be able to consider a much wider range of options, many more moves in the chess sense, for example, and so it has the possibility of coming up with a course of action that a human agent might not have considered and that is a better choice. An artificial moral agent might have this possible advantage.
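
As a toy illustration of that contrast, the following Python sketch compares a satisficing chooser, which takes the first acceptable option, with one that exhaustively evaluates every option. The moral_score function is an assumed placeholder, since how to score options morally is itself the open question.

```python
# Contrasting Simon's "satisficing" with exhaustive evaluation.
# moral_score is a hypothetical evaluation function; the point is only
# the difference in search strategy, not how such a score is computed.

def moral_score(option):
    return option["score"]   # placeholder evaluation

def satisfice(options, good_enough=0.7):
    # Bounded-rationality style: take the first option that feels acceptable.
    for option in options:
        if moral_score(option) >= good_enough:
            return option
    return options[-1] if options else None

def exhaustive_best(options):
    # Machine-style: evaluate every option and return the best one.
    return max(options, key=moral_score, default=None)

options = [{"name": "a", "score": 0.72},
           {"name": "b", "score": 0.95},
           {"name": "c", "score": 0.80}]

print(satisfice(options)["name"])        # -> "a": first satisfactory choice
print(exhaustive_best(options)["name"])  # -> "b": best of all considered
```

The satisficer stops at option "a" because it clears the comfort threshold; the exhaustive chooser keeps looking and finds the better option "b". That gap is the possible advantage being described.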

Another possible advantage is the absence of base motivations in computer systems, presuming they are not programmed to have motivations such as greed. This is a major presumption when you start talking about evolving artificial agents; greed may be essential to evolution.

Perhaps the most significant difference in moral aptitude, or at least the one that gets talked about the most, is the absence of emotions, or at least the absence of the kind of somatic emotions that make humans subject to emotional hijackings or emotional prejudices. The role of emotions in moral decision making is a big subject that is growing daily, and I'm just going to allude to a few elements of it here.

When I was growing up, moral philosophy was still dominated by stoicism, the belief that to make good moral decisions you had to have dispassionate reason, and that the enemy was our animal nature. This goes back to the Greek and Roman Stoics. We now live in the age of emotional intelligence, an age when we are beginning to look at the positive influences we derive from emotions. To some extent this contention goes back to David Hume[5] and his belief that certain moral sentiments were foundational and positive. It's very difficult to think through the problem of whether or not computer systems can or should have emotions of their own. And if they did have emotions that you believe are important to the kinds of faculties they have, what kind of emotions are we talking about? Would they be purely cognitive emotions, or would they have some kind of somatic[6] character to them? When we think about bringing in feelings, sentiments, empathy, a sense of what pain is in a somatic sense, do we lose some of what we gain from them as computational platforms?

Footnotes

1. Jean Piaget - (August 9, 1896 – September 16, 1980) was a Swiss philosopher, natural scientist and developmental psychologist, well known for his work studying children and his theory of cognitive development. Wikipedia.org February 9, 2007 3:58 pm EST

2. Lawrence Kohlberg - (October 25, 1927 – January 19, 1987) was an American psychologist. He was born in Bronxville, New York. He served as a professor at the University of Chicago as well as Harvard University. He is famous for his work in moral education, reasoning, and development. Being a close follower of Jean Piaget's theory of cognitive development, Kohlberg's work reflects and perhaps even extends his predecessor's work. This work has been further extended and modified by such scholars as Carol Gilligan and James Rest. Wikipedia.org February 9, 2007 4:00 pm EST

3. Carol Gilligan - (1936– ) is an American feminist, ethicist, and psychologist best known for her work with and against Lawrence Kohlberg on ethical community and ethical relationships, and certain subject-object problems in ethics. Wikipedia.org February 9, 2007 4:02 pm EST

4. Herb Simon - Herbert Alexander Simon (June 15, 1916 – February 9, 2001) was an American political scientist whose research ranged across the fields of cognitive psychology, computer science, public administration, economics, management, and philosophy of science, and who was a professor, most notably, at Carnegie Mellon University. With almost a thousand publications, many of them very highly cited, he is one of the most influential social scientists of the 20th century. Wikipedia.org February 9, 2007 4:05 pm EST

5. David Hume - (April 26, 1711 – August 25, 1776) was a Scottish philosopher, economist, and historian. He is one of the most important figures in the history of Western philosophy and of the Scottish Enlightenment. Wikipedia.org February 9, 2007 4:08 pm EST

6. Somatic - The term somatic refers to the body, as distinct from some other entity, such as the mind. The word comes from the Greek word σωματικός (somatikos), meaning "of the body". It has different meanings in various disciplines. Wikipedia.org February 9, 2007 4:09 pm EST
