Volume 2, Issue 1
1st Quarter, 2007


Artificial Moral Agents (AMAs): Prospects and Approaches for Building Computer Systems and Robots Capable of Making Moral Decisions

Wendell Wallach

Page 3 of 6

The third concern that I have on the slide is Emotional Heuristics.[1] This is perhaps the newest, and one that some of us are only beginning to think through, but it is the question of whether our whole emotional system is really the lattice upon which our capacity to reason has been built.


Image 4: Roles of Emotion

We are filled with emotional valences, likes and dislikes, positive and negative, constant valuations that are constantly being revised. And in many ways these provide the grounding for, and set the frame on, what we think is appropriate to consider when we are working through some of the more difficult decisions we analyze.

Whether AMAs are going to need emotions of their own may have a lot to do with what their functions are. Clearly, a lot of our initial systems won't need emotions, but they may need some kind of affective intelligence, particularly the ability to discern the intentions, motivations, and emotions of the people with whom they're interacting. If we have robots taking care of the elderly in the home, then it's going to be very important that they can read the emotions, the responses, and the confusion on the faces of those with whom they're working. But whether or not humans are going to feel comfortable with machines sensitive to their emotional states is another issue, and that's going to be one of the ethical thresholds we are going to have to cross.  
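
To make the idea of affective intelligence a little more concrete, here is a minimal sketch, in Python, of a care robot that adjusts its behavior to the emotional state it reads on a person's face. The classify_emotion() function is a hypothetical stand-in for a trained facial-expression model; none of the names here come from an existing system.

```python
# Minimal sketch of "affective intelligence" for a care robot:
# perceive an emotional state, then choose a behavior accordingly.
# The classifier below is a placeholder, not a real API.

from dataclasses import dataclass
from typing import Literal

Emotion = Literal["calm", "confused", "distressed"]

@dataclass
class Observation:
    face_image: bytes  # raw camera frame of the person's face

def classify_emotion(obs: Observation) -> Emotion:
    """Hypothetical stand-in for a trained facial-expression classifier."""
    ...  # a real model would analyze obs.face_image here
    return "confused"

def choose_response(emotion: Emotion) -> str:
    """Map the perceived emotional state to an appropriate robot behavior."""
    if emotion == "distressed":
        return "pause task and alert a human caregiver"
    if emotion == "confused":
        return "repeat the last instruction more slowly"
    return "continue the current task"

if __name__ == "__main__":
    obs = Observation(face_image=b"")  # placeholder camera frame
    print(choose_response(classify_emotion(obs)))
```

The point of the sketch is only that the choice of behavior (pausing, repeating, or continuing) is driven by the perceived emotional state rather than by the task alone.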

In Rosalind Picard's affective computing lab at MIT [2], they're starting to run into a lot of IRB (Institutional Review Board) issues in terms of what they have to tell people when those people are interacting with computer systems that may be sensitive to their emotional states. Emotion is only one of what we call the supra-rational faculties and social mechanisms that have to be considered.

By supra-rational I mean faculties beyond what we normally think of as moral reasoning or making moral judgments. Other supra-rational faculties that have to be considered include sociability, the ability to recognize social cues and to respond with social cues that are understood by those with whom the systems interact.


Image 5: Sociability

Will moral agents need to be embodied, interacting with the world they're moving through? If being embodied is necessary for certain forms of moral reasoning, and I would argue that it is, then there will be certain kinds of moral decisions that are excluded from agents built into disembodied computers. This may not affect the management of driverless trains and the decisions they make, or economic and investment decisions being made by computers, but it will affect many of the decisions made by systems that are directly interacting with people.

What about consciousness or a theory of mind? The term "theory of mind" refers to our capacity to recognize that other people have minds that are separate and distinct from our own, that they may have intentions and assumptions different from ours, and to the way we recognize the beliefs and intentions of others.
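
One way to picture what a theory of mind asks of a system is as separate bookkeeping: the machine's model of the world and its model of what another person believes have to be allowed to diverge. The toy sketch below, loosely patterned on the classic false-belief test, is purely illustrative; the names and structures are assumptions, not anyone's published architecture.

```python
# Toy illustration of theory of mind as separate bookkeeping:
# the system's model of the world and its model of another agent's
# beliefs are distinct, so they can diverge (a false-belief setup).

world = {"marble": "box"}            # where the marble actually is
anne_belief = {"marble": "basket"}   # Anne last saw it in the basket

def predict_search_location(agent_belief: dict) -> str:
    """Predict where the agent will look, from her beliefs, not from the facts."""
    return agent_belief["marble"]

print("The marble really is in the", world["marble"])
print("Anne will look in the", predict_search_location(anne_belief))
```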


Image 6: Consciousness, Theory of Mind

Consciousness and theory of mind provide two good examples of the problems we encounter when we think through the development of systems with higher-order cognitive faculties. So far, the ways in which we are thinking about implementing them are based on theories regarding the functions that would constitute consciousness or a theory of mind. Igor Aleksander, for example, has come up with five axioms which he contends are at least some of the prerequisites for consciousness, including the ability to plan, to imagine, and to be aware that you are in an "out there" world.

Engineers are proceeding to implement consciousness by breaking down those higher-order faculties, for example "planning," into smaller and smaller discrete tasks. Somewhere down the line they're going to reassemble modules that can perform all of these discrete tasks into one system, which is really the hard piece, and it will only be then, when we go through the reassembly, that we will even know whether the original theory, by which we broke these higher-order faculties down into discrete tasks, is a working theory at all. There are many arguments out there that these reductionistic theories are not working theories. The same issues hold true for the work of Yale roboticist Brian Scassellati [3] on implementing a theory of mind in a system modeled on a one-year-old child.
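
The engineering strategy being described here, breaking a higher-order faculty into discrete tasks and then reassembling them, can be pictured schematically. The sketch below treats "planning" as three small functions plus a reassembly step; the function names and the toy world are illustrative assumptions, and the reassembled pipeline is exactly the part that would test whether the decomposition was a workable theory.

```python
# Schematic of the decompose-then-reassemble strategy: "planning" is
# split into discrete tasks, each implemented separately, and only the
# reassembled pipeline shows whether the pieces add up to the faculty.

from typing import List

def sense(world: dict) -> dict:
    """Discrete task 1: reduce the world to the features the planner needs."""
    return {"at": world["robot_at"], "goal": world["goal"]}

def generate_options(state: dict) -> List[List[str]]:
    """Discrete task 2: enumerate candidate action sequences."""
    return [["move_right"] * abs(state["goal"] - state["at"])]

def evaluate(options: List[List[str]]) -> List[str]:
    """Discrete task 3: pick the cheapest candidate (here, the shortest)."""
    return min(options, key=len)

def plan(world: dict) -> List[str]:
    """The reassembly: only here do we learn whether the pieces fit together."""
    return evaluate(generate_options(sense(world)))

print(plan({"robot_at": 0, "goal": 3}))  # ['move_right', 'move_right', 'move_right']
```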


Footnotes

1. Heuristics - A heuristic is a replicable method or approach for directing one's attention in learning, discovery, or problem-solving. The word derives from the Greek "heurisko," meaning "I find." Wikipedia.org, February 9, 2007, 4:11 EST.

2. Rosalind Picard - Rosalind W. Picard is founder and director of the Affective Computing Research Group at the Massachusetts Institute of Technology (MIT) Media Laboratory and co-director of the Things That Think Consortium, the largest industrial sponsorship organization at the lab. web.media.mit.edu/~picard/, February 9, 2007, 4:15 pm EST.

3. Brian Scassellati - "Theory of Mind for a Humanoid Robot," http://www.cs.yale.edu/homes/scaz/papers/Humanoids2000-tom.pdf, February 9, 2007, 4:33 pm EST.
