Volume 2, Issue 1
1st Quarter, 2007


Artificial Moral Agents (AMAs): Prospects and Approaches for Building Computer Systems and Robots Capable of Making Moral Decisions

Wendell Wallach

Page 5 of 6

The third demonstration was a case-based neural net, an introductory experiment by Marcello Guarini[1] in training a neural network to make simple ethical decisions.  He trained the network on 22 cases that ran roughly like this: Jack kills Jill, and innocent lives are saved. This case is rated as permissible. Likewise, Jill kills Jack, and innocent lives are saved. Cases were usually proposed in both forms. Another example, Jack kills Jill and Jack makes lots of money, is rated as not permissible. So we have 22 of these simple training cases. He then tested the system on another 38 cases and found that it was relatively good at classifying the permissible and the impermissible scenarios. Though, like most neural networks, it was sensitive to its original training cases. For example, you might get a female or male bias if you did not use training cases in which Jill and Jack each perform exactly the same permissible and impermissible actions. Such systematic biases are notorious in neural nets.

The outstanding question is whether larger sample sets can do away with the biases. What Marcello did conclude was that while the system was relatively good at classifying cases, it showed no aptitude for the reclassification of cases. In other words, it could not deduce a principle or give a rationale for the classifications it made. Professor Guarini feels that this capacity is going to be essential if we are going to have more fully fleshed-out artificial moral agents.
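To make the setup concrete, here is a minimal sketch of this style of experiment. The feature encoding, network size, and training procedure are illustrative assumptions of mine, not Guarini's actual cases or code; the point is simply that a small feedforward network can learn to separate such cases, and that a feature like actor_is_jack, if the training set is not balanced across actors, is exactly where the gender bias mentioned above can creep in.

```python
# Minimal sketch (not Guarini's code or case set): toy "Jack/Jill" cases encoded
# as feature vectors and classified as permissible (1) or not (0) by a tiny
# feedforward network trained with plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical encoding: [actor_is_jack, saves_innocent_lives, actor_profits]
cases = np.array([
    [1, 1, 0],   # Jack kills Jill, innocent lives are saved   -> permissible
    [0, 1, 0],   # Jill kills Jack, innocent lives are saved   -> permissible
    [1, 0, 1],   # Jack kills Jill, Jack makes lots of money   -> not permissible
    [0, 0, 1],   # Jill kills Jack, Jill makes lots of money   -> not permissible
], dtype=float)
labels = np.array([1, 1, 0, 0], dtype=float)

# One hidden layer of four units; weights start small and random.
W1 = rng.normal(scale=0.5, size=(3, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=4);      b2 = 0.0

for _ in range(2000):
    hidden = sigmoid(cases @ W1 + b1)        # hidden-layer activations
    out = sigmoid(hidden @ W2 + b2)          # predicted permissibility
    # Backpropagate squared error through both layers.
    grad_out = (out - labels) * out * (1 - out)
    grad_hidden = np.outer(grad_out, W2) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ grad_out
    b2 -= 0.5 * grad_out.sum()
    W1 -= 0.5 * cases.T @ grad_hidden
    b1 -= 0.5 * grad_hidden.sum(axis=0)

# After training, the network reproduces the labels on its training cases.
print(np.round(sigmoid(sigmoid(cases @ W1 + b1) @ W2 + b2), 2))
```

In this sketch the mirrored Jack/Jill pairs keep the actor_is_jack feature uninformative; drop one side of a pair and the network can latch onto the actor rather than the consequences, which is the kind of bias described above.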

A few of the other key players who have explicitly addressed the implementation questions include Josh Storrs Hall, Peter Danielson, Eliezer Yudkowsky, Luciano Floridi, J.W. Sanders, and Catriona Kennedy, among thousands of additional researchers who could also be noted. A number of the big names, such as Marvin Minsky[2], have referred to this problem, though they have not really given it concerted attention. But the point I want to underline here is that the solutions to many of the problems I’ve raised may not necessarily come from tackling moral decision making for AI systems directly. Advances in machine learning, affective computing, A-life, and semantic nets may open up new approaches.

Stan Franklin is a computer scientist at the University of Memphis who has worked together with Bernard Baars[3], one of the more prominent neuroscientists. Bernie Baars has proposed a highly regarded theory of consciousness known as the Global Workspace Theory (GWT)[4].  Stan Franklin, together with Bernie and other neuroscientists and computer scientists, has tried to create a modular system that implements Bernie's Global Workspace Theory. Franklin has actually done a small implementation of this for the Navy.

Each module, in effect, uses a different technology to perform its tasks. Not all of these modules are contained within the system built for the Navy.  The diagram represents a single LIDA cycle[5]; each cycle takes roughly one-fifth of a second, so on the Global Workspace Theory we may be cycling in this way hundreds of times every minute. I find this model of conscious decision-making interesting because it creates space to think about, let's say, the implementation of affective heuristics at one level, while at another stage you might bring in some rules in the form of attention codelets.
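As a rough illustration, here is a highly simplified sketch of how one such cycle might be structured. This is not the LIDA implementation; the codelets, activation values, and action selection below are hypothetical stand-ins meant only to show the shape of a single workspace cycle, with an affective heuristic and a rule-like attention codelet competing for the global broadcast that drives action selection.

```python
# Hypothetical sketch of one global-workspace-style cycle (not the actual LIDA
# code): specialist codelets post content with an activation level, the most
# active one wins the workspace, its content is broadcast, and an action is
# selected in light of that broadcast.
from dataclasses import dataclass

@dataclass
class Codelet:
    content: str        # what this codelet wants to bring to attention
    activation: float   # how salient or urgent it currently is

def affective_appraisal(percept: str) -> list:
    # Affect-like heuristic: threatening percepts get high activation.
    salience = 0.9 if "collision" in percept else 0.2
    return [Codelet(f"percept: {percept}", salience)]

def rule_codelets(percept: str) -> list:
    # Rule-like attention codelets: explicit norms also compete for the workspace.
    rules = []
    if "human nearby" in percept:
        rules.append(Codelet("rule: avoid harming humans", 0.8))
    return rules

def workspace_cycle(percept: str) -> str:
    """One cycle (roughly 200 ms in the LIDA model): gather, compete, broadcast, act."""
    candidates = affective_appraisal(percept) + rule_codelets(percept)
    winner = max(candidates, key=lambda c: c.activation)   # competition for attention
    broadcast = winner.content                             # the global broadcast
    # Action selection consults whatever won the workspace.
    if broadcast.startswith("rule:") or "collision" in broadcast:
        return "halt and replan"
    return "continue current plan"

print(workspace_cycle("collision course, human nearby"))   # -> halt and replan
```

The design point this is meant to echo is that the affective appraisal and the explicit rule need not be merged into a single mechanism; they simply compete in the same workspace, cycle after cycle.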

Stan’s model suggests one approach for building a more human-like decision maker that can accommodate many of the complexities I’ve raised in this article. Whether a moral decision maker based on this model would meet the criteria we set for moral agency will be difficult to know without actually building the system.

Those concerns are the philosophical and legal issues around moral agents: the criteria and tests we have for evaluating systems, the thresholds for giving systems rights and responsibilities, whether we can control these systems, whether it makes sense to talk about punishing artificial agents, and how we will monitor their development. Can we, should we, do we want to control reproduction? And who is responsible when an AMA fails to meet legal and ethical guidelines? Certainly in the short run product liability law covers the issue of responsibility, but for a variety of reasons we are going to transcend that legal framework very quickly.


Footnotes

1. Marcello Guarini - “Particularism and the Classification and Reclassification of Moral Cases.” Accessed February 9, 2007, 5:02 EST.

2. Marvin Minsky - has made many contributions to AI, cognitive psychology, mathematics, computational linguistics, robotics, and optics. In recent years he has worked chiefly on imparting to machines the human capacity for commonsense reasoning. His conception of human intellectual structure and function is presented in The Society of Mind (CD-ROM, book), which is also the title of the course he teaches at MIT. http://web.media.mit.edu/~minsky/ Accessed February 9, 2007, 5:12 pm EST.

3. Bernard Baars - Senior Fellow in Theoretical Neurobiology, The Neurosciences Institute. http://vesicle.nsi.edu/users/baars/ Accessed February 9, 2007, 5:20 pm EST.

4. Global Workspace Theory - a simple cognitive architecture that has been developed to account qualitatively for a large set of matched pairs of conscious and unconscious processes (Baars, 1983, 1988, 1993, 1997). http://cogweb.ucla.edu/CogSci/GWorkspace.html Accessed February 9, 2007, 5:22 pm EST.

5. LIDA cycle - the LIDA conceptual model aims at being a cognitive “theory of everything.” http://www.agiri.org/forum/index.php?showtopic=141 Accessed February 9, 2007, 5:25 pm EST.
