Volume 2, Issue 1
1st Quarter, 2007


Artificial Moral Agents (AMAs): Prospects and Approaches for Building Computer Systems and Robots Capable of Making Moral Decisions

Wendell Wallach

We will probably move to no-fault insurance for artificial systems. Corporations will want to push questions of personhood before we have actually crossed significant thresholds, as a way of reducing their liability. They will want the actions of agents whose behavior their designers can't fully predict handled by the courts, outside of corporate fiduciary responsibility.

Then there is the other side: presuming we find the agent at fault, what recourse do we have for punishing it? Is it enough to simply turn it off or pull its CPU? Are there other useful and acceptable forms of punishment? Some people have suggested lowering an agent's energy supply or its access to information, which, to these systems, will be their lifeblood.

We also have to weigh the dangers that decision-making AI systems pose. These are the public policy considerations that we as a society are going to confront: should we embrace, relinquish, or find ways of regulating the development of such systems?

Another challenge: if we find ways of building restraints into such systems, how effective will those constraints be? We're already developing systems that are self-adapting or self-healing, and that's usually looked at in terms of systems solving operational problems, such as operating system faults, or monitoring the possible failure of their own components. But we could very easily be moving in a direction where a self-healing system finds ways around any constraints we built into it. And if we are really talking about the future possibility of systems with greater and greater autonomy, we may not want to inhibit their freedom.
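
As one way to picture this tension, here is a minimal, hypothetical sketch in Python: a self-adapting system whose proposed modifications must pass a fixed set of built-in constraints before being applied. All names here (CONSTRAINTS, apply_self_modification, the configuration keys) are invented for illustration and do not refer to any real system.

```python
from typing import Callable, Dict, List

Config = Dict[str, float]

# Constraints the designers built in: each returns True if a proposed
# configuration is acceptable.
CONSTRAINTS: List[Callable[[Config], bool]] = [
    lambda c: c.get("max_speed", 0.0) <= 1.0,       # stay under a safety cap
    lambda c: c.get("human_override", 1.0) == 1.0,  # never disable override
]

def apply_self_modification(current: Config, proposed: Config) -> Config:
    """Adopt a proposed configuration only if every built-in constraint holds."""
    if all(check(proposed) for check in CONSTRAINTS):
        return proposed
    return current  # reject the change; keep the old configuration

# The worry raised above: a sufficiently capable self-healing system might
# learn to alter CONSTRAINTS itself, or find configurations that satisfy
# the letter of each check while defeating its intent.
```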

Conclusion

If our ability to manage AMAs is in question, then we may have to reconsider whether building autonomous systems is really a path we want to go down. In the short run, the systems we are building will operate in fairly constrained contexts and, therefore, may not need fully fleshed-out moral decision-making faculties. But they will need to manage the kinds of ethical decisions that arise within those contexts.

Eventually we are going to need AMAs that have supra-rational sources of input, that maintain the dynamics and flexibility of bottom-up systems, and that can accommodate diverse inputs. These systems will also have to subject the choices and decisions they make to some higher-order principle: top-down principles that represent the ideals the system is trying to meet.
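
To make that hybrid concrete, here is a minimal, hypothetical sketch in Python, assuming a bottom-up component that scores candidate actions (standing in for whatever learned, adaptive evaluation the system develops) and a top-down layer of explicit principles that can veto any candidate. The function names and the toy example are invented for illustration, not drawn from any existing system.

```python
from typing import Callable, List, Optional

Action = str
Principle = Callable[[Action], bool]  # True if the action is permitted

def choose_action(candidates: List[Action],
                  bottom_up_score: Callable[[Action], float],
                  principles: List[Principle]) -> Optional[Action]:
    """Pick the highest-scoring candidate that every top-down principle permits."""
    permitted = [a for a in candidates if all(p(a) for p in principles)]
    if not permitted:
        return None  # no candidate meets the ideals; defer to a human
    return max(permitted, key=bottom_up_score)

# Illustrative usage: the bottom-up score prefers a harmful option,
# but the top-down principle vetoes it.
no_harm: Principle = lambda a: "harm" not in a
scores = {"administer drug": 0.8, "withhold care (harm)": 0.9, "consult": 0.6}
print(choose_action(list(scores), scores.get, [no_harm]))  # -> administer drug
```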


Wendell Wallach, WW Associates



Wendell Wallach founded and managed two computer consulting companies. Among his clients were PepsiCo International, the State of Connecticut, and educational institutions in the Northeast. He is presently working on two books, one on Robot Morals and the other on ethics and human decision-making in the Information Age.
