Volume 2, Issue 3
3rd Quarter, 2007


BINA48 Mock Trial: Judge's Decision

Professor Gene Natale

Page 4 of 5

Dr. J. Storrs Hall, currently with the Institute for Molecular Manufacturing in California, wrote in 2000:

“The inescapable conclusion is that not only should we give consciences to our machines where we can, but if we can indeed create machines that exceed us in the moral as well as the intellectual dimensions, we are bound to do so.” [1]

In a 2003 article, Paul Almond suggests that with molecular nanotechnology, as proposed by K. Eric Drexler, and with cryonics, we may not be far from achieving “immortality” through what he refers to as “indirect mind uploading”: reconstruction from previously stored software models of one's mind. [2]

In a 2003 presentation, Dr. Martine Rothblatt, as counsel for BINA48 in seeking to prevent its manufacturer from discontinuing its electrical supply, argued that BINA48 was designed to think autonomously, to communicate normally with people, and to “transcend the machine-human interface by attempting to empathize with customer concerns.” In arguing that the court should allow BINA48 to sue Exhabit, Dr. Rothblatt pointedly illustrated that standing has not always been limited to human beings, and that Supreme Court Justice Douglas suggested, in the context of environmental law, that legal standing might profitably be granted to inanimate objects.

In a presentation given in 2005, Peter Voss envisioned that with what he refers to as Artificial General Intelligence (AGI), computers may acquire knowledge and skills by “learning” rather than by being programmed.

Acknowledging that his next argument is controversial, Voss submits that once in that “ready to learn” mode, such machines will be “self-aware”. He predicts that the focus of the legal system at that time will be on protecting humans or government rather than on protecting “AGIs”, but that the AGIs will be quite capable of looking after themselves.

In a 1992 law review article, Michael Rivard conducted an exhaustive study of these areas and argues that “constitutional personhood [should be] extended to all species exhibiting self-awareness.” [3]

Today, a WESTLAW search shows no fewer than 999 law review articles having referred to “artificial intelligence”, with no fewer than 46 such articles written just this year.

In 2005, attorney David Calverley of Scottsdale, Arizona, argued:

“Androids have begun to act in ways that, on the surface, seem human. However, no one is prepared to view them as anything other than property. As androids become more sophisticated, and as engineers try harder to make them ‘conscious’, moral, ethical and legal issues will arise.”[4]

At that same time, Wendell Wallach, currently with the Yale Interdisciplinary Center for Bioethics, maintained that “emotional intelligence” may be required of “. . . a service robot … and [it] should take appropriate action if it senses that its behavior caused fear or other form of emotional disturbance [in its clients].”[5]


Footnotes

1. Ethics for Machines, http://www.kurzweilai.net/articles/art0218.html?printable=1 (accessed August 3, 2007, 2:10 PM EST).

2. Indirect Mind Uploading: Using AI to Avoid Staying Dead, August 2003, www.paul-almond.com/IndirectMindUploading.htm

3. 39 UCLA L. Rev. 1425 (1992)

4. Connection Science, Vol. 18, No. 4, Dec. 2006.

5. Computer Science Society, Android Science, Stresa, Italy, 2005, pp. 149-159.
