Volume 2, Issue 1
1st Quarter, 2007


Artificial Intelligence as a Legal Person

David Calverley, Esq.

Page 2 of 4


Image 7 - Intentionality 

Then he goes on to say, "My suspicion is that judges and juries would be rather impatient with the metaphysical argument that AIs cannot really have intentionality."

From that quote we can unpack a bit of information. What he's really talking about there, I believe, is a distinction in the theory of mind between phenomenal consciousness and functional consciousness. What he's really saying is that you don't have to feel what it's like in order to have functional consciousness for a legal purpose. In other words, if you walk like a duck and quack like a duck, you're going to be treated like a duck whether or not you feel like a duck. That's a pretty broad dismissal of an entire range of theories in the theory of mind, but we don't have a lot of time.

The bottom line is that I think the functionality argument becomes critical from the legal perspective. What the end result of the action is will be looked at from a functional standpoint in most cases by judges and the average jury. There is obviously a lot more to be said. One of the problems is that law can be viewed as a cluster concept, where there aren't any really clear definitions of what law, or a legal person, might be.

In some cases the courts have said yes, it is a person. In other cases they've said no. So what are we really trying to do? We're trying to begin the process, and intentionality is one factor we must address.

I've also drawn on the concept of autonomy because I think it's another one of the critical aspects that we need to really begin to understand in the context of personhood.

Autonomy has a variety of meanings. There is the one a lot of us are familiar with if we deal in bioethics, and that is the autonomy described by Beauchamp and Childress[1]; autonomy is one of the four principles they rely upon. That autonomy derives in large part from a concept of liberal individualism and is one of the underpinnings of the western judicial system. That is one meaning of autonomy.


Image 8 - Autonomy 

In computers there are other meanings. For example, Hexmoor, Castelfranchi and Falcone[2] point to one meaning in human-agent interaction, where the agent is expected to acquire and conform to the preferences set by the human operator. In other words, the device is autonomous when it faithfully carries out the human's preferences and performs actions accordingly.

Another definition is where there is negotiation between agents. The agent is supposed to use its knowledge, its intelligence, and its ability to exert a degree of discretion within a negotiation scenario.

A third definition, by Margaret Boden[3], is that an agent is autonomous if it can be viewed as manipulating its own internal capabilities, its own liberties, and what it allows itself to experience about the world as a whole. I think this definition of autonomy captures what we've been talking about, and that is Frankfurt's second-order desire and volition type of autonomy.

Somebody asked whether other animals have this kind of autonomy, and Dan Dennett[4] uses the example of his dog, claiming that it exhibits second-order intentionality.


Image 9 - Autonomy 2 

If I remember the story correctly: you are sitting in your easy chair at night, and the dog, who has been trained for the last five years not to get up on chairs (because it's not polite and dogs don't do that), goes to the front door and starts to bark and scratch at it. You think the dog has to go for a walk (it's that time of night), so you get up and walk to the front door, and the dog runs around and jumps into your chair. The argument Dennett makes is that this is second-order intentionality.

The dog knew that if it scratched at the door and barked, you would get up out of the chair, and then it could run around, hop into the chair, and have a nice warm chair to sit in for at least 30 seconds. So he claims that second-order intentionality can be exhibited by other types of entities.

The question that I think is still open, and which really brings us to a conclusion, is how can we fit this together? What can we create as a working hypothesis? I want to discuss another aspect that appears at the beginning of the paper and really sets the foundation. I would argue that we need to make a distinction between humans and persons: that the term "human" be restricted to Homo sapiens, and that "person" be restricted to legal creations, which may include Homo sapiens but are not necessarily exclusive to them.


Footnotes

[1] Beauchamp, T. L., & Childress, J. F. (2001). Principles of Biomedical Ethics (5th ed.). New York: Oxford University Press.

[2] Hexmoor, H., Castelfranchi, C., & Falcone, R. (2003). A Prospectus on Agent Autonomy. In H. Hexmoor (Ed.), Agent Autonomy. Boston: Kluwer Academic Publishers.

[3] Boden, M. (1996). Autonomy and Artificiality. In M. Boden (Ed.), The Philosophy of Artificial Life. Oxford: Oxford University Press.

[4] Dennett, D. (1978). Brainstorms. Montgomery, VT: Bradford Books.
