Tuesday, December 15, 2015

Asimov and Those Rules that Don't Work

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Seems legitimate enough, right? These rules would seem to ensure that the machine in question puts humans first in everything it does. The problem lies in the definition of terms. To even hope to implement these three rules, a machine first needs a definition of injure, then human, then harm. How do we provide those? The answer is that we can't.
If we begin with the word "human," then we have a few interesting things to consider. A video on the YouTube channel Computerphile calls them "edge cases". If I were to point at you, it is clear that you are a human, and most people in any given room with a robot are also easily called people. But do dead people count as human? For most applications they probably don't, but that means you, the developer, must define the word "dead" too. What about unborn people? It's hard enough to get a room full of people to agree on that one as it is. So these rules, which at first looked so appealing, cannot work as guidelines without first defining answers for almost every ethical question.
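To make the point concrete, here is a minimal sketch of what a developer would run into trying to write an `is_human` check. Every name here (`is_human`, `UndefinedConceptError`, the dictionary keys) is invented for illustration, not taken from any real system; the point is that each edge case forces the programmer to make an ethical decision before the code can run at all.

```python
class UndefinedConceptError(Exception):
    """Raised when the program reaches a concept nobody has defined."""


def is_human(entity):
    """Decide whether `entity` counts as 'human' -- by whose definition?"""
    if entity.get("species") != "Homo sapiens":
        return False
    if not entity.get("alive", True):
        # Do the dead count? Now we must also define "dead".
        raise UndefinedConceptError("is a dead person still 'human'?")
    if not entity.get("born", True):
        # The unborn? A room full of people won't agree on this one.
        raise UndefinedConceptError("does an unborn person count?")
    return True
```

The easy cases return cleanly, but the edge cases can only raise an error, because the answer is an ethical choice, not a computation.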
Of course, these rules are never seriously considered by developers of artificial intelligence nowadays. The reason is that they are simply not useful. I am interested in them for a different reason than their application: I am interested in their exigence. These rules were created, essentially, as a plot device: a means by which action could be driven along. Unfortunately for Mr. Asimov's reputation, they were not written from the point of view of a computer scientist, which is why they read like an elf-magic explanation of an idea this advanced.

So what are we left with from these rules? What I take away from them is that the ideas they present are just too advanced. People assume that the intuitive understandings they hold will simply cross the sort of blood-brain barrier, if you will, that separates the mind of an AI from their own. We like to think in poetry, but AI, among other things, tells us that we cannot make poetic assumptions about something this deeply rooted in logic and computer science. Everything, absolutely everything, must be specified in a computer program. If a concept is not specifically defined, then it simply will not exist in the mind of the machine. Sorry, Isaac; better luck next time.
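A quick way to see this "does not exist in the mind of the machine" claim literally is to transcribe the First Law into code without defining its terms. The function names below (`first_law`, `injures`, `allows_harm`) are my own invention for this sketch; the program is well-formed English-to-code transcription, yet it cannot take a single step, because the concepts it leans on were never defined anywhere.

```python
def first_law(robot_action, human):
    # Asimov's First Law, transcribed literally. Neither "injures" nor
    # "allows_harm" is defined anywhere in this program, so the machine
    # has no idea what they mean.
    if injures(robot_action, human) or allows_harm(robot_action, human):
        return "forbidden"
    return "permitted"


try:
    first_law("wave hello", "Alice")
except NameError as error:
    # Python says exactly what the essay says: the concept is not defined,
    # so for the machine it does not exist.
    print(error)
```

The poetry of the law survives transcription; the meaning does not.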

Works Cited
"Do We Need Asimov's Laws? | MIT Technology Review." MIT Technology Review. MIT Technology Review, 16 May 2014. Web. 15 Dec. 2015. <http://www.technologyreview.com/view/527336/do-we-need-asimovs-laws/>.
"Isaac Asimov's 'Three Laws of Robotics.'" Auburn University, n.d. Web. 15 Dec. 2015. <http://www.auburn.edu/~vestmon/robotics.html>.
Singer, Peter W. "Isaac Asimov's Laws of Robotics Are Wrong." The Brookings Institution. The Brookings Institution, 18 May 2009. Web. 15 Dec. 2015. <http://www.brookings.edu/research/opinions/2009/05/18-robots-singer>.
