August 16, 2010

Bring on the 'moral' Turing test

Tony Beavers brings this fascinating paper to our attention, "Moral Machines and the Threat of Ethical Nihilism." Excerpt from the paper:

In 2000, Allen, Varner and Zinser addressed the possibility of a Moral Turing Test (MTT) to judge the success of an automated moral agent (AMA), a theme that is repeated in Wallach and Allen (2009). While the authors are careful to note that a language-only test based on moral justifications, or reasons, would be inadequate, they consider a test based on moral behavior. “One way to shift the focus from reasons to actions,” they write, “might be to restrict the information available to the human judge in some way. Suppose the human judge in the MTT is provided with descriptions of actual, morally significant actions of a human and an AMA, purged of all references that would identify the agents. If the judge correctly identifies the machine at a level above chance, then the machine has failed the test” (206). While they are careful to note that indistinguishability between human and automated agents might set the bar for passing the test too low, such a test by its very nature decides the morality of an agent on the basis of appearances. Since there seems to be little else we could use to determine the success of an AMA, we may rightfully ask whether, analogous to the term "thinking" in other contexts, the term "moral" is headed for redescription here. Indeed, Wallach and Allen’s survey of the problem space of machine ethics forces the question of whether in fifty years (or less) one will be able to speak of a machine as being moral without expecting to be contradicted. Supposing the answer were yes, why might this invite concern? What is at stake? How might such a redescription of the term "moral" come about?

Link.
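As a side note on the blind version of the MTT described in the excerpt: scoring it comes down to a simple statistical question about whether the judge can pick out the machine at a level above chance. The sketch below is our own illustration, not anything from the paper or from Wallach and Allen; the function names, the number of trials, and the 0.05 significance threshold are assumptions made purely for the example.

```python
# Minimal sketch of scoring a blind Moral Turing Test (illustrative assumptions only):
# a judge sees pairs of anonymized, morally significant action descriptions and
# guesses which agent is the machine. The machine "fails" if the judge's
# identification rate is above chance under a one-sided binomial test.

from math import comb


def binomial_p_above_chance(hits: int, trials: int) -> float:
    """One-sided p-value for >= `hits` correct identifications out of
    `trials` if the judge were guessing at chance (p = 0.5)."""
    return sum(comb(trials, k) for k in range(hits, trials + 1)) / 2 ** trials


def machine_fails_mtt(judge_guesses, ground_truth, alpha=0.05) -> bool:
    """True if the judge identifies the machine at a level above chance.

    judge_guesses / ground_truth: per-trial labels ('A' or 'B') marking which
    anonymized agent the judge thinks is (or actually is) the machine.
    """
    hits = sum(g == t for g, t in zip(judge_guesses, ground_truth))
    return binomial_p_above_chance(hits, len(ground_truth)) < alpha


# Example: 20 paired descriptions; the judge spots the machine 16 times.
guesses = ['A'] * 16 + ['B'] * 4
truth = ['A'] * 20
print(machine_fails_mtt(guesses, truth))  # True: identified well above chance
```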
The more important question is... Moral by whose standards?
Non-sociopaths have some inherent sense of what is wrong; test for the very basics.
The meaning of morality can stand to be clarified regardless of the need to decide whether AIs have it. Yes, arbitrary technical working definitions can come to overshadow older meanings, but the reason there's a danger of that is that we're so unsure of what we mean by moral anyway.
In any case, a Turing-like test is by no means Wallach and Allen's only approach. Their book will probably deepen rather than oversimplify the question for most readers. But they do stress a pragmatic urgency. Morality is something that actually matters, not just grist for nostalgia and pondering.
You can argue that non-sociopaths have a socialized sense of what is right or wrong.
Unless you subscribe to a form of absolute morality (again, imposed or constructed by what or whom?), there is no inherent morality, and you run into the question "by whose standards".
Past a certain point, using the Turing Test approach is as questionable for morality as it is for intelligence.
What happens when you can spot the machine every time because it is acting too ethically, or exhibiting greater-than-human intelligence?
Humans make pretty poor examples for modeling ethical behavior. We barely scrape by as exhibits of sentient beings.