May 21, 2011

New Scientist asks: When should we give rights to robots?

From the New Scientist article, "When should we give rights to robots?":

A more basic issue is that there is no agreed definition of consciousness. Perhaps in practical terms, a simpler answer to the question of machine rights might come from the way people treat them. We should put our faith in our own ability to detect consciousness, rather than look to philosophical discourse.

There is one obvious shortcoming of this approach: we will probably sense sentience before it is truly deserved because of our remarkable tendency to anthropomorphise. After all, we are already smitten by today's relatively dumb robots. Some dress up their robot vacuum cleaners. Others take robots fishing or go so far as to mourn their loss on the battlefield.

Even so, popular sentiment towards machines and robots will give a vivid feel for the degree of their sophistication. Franklin himself admits that even he referred to his original creation as "she", though he "did not feel at all bad" when he turned "her" off. But when he and a significant number of others do feel a pang of guilt as they flick the off switch, we might well have passed a milestone in artificial cognition: the birth of a machine that deserves rights.
The weakness in trusting our instincts is not, as the article argues, that we are likely to anthropomorphize machines and give them rights too soon, but, on the contrary, that we humans have a track record of *not* recognizing the rights of others for far too long. Many people even now don't wholly accept that all humans are equal, and only some of us have begun to consider what it really means to give rights to non-human people.
I'm not convinced by the editorialist's concern at all. (And I think a glance at the comments on the NS page bears me out.)