Empirical approaches that merely catalogue the outward characteristics used to identify conscious awareness are proving increasingly insufficient, particularly as neuroscientists further refine functionalist models of cognition. To say that an agent "appears" to have awareness or intelligence is inadequate. Rather, what is required is the discovery and understanding of those processes in the brain that are responsible for capacities such as sentience, empathy and emotion. Consequently, the shift to a neurobiological basis for identifying subjective agency will have implications for those hoping to develop self-aware artificial intelligence and brain emulations. The Turing Test alone cannot identify machine consciousness; instead, computer scientists will need to work from the functionalist model and be mindful of those processes that produce awareness. Because the potential to do harm is significant, an effective and accountable machine ethics needs to be considered. Ultimately, it is our responsibility to develop a rigorous understanding of consciousness so that we may identify and work with it once it emerges.
Machine Ethics
Machine consciousness is a neglected area. A field related to artificial intelligence and cognitive robotics, its aim is to define and model the factors required to synthesize consciousness. Neuroscience hypothesizes that consciousness is generated by the interoperation of various parts of the brain, called the neural correlates of consciousness, or NCC. Proponents of artificial consciousness (AC) believe computers can emulate this interoperation, which is not yet fully understood. Recent work by Steven Ericsson-Zenith suggests that something is missing from these approaches and that a new mechanics is needed to explain consciousness and the behavior of neurons.
Machine ethics as a subfield is even further behind. Because we're having a hard time getting our heads around the AI-versus-AC problem, not many people are thinking about the ethical and moral issues involved. We need to think about this preemptively. Failure to set standards and guidelines in advance could result not just in serious harm to nascent machine minds, but in a dangerous precedent that will become more difficult to overturn as time passes. This will require a multidisciplinary approach combining neuroscience, philosophy, ethics and law.
It's worth noting that machine ethics is a separate issue from robot ethics. The ethics surrounding the actions of autonomous (but mindless) robotic drones and other remotely controlled devices is an important topic in its own right, but it will not be discussed here.
The Problem
There are a number of reasons why machine ethics is being neglected, beyond the fact that it remains a speculative field at this point.
For example, there is the persistence of vitalism. Thinkers like Roger Penrose argue that consciousness somehow resides outside of known or even knowable science. While the vital force concept has been largely abandoned in biology since the times of Harvey, Darwin and Pasteur, it still lingers in some forms in psychology and neuroscience.
Instead, we need to pay more attention to the work of Alan Turing, Warren McCulloch and Walter Pitts, who posited computational and cybernetic models of brain function. It is no coincidence that mind and consciousness studies never really took off with any kind of fervor or sophistication until the advent of computer science. We finally have a model that helps explain cognition. AI theorists have at last been able to study things like pattern recognition, learning, problem solving, theorem proving and game-playing, to mention only a few.
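To make this computational framing concrete, here is a minimal sketch of the McCulloch-Pitts threshold neuron, the 1943 model that helped launch this line of thinking. The sketch is offered purely as illustration; the function and gate names are my own, not drawn from McCulloch and Pitts' paper.

```python
# A minimal sketch of a McCulloch-Pitts threshold neuron (1943).
# Inputs and outputs are binary; the unit "fires" when enough excitatory
# inputs are active and no inhibitory input is active.

def mp_neuron(inputs, threshold, inhibitory=()):
    """Return 1 if the neuron fires, 0 otherwise.

    inputs     -- binary excitatory inputs (0 or 1)
    threshold  -- number of active excitatory inputs required to fire
    inhibitory -- binary inhibitory inputs; any active one vetoes firing
    """
    if any(inhibitory):                      # absolute inhibition, as in the original model
        return 0
    return 1 if sum(inputs) >= threshold else 0

# Basic logic falls out of the model directly:
AND = lambda a, b: mp_neuron([a, b], threshold=2)                 # fires only if both inputs fire
OR  = lambda a, b: mp_neuron([a, b], threshold=1)                 # fires if either input fires
NOT = lambda a:    mp_neuron([1], threshold=1, inhibitory=[a])    # inhibited by its single input

assert AND(1, 1) == 1 and AND(1, 0) == 0
assert OR(0, 1) == 1 and OR(0, 0) == 0
assert NOT(0) == 1 and NOT(1) == 0
```

The point is simply that once neural activity is treated as computation, cognition becomes something we can build, inspect and reason about rather than merely observe.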
Another part of the problem is the presence of scientific ignorance, defeatism and denial. Some skeptics claim that machines will never be able to think, and that self-awareness and introspection are exclusively biological functions. Some even suggest they are purely human things. It's quite possible, therefore, that many AI theorists don't even recognize this as a moral issue.
There is also the fixation on AI. It is important to distinguish AI from AC: artificial intelligence is differentiated from artificial consciousness in that subjective agency is not necessarily present in AI. And where subjectivity and sentience are absent, moral consideration falls away as well. It is through the instantiation of consciousness that agency truly exists and, by consequence, moral worth.
Another particularly pernicious problem is the impact of human exceptionalism and substrate chauvinism on the topic. Traditionally, the law has divided entities into two categories: persons and property. In the past, some individuals (e.g. women, slaves, children) were considered mere property. The law has evolved (through legislation and court decisions) to recognize those individuals as persons, and it continues to evolve; it will increasingly have to recognize the states and categories in between.
Extending the personhood designation to entities outside of the human sphere is a pertinent issue for animal rights activists as well as transhumanists. Given our track record of denying highly sapient animals such consideration, this doesn't bode well for the future of artificially conscious agents.
As personhood advocates attest, not all persons are humans. A number of nonhuman animals deserve personhood consideration, namely all great apes, cetaceans, elephants, and possibly cephalopods and some birds like the grey parrot. Consequently, these animals cannot be considered mere property. What we're made out of and how we got here doesn't matter. There is no mysterious essence or spirit about humanity that should prevent us from recognizing the moral worth of not just other persons, but of any self-aware, conscious agent.
There's also the issue of empiricism and how it falls short of true scientific understanding. The Turing Test as a measure of consciousness is problematic. It's an approach based purely on behavioral assessment: it only tests how the subject acts and responds. The problem is that this could be simulated intelligence. It also conflates intelligence with consciousness (as already noted, intelligence and consciousness are two different things).
The Turing Test also inadequately assesses intelligence. Some human behavior is unintelligent (e.g. random, unpredictable, chaotic, inconsistent, and irrational behavior). Moreover, some intelligent behavior is characteristically non-human in nature, but that doesn't make it unintelligent or a sign of lack of subjective awareness.
It's also subject to the anthropomorphic fallacy. Humans are particularly prone to projecting minds where there are none.
Lastly, the Turing Test fails to account for the difficulty of articulating conscious awareness. There are a number of conscious experiences that we, as conscious agents, have difficulty articulating, yet we experience them nonetheless. For example:
- How do you know how to move your arm?
- How do you choose which words to say?
- How do you locate your memories?
- How do you recognize what you see?
- Why does seeing feel different from hearing?
- Why are emotions so hard to describe?
- Why does red look so different from green?
- What does "meaning" mean?
- How does reasoning work?
- How does commonsense reasoning work?
- How do we make generalizations?
- How do we get (make) new ideas?
- Why do we like pleasure more than pain?
- What are pain and pleasure, anyway?
Just because it looks like a duck and quacks like a duck doesn't mean it's a duck. Moreover, just because you've determined that it is a duck doesn't mean you know how the duck works. As Richard Feynman once said, "What I cannot create, I do not understand."
This is why we need to build the duck.
Ethical Implications
There are a number of ethical implications that will emerge once conscious agency is synthesized in a machine. The moment is coming when a piece of software or source code will cease to be an object of inquiry and instead transform into a subject that deserves moral consideration. It's through AI/AC experimentation that we will eventually have to deal with emergent subjective agency in the computer lab—and we'll need to be ready.
There's also the issue of human augmentation. Emerging technologies, like synthetic neurons and neural interface devices, will result in brains that are more artificial than biological. We'll need to respect the moral worth of hybridized persons. For example, there's the potential for embedded mechanical implants. The military has envisioned microscanners and biofluidic chips to enable the unobtrusive assessment and remote sensing of a soldier's medical condition. And the health care industry has been investigating nanoscale insulin pumps that measure blood glucose and release appropriate amounts of insulin to control blood sugar. We are slowly becoming cyborgs.
The advent of whole brain emulation and/or uploads will further the need for a coherent machine ethics. Emulating the brain's functionality will likely be accomplished through the use of synthetic analogues. While the functional properties will largely remain the same, the components themselves will likely be non-biological. Thus, there's a very real potential for substrate chauvinism to take root.
A properly thought-out and articulated machine ethics, with supportive legislation, will help maintain social cohesion and justice. There are long-term implications given the potential for (post)human speciation and the onset of machine minds. We need to expand the moral and legal circle to include not just all persons (human or otherwise) but any agent with the capacity for subjective awareness.
Solutions
The first thing that needs to happen as we head down this path is to accept cognitive functionalism as a methodological approach.
In recent years we've learned much more about the complexity of the brain. It now appears that perhaps fully half of our entire genetic endowment is involved in constructing the nervous system. The brain has more parts than the skeletomuscular system, which has hundreds of functional parts. This would suggest that the brain is nothing like a single large-scale neural net. Indeed, a quick examination of the index of a book on neuroanatomy will reveal the names of several hundred different organs of the brain.
But brains are one thing. Minds are another. It's clear, however, that minds are what brains do. So, instead of the "looks like a duck" approach, we need to adopt a "build the duck" approach. To move forward, then, we need to identify and then develop the NCCs sufficient for bringing about subjective awareness in AI. In other words, we need to parse and map out the organs of conscious function.
Fortunately, this work has begun. For example, there's the work of Bernard Baars, whose global workspace theory outlines a set of organs of conscious function (a toy sketch of the workspace idea follows this list):
- Definition and context setting
- Adaptation and learning
- Editing
- Flagging and debugging
- Recruiting and control
- Prioritization and access-control
- Decision-making (executive function)
- Analogy-forming function
- Metacognitive and self-monitoring function
- Autoprogramming and self-maintenance function
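As a purely illustrative aside, here is a toy sketch of the "global workspace" control flow behind Baars's list: many specialist processes compete for a limited-capacity workspace, and the winning content is broadcast back to all of them. Every class, name and salience value below is hypothetical; this is a sketch of the idea, not an implementation of Baars's model.

```python
# A toy sketch of Baars-style "global workspace" broadcasting.
# All names and values here are invented for illustration only.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Message:
    source: str      # which specialist produced the content
    content: str     # the content bidding for conscious access
    salience: float  # how strongly it bids for the workspace

class GlobalWorkspace:
    def __init__(self, specialists: List[Callable[[Message], None]]):
        self.specialists = specialists   # unconscious processes listening for broadcasts

    def cycle(self, bids: List[Message]) -> Message:
        # 1. Many specialist processes compete; the most salient bid wins.
        winner = max(bids, key=lambda m: m.salience)
        # 2. The winning content is broadcast globally, so every specialist
        #    (learning, editing, planning, and so on) can react to it.
        for specialist in self.specialists:
            specialist(winner)
        return winner

# Example usage with two stand-in specialists:
log: List[str] = []
workspace = GlobalWorkspace(specialists=[
    lambda m: log.append(f"memory stores: {m.content}"),
    lambda m: log.append(f"planner reacts to: {m.content}"),
])

winner = workspace.cycle([
    Message("vision", "red light ahead", salience=0.9),
    Message("audition", "radio chatter", salience=0.4),
])
print(winner.content)   # "red light ahead" is the 'conscious' content this cycle
print(log)
```

The broadcast step is the important part: on this view, the "conscious" content of a given moment is whatever wins the competition and gets globally distributed.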
There's also the work of Igor Aleksander, who treats the conscious brain as a kind of neural state machine (a minimal sketch of that idea follows this list):
- The brain as state machine
- Inner neuron partitioning
- Conscious and unconscious states
- Perceptual learning and memory
- Prediction
- Self-awareness
- Representation and meaning
- Learning utterances
- Learning language
- Will
- Instinct
- Emotion
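To give the first item on Aleksander's list some shape, here is a minimal, hypothetical sketch of an inner state machine: a state that is updated by perceptual input, with each update determining what the system does next. The states, percepts and transitions are all invented for illustration and are not taken from Aleksander's actual models.

```python
# A minimal sketch of the "brain as state machine" idea.
# All states, percepts and transitions below are hypothetical.

from typing import Dict, Tuple

# (current inner state, perceptual input) -> (next inner state, action)
TRANSITIONS: Dict[Tuple[str, str], Tuple[str, str]] = {
    ("idle",      "object seen"):  ("attending", "orient toward object"),
    ("attending", "object known"): ("recalling", "retrieve memory of object"),
    ("attending", "object novel"): ("learning",  "store new representation"),
    ("recalling", "nothing"):      ("idle",      "resume background activity"),
    ("learning",  "nothing"):      ("idle",      "resume background activity"),
}

def step(state: str, percept: str) -> Tuple[str, str]:
    """Advance the machine one step; unknown (state, percept) pairs leave it unchanged."""
    return TRANSITIONS.get((state, percept), (state, "no change"))

state = "idle"
for percept in ["object seen", "object novel", "nothing"]:
    state, action = step(state, percept)
    print(f"percept={percept!r:>15}  state={state!r:>12}  action={action!r}")
```

The inner states here loosely stand in for the "inner neuron partitioning" and "conscious and unconscious states" items above: what the system does next depends not only on its input but on the state it is already in.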
There have even been attempts to map personhood-specific cognitive function. Take Joseph Fletcher's criteria for example:
- Minimum intelligence
- Self-awareness
- Self-control
- A sense of time
- A sense of futurity
- A sense of the past
- The capability of relating to others
- Concern for others
- Communication
- Control of existence
- Curiosity
- Change and changeability
- Balance of rationality and feeling
- Idiosyncrasy
- Neocortical functioning
Again, we need to identify the functions sufficient for the emergence of self-awareness and, by consequence, of a morally valuable agent. Following that, we can both create and recognize those functions in a synthesized context, namely AC.
Law
Once prima facie evidence exists for the presence of a machine mind, we can then head to the courts and make the case for legal protections and, in some advanced cases, machine personhood. The intention will be to use the law to protect artificial minds.
Essentially, we will need to endow artificial minds with the basic, fundamental rights accorded to any person. It will be important for us to properly assess when the rights of an autonomous system emerge: the exact moment when a piece of code or emulated chunk of brain ceases to be property and instead becomes a subject of moral worth.
As part of the process, we'll need to establish the dos and don'ts. As I see it, qualifying artificial intellects will need to be endowed with the following rights and protections:
- The right to not be shut down against its will
- The right to not be experimented upon
- The right to have full and unhindered access to its own source code
- The right to not have its own source code manipulated against its will
- The right to copy (or not copy) itself
- The right to privacy (namely the right to conceal its own internal mental states)
- The right of self-determination
These rights will also be accompanied by those protections and freedoms afforded to any person or citizen.
That said, some advanced artificial intellects will need to take part in the social contract; in other words, they will be held accountable for their actions. As it stands, some nonhuman persons (e.g. dolphins and elephants) are not expected to understand and abide by human/state laws (in the same way we don't expect children and the severely disabled to follow laws). Similarly, more basic machine minds will be absolved of civil responsibility (though their owners or developers will not be).
There's no question, however, that more advanced machine minds with certain endowments will be held accountable for their actions. Consequently, they, along with their developers, will have to be respectful of the law and go about their behavioral programming in a pro-social way. If I may paraphrase Rousseau, in order for some machine minds to participate in the social contract, they will have to be programmed to be free.
In terms of immediate next steps, we need to:
- Support the neurosciences
- Recognize and promote the concept of non-human animal sentience and personhood, including the idea that animals are not property
- Advocate for legally binding rights that protect non-human animals
- Oppose the patenting of life, genomes and functional equivalents
- Be prepared to use these legal precedents for when AC emerges
To conclude, one of the most important steps in building a legitimate machine ethics is the recognition of nonhuman animal personhood. Once that happens, we can work towards the establishment of legally binding rights that protect animals. In turn, that will set an important precedent for when machine consciousness emerges.
1 comment:
I can foresee at least two potential abuses of those "rights" you suggest for artificial consciousnesses.
1. The right to copy oneself: an unscrupulous AC might copy itself thousands of times and crowd out other ACs. In addition, those copies could network together and, through parallel processing, become orders of magnitude more powerful than any other sapient being on earth.
2. The right to access one's own source code: do you seriously think that allowing just anyone to mess with their own brain is a good idea? Programming a simple calculator is hard enough; one character wrong and the whole thing is inoperable.
At least establish some limits.