Q: You created the term "democratic transhumanism," so how do you define it?
JH: The term "democratic transhumanism" names a biopolitical stance that combines socially liberal or libertarian views (internationalism, secularism, free speech, and individual freedom), economically egalitarian views (support for regulation, redistribution, and social welfare), and an openness to the transhuman benefits that science and technology can provide, such as longer lives and expanded abilities. It was an attempt to distinguish the views of most transhumanists, who lean Left, from the highly visible minority of Silicon Valley-centered libertarian transhumanists on the one hand, and from the Left bioconservatives on the other.
In the last six or seven years the phrase has been supplanted by the descriptor "technoprogressive," which describes the same basic set of Enlightenment values and policy proposals:
Human enhancement technologies, especially anti-aging therapies, should be a priority of publicly financed basic research, be well regulated for safety, and be included in programs of universal health care.
Structural unemployment resulting from automation and globalization needs to be ameliorated by a defense of the social safety net and the creation of universal basic income guarantees.
Global catastrophic risks, both natural and man-made, require new global programs of research, regulation and preparedness.
Legal and political protections need to be expanded to include all self-aware persons, including the great apes, cetaceans, enhanced animals and humans, machine minds, and hybrids of animals, humans and machines.
Alliances need to be built between technoprogressives and other progressive movements around sustainable development, global peace and security, and civil and political rights, on the principle that access to safe enabling technologies is fundamental to a better future.
Q: In simple terms, what is "personhood theory"? How do you think it is, or will be, applied to A.I.?
In Enlightenment thought, "persons" are beings aware of themselves, with interests that they enact over time through conscious life plans. Personhood is a threshold that confers some rights, while there are levels of rights both above and below it. Society is not obliged to treat beings without personhood, such as most animals, human embryos, and humans who are permanently unconscious, as having a fundamental right to exist in themselves, a "right to life." To the extent that non-persons can experience pain, however, we are obliged to minimize their pain. Above personhood, we oblige humans to pass thresholds of age, training, testing, and licensure before they can exercise other rights, such as driving a car, owning a weapon, or prescribing medicine. Children have basic personhood rights, but the full adult persons who have custody over them have an obligation to protect and nurture children toward the fullest possible possession of mature personhood rights.
Whom to include in the sphere of persons is a matter of debate, but at the IEET we generally believe that apes and cetaceans meet the threshold. Beyond the higher mammals, however, the sphere of potential kinds of minds is enormous, and it is very likely that some enhanced animals, post-humans, and machine minds will possess only a subset of the traits that we consider necessary for conferring personhood status. For instance, a creature might possess a high level of cognition and communication, but no sense of self-awareness or separate egoistic interests. In fact, when designing AI we will probably attempt to avoid creating creatures with interests separate from our own, since they could be quite dangerous. Post-humans, meanwhile, may experiment with cognitive capacities in ways that sometimes take them outside the sphere of "persons" with political claims to rights, such as by suppressing capacities for empathy, memory, or identity.
Q: What ethical obligations are involved in the development of A.I.?
We first have an ethical obligation to all present and future persons to ensure that the creation of machine intelligence enhances their life options, and doesn't diminish or extinguish them. The most extreme version of this dilemma is posed by the possibility of a hostile superintelligence, which could be an existential risk to life as we understand it. Short of that, the simple expansion of automation and robotics will likely eliminate most forms of human labor, which could result in widespread poverty, starvation, and death, and the return of a feudal order. Conversely, a well-regulated transition to an automated future with a basic income guarantee could create an egalitarian society in which all humans benefit from leisure.
We also have ethical obligations in relation to the specific kinds of AI we will create. As I mentioned above, we should avoid creating self-willed machine minds because of the dangers they might pose to the humans they are intended to serve. But we also have an obligation to the machine minds themselves to avoid making them self-aware. Our ability to design self-aware creatures with desires that could be thwarted by slavery, or, perhaps even worse, to design creatures who desire only to serve humans and have no will toward self-development, is very troubling. If self-willed, self-aware machine minds do get created, or emerge naturally, and are not a catastrophic threat, then we have an obligation to determine which ones can fit into the social order as rights-bearing citizens.
Q: What direction do you see technology headed - robots as tools or robots as beings?
It partly depends on whether self-aware machine minds are first created through brain-machine interfaces, brain emulation, and brain "uploading," or are designed de novo in machines, or, worse, emerge spontaneously. The closer the connection machine minds have to human brains, the more likely they are to retain the characteristics of personhood that we can recognize and work with as fellow citizens. But a mind that emerges more purely from silicon is unlikely to have anything in common with human minds, and is more likely to be either a tool without a will of its own or a being that we can't communicate or co-exist with.