July 15, 2010
Should we clone Neanderthals?
Zach Zorich of Archaeology magazine explores the scientific, legal, and ethical obstacles to cloning Neanderthals:
The ultimate goal of studying human evolution is to better understand the human race. The opportunity to meet a Neanderthal and see firsthand our common but separate humanity seems, on the surface, too good to pass up. But what if the thing we learned from cloning a Neanderthal is that our curiosity is greater than our compassion? Would there be enough scientific benefit to make it worth the risks? "I'd rather not be on record saying there would," Holliday told me, laughing at the question. "I mean, come on, of course I'd like to see a cloned Neanderthal, but my desire to see a cloned Neanderthal and the little bit of information we would get out of it...I don't think it would be worth the obvious problems." Hublin takes a harder line. "We are not Frankenstein doctors who use human genes to create creatures just to see how they work." Noonan agrees, "If your experiment succeeds and you generate a Neanderthal who talks, you have violated every ethical rule we have," he says, "and if your experiment fails...well. It's a lose-lose." Other scientists think there may be circumstances that could justify Neanderthal cloning.

Link.
"If we could really do it and we know we are doing it right, I'm actually for it," says Lahn. "Not to understate the problem of that person living in an environment where they might not fit in. So, if we could also create their habitat and create a bunch of them, that would be a different story."
"We could learn a lot more from a living adult Neanderthal than we could from cell cultures," says Church. Special arrangements would have to be made to create a place for a cloned Neanderthal to live and pursue the life he or she would want, he says. The clone would also have to have a peer group, which would mean creating several clones, if not a whole colony. According to Church, studying those Neanderthals, with their consent, would have the potential to cure diseases and save lives. The Neanderthals' differently shaped brains might give them a different way of thinking that would be useful in problem-solving. They would also expand humanity's genetic diversity, helping protect our genus from future extinction. "Just saying 'no' is not necessarily the safest or most moral path," he says. "It is a very risky decision to do nothing."
Hawks believes the barriers to Neanderthal cloning will come down. "We are going to bring back the mammoth...the impetus against doing Neanderthal because it is too weird is going to go away." He doesn't think creating a Neanderthal clone is ethical science, but points out that there are always people who are willing to overlook the ethics. "In the end," Hawks says, "we are going to have a cloned Neanderthal, I'm just sure of it."
Gelernter's 'dream logic' and the quest for artificial intelligence
Internet pioneer David Gelernter explores the ethereal fuzziness of cognition in his Edge.org article, "Dream-logic, the internet and artificial consciousness." He's right about the imperfect and dream-like nature of cognition and conscious thought; AI theorists should certainly take notice.
But Gelernter starts to go off the rails toward the conclusion of the essay. His claim that an artificial consciousness would be nothing more than a zombie mind is unconvincing, as is his contention that emotional capacities are a necessary component of the cognitive spectrum. There is no reason to believe, from a functionalist perspective, that the neural correlates of consciousness cannot take root in an alternative, non-biological medium. And there are examples of fully conscious human beings without the ability to experience emotions.

Gelernter, like a lot of AI theorists, needs to brush up on his neuroscience.
At any rate, here's an excerpt from the article; you can judge the efficacy of his arguments for yourself:
As far as we know, there is no way to achieve consciousness on a computer or any collection of computers. However — and this is the interesting (or dangerous) part — the cognitive spectrum, once we understand its operation and fill in the details, is a guide to the construction of simulated or artificial thought. We can build software models of Consciousness and Memory, and then set them in rhythmic motion.
The result would be a computer that seems to think. It would be a zombie (a word philosophers have borrowed from science fiction and movies): the computer would have no inner mental world; would in fact be unconscious. But in practical terms, that would make no difference. The computer would ponder, converse and solve problems just as a man would. And we would have achieved artificial or simulated thought, "artificial intelligence."
But first there are formidable technical problems. For example: there can be no cognitive spectrum without emotion. Emotion becomes an increasingly important bridge between thoughts as focus drops and re-experiencing replaces recall. Computers have always seemed like good models of the human brain; in some very broad sense, both the digital computer and the brain are information processors. But emotions are produced by brain and body working together. When you feel happy, your body feels a certain way; your mind notices; and the resonance between body and mind produces an emotion. "I say again, that the body makes the mind" (John Donne).
The natural correspondence between computer and brain doesn't hold between computer and body. Yet artificial thought will require a software model of the body, in order to produce a good model of emotion, which is necessary to artificial thought. In other words, artificial thought requires artificial emotions, and simulated emotions are a big problem in themselves. (The solution will probably take the form of software that is "trained" to imitate the emotional responses of a particular human subject.)
One day all these problems will be solved; artificial thought will be achieved. Even then, an artificially intelligent computer will experience nothing and be aware of nothing. It will say "that makes me happy," but it won't feel happy. Still: it will act as if it did. It will act like an intelligent human being.
And then what?
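To make Gelernter's "cognitive spectrum" a bit more concrete, here's a toy sketch of the kind of model he's gesturing at (emphasis on toy). Everything below, from the memory structure and tone vectors to the focus parameter, is my own illustrative invention, not anything from the essay: as a "focus" dial drops, retrieval shifts from directed recall toward emotion-bridged association, which is precisely the drift toward dream logic he describes.

import random

# A toy "memory": each item pairs some content with an emotional tone vector.
# All structures and numbers here are hypothetical illustrations.
MEMORIES = [
    {"content": "yesterday's staff meeting",    "tone": (0.2, 0.7)},  # (valence, arousal)
    {"content": "a childhood beach holiday",    "tone": (0.9, 0.4)},
    {"content": "an argument with a friend",    "tone": (0.1, 0.9)},
    {"content": "finishing a difficult project", "tone": (0.8, 0.8)},
]

def tone_distance(a, b):
    """Crude emotional similarity: Euclidean distance between tone vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def next_thought(current, focus):
    """At high focus, follow directed recall (a stand-in for deliberate,
    content-driven search). As focus drops, hop to whichever memory *feels*
    closest, letting emotion bridge otherwise unrelated thoughts."""
    candidates = [m for m in MEMORIES if m is not current]
    if random.random() < focus:
        return candidates[0]  # directed recall
    return min(candidates, key=lambda m: tone_distance(m["tone"], current["tone"]))

thought = MEMORIES[0]
for focus in (0.9, 0.6, 0.3, 0.1):  # attention winding down toward sleep
    thought = next_thought(thought, focus)
    print(f"focus={focus:.1f} -> {thought['content']}")

Run it a few times and the "mind" wanders differently each run. Whether piling sophistication onto models like this ever amounts to thought, let alone experience, is exactly the question Gelernter answers in the negative.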
July 12, 2010
Latest piece: Observation Selection Effect [art]
Munkittrick joins Discover Mag's Science Not Fiction
IEET colleague and Pop Transhumanism blogger Kyle Munkittrick has joined the Discover Magazine empire as a contributor to their Science Not Fiction blog. Way to go, Kyle!
Wisdom: From Philosophy to Neuroscience by Stephen S. Hall [book]
Stephen S. Hall's new book, Wisdom: From Philosophy to Neuroscience, looks interesting.
Promotional blurbage:
A compelling investigation into one of our most coveted and cherished ideals, and the efforts of modern science to penetrate the mysterious nature of this timeless virtue.
We all recognize wisdom, but defining it is more elusive. In this fascinating journey from philosophy to science, Stephen S. Hall gives us a dramatic history of wisdom, from its sudden emergence in four different locations (Greece, China, Israel, and India) in the fifth century B.C. to its modern manifestations in education, politics, and the workplace. We learn how wisdom became the provenance of philosophy and religion through its embodiment in individuals such as Buddha, Confucius, and Jesus; how it has consistently been a catalyst for social change; and how revelatory work in the last fifty years by psychologists, economists, and neuroscientists has begun to shed light on the biology of cognitive traits long associated with wisdom—and, in doing so, begun to suggest how we might cultivate it.
Hall explores the neural mechanisms for wise decision making; the conflict between the emotional and cognitive parts of the brain; the development of compassion, humility, and empathy; the effect of adversity and the impact of early-life stress on the development of wisdom; and how we can learn to optimize our future choices and future selves.
Hall’s bracing exploration of the science of wisdom allows us to see this ancient virtue with fresh eyes, yet also makes clear that despite modern science’s most powerful efforts, wisdom continues to elude easy understanding.
Hall's book is part of a larger trend that, along with happiness studies, is starting to enter (or is that re-enter?) mainstream academic and clinical realms of inquiry.

A. C. Grayling has penned an insightful and critical review of Hall's book:
First, though, one must point to another and quite general difficulty with contemporary research in the social and neurosciences, namely, a pervasive mistake about the nature of mind. Minds are not brains. Please note that I do not intend anything non-materialistic by this remark; minds are not some ethereal spiritual stuff a la Descartes. What I mean is that while each of us has his own brain, the mind that each of us has is the product of more than that brain; it is in important part the result of the social interaction with other brains. As essentially social animals, humans are nodes in complex networks from which their mental lives derive most of their content. A single mind is, accordingly, the result of interaction between many brains, and this is not something that shows up on a fMRI scan. The historical, social, educational, and philosophical dimensions of the constitution of individual character and sensibility are vastly more than the electrochemistry of brain matter by itself. Neuroscience is an exciting and fascinating endeavour which is teaching us a great deal about brains and the way some aspects of mind are instantiated in them, but by definition it cannot (and I don't for a moment suppose that it claims to) teach us even most of what we would like to know about minds and mental life.
I think the Yale psychologist Paul Bloom put his finger on the nub of the issue in the March 25th number of Nature where he comments on neuropsychological investigation into the related matter of morality. Neuroscience is pushing us in the direction of saying that our moral sentiments are hard-wired, rooted in basic reactions of disgust and pleasure. Bloom questions this by the simple expedient of reminding us that morality changes. He points out that "contemporary readers of Nature, for example, have different beliefs about the rights of women, racial minorities and homosexuals compared with readers in the late 1800s, and different intuitions about the morality of practices such as slavery, child labour and the abuse of animals for public entertainment. Rational deliberation and debate have played a large part in this development." As Bloom notes, widening circles of contacts with other people and societies through a globalizing world plays a part in this, but it is not the whole story: for example, we give our money and blood to help strangers on the other side of the world. "What is missing, I believe," says Bloom, and I agree with him, "is an understanding of the role of deliberate persuasion."
Contemporary psychology, and especially neuropsychology, ignores this huge dimension of the debate not through inattention but because it falls well outside its scope. This is another facet of the point that mind is a social entity, of which it does not too far strain sense to say that any individual mind is the product of a community of brains.
July 10, 2010
Intersexed athlete Caster Semenya given green light to compete
South African sprinter Caster Semenya has been given approval by the International Association of Athletics Federations (IAAF) to race as a female. The 19-year-old runner is an intersexed individual with internal testes that produce testosterone at rates considerably above the average for women. After a gender test in September 2009, the IAAF decided to ban her from racing, citing a biological advantage that was not of Semenya's doing. Now, after conducting an investigation, the Federation has passed a ruling allowing Semenya to race again.
This is a very interesting, if not perplexing, decision, and I wonder how it's going to play against the International Olympic Committee's (IOC) recent decision calling for intersexed athletes to have a medical procedure in order to qualify for the Olympics. By all accounts, Semenya has not had the procedure, or if she has, is not disclosing that information to the public. Moreover, the results of her most recent gender test are not being disclosed.
Very fishy.
So why did the IAAF suddenly change its mind, and why are they not giving any reasons? Did they feel pressured by the public? Is this a case of political correctness on the track? Or did Semenya have the medical procedure? And if so, why not disclose it? Or would that open a huge can of worms -- and a possible charge of a human rights violation?
Let's assume Semenya did not have the procedure. Has the IAAF therefore decided that intersexed persons are cleared to compete against unambiguously gendered individuals? And what about her competitors? I can't imagine that they're very happy right now. This would seem to be a dangerous and ill-conceived precedent. Semenya is not the only intersexed athlete currently competing in Olympic sports. What about them?
I have a feeling this story is far from over.
Video games as art?
The interwebs are angry because Roger Ebert, a film critic who knows virtually nothing about video games, is arguing that video games will never be considered an artform. Grant Tavinor of Kotaku takes a more nuanced approach to the question and uses the popular BioShock video game to make his case:
Finally, and this is my judgment, BioShock is the result of the intention to make an artwork. Intentions can be slippery things, but it seems evident enough in the game that it is intended to be something more than just a game: BioShock is intended to have the features listed above (they are not accidental) and it is intended to have these features as a matter of its being art.
Hence, BioShock seems an entirely natural candidate for art status. It has, in some form, all but one of the criteria. The one it lacks - belonging to an established artistic form - it lacks because of the very newness of video games. BioShock is not necessarily a masterpiece (the last act is problematic) but this is beside the point; the vast majority of art works are not masterpieces. Surely it would be unfair to deny BioShock art status when it has so many of the qualities that in other uncontested art works accounts for their art status?

I agree that part of the problem is the nascent status of video-games-as-art. Pac-Man never attempted to be art; BioShock clearly does. It's still early days.
Moreover, even if you dismiss current games as lacking any artistic merit, Ebert's claim that they will never be legitimate artforms is suspect. Never? Really? Not even when augmented reality enters the picture? Or completely immersive virtual reality?
Even more profoundly, a number of years ago I speculated about the potential for directly altering subjective and emotional experience and how mental manipulation could become an art form. In the article, Working the Conscious Canvas, I wrote:
It's conceivable that predetermined sets of emotional experiences could be a future art form. Artists might, for example, manipulate emotions alongside established art forms, a la A Clockwork Orange - but certainly not for the same questionable ends.
For example, imagine listening to Beethoven's "Ode to Joy" or "Moonlight Sonata" while having your emotional centers manipulated in synch with the music's mood and tone. You'd be compelled to feel joy when the music is joyful, sadness when the music is sad.
The same could be done with film. In fact, last century, director Orson Welles, who was greatly influenced by German expressionistic filmmaking, directed movies in which the subjective expression of inner experiences was emphasized (Touch of Evil, for example). In the 1960s, Alfred Hitchcock, also a student of expressionism, went a step further by creating and editing sequences in a way that was synchronized with subjective perception, such as the quick-cut shower sequence in Psycho.
In the future, audiences could share emotional experiences with a film's protagonist. Imagine watching Saving Private Ryan, Titanic or Gone with the Wind in such a manner. The experience would be unbelievably visceral, nothing like today's experience of sitting back and watching.
The beauty of such experiences is that sophisticated virtual reality technology isn't required, just the control mechanisms to alter emotional experience in real-time.
Of course, some will argue that when artists can directly manipulate emotions, they will have lost a dialogue with their audience, as audience members will simply be feeling exactly what's intended. But this won't necessarily be the case. Rather, audience members will respond to emotional tapestries in unique ways based on their personal experiences, the same way they do now to other art forms.

Imagine this same technology, but in the context of video games. Now there's some scary potential.
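For what it's worth, the software side of such "control mechanisms" needn't be exotic. Here's a minimal Python sketch of the idea: a cue track of timestamped emotional targets, interpolated in real time so they can be synchronized to a piece of music or film. Every structure and number below is hypothetical, and there is, of course, no actual stimulation hardware behind it.

# A purely hypothetical "emotional cue track": (seconds, valence, arousal).
# It only illustrates synchronizing target emotions with music or film.
CUE_TRACK = [
    (0.0,  0.2, 0.1),   # quiet opening: low valence, low arousal
    (30.0, 0.8, 0.6),   # theme enters: joy builds
    (60.0, 0.1, 0.9),   # dramatic turn: distress, high arousal
    (90.0, 0.9, 0.3),   # resolution: calm contentment
]

def target_emotion(t):
    """Linearly interpolate the cued (valence, arousal) pair at time t."""
    if t <= CUE_TRACK[0][0]:
        return CUE_TRACK[0][1:]
    for (t0, v0, a0), (t1, v1, a1) in zip(CUE_TRACK, CUE_TRACK[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return (v0 + f * (v1 - v0), a0 + f * (a1 - a0))
    return CUE_TRACK[-1][1:]

for t in (0, 15, 45, 75, 90):
    v, a = target_emotion(t)
    print(f"t={t:>3}s  valence={v:.2f}  arousal={a:.2f}")

The hard part is obviously the neurotechnology on the other end of that interface, not the timeline format.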
Art, whether it be traditional or novel, has always been about transcending the individual and sharing the subjective experience of others. As I've written before, "The greatest artists thrill us with their stories, endow us with emotional and interpersonal insight, and fill us with joy through beautiful melodies, paintings and dance. By doing so they give us a piece of their selves and allow us to venture inside their very minds—even if just for a little bit."
And yes, this includes video games.
Economist: War in the Fifth Domain
The latest cover article of The Economist poses the question: are the mouse and keyboard the new weapons of conflict?
Important thinking about the tactical and legal concepts of cyber-warfare is taking place in a former Soviet barracks in Estonia, now home to NATO’s “centre of excellence” for cyber-defence. It was established in response to what has become known as “Web War 1”, a concerted denial-of-service attack on Estonian government, media and bank web servers that was precipitated by the decision to move a Soviet-era war memorial in central Tallinn in 2007. This was more a cyber-riot than a war, but it forced Estonia more or less to cut itself off from the internet.

Link.
Similar attacks during Russia’s war with Georgia the next year looked more ominous, because they seemed to be co-ordinated with the advance of Russian military columns. Government and media websites went down and telephone lines were jammed, crippling Georgia’s ability to present its case abroad. President Mikheil Saakashvili’s website had to be moved to an American server better able to fight off the attack. Estonian experts were dispatched to Georgia to help out.
Many assume that both these attacks were instigated by the Kremlin. But investigations traced them only to Russian “hacktivists” and criminal botnets; many of the attacking computers were in Western countries. There are wider issues: did the cyber-attack on Estonia, a member of NATO, count as an armed attack, and should the alliance have defended it? And did Estonia’s assistance to Georgia, which is not in NATO, risk drawing Estonia into the war, and NATO along with it?
Such questions permeate discussions of NATO’s new “strategic concept”, to be adopted later this year. A panel of experts headed by Madeleine Albright, a former American secretary of state, reported in May that cyber-attacks are among the three most likely threats to the alliance. The next significant attack, it said, “may well come down a fibre-optic cable” and may be serious enough to merit a response under the mutual-defence provisions of Article 5.
NYT: Until Cryonics Do Us Part
The New York Times has published a piece about cryonicists and how not all family members buy into it. The article focuses on Robin Hanson, a name that should be familiar to most readers of this blog:
Among cryonicists, Peggy’s reaction might be referred to as an instance of the “hostile-wife phenomenon,” as discussed in a 2008 paper by Aschwin de Wolf, Chana de Wolf and Mike Federowicz.

“From its inception in 1964,” they write, “cryonics has been known to frequently produce intense hostility from spouses who are not cryonicists.” The opposition of romantic partners, Aschwin told me last year, is something that “everyone” involved in cryonics knows about but that he and Chana, his wife, find difficult to understand. To someone who believes that low-temperature preservation offers a legitimate chance at extending life, obstructionism can seem as willfully cruel as withholding medical treatment. Even if you don’t want to join your husband in storage, ask believers, what is to be lost by respecting a man’s wishes with regard to the treatment of his own remains? Would-be cryonicists forced to give it all up, the de Wolfs and Federowicz write, “face certain death.”

Link.