February 27, 2011

Helping robots become self-aware

A recent article in Scientific American, Automaton, Know Thyself: Robots Become Self-Aware, points to the work being done by engineers to instill a certain degree of self-awareness in robots, including the capacities for self-image and even a theory of other minds:

Beyond robots that think about what they are thinking, Lipson and his colleagues are also exploring if robots can model what others are thinking, a property that psychologists call "theory of mind". For instance, the team had one robot observe another wheeling about in an erratic spiraling manner toward a light. Over time, the observer could predict the other's movements well enough to know where to lay a "trap" for it on the ground. "It's basically mind reading," Lipson says.

"Our holy grail is to give machines the same kind of self-awareness capabilities that humans have," Lipson says. "This research might also shed new light on the very difficult topic of our self-awareness from a new angle—how it works, why and how it developed."

Read more.
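The prediction trick in the quoted experiment is easy to appreciate with a toy model. The sketch below is my own illustration, not Lipson's method: an "observer" fits a simple linear predictor to the other robot's noisy spiral trajectory and extrapolates a few steps ahead to decide where the "trap" should go. The spiral parameters and the two-step linear model are assumptions made for the example.

```python
import numpy as np

np.random.seed(0)

# Observed (x, y) positions of a robot spiralling in toward a light at the origin.
t = np.arange(0, 30, 0.5)
radius = 5.0 * np.exp(-0.05 * t)
observed = np.column_stack([radius * np.cos(t), radius * np.sin(t)])
observed += np.random.normal(scale=0.05, size=observed.shape)  # noisy wheeling

# The "observer" fits a linear predictor: next position from the two previous ones.
X = np.hstack([observed[:-2], observed[1:-1]])  # features: two previous points
Y = observed[2:]                                # target: the next point
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Roll the model forward a few steps and lay the "trap" where the robot will be.
state = np.hstack([observed[-2], observed[-1]])
for _ in range(10):
    predicted = state @ W
    state = np.hstack([state[2:], predicted])
print("lay the trap near:", predicted.round(2))
```

With the decaying spiral used here, a two-lag linear model captures the motion almost exactly, which is why a few observed loops are enough to place the trap ahead of the target.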
February 25, 2011
SEED Magazine on cognitive enhancement through 'adaptive harnessing'
The next giant leap in human evolution may not come from fields like genetic engineering or artificial intelligence, argues SEED's Mark Changizi, but rather from appreciating our ancient brains:
The root of these misconceptions is the radical underappreciation of the design engineered by natural selection into the powers implemented by our bodies and brains, something central to my 2009 book, The Vision Revolution. For example, optical illusions (such as the Hering) are not examples of the brain’s poor hardware design, but, rather, consequences of intricate evolutionary software for generating perceptions that correct for neural latencies in normal circumstances. And our peculiar variety of color vision, with two of our sensory cones having sensitivity to nearly the same part of the spectrum, is not an accidental mutation that merely stuck around, but, rather, appears to function with the signature of hemoglobin physiology in mind, so as to detect the color signals primates display on their faces and rumps.
These and other inborn capabilities we take for granted are not kluges, they’re not “good enough,” and they’re more than merely smart. They’re astronomically brilliant in comparison to anything humans are likely to invent for millennia.
Neuronal recycling exploits this wellspring of potent powers. If one wants to get a human brain to do task Y despite it not having evolved to efficiently carry out task Y, then a key point is not to forcefully twist the brain to do Y. Like all animal brains, human brains are not general-purpose universal learning machines, but, instead, are intricately structured suites of instincts optimized for the environments in which they evolved. To harness our brains, we want to let the brain’s brilliant mechanisms run as intended—i.e., not to be twisted. Rather, the strategy is to twist Y into a shape that the brain does know how to process.
But how do I know this is feasible? This tactic may use the immensely powerful gifts that natural selection gave us, but what if harnessing these powers is currently far beyond us? How do we find the right innate power for any given task? And how are we to know how to adapt that task so as to be just right for the human brain’s inflexible mechanisms?
I don’t want to pretend that answers to these questions are easy—they are not. Nevertheless, there is a very good reason to be optimistic that the next stage of human evolution will come via the form of adaptive harnessing, rather than direct technological enhancement: It has already happened.
Changizi is clearly on to something. Reworking the brain to increase efficiency, boost its powers, and give it novel capacities is a sound idea. But why oh why do so many specialists like Changizi ignore the impact of converging technologies? Adaptive harnessing will most likely be done in concert with other types of cognitive enhancements, including genetic, pharmaceutical, and artificial intelligence applications. And it's not as far off as he'd have us believe.

More on "Humans, Version 3.0."
February 23, 2011
Freeman Dyson: How We Know [information theory]
There's a great review article by Freeman Dyson in the New York Review of Books, in which he provides a summary of James Gleick's new book, The Information: A History, a Theory, a Flood. It's a somewhat longish article, but worth the read; information theory continues to be a particularly fascinating area of inquiry:
The explosive growth of information in our human society is a part of the slower growth of ordered structures in the evolution of life as a whole. Life has for billions of years been evolving with organisms and ecosystems embodying increasing amounts of information. The evolution of life is a part of the evolution of the universe, which also evolves with increasing amounts of information embodied in ordered structures, galaxies and stars and planetary systems. In the living and in the nonliving world, we see a growth of order, starting from the featureless and uniform gas of the early universe and producing the magnificent diversity of weird objects that we see in the sky and in the rain forest. Everywhere around us, wherever we look, we see evidence of increasing order and increasing information. The technology arising from Shannon’s discoveries is only a local acceleration of the natural growth of information.
[...]

The vision of the future as an infinite playground, with an unending sequence of mysteries to be understood by an unending sequence of players exploring an unending supply of information, is a glorious vision for scientists. Scientists find the vision attractive, since it gives them a purpose for their existence and an unending supply of jobs. The vision is less attractive to artists and writers and ordinary people. Ordinary people are more interested in friends and family than in science. Ordinary people may not welcome a future spent swimming in an unending flood of information. A darker view of the information-dominated universe was described in a famous story, “The Library of Babel,” by Jorge Luis Borges in 1941. Borges imagined his library, with an infinite array of books and shelves and mirrors, as a metaphor for the universe.
Contextual advertising FAIL
*Sigh*
By the way, this is the 2,000th post on Sentient Developments! Thanks to all of you who have supported me over the years.
February 19, 2011
Printing body parts
From the Economist:
The great hope of transplant surgeons is that they will, one day, be able to order replacement body parts on demand. At the moment, a patient may wait months, sometimes years, for an organ from a suitable donor. During that time his condition may worsen. He may even die. The ability to make organs as they are needed would not only relieve suffering but also save lives. And that possibility may be closer with the arrival of the first commercial 3D bio-printer for manufacturing human tissue and organs.
The new machine, which costs around $200,000, has been developed by Organovo, a company in San Diego that specialises in regenerative medicine, and Invetech, an engineering and automation firm in Melbourne, Australia. One of Organovo’s founders, Gabor Forgacs of the University of Missouri, developed the prototype on which the new 3D bio-printer is based. The first production models will soon be delivered to research groups which, like Dr Forgacs’s, are studying ways to produce tissue and organs for repair and replacement. At present much of this work is done by hand or by adapting existing instruments and devices.
To start with, only simple tissues, such as skin, muscle and short stretches of blood vessels, will be made, says Keith Murphy, Organovo’s chief executive, and these will be for research purposes. Mr Murphy says, however, that the company expects that within five years, once clinical trials are complete, the printers will produce blood vessels for use as grafts in bypass surgery. With more research it should be possible to produce bigger, more complex body parts. Because the machines have the ability to make branched tubes, the technology could, for example, be used to create the networks of blood vessels needed to sustain larger printed organs, like kidneys, livers and hearts.

More.
Putting an end to dolphin exploitation at aquatic theme parks
A number of years ago I visited Sea World in Orlando, Florida. The experience proved to be a formative one, as it would mark the last time I would ever visit an aquatic theme park. What I saw there at the dolphin show that day shattered all illusions I had about the treatment of dolphins at these parks, while at the same time demonstrating to me the obvious ways in which they can express their individuality and intentions—and how this is conveniently ignored by us in ways that are completely self-serving.
The show got off to a rocky start. As the cheesy performance music blared through the loudspeakers, the trainers enthusiastically marched to the stage and assumed their positions. They blew their high-pitched whistles and waited for the dolphins to do their part.
But the dolphins ignored the cue. They swam nervously in their holding tank, circling and circling.
The trainers tried again, but the dolphins remained steadfast. They weren't going anywhere.
So, the trainers stopped the show and addressed the audience. We were told that dolphins are a hierarchical species, and that the leaders of the pod were preventing the rest of the dolphins from partaking in the show. The reason, they suspected, was a looming storm.

Indeed, Hurricane Ernesto was slated to hit the region in the next 24 hours, and it's likely that the dolphins, sensing the low pressure system, were in a state of agitation. The last thing they wanted to do at that moment was follow commands and perform tricks.
Unfazed, the trainers said they weren't about to let the dolphins have their way and that they were going to try and try again until they performed the show as expected.
Once again, the trainers marched to their stations and the cheesy music began anew. After another short delay, the dolphins finally decided to take part. But I have to say, it was the most half-assed effort I've ever seen put on by dolphins. They consistently missed their cues and went about their jumps and tricks as if they were just going through the motions.
What was happening was blatantly obvious to anyone paying attention: Their hearts were simply not into it.
As I sat there watching this spectacle, I started to feel ill, and I suddenly regretted coming to the park. I was hit with a glaringly obvious realization.
These dolphins are slaves.
Indeed, we are making these highly intelligent and emotional animals perform tricks against their will. They are confined to ridiculously small tanks and expected to perform on cue—and should they refuse, they're beaten back into submission by an unrelenting crew of trainers who simply won't take no for an answer—even if it's in front of a live audience.
Now, I realize that the dolphin show brings a lot of money to these parks—but the dolphin tank has got to go. It's cruelty through and through. As nonhuman persons, dolphins need to be protected from these kinds of abuses. They are not ours to play with.
We have no right to compel dolphins to entertain us. They deserve better than that. Moreover, we have no right to contain them in this way. Dolphins need to swim. In fact, in the wild, dolphins swim an average of 65 to 85 kilometers per day. The tanks at these theme parks must feel intensely claustrophobic to them. It's torturous.
And as I learned on that day at Sea World, dolphins are also capable of expressing their discontent. They can show us when they're not happy and they can express their will. We need to start paying attention and put aside our petty desire to watch dolphins jump through hoops.
It's time to stop this kind of animal exploitation.
Check out the IEET's Rights of Non-Human Persons program to learn more.
Nokia + Burton: Self tracking and viral advertising
Via Information Aesthetics:
It had to come to this. Now that "viral" infographics have engulfed us so much that we can no longer communicate a simple fact without some sort of Photoshop-crazy or weirdly-angled, textual chart (wonder-o-wonder, where do I get these examples from?), the next frontier seems to be the world of the quantified self.
And of course, the first people to adopt self-tracking viz for viral purposes are always the coolest of the bunch: mobile communications company Nokia and snowboard manufacturer Burton recently joined forces and hooked up a snowboarder to a powerful smartphone and a series of custom sensors in order to capture 5 different measurements during a snowboard run: speed, the 3D orientation of the board, feet pressure, heart rate and physiological rush (i.e. galvanic skin response).
Nokia and Burton now invite people to "interpret" this data in a creative way, ranging from unique visualizations to original installations that are triggered by the data. Their goal is to show the winning entries at the Burton US Open in March.
As any common data geek is also a star on the black-labeled snow slopes, winners will receive VIP tickets, plus accommodation and travel. For anyone who happens to live far, far away from Stratton, US, and has a bunch of free time around the middle of March, this seems like a sweet deal. Note that the submission deadline is 21 February! Others can wait until they release all the required information to develop and make your own data-augmented snowboard.

Be sure to follow the link and check out the embedded videos.
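For a sense of what "interpreting" such a capture might look like in its simplest form, here is a minimal sketch using invented numbers in place of the real Nokia/Burton streams: line up two of the five measurements on a shared timeline and check how physiological rush tracks speed.

```python
from statistics import correlation  # Python 3.10+

# Invented stand-ins for two of the five captured streams, sampled once per second.
speed_kmh = [12, 25, 41, 38, 52, 20]        # from the smartphone's GPS
gsr_uS    = [1.1, 1.3, 2.0, 1.9, 2.6, 1.5]  # galvanic skin response ("rush")

print("speed vs. rush correlation:", round(correlation(speed_kmh, gsr_uS), 3))
```

From there the same alignment extends to the other three streams (board orientation, foot pressure, heart rate) and to whatever visualization a contest entry calls for.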
February 16, 2011
Wired: To Talk With Aliens, Learn to Speak With Dolphins
Wired is reporting on an interesting paper about cutting edge dolphin communications studies and how this research may eventually assist in SETI endeavors.
This research is being done by Denise Herzing of the Wild Dolphin Project. Her paper is called "SETI meets a social intelligence: Dolphins as a model for real-time interaction and communication with a sentient species." Abstract:
In the past SETI has focused on the reception and deciphering of radio signals from potential remote civilizations. It is conceivable that real-time contact and interaction with a social intelligence may occur in the future. A serious look at the development of relationship, and deciphering of communication signals within and between a non-terrestrial, non-primate sentient species is relevant. Since 1985 a resident community of free-ranging Atlantic spotted dolphins has been observed regularly in the Bahamas. Life history, relationships, regular interspecific interactions with bottlenose dolphins, and multi-modal underwater communication signals have been documented. Dolphins display social communication signals modified for water, their body types, and sensory systems. Like anthropologists, human researchers engage in benign observation in the water and interact with these dolphins to develop rapport and trust. Many individual dolphins have been known for over 20 years. Learning the culturally appropriate etiquette has been important in the relationship with this alien society. To engage humans in interaction the dolphins often initiate spontaneous displays, mimicry, imitation, and synchrony. These elements may be emergent/universal features of one intelligent species contacting another for the intention of initiating interaction. This should be a consideration for real-time contact and interaction for future SETI work.
Anders Sandberg: Why we should fear the paperclipper
Most people in the singularity community are familiar with the nightmarish "paperclip" scenario, but it's worth reviewing. Anders Sandberg summarizes the problem:
A programmer has constructed an artificial intelligence based on an architecture similar to Marcus Hutter's AIXI model... This AI will maximize the reward given by a utility function the programmer has given it. Just as a test, he connects it to a 3D printer and sets the utility function to give reward proportional to the number of manufactured paper-clips.
At first nothing seems to happen: the AI zooms through various possibilities. It notices that smarter systems generally can make more paper-clips, so making itself smarter will likely increase the number of paper-clips that will eventually be made. It does so. It considers how it can make paper-clips using the 3D printer, estimating the number of possible paper-clips. It notes that if it could get more raw materials it could make more paper-clips. It hence figures out a plan to manufacture devices that will make it much smarter, prevent interference with its plan, and will turn all of Earth (and later the universe) into paper-clips. It does so.
Only paper-clips remain.

In the article, Why we should fear the paperclipper, Sandberg goes on to address a number of objections, including:
- Such systems cannot be built
- Wouldn't the AI realize that this was not what the programmer meant?
- Wouldn't the AI just modify itself to *think* it was maximizing paper-clips?
- It is not really intelligent
- Creative intelligences will always beat this kind of uncreative intelligence
- Doesn't playing nice with other agents produce higher rewards?
- Wouldn't the AI be vulnerable to internal hacking: some of the subprograms it runs to check for approaches will attempt to hack the system to fulfil their own (random) goals?
- Nobody would be stupid enough to make such an AI
The strength of the AIXI "simulate them all, make use of the best" approach is that it includes all forms of intelligence, including creative ones. So the paper-clip AI will consider all sorts of creative solutions. Plus ways of thwarting creative ways of stopping it.
In practice it will have an overhead since it runs all of them, plus the uncreative (and downright stupid). A pure AIXI-like system will likely always have an enormous disadvantage. An architecture like a Gödel machine that improves its own function might however overcome this.

In the end, Sandberg concludes that we should still take this threat seriously:
This is a trivial, wizard's apprentice, case where powerful AI misbehaves. It is easy to analyse thanks to the well-defined structure of the system (AIXI plus utility function) and allows us to see why a super-intelligent system can be dangerous without having malicious intent. In reality I expect that if programming such a system did produce a harmful result it would not be through this kind of easily foreseen mistake. But I do expect that in that case the reason would likely be obvious in retrospect and not much more complex.
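To make the instrumental-convergence point concrete, here is a toy planner of my own devising; it is not Hutter's AIXI and is not taken from Sandberg's article. Its only reward is the final paperclip count, yet the best plan it finds spends most of its steps making itself faster and grabbing raw material rather than making clips directly. The actions, numbers, and six-step horizon are all invented for illustration.

```python
from itertools import product

ACTIONS = {
    # action: (clips produced now, production-rate multiplier, extra raw material)
    "make_clips":     (1, 1.0, 0),
    "improve_self":   (0, 2.0, 0),   # "smarter" -> faster at everything later
    "acquire_matter": (0, 1.0, 5),   # more raw material to convert later
}

def run(plan):
    """Simulate a plan and return the final paperclip count (the only reward)."""
    clips, rate, material = 0, 1.0, 10
    for action in plan:
        made, mult, extra = ACTIONS[action]
        produced = min(made * rate, material)  # can't exceed available material
        clips += produced
        material += extra - produced
        rate *= mult
    return clips

# Brute-force search over every six-step plan, keeping the one with most clips.
best = max(product(ACTIONS, repeat=6), key=run)
print(best, run(best))  # the winning plan spends most steps on self-improvement and acquisition
```

Even in this tiny search space, "get smarter" and "get more raw material" dominate because they multiply later production, which is the same dynamic the quoted scenario extrapolates to planetary scale.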
February 15, 2011
Social networking as a force for radical social change? Don't believe the hype
My, aren't we all excited these days about the power of social networking, particularly in its apparent ability to literally topple governments. Recent events in Tunisia and Egypt have left a good number of people believing that Twitter and Facebook were the defining factors behind the fall of the despotic regimes.
Yeah, well, that's kinda not what happened.
Look, I'm not about to deny the power of these platforms to disseminate information. Clearly they had an impact on the public's ability to bond together and rally behind a worthwhile cause. But last time I checked, a good number of previous revolutions managed to happen without the internet.
Funny that. How'd they manage to pull that off without iPhones and TweetDeck?
Okay, here's the deal: The fall of any regime is contingent on a number of factors—but access to information is a relatively minor variable. The 'rise up' meme can spread through a number of different channels and at varying rates, and given dire circumstances and a desperate populace, it most certainly will.
For revolutions to work, however, there has to be (1) a reason behind the uprising, (2) a population willing to go the distance, and (3) a government largely unable or unwilling to manage the situation.
In the case of Tunisia, for example, Mohamed Bouazizi's self-immolation was the immediate powder keg that set off a population largely stressed out by poor economic conditions, including rising food prices. That's it right there in a nutshell. 140 character limits had nothing to do with it. In turn, the success of the Tunisians was clearly an inspiration for the Egyptians who were suffering under similar circumstances.
As for the population's resolve, I'm certain that the solidarity and passion that was felt were accentuated by the social networking aspect. No doubt. But ultimately, for that resolve to flourish and strengthen over the long haul, there have to be underlying stress factors.
And with or without social networking, it's the response of the government that almost always determines the course of a popular uprising. In the case of Egypt, while it appeared that Hosni Mubarak had control of the military at all times, he willingly chose not to suppress the uprising with violent action. Ultimately, it was this restraint that led to his overthrow.
The same cannot be said for some other countries that have faced (or are currently facing) popular uprisings. Take China in 1989 for example. Does anyone seriously think that social networking would have prevented the Chinese military from unleashing machine gun fire on those students? Or that the protests would have continued afterwards?
Then there's Iran. Twenty months ago the country was littered with protesters who were in possession of social networking tools. Yes, the sharing of information most certainly added fuel to the fire, but ultimately the uprising failed. Why? Because the Iranian government is more willing than others to brutalize its people. Moreover, the social networking aspect has unquestionably backfired; it's almost certain that thousands of protesters who exposed themselves through these channels were later jailed and likely executed.
If the current protests in Iran or anywhere else are to succeed, it won't be on account of social networks. It will be because the populace simply refuses to tolerate their conditions, and because their resolve is stronger than the force it's up against.
When computers exceed our ability to understand how the hell they do the things they do
Which would be pretty much now.
Great quote from David Ferrucci, the Lead Researcher of IBM's Watson Project:

"Watson absolutely surprises me. People say: 'Why did it get that one wrong?' I don't know. 'Why did it get that one right?' I don't know."

Essentially, the IBM team came up with a whole whack of fancy algorithms and shoved them into Watson. But they didn't know how these formulas would work in concert with each other and result in emergent effects (i.e. computational cognitive complexity). The result is the seemingly intangible, and not always coherent, way in which Watson gets questions right—and the ways in which it gets questions wrong.
As Watson has revealed, when it errs it errs really badly.
This kind of freaks me out a little. When asking computers questions that we don't know the answers to, we aren't going to know beyond a shadow of a doubt when a system like Watson is right or wrong. Because we don't know the answer ourselves, and because we don't necessarily know how the computer got the answer, we are going to have to take a tremendous leap of faith that it got it right when the answer seems even remotely plausible.
Looking even further ahead, it's becoming painfully obvious that any complex system that is even remotely superior (or simply different) relative to human cognition will be largely unpredictable. This doesn't bode well for our attempts to engineer safe, comprehensible and controllable super artificial intelligence.
PBS Newshour on IBM's Watson
Check out this excellent overview of IBM's Watson, the Jeopardy-playing supercomputer, which features Ray Kurzweil and Marvin Minsky:
February 14, 2011
Rights of Non-Human Persons Facebook page
Please join and "Like" the newly created Facebook page for the IEET's Rights of Non-Human Persons program.
February 13, 2011
Getting started on the Rights of Non-Human Persons project
Now that we've officially launched the Rights of Nonhuman Persons program at the IEET, you can expect to see more of these discussions right here at Sentient Developments. Specifically, the questions we're asking right now include:
- What is a person?
- What are the criteria for personhood? And why should these capacities matter and/or confer a higher degree of moral consideration?
- When it comes to human-level rights and protections, what exactly are we talking about? What aren't we talking about?
- How do we actually go about changing the laws?
Looking at the big picture, I'd like to see all nonhuman persons protected from such things as torture, experimentation, slavery, confinement, and threat of unnatural death (i.e. hunting and murder). Ideally, I'd like to see the day when elephants are no longer forced to perform at circuses, great apes are no longer gawked at in zoos, and dolphins are no longer confined to unacceptably small tanks at oceanariums. And so on. Essentially, the rule of thumb should be: If you wouldn't do it to a human, you shouldn't do it to a nonhuman person.
Speaking of actual species, my initial short list of (suspected) nonhuman persons includes:
- Great apes (chimpanzees, gorillas, and orangutans); it's worth noting that humans are classified as a great ape
- Cetaceans (whales, dolphins, and porpoises)
- Elephants
- Cephalopods (especially the octopus)
- Grey parrots
Given how many species live on this planet, it's a pretty exclusive club. And I'm somewhat on the fence about the last two, but we have to perform our due diligence to ensure that these particular species get the protections we think they may deserve.
It's also worth noting that this is a starting point. I suspect that more species will be added to this list over time. This will be an iterative process as we (1) gain public acceptance on the issue and normalize the concept of nonhuman personhood, (2) create legal precedents and enact laws, and (3) learn more about the neurology and behavior of other nonhuman person candidate species so that they can also be included.
And although not a priority right now, we will also be considering the potential for nonbiological personhood. We foresee the day when an AI or brain emulation ceases to be an object of experimentation and instead becomes an agent worthy of moral consideration. We're not there yet, but we want to be ready for that eventuality.
It's also important to think about realizable and tangible goals. While we have a lot of work to do—and lots of minds to change—we should strive for nothing less than the actual achievement of our mission. I'm confident we'll get there. I suspect that the initial breakthrough will see great apes protected first, followed by dolphins. We're pretty much ready to conceptualize and accept these species as being persons; it's a relatively easy sell.
And from there, we'll move on to the next species until we're done.
February 11, 2011
When robots attack: Should we fear a singularity? [video]
Check out this fantastic video produced by TIME.com that features my friend Brian Malow, the science comedian, who also wrote the piece:
IEET launches program to further the rights of nonhuman persons
With the help from the Institute for Ethics and Emerging Technologies, I've finally got my non-human persons rights project off the ground. Today's announcement from the IEET:
The Institute for Ethics and Emerging Technologies has announced a new program, Rights of Non-Human Persons, that will argue in favor of applying human-level rights to certain other species.
“Defense of human rights, applied as fully as possible, is one of our core principles,” said IEET Executive Director James Hughes. “As our understanding of what constitutes a ‘person’ continues to grow and change, we’re convinced it is time to expand that definition.”
George Dvorsky, a Canadian futurist and bioethicist who serves on the IEET’s Board of Directors, will head the new program on Rights of Non-Human Persons.
“It is increasingly clear that some non-human animals meet the criteria of legal personhood, and thus are deserving of specific rights and protections,” said Dvorsky. “Recent scientific research has revealed more about animal cognition and behavior than ever before, so we really have no choice but to take this prospect seriously.”
This new initiative will be included within the broader Rights of the Person program, managed by Kristi Scott. “The general thrust of human history is toward the progressive inclusion of previously marginalized individuals and groups,” said Scott. “Now we’re reaching the point where this imperative compels us to cross the species barrier so we can protect some of the most vulnerable and exploited animals on the planet.”
“Species like bonobos, elephants, dolphins, and others most certainly fall into a special class of beings, namely those deserving of the personhood designation,” added Dvorsky. “While we might recognize this instinctually, or even scientifically, it’s time we start to recognize this in the legal sense.”
“The Institute for Ethics and Emerging Technologies is well positioned to work on behalf of this cause,” said Hughes. “Philosophically, the IEET has always recognized the value of looking beyond mere human-ness when it comes to our consideration of ethics and morals. With our non-anthropocentric approach to personhood and our impressive body of advisors, the IEET will work actively to promote the idea of legal non-human personhood and see it come to fruition.”
Rights of Non-Human Persons Mission Statement:
Owing to advances in several fields, including the neurosciences, it is becoming increasingly obvious that the human species no longer can ignore the rights of non-human persons. A number of non-human animals, including the great apes, cetaceans (i.e. dolphins and whales), elephants, and parrots, exhibit characteristics and tendencies consistent with that of a person—traits like self-awareness, intentionality, creativity, symbolic communication, and many others. It is a moral and legal imperative that we now extend the protection of ‘human rights’ from our species to all beings with those characteristics.
The Institute for Ethics and Emerging Technologies, as a promoter of non-anthropocentric personhood ethics, defends the rights of non-human persons to live in liberty, free from undue confinement, slavery, torture, experimentation, and the threat of unnatural death. Further, the IEET defends the right of non-human persons to live freely in their natural habitats, and when that’s not possible, to be given the best quality of life and welfare possible in captivity (such as sanctuaries).
Through the Rights of the Non-Human Person program, the IEET will strive to:
- Investigate and refine definitions of personhood and those criteria sufficient for the recognition of non-human persons.
- Facilitate and support further research in the neurosciences for the improved understanding and identification of those cognitive processes, functions and behaviors that give rise to personhood.
- Educate and persuade the public on the matter, spread the word, and increase awareness of the idea that some animals are persons.
- Produce evidence and fact-based argumentation in favor of non-human animal personhood to support the cause and other like-minded groups and individuals.

Feel free to contact me if you want to contribute, and join our new mailing list.
February 10, 2011
Time taps into transhumanism
Transhumanism doesn't get more mainstream than this.
Computers are getting faster. Everybody knows that. Also, computers are getting faster faster — that is, the rate at which they're getting faster is increasing.
True? True.
So if computers are getting so much faster, so incredibly fast, there might conceivably come a moment when they are capable of something comparable to human intelligence. Artificial intelligence. All that horsepower could be put in the service of emulating whatever it is our brains are doing when they create consciousness — not just doing arithmetic very quickly or composing piano music but also driving cars, writing books, making ethical decisions, appreciating fancy paintings, making witty observations at cocktail parties.
If you can swallow that idea, and Kurzweil and a lot of other very smart people can, then all bets are off. From that point on, there's no reason to think computers would stop getting more powerful. They would keep on developing until they were far more intelligent than we are. Their rate of development would also continue to increase, because they would take over their own development from their slower-thinking human creators. Imagine a computer scientist that was itself a super-intelligent computer. It would work incredibly quickly. It could draw on huge amounts of data effortlessly. It wouldn't even take breaks to play Farmville.
----
Not all of [the singularitarians] are Kurzweilians, not by a long chalk. There's room inside Singularitarianism for considerable diversity of opinion about what the Singularity means and when and how it will or won't happen. But Singularitarians share a worldview. They think in terms of deep time, they believe in the power of technology to shape history, they have little interest in the conventional wisdom about anything, and they cannot believe you're walking around living your life and watching TV as if the artificial-intelligence revolution were not about to erupt and change absolutely everything. They have no fear of sounding ridiculous; your ordinary citizen's distaste for apparently absurd ideas is just an example of irrational bias, and Singularitarians have no truck with irrationality. When you enter their mind-space you pass through an extreme gradient in worldview, a hard ontological shear that separates Singularitarians from the common run of humanity. Expect turbulence.
February 9, 2011
Solar wind bridge
This solar wind bridge concept could power 15,000 homes and grow vegetables. Via Engadget:
Why just use solar power or wind power when you can use both? Designed by Francesco Colarossi, Giovanna Saracino and Luisa Saracino as part of an Italian design contest to re-imagine a decommissioned bridge (for which it placed second), this so-called Solar Wind concept would have solar cells embedded in the roadway (an idea that's already catching on) and an array of 26 wind turbines underneath, which the designers say could produce enough energy combined to power 15,000 homes. To make the design greener still, the designers have even included a "green promenade" that would run alongside the road, which they suggest could be used to grow fruits and vegetables that'd then be sold to folks driving by.
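As a quick sanity check on the headline figure, here is some back-of-the-envelope arithmetic. The average household draw and the wind/solar split below are my assumptions, not the designers' numbers.

```python
# Rough check of the "enough energy to power 15,000 homes" claim (assumed values).
homes = 15_000
avg_household_kw = 1.0                    # ~1 kW continuous draw per home (assumed)
demand_kw = homes * avg_household_kw      # ~15 MW total

turbines = 26
wind_share = 0.8                          # assume wind covers ~80%, solar roadway the rest
per_turbine_kw = demand_kw * wind_share / turbines

print(f"total demand ~ {demand_kw / 1000:.0f} MW, ~ {per_turbine_kw:.0f} kW per turbine")
```

A few hundred kilowatts per turbine is in the range of mid-size commercial turbines, so the claim is at least not obviously off by an order of magnitude.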
February 8, 2011
From quantified to optimized
The quantified self movement is clearly on to something: converging technologies are finally allowing people to measure, record and track their biometric information in meaningful ways. People are increasingly wanting to do this—whether it be to measure their sleep patterns or reveal the deep intricacies of their DNA.
But it's not enough to just measure yourself. Left alone, this approach doesn't complete the loop. What matters is that this information be acted upon. Otherwise it's just useless data.
One approach that I see arising from all of this is what I'd like to call the optimized self movement. I don't necessarily agree with the complaint that "optimized" is too nebulous and subjective a word; individual people can come up with their own definition of the term as it applies to their own sets of needs and goals. One person's version of an optimized self will vary significantly from the next person's, and that doesn't make it invalid or somehow wrong. It's all about personal campaigns driven by personal goals and values.
Specifically, I imagine a future not too far from now in which handheld devices and other gadgetry will be preconfigured to monitor specific health and lifestyle factors and make specific recommendations to users based on a predefined set of goals.

For example, your handheld device (or even some kind of augmented reality display) could advise you to consume more protein if it senses that you're below your goal. It could also alert you to problems, like elevated blood pressure or glucose levels, while also advising that you avoid the cheesecake. It could remind you to take your vitamins and supplements. The potential number of trackable and actionable factors is nearly endless.
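A minimal sketch of what this "virtual health coach" loop could look like, written as a simple rule check: compare a day's tracked numbers against user-defined targets and emit advice. The metric names, thresholds, and advice strings are all hypothetical.

```python
# Hypothetical daily targets: metric -> (kind of limit, threshold, advice if missed).
GOALS = {
    "protein_g":      ("min", 120, "Consume more protein."),
    "systolic_bp":    ("max", 130, "Blood pressure is elevated; ease off the salt."),
    "glucose_mmol_l": ("max", 7.8, "Glucose is high; skip the cheesecake."),
    "sleep_hours":    ("min", 7,   "Get to bed earlier tonight."),
}

def coach(day_metrics):
    """Compare one day's tracked metrics to the goals and return recommendations."""
    advice = []
    for metric, (kind, limit, message) in GOALS.items():
        value = day_metrics.get(metric)
        if value is None:
            continue  # not tracked today
        if (kind == "min" and value < limit) or (kind == "max" and value > limit):
            advice.append(message)
    return advice or ["All targets hit today. Bonus points!"]

print(coach({"protein_g": 95, "systolic_bp": 124, "glucose_mmol_l": 8.1}))
```

Closing the loop is just this comparison step run against each day's data; the hard parts are reliable sensing and actually acting on the output.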
We're pretty much there right now. There are already toilets in Japan that can measure sugar levels in urine, blood pressure, body fat and weight. This is the kind of thing we can expect more of in the near future.
Sure, you could ignore the advice of your virtual health coach, but if you're keen on hitting your goals you're more apt to listen to it. It could even give you positive feedback and bonus points for consistently hitting your daily lifestyle targets.
And if you're not hitting expected performance goals, you can recalibrate and experiment with different approaches. It's all measurable, so users will eventually know what works best for them. For the most part these are going to be very personal campaigns; individually, we'll be striving to maximize our genetic potential (physical, cognitive and emotional). It will also be possible to tap into the larger network and discover what's working best for other self optimizers.
Personally, my inner perfectionist and health-nut finds the idea of the optimized self particularly appealing. Books like Tim Ferriss's The 4-Hour Body show that personal improvement is part of the new geek agenda. It has suddenly become quite cool and fashionable to apply the latest science to our bodies in order to get the best results possible. It's likely why transhumanists like myself, who are notorious early adopters, are increasingly getting involved in not just things like the quantified self, but also activities like CrossFit and the Paleo diet, both of which claim to produce the best results in fitness and diet respectively.
I'm looking forward to seeing just how "optimized" I can get. Such a thing would be great for not just health purposes (especially life extension!), but it's also a worthwhile project in personal betterment and self-experimentation in general.
Elementary, my dear Watson: Jeopardy computer offers insight into human cognition
Being the astute Sentient Developments readers that you are, I'm sure you're up to speed on Watson, IBM's Jeopardy playing computer:
The more I think about Watson, the more I'm astounded about what IBM has done here. This isn't just some glorified answer engine. If you think about what this system has to do to get these questions right, you quickly realize that there's a lot more going on behind the scenes.
At its core, Watson is an expert answer engine that utilizes natural language processing technology.
And it's probably doing it in a way that's very, very close to how the human brain does it. I'd be willing to bet that the processes behind Watson's programming are very analogous to how the human mind goes about it. Watson, which has access to a massive repository of information, has to interpret all the nuances of language—synonyms, puns, slang, and all—and quickly come up with an answer. It typically builds a list of around four to five answers, and based on a probability analysis, selects what it thinks is the most likely answer. I'm almost certain that the human mind goes about it in the exact same way. It has been suggested, for example, that the mind applies Bayesian probabilism in its calculations. Wouldn't it be amazing if we eventually discovered that even the algorithms are the same? If this is the case, then IBM has actually created a stand-alone module of the human brain.
So, in terms of the rule-based AI versus whole brain emulation debate, you can strike this down as a victory for the former.
The big difference, of course, is that Watson is not conscious. But that doesn't make a difference. You are not conscious, either, of how you process natural language, access the memory stores in your brain, and come up with an answer. Your brain does this for you behind the scenes and presents the answer to your consciousness; you're none the wiser. You only think you're clever, and that "you" came up with the answer, but in reality the unconscious mechanistic parts of your brain did all the work.
Some people may complain or freak out about that, but I think it's rather cool. We're biological robots; get over it.
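To make the candidate-and-confidence process described a few paragraphs up concrete, here is a toy sketch of my own. It is not IBM's DeepQA code; the scorers, weights, and evidence values are invented, but it shows the shape of the idea: several independent scorers grade each candidate answer, the scores are combined, and the system answers only when the winner's confidence is high enough.

```python
# Fixed weights for three invented evidence scorers.
WEIGHTS = {"keyword_overlap": 0.5, "type_match": 0.3, "popularity": 0.2}

def combined_score(evidence):
    # Each piece of "evidence" is a pre-computed feature in [0, 1], standing in
    # for the text search, answer-type checking, etc. a real system would run.
    parts = {
        "keyword_overlap": evidence["overlap"],
        "type_match": 1.0 if evidence["answer_type_ok"] else 0.0,
        "popularity": evidence["prior"],
    }
    return sum(WEIGHTS[name] * value for name, value in parts.items())

def answer(candidates, threshold=0.6):
    """Rank candidate answers by combined confidence; stay quiet when unsure."""
    ranked = sorted(candidates, key=lambda c: combined_score(candidates[c]), reverse=True)
    best = ranked[0]
    confidence = combined_score(candidates[best])
    return (best, confidence) if confidence >= threshold else (None, confidence)

# Hypothetical evidence for a handful of candidate answers to one clue:
candidates = {
    "Toronto": {"overlap": 0.7, "answer_type_ok": False, "prior": 0.4},
    "Chicago": {"overlap": 0.8, "answer_type_ok": True,  "prior": 0.6},
    "Midway":  {"overlap": 0.5, "answer_type_ok": False, "prior": 0.2},
}
print(answer(candidates))  # -> ('Chicago', 0.82)
```

Watson reportedly combined a very large number of such evidence scorers with machine-learned weights, but the basic shape of the decision is the same: rank the candidates and answer only when confidence is high enough.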
More on Watson:
Watch and listen
Some recent podcasts and videos worth checking out:
- James Hughes interviews neuroscientist David Eagleman
- The latest RadioLab episode is pure win: Lost & found
- Cynthia Breazeal gives a TED talk on the Rise of personal robots
- Richard Dawkins and Daniel Dennett discuss the meaning of life and death
February 5, 2011
SAI in the material world
Image: Mondolithic Studios
Some dismiss the idea that a machine intelligence could ever reach out and manipulate the material world. This runs contrary to the concerns of those in the Singularity camp, who worry that an SAI will be both uncontainable and capable of manipulating physical space in a non-trivial way.
I'd like to present a pair of arguments that will serve as a warning to those who would like to dismiss this possibility. The first is based on a recent technological breakthrough, the second being more of a thought experiment.
Robotic networking and self-replication
RoboEarth is a system that's allowing robots to build on and learn from the experiences of other robots. Think of it as an internet for robots. As it stands, robotics engineers have to teach their bots to navigate and function in the real world. RoboEarth, on the other hand, collects and centralizes information on objects and navigation, and in turn shares this information with other bots. What this means is that any new robot that's connected to this system will have immediate knowledge of its surroundings.
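A minimal sketch of the idea, as my own illustration rather than RoboEarth's actual interface: the "store" below is an in-memory dict standing in for the shared, networked knowledge base, and the record names and fields are invented.

```python
# Records that other robots have already uploaded to the shared knowledge base.
SHARED_STORE = {
    "maps/ward-7":          {"waypoints": ["door", "bedside", "sink"]},
    "objects/water_bottle": {"grasp_points": [(0.0, 0.03, 0.12)], "mass_kg": 0.5},
    "tasks/serve-drink":    {"steps": ["locate object", "grasp", "navigate to person", "hand over"]},
}

def fetch(path):
    """What a newly connected robot downloads instead of learning from scratch."""
    return SHARED_STORE[path]

# A robot that has never seen this ward or this bottle can act immediately,
# because other robots already uploaded what they learned:
route = fetch("maps/ward-7")["waypoints"]
grasp = fetch("objects/water_bottle")["grasp_points"][0]
for step in fetch("tasks/serve-drink")["steps"]:
    print("executing:", step)
```

The point of the design is that the expensive learning happens once, somewhere, and every connected robot afterwards just downloads the result.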
But it doesn't stop there. A recent breakthrough has endowed the TechUnited AMIGO robot with the ability to download all the information it needs for a specific task and then carry out that task. Check out the video below of AMIGO at work:
If this doesn't blow your mind then you're not paying attention. While the task was simple enough, that of autonomously picking up and serving a bottle of water to a person, the potential implications of this are huge. As Joris Peels of iMaterialize clarifies,
If you would combine RoboEarth with genetic algorithms that automatically design robots and 3D printing, you have a very powerful combination. It would be a system that could design a robot based on its experiences, then give that robot all the information it needed to navigate the world and carry out tasks. Anyone could then 3D print this robot anywhere around the world. And the system would be one of continuous learning and iteration, with better robots being made every second. We’re still very far away from this but it is these kinds of ongoing developments that make me think that I live in the future.
I think we should really consider the implications of this. I know, it sounds a bit sci-fi and off piste. But we will develop a Skynet at some point and we should consider the implications before we do so.

The scenario I'm imagining, as I'm sure other Singularity-concerned futurists are, would see an SAI co-opt this system (or create versions of its own) and begin to fulfill its intentions through a myriad of self-designed, recursively improving, and remotely controlled agents dispersed around the world.
Plenty of room at the bottom
Okay, so there's that example. The next consideration is something a bit more fantastical (relatively speaking): the potential for an SAI to reshape the planet (or significant portions of it) from the molecular scale upwards. Before you tune out, watch this video, Molecular Visualizations of DNA:
What you're seeing in this video is a very small sampling of the kinds of molecular machinery that's capable of arising through the processes of natural selection. What you're not seeing here, however, is the space of all possible molecular machinery that's capable of arising through intentional design. And what you're definitely not seeing here is the space of all possible molecular machinery that's capable of arising through intentional super-intelligent design.
The kinds of molecular machinery that we're familiar with have come about solely for the purpose of maintaining and propagating complex organisms. We're only beginning to imagine the kinds of molecular-scale processes and devices that might be designed to perform other kinds of functions; the design space is massive.
And this is where an SAI comes in. It's easy to imagine a system similar to RoboEarth in which an SAI can design and disburse both macro and micro scale devices. The only limitations facing such a system would be inherent energy and material constraints, other human or SAI-driven countermeasures, and the laws of physics itself.
Okay, what exactly am I imagining? Given free rein, an SAI could potentially re-arrange all matter on the planet. One possibility is that it could turn the Earth into computronium or anything else it wants. Or, it could remove all toxins and other pollutants from the surface and atmosphere. It could turn the planet into a Venusian hell, or a verdant Utopian paradise. Whatever. In all honesty, I can't even really begin to speculate without knowing the intentionality of a Singularity-surviving intelligence. But suffice it to say the scope of its impact on the material world needn't be subtle.
For those of us engaged in foresight activities, the risk is in thinking too small on this matter—or in denying it altogether.
February 2, 2011
Vgo, the telerobot
Having one of those radical presentism moments. Via Singularity Hub; Aaron Saenz writes:
While we haven’t covered the Vgo robot in the past, it reminds me of several other telerobots we have seen, especially Anybot’s QB. Only Vgo is supposedly retailing for around $6000 (including ~$1200/year for the service contract), considerably less than the QB’s $15k price tag. Differences in maneuverability, reliability, and video quality may make the cost difference appropriate, but that’s not really my concern. Vgo is representative of the telerobotics market as a whole right now: reasonable run times (battery life is between 6-12 hours depending on upgrade options), Skype-level video quality, and compatible with standard WiFi. If you can afford the $6k (or $15k) price tag, you can probably have this setup in your home or office right now. In other words, this isn’t the technology of tomorrow, it’s here today and ready to go. Vgo launched sales in 2010 and has been marketing their product to a variety of applications, as you’ll see in the following video:
Not to sound cynical, but I’m guessing that Lyndon Baty’s use of Vgo is just another part of that marketing plan. I’m totally fine with that, by the way. Giving a child (and a school district) a reasonable solution for a terrible predicament is great. If it comes with a moderate price tag, so be it. So, while Lyndon’s personal story of perseverance and increasing freedom is exceptional, the underlying technological implications are pretty mundane: telepresence is gearing up to try to make a big splash in the market.
We’ve seen plenty of indications of this. South Korea is testing telerobots in their schools. They could have one of these devices in every kindergarten classroom by 2013. Researchers in Japan are experimenting with robots aimed towards emotional connections (with mixed results). As we said above, Anybots has their own platform on the market already. iRobot recently unveiled a prototype robotic platform that would transform any teleconference-enabled tablet computer into a telerobot. I’m guessing that in the next five years, one or more of these attempts at telerobotics is going to actually gain some traction and start moving some serious product.
Education may be a natural market. As we learned from Fred Nikgohar, head of telerobotics firm RoboDynamics, there are some big hurdles in other applications of telepresence robots. Offices value secrecy. Medical facilities worry about patient privacy. There’s a lot of bureaucracy standing in the way of widespread adoption of telerobotics. Schools have some of the same problems, but (to be perfectly honest) they also have sick kids who you can’t say no to. Or they’re run by governments who have nationalistic goals in science and technology (exemplified by South Korea). Get the price of telerobotics low enough, and we could see it expand into different niches of education including homeschooling, remote expert instructors (like the English tutors in South Korea), or online universities.

Read more.
In other telerobotics news, Anybots QB is now shipping.