That said, I have noticed an increasing interest in the whole brain emulation (WBE) approach. Kurzweil's upcoming book, How the Mind Works and How to Build One, is a good example of this—but hardly the only one. Futurists with a neuroscientific bent have been advocating this approach for years now, most prominently the European transhumanist camp headed by Nick Bostrom and Anders Sandberg.
While I believe that reverse engineering the human brain is the right approach, I admit that it's not going to be easy. Nor is it going to be quick. This will be a multi-disciplinary endeavor that will require decades of data collection and the use of technologies that don't exist yet. And importantly, success won't come about all at once. This will be an incremental process in which individual developments will provide the foundation for overcoming the next conceptual hurdle.
But we have to start somewhere, and we have to start with a plan.
Rules-based AI versus whole brain emulation
Now, some computer theorists maintain that the rules-based approach to AI will get us there first. Ben Goertzel is one such theorist. I had a chance to debate this with him at the recent H+ Summit at Harvard. His basic argument is that the WBE approach over-complexifies the issue. "We didn't have to reverse engineer the bird to learn how to fly," he told me. Essentially, Goertzel is confident that the hard-coding of artificial general intelligence (AGI) is a more elegant and direct approach; it'll simply be a matter of identifying and developing the algorithms sufficient for the emergence of the traits we're looking for in an AGI: things like learning and adaptation. As for the WBE approach, Goertzel thinks it's overkill and overly time-consuming, though he did concede to me that the approach is sound in principle.
Goertzel's approach aside, I, like Kurzweil, Bostrom, Sandberg, and a growing number of other thinkers, am drawn to the WBE camp. The idea of reverse engineering the human brain makes sense to me. Unlike the rules-based approach, WBE works off a tried-and-true working model; we don't have to re-invent the wheel. Natural selection, through excruciatingly tedious trial and error, was able to create the human brain, and all without a preconceived design. There's no reason to believe that we can't figure out how this was done; if the brain could come about through autonomous processes, then it can most certainly come about through the diligent work of intelligent researchers.
Emulation, simulation and cognitive functionalism
Emulation refers to a 1:1 model in which all relevant properties of a system are present. This doesn't mean recreating the human brain in exactly the same way as it resides inside our skulls. Rather, it implies the recreation of all its properties in an alternative substrate, namely a computer system.
Moreover, emulation is not simulation. We're not looking to give the appearance of human-equivalent cognition. A simulation implies that not all properties of a model are present. Again, it's a complete 1:1 emulation that we're after.
Now, given that we're looking to model the human brain in a digital substrate, we have to work according to a rather fundamental assumption: computational functionalism. This draws on the Church-Turing thesis, which holds that every effectively computable function can be computed by a Turing machine; by the same token, a universal Turing machine can emulate any other Turing machine. So if brain activity is regarded as a function that is physically computed by brains, then it should be possible to compute it on a Turing machine. Like a computer.
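To make the computability point concrete, here's a toy sketch in Python (an illustration of the principle only, with a made-up example machine): one generic routine that can emulate any Turing machine once its transition table is written down.

```python
# A universal routine that runs any Turing machine given its transition
# table. The example machine below (binary increment) is made up for
# demonstration; the point is only that one digital system can faithfully
# emulate another once its update rules are fully specified.

def run_turing_machine(rules, tape, state="start", head=0, blank="_", max_steps=10_000):
    """rules maps (state, symbol) -> (new_state, new_symbol, move)."""
    cells = dict(enumerate(tape))  # sparse tape
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, blank)
        state, cells[head], move = rules[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip(blank)

# Example: increment a binary number, head starting on the rightmost bit.
rules = {
    ("start", "1"): ("start", "0", "L"),  # flip 1 -> 0, carry left
    ("start", "0"): ("halt",  "1", "L"),  # absorb the carry and stop
    ("start", "_"): ("halt",  "1", "L"),  # carry past the leftmost bit
}

print(run_turing_machine(rules, "1011", head=3))  # 1011 + 1 -> 1100
```

The routine doesn't know or care what the rules describe; once the update rules are fully specified, emulation follows. That's precisely the wager WBE makes about the brain.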
So, if you believe that there's something mystical or vital about human cognition you should probably stop reading now.
Or, if you believe that there's something inherently physical about intelligence that can't be translated into the digital realm, you've got your work cut out for you to explain what that is exactly—keeping in mind that any informational process is computational, including those brought about by chemical reactions. Moreover, intelligence, which is what we're after here, is something that's intrinsically non-physical to begin with.
The roadmap to whole brain emulation
A number of critics point out that we'll never emulate a human brain on account of the chaos and complexity inherent in such a system. On this point I'll disagree. As Bostrom and Sandberg have pointed out, we will not need to understand the whole system in order to emulate it. What's required is a functional understanding of all necessary low-level information about the brain and knowledge of the local update rules that change brain states from moment to moment. What counts as "low-level" is still an open question, but it likely won't involve a molecule-by-molecule understanding of cognition. And as Ray Kurzweil has argued, the brain contains masterful arrays of redundancy; it's not as complicated as we currently think.
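To give a flavor of what "local update rules" could mean, here's a deliberately crude sketch (a toy network I've invented for illustration, not a model of real cortex): each unit's next state depends only on its own potential and the spikes of the units wired into it. Capturing rules of this general shape, at whatever level of detail turns out to be necessary, is what emulation requires; a global theory of the whole system is not.

```python
import numpy as np

# A toy network advanced purely by local update rules: each unit's next
# state depends only on its own potential and the spikes of the units
# wired into it. Real neurons are vastly richer; this shows the shape of
# the computation, not biological fidelity.

rng = np.random.default_rng(0)
n = 100
weights = rng.normal(0, 0.5, (n, n)) * (rng.random((n, n)) < 0.1)  # sparse random wiring
potential = np.zeros(n)
spiking = rng.random(n) < 0.05        # a few units start out active
LEAK, THRESHOLD = 0.9, 1.0

for step in range(50):
    # The local rule: leak a little, then add weighted input from
    # whichever neighbors spiked on the previous tick.
    potential = LEAK * potential + weights @ spiking
    spiking = potential > THRESHOLD
    potential[spiking] = 0.0          # reset units that just spiked
    print(step, int(spiking.sum()))   # population activity over time
```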
In order to gain this "low-level functional understanding" of the human brain we will need to employ a series of interdisciplinary approaches (most of which are currently underway). Specifically, we're going to require advances in:
- Computer science: We have to improve the hardware component; we're going to need machines with the processing power required to host a human brain; we're also going to need to improve the software component so that we can create algorithmic correlates to specific brain functions.
- Microscopy and scanning technologies: We need to better study and map the brain at the physical level; brain slicing techniques will allow us to visually study cognitive action down to the molecular scale; specific areas of inquiry will include molecular studies of individual neurons, the scanning of neural connection patterns, determining the function of neural clusters, and so on.
- Neurosciences: We need more impactful advances in the neurosciences so that we may better understand the modular aspects of cognition and start mapping the neural correlates of consciousness (currently a very grey area).
- Genetics: We need to get better at reading our DNA for clues about how the brain is constructed. While I agree that our DNA will not tell us how to build a fully functional brain, it will tell us how to start the process of brain-building from scratch.
Time-frames
Inevitably the question as to 'when' crops up. Personally, I couldn't care less. I'm more interested in viability than timelines. But, if pressed for an answer, my feeling is that we are still quite a ways off. Kurzweil's prediction of 2030 is uncomfortably optimistic in my opinion; his analogies to the Human Genome Project are unsatisfying. This is a project of much greater magnitude, not to mention that we're still likely heading down some blind alleys.
My own feeling is that we'll likely be able to emulate the human brain in about 50 to 75 years. I will admit that I'm pulling this figure out of my butt as I really have no idea. It's more a feeling than a scientifically-backed estimate.
Lastly, it's worth noting that, given the capacity to recreate a human brain in a digital substrate, we won't be too far off from creating considerably greater-than-human intelligence. AI theorist Eliezer Yudkowsky has claimed that, because of the brain's particular architecture, we may be able to accelerate its processing speed by a factor of a million relatively easily. Consequently, predictions as to when we may hit the Singularity will likely coincide with the advent of a fully emulated human brain.
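For what it's worth, here's the back-of-envelope arithmetic behind that kind of speedup claim (my own rough, textbook-order figures, not Yudkowsky's):

```python
# Order-of-magnitude textbook figures only; both numbers are rough.
neuron_rate_hz = 200      # a fast biological neuron spikes a few hundred times per second
silicon_rate_hz = 2e9     # a commodity processor switches billions of times per second

print(silicon_rate_hz / neuron_rate_hz)  # ~1e7: the "factor of a million" ballpark, and then some
```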
11 comments:
I wouldn't expect rule-based AI to be equal to a human in creativity, and rigid systems tend not to mix well with the real world.
Then again a brain emulation-based AI might go insane and kill us all, though at the very least it might be less likely to kill us by accident while trying to fulfill its programmed goals.
It will be interesting to see if Kurzweil replies to Myers's reply to Kurzweil's reply. To answer Myers's scepticism, Kurzweil needs to provide some data to back up his claim that we are coming to understand the human brain at an exponential rate. That will be hard to do.
He could enumerate the growth in the number of scientific papers published in neuroscience, cognitive science, AI, etc., but that would indeed be a huge undertaking. If Kurzweil already had that data, it would probably be part of his usual slides.
Even if the growth rate in the number of papers turned out to look like an exponential growth curve when plotted on a graph, that doesn't mean it will necessarily continue at that rate. It also wouldn't tell us anything about how much work there is left to be done.
Of course, as the aviation/bird analogy suggests, we do not need a thorough understanding of every aspect of the brain in order to emulate one. We just need the core principles, especially those which allow for effective self-adaptation, learning and growth.
The problem with emulating a human brain is that in order to function properly, you need to give it human-like senses and experiences. Otherwise you'd just end up with a blind, deaf, and deranged infant.
I've put some links together here:
http://xixidu.net/2009/08/21/reverse-engineering-the-brain-astrocytes-microtubule/
I think what some people claim is not that there's something mystical or vital about human cognition but that the brain is a function that is physically computed by the universe. That is, there is no substrate-independence. Or at least not the kind suspected: http://en.wikipedia.org/wiki/Church%E2%80%93Turing%E2%80%93Deutsch_principle
[q]"We didn't have to reverse engineer the bird to learn how to fly," he told me.[/q]
Not quite true. The Wright brothers did look at the wings of seagulls to shape the wings of their first flying plane.
Nature is mostly the best teacher and has evolved toward maximum efficiency.
Reverse engineering of the brain can be done (imho) piece by piece: learning how to understand hearing, seeing, learning, and so on. We could learn from autism, but also from Alzheimer's disease (and so on).
Maybe we could start by trying to understand a very small child's brain, and figure out how the "clean and input-free brain" will process its information.
Our brains are filled with information, memories, feelings. That's not what we want from an AI; we want it to be "clean"....
What catches my eye is that people repeatedly refer to Kurzweil as an expert in the area of, well, neuroscience or something like that. According to the title of his book "How the Mind Works and How to Build One," Kurzweil himself claims to be an expert -- he "knows how." Looking at Kurzweil's biography, this all seems rather questionable.
I agree with Ben that a) reverse engineering brains is possible in principle and b) it's likely a lot more work than creating an AGI (using a simpler approach). Sure the human brain is an existing example, but actually emulating it would take a lot of research and technology that hasn't been done/made yet.
I'm a contributor to Ben's AGI project (http://opencog.org/). People who understand an AGI approach in detail find it easier to imagine the work being finished soon; people who don't, usually imagine it taking decades. (There are other reasons as well, e.g. being biased in either direction.)
Engineers are not opposed to a bit of reverse engineering here and there - if it helps. Preliminary audio and video sensory transforms could perhaps be profitably cribbed, and the basic r-l mechanisms could possibly be copied too. However, scanning whole minds is much too far out to come first - IMHO.
Hi. As a double major in Neuroscience and Cognitive Science at Vassar College, and one who is pretty on top of their shit, I'd venture to advance the opinion that you adopt a hopeful, somewhat world-blind, extremist view of a poorly understood issue.
One or two things to consider:
- The most powerful (but also painful, and for this reason not yet universally accepted) version of developmental psych adopts the evo-devo stance, where genes do not act (CANNOT act) independently of their environment: each brain is specific both to the body in which it is placed and the world in which that body grew up and currently exists. This sentence is a combination of basic embodiment and situatedness (theories that have been around, and dominant, for 35 years), ecological psychology (check out Henry Plotkin, from back in the 80's), and evolutionary developmental theory (Susan Oyama, '83).
Also, to the rule-based AI people, and to your treatment of that issue: this kind of AI has not been taken seriously by the majority of progressive thinkers in cog sci for decades. It doesn't work. Brains don't operate this way. Also, see the above comment about how every brain is specific to its environment. The correct alternative to whole-brain emulation (which is almost 100 percent impossible, in a 1:1 way) is dynamical models of AI-like systems. Check out Tim van Gelder, "The Dynamical Hypothesis in Cognitive Science."
Sorry, that sounded aggressive. I just literally spend my entire day thinking about this stuff, and to see you adopt such a naive viewpoint because you'd rather hope than think pragmatically is frustrating.
Even if it takes a very large and costly computer to run the very first approximation/simulation of a physical brain, once the process has been validated and if even some of the basic brain processes are emulated, then we have a start.
At this point, we can do things with this simulation that we can't do with living tissue: run subsystems or subunits independently; rerun from a specific checkpoint, or from a synthetic input or internal conformation (a "what-if" situation); run the simulation at various speeds, increments or granularity levels; see what happens when we merge (simplify) or further define some areas of the brain.
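In code, checkpoint-and-replay might look something like this minimal sketch (the state and update function are placeholders, nothing brain-specific):

```python
import copy

# Minimal sketch of checkpoint-and-replay for a stateful simulation.
# The state dict and update rule are placeholders, not a brain model.

def step(state, stimulus=0.0):
    """Advance the toy simulation by one tick."""
    state["t"] += 1
    state["x"] = 0.95 * state["x"] + stimulus
    return state

state = {"t": 0, "x": 1.0}
for _ in range(100):
    step(state)

checkpoint = copy.deepcopy(state)   # snapshot the full system at t = 100

for _ in range(10):                 # branch A: continue normally
    step(state)

what_if = copy.deepcopy(checkpoint) # branch B: rewind and replay with a
for _ in range(10):                 # synthetic input, i.e. the "what-if"
    step(what_if, stimulus=0.5)     # experiment impossible in living tissue

print(state["x"], what_if["x"])     # the two futures diverge from t = 100
```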
We also have unlimited access to every single connection and internal state/element, and we can run "aggregation" reports to check or validate vaguely-defined events inside this complex construction.
My guess is that we're currently learning how to process and analyze huge amounts of data from the LHC experiment. Just the way research on encryption and on data compression influenced (and was in return influenced by) the Human Genome Project, the LHC and a large-scale brain simulation project on a very powerful computer system can share common data analysis goals. This is especially true if we choose to run the brain simulation project on a highly parallel and asynchronous computing substrate; in that case, we wouldn't have fine control over snapshot abilities (much like "taking the picture of a swarm of birds taking off with an early 19th century camera") and the system would generate huge amounts of asynchronous data that's hard to sort or analyze globally.
In the end, we might discover that some parts of the brain must be emulated very exactly, almost at the quantum level, and with a very fine time slicing strategy, while other parts can be simulated with high-level input/output components, or even with a "black box" programmed approach.
A few years after the first successful simulation, the actual simulation complexity and size might very well decrease, as we get better at fine-tuning the functional/hardwired parts, while we give more computing power to the delicate mechanisms that govern higher intelligence, insight, personality and emotion.
Another problem I see is that synapses and neurones, like any other biological cells, have internal processes that actively influence interaction with their environment. Most notably gene expression and protein production, which is ultimately controlled by the cell's nucleus, i.e. by the cell's reading and "execution" of its own DNA.
The good news is that each cell in someone's body uses the same DNA and has highly uniform reading/execution subsystems. Thus, once we know how to store, read and execute virtual DNA to produce virtual proteins (which we don't even need to fully represent, since we can test/infer their functions in vitro), we can run a single, centralized version of a cell on a separate server, and each brain cell in our simulation could send queries to that server.
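A rough sketch of what that centralized design could look like (all names, and the toy expression rule, are invented for illustration):

```python
from functools import lru_cache

# Hypothetical sketch: one shared "genome service" answers expression
# queries for every simulated cell, instead of each cell modelling its
# own transcription machinery. All names and data here are invented.

class GenomeService:
    def __init__(self, genome):
        self.genome = genome                  # gene -> expression-response function

    @lru_cache(maxsize=None)                  # identical queries are computed once
    def express(self, gene, signal_level):
        return self.genome[gene](signal_level)

class SimulatedCell:
    def __init__(self, service):
        self.service = service                # every cell shares the same server

    def respond(self, gene, signal_level):
        # The cell delegates gene expression to the central service.
        return self.service.express(gene, signal_level)

service = GenomeService({"BDNF": lambda s: 2.0 * s})   # toy expression rule
cells = [SimulatedCell(service) for _ in range(1000)]
print(cells[0].respond("BDNF", 0.3))                   # -> 0.6
```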
Just my humble opinion. I have a degree in Cognitive Sciences (I published articles about the modelling of ideas and computer-human interaction), and I took a course in bioinformatics back in the Dark Ages (1998).