July 16, 2009
Singularity Summit 2009 has been announced
Via Anissimov's Accelerating Future:
For the last couple months, I’ve been working intensely on laying the groundwork for the Singularity Summit 2009, to be held in New York October 3-4. Now that it’s been announced on KurzweilAI.net, I can finally talk about it.
This is the first Singularity Summit to be held on the East Coast. For that, and other reasons, it’s a huge deal. The lineup of speakers is fantastic, including David Chalmers, Ray Kurzweil, Aubrey de Grey, and Peter Thiel, among many others. Like the epic Singularity Summit 2007 that landed on the front page of the San Francisco Chronicle, this Summit will be a two-day event.
The speaker lineup is very diverse, definitely the most diverse out of any Summit thus far. To quote Michael Vassar, President of SIAI, on KurzweilAI.net, “Moving to New York opens up the Singularity Summit to the East Coast and also to Europe. This Summit will extend the set of Singularity-related issues covered to include deeper philosophical issues of consciousness such as mind uploading, as well as life extension, quantum computing, cutting-edge human-enhancement science such as brain-machine interfaces, forecasting methodologies, and the future of the scientific method.”
You can register here. A page with banners for promotion is here.
With discussion about the Singularity heating up like never before, this could be the most exciting Summit yet. SIAI is stepping outside of our comfort zone in Silicon Valley, and into an entirely new area. It will be thrilling to jumpstart discussion on the Singularity in New York City and the East Coast.
13 comments:
It's amazing how male and monochromatic the speakers for the Singularity Summit are. Apparently the top ranks of people who care about such matters are even less diverse than the top ranks of the Republican Party. Why do you suppose that might be?
Unfortunately, these summits are not actually focused on expanding the mind in general, but only on one aspect of it - IQ. They just want to build an inflated-IQ machine, which doesn't interest me much any more.
The central dogma of Sing Inst (the main sponsor of these summits) is that intelligence alone is sufficient to produce runaway self-improvement, something I'm not at all convinced of.
As I've discovered, anyone not subscribing to this dogma is quickly shown the door by these people.
Funny you should ask, Go Democrats. Short explanation: apparently, there were no qualified women. More here:
Girl Cooties Menace the Singularity!
How can we increase the likelihood of friendly AGI? Developing superior empathetic intelligence would seem critical. Assume Simon Baron-Cohen's distinction between "systematizers" (disproportionately male) and "empathizers" (disproportionately female) is correct. If so, might inviting female contributors to the Summit (and subsequent events) increase the likelihood of friendly AGI? Should work on empathetic seed AI be prioritized?
We know that "mind-reading" is hugely cognitively demanding. Chimpanzees and other anthropoid apes can apparently manage only second-order intentionality, whereas modern humans are capable of fifth-order and sometimes of sixth-order intentionality. I wonder on what kind of time-scale a digital computer will match this?
For what it's worth, I'm moderately confident that a true SuperIntelligence (as distinct from a SuperAsperger) would be ultra-friendly, since a superior capacity for empathy entails a capacity to take all perspectives. Let's assume that true superintelligence would also be free from our anthropocentric bias. If so, then its enhanced empathetic understanding might be expected to promote friendliness to all sentient beings.
Wishful thinking? Quite possibly, sadly. How computationally feasible is exponential growth in social cognition in artificial systems? Can we envisage a runaway growth in computational capacity for empathetic understanding? Exponential growth of "mind-reading" leading to an empathetic Singularity would be wonderful. Alas, I'm not clear about the implementation details.
$500! :(
I can't afford that.
Marc, I'm sorry to hear that SIAI is focused primarily on IQ -- my impression has been that friendly AI is precisely about giving consideration to these other human values (like empathy) that likely would be ignored by commercial, governmental, or naive open-source AGI developers. If that's not already the case, I do hope SIAI moves in this direction.
Dave, I think we need to be very cautious about assuming anything about friendliness that's not explicitly and painstakingly designed in. There's no fundamental ethical truth to the universe, so it's not as though a highly intelligent agent will somehow "discover the right thing to do." (I don't claim that is your position, but the notion is regrettably widespread. I tend to avoid making optimistic claims about what superintelligence would do, for fear of reinforcing such an impression.)
Even if friendly AI were implemented correctly -- which is enormously difficult in its own right -- I remain somewhat worried about the potential impacts on entities like suffering wild animals, for which humans seem to show little concern. Nevertheless, I do think SIAI is overall an organization worth supporting.
Hi guys,
I thought about the ethics thing for a long long long long long time. It's a very subtle kind of thing I'm afraid, no easy answers.
I don't think intelligence alone can supply ethical answers, I think you need consciousness for that. I said as much on 'Accelerating Future', where I came up with the following analogy:
Intelligence=Power
Consciousness=Vision
Intelligence supplies the 'power' for an agent to move rapidly towards a goal; consciousness supplies the 'vision', telling the agent which goals to pursue.
I think consciousness and intelligence have to work together: 'intelligence without consciousness is blind; consciousness without intelligence is impotent' (Marc Geddes quote).
As to the question of whether there are universal values built into the universe: I still think so, but I now realize that these values cannot be found through intelligence alone, as I mentioned.
I think universal values live in platonic space, but you can only 'see' them through direct conscious experience.
So my theory is that minds have TWO different knobs (the 'vision' knob - consciousness, and the 'power' knob - intelligence).
I think the SIAI have focused too much on the 'power' knob, to the exclusion of the 'vision' knob, whereas in fact I think both knobs will need to be turned up to high to initiate a successful Singularity.
[responding to Alan]
A SuperAsperger might be either friendly or unfriendly. But IMO a Superintelligence would always be friendly. To take an extreme example, a superAsperger might be programmed to convert the accessible universe into pure utilitronium. But the SuperAsperger would behave the same way if the molecular signature of pure bliss were really the molecular signature of pain - it would have no more understanding of what it was doing than my PC when it beats me at chess. How would a SuperEmpath behave differently from a SuperAsperger? Contrary to what I'm arguing, is it possible to have a God-like understanding of all possible perspectives and manifest hostility? Or is a failure of empathetic understanding not just a moral but an intellectual limitation ["Tout comprendre, c'est tout pardonner" - to understand all is to forgive all] - and thus debarred to a maximally Empathic Superintelligence?
Let's use a toy example. If I put my hand on a hot stove, I speedily withdraw it. There is no question of behavioural paralysis on account of an unbridgeable logical gap between an "ought" and an "is". The extreme aversiveness - and motivating force - of the pain is built into the experience itself. Alas I don't show an equivalent response to the agonies of an anonymous stranger living on the other side of the world; maybe I'm too busy thinking about my new iPhone. Less intelligent creatures are egocentric, ethnocentric and anthropocentric in their perspectives - and solving other people's woes seems too hard. By contrast, a (hypothetical) Superintelligence transcends such cognitive and practical limitations with ease. In effect, I'm arguing that a Superintelligence would respond to the experiences of all sentient beings just as I respond to my hand on the stove: it would impartially represent their pains (etc) as vividly as its own, and respond accordingly. Unlike us, the Superintelligence has the power to prevent such pain - and finds it trivially easy to do so. Hence it's not going to run Ancestor Simulations, set up "rewilding" ecosystems where creatures get eaten alive, etc.
Now in practice, it may be that a recursively self-improving empathetic Superintelligence isn't technically feasible - making an Empathetic Singularity impossible. I'm personally sceptical that anything with the architecture of a digital computer could be more than a crypto-SuperAsperger [try solving the binding problem with a von Neumann architecture]. But maybe I'm wrong. I hope so.
Re Eliezer's fable of the pebblesorters: for pebbles, substitute neurons. Maybe it is indeed absurd (or at least culturally relative) to think neuronal firings that mediate, say, hearing Bach are objectively better than those hearing Beethoven, or viewing a Rembrandt is better than a Canaletto, or capitalist society is better than communist society, or whatever. But consider two classes of neurons: those whose configuration mediates raw agony, and those that mediate pure bliss. Are they just two more arbitrary configurations of matter and energy? Are there no objective grounds for believing that one configuration is inherently better than the other? Is it arbitrary to claim that any intelligent agent should strive to maximise the second (bliss), or at the very least minimise the first (agony)?
Perhaps an Ivory-tower super-rationalist says yes, such value-judgements are indeed arbitrary. He seeks to illustrate the arbitrary nature of any such pebble/neuron preference by placing his hand firmly on the hot stove. I think we may predict that his scepticism is confounded. Some "heaps of pebbles", so to speak, really are better than others. In contrast to the arbitrary nature of the formal utility functions of our digital computers, the intrinsic awfulness of pain and the wonderfulness of pleasure seem built into the nature of the universe itself. Why this is so is, I think, a very deep question to which we don't have a satisfactory answer.
Marc, thanks for clarifying your thoughts on the matter. I agree with the distinction between ability to achieve goals (power) and desire to achieve them (vision), but I don't see why vision is related to consciousness. Thermostats have the "goal" of keeping rooms near a particular temperature, and certain robots have the "goal" of walking across a room without hitting obstacles, but in neither case is there consciousness (or so I assume).
Also, what tangibly should SIAI be doing in order to work on the "vision" side of things?
Dave, I think the reflex of withdrawing one's hand from a hot stove takes place prior to any sensation of pain. Moreover, I hope that intelligent agents would not respond to suffering throughout the multiverse in the same way as your hand responds to the stove -- that is, reflexively and without any cognitive reflection. Rather, I hope intelligent agents would engage in deliberate and planned actions to alleviate suffering, recognizing tradeoffs that might have to be made.
To your main point, I think the problem (and the danger) is that you define a "superintelligence" as an intelligence that does a specific sort of cognitive modelling of other organisms in the world such that it feels a strong aversive reaction to their pain. This is not necessarily what other people mean by the term; some would say that super-Aspergers are superintelligences, and these people may misunderstand your claim. SIAI is precisely worried about potential super-Asperger AIs (e.g., the AIs that aim to maximize the number of paperclips in the universe). Even if superintelligences can be built to be non-Aspergers (and of this you're uncertain), we have to work hard to make sure they actually will be.
Are pain neural firings and pleasure neural firings just two more classes of configurations of matter/energy? Yes, of course! Certainly I can't (and wouldn't want to) hold my hand to a hot stove, but that's just because I'm a particular kind of organism built in a particular way. There's nothing inherent in the universe that makes pain and pleasure different from the categories "neural firings that happen on Tuesday" and "neural firings that happen on Wednesday."
How could there be? What would this objectivity even look like? And if it existed, what would be the bridge between it and intelligence? I'm getting at something similar to the mind-body problem with dualism: Even if there's a spirit, how does it interact with the body? In the same way, what's the interaction of this "objective" truth about categories of neural firings with the process of more accurately understanding the universe? You acknowledged that you don't have good answers to these questions, which is fair. But why not make the questions go away by supposing that there is no objectivity to the universe, in the same way that the mind-body-interaction problem dissolves when we suppose that there is no "spirit"?
My conclusion from the absence of objective morality is not to become indifferent and say that "nothing matters," because pain and pleasure still matter to me. Chocolate doesn't stop tasting good when I realize that the emotion arises from just a series of molecular movements within my body, and pain doesn't stop being something I want to prevent just because I recognize that my impulse to do so has similar origins. Asking, "Why reduce suffering?" is like asking "Why eat chocolate?" The answer is that reducing suffering is just something I want to do, so I'm going to do it! That's all there is to it.
I'm with Dave on this one. There is, I believe, no more reason to deny that some experiences are objectively bad than there is reason to deny that others are objectively red. In both cases we are postulating a phenomenal property whose existence we can know by direct, immediate acquaintance. When I'm in intense pain, I just know that this experience is intrinsically bad, in the sense that it ought not to exist for its own sake. I know this with as much certainty as I know anything else; it is a basic datum of consciousness.
Alan, I'm puzzled by your attempt to reconcile a belief in the subjectivity of morality with a desire to alleviate suffering. Your position strikes me as analogous to that of someone who insisted in worshiping a god whom he admitted was just a figment of his own imagination. Such a person may reply that he "just cares" about doing this kind of thing, but a reply of this sort fails to convince. Rather, one suspects that, at some level, the worshiper does believe his god to be objectively real. I similarly suspect that, when you deny that morality is objective, you don't really mean what you say.
Thanks for the comments, Pablo.
>There is, I believe, no more reason to deny that some experiences are objectively bad than there is reason to deny that others are objectively red. In both cases we are postulating a phenomenal property whose existence we can know by direct, immediate acquaintance.
My phenomenal experience of not liking pain is indeed a phenomenal fact, just like my experience of seeing red. Both are qualia, and qualia are things that really exist.
In fact, your example of redness illustrates the point perfectly: What looks red to me might look blue to another organism with different visual processing apparatus, and perhaps a particular sound "looks red" to a bat, as Richard Dawkins has suggested. There's nothing inherent in the wavelength of 650 nm that necessarily associates it with a particular phenomenal "gensym," to borrow a phrase from Gary Drescher.
In any event, what's to stop me from building a highly intelligent machine with the goal of increasing the amount of pain neural firings that exist? Regardless of whether pain is "objectively" bad, such a machine is certainly physically possible.
>Your position strikes me as analogous to that of someone who insisted in worshiping a god whom he admitted was just a figment of his own imagination. Such a person may reply that he "just cares" about doing this kind of thing, but a reply of this sort fails to convince.
Certainly it fails to convince others to adopt his position, but the statement is indeed accurate. It is true that I feel really strongly the urge to reduce suffering -- much more strongly than my urge to eat chocolate -- but the difference is one of degree rather than kind.
>I similarly suspect that, when you deny that morality is objective, you don't really mean what you say.
I guess I don't know what it means to say "morality is objective." Maybe we're two blind men talking about the same elephant, just confused by linguistic differences? It is true that I dislike the phenomenal experience of suffering, both my own and that of other organisms. Is that all you meant?
Guys,
I said the problem was very subtle and difficult, and you will quickly fall into a 'mine-field' - or 'mind field' ;) - unless you have thought about (or, more critically, 'reflected on') these things for a long, long time. Bear that in 'mind' ;)
Alan said>
>I don't see why vision is related to consciousness. Thermostats have the "goal" of keeping rooms near a particular temperature, and certain robots have the "goal" of walking across a room without hitting obstacles, but in neither case is there consciousness (or so I assume).
Alan, these are *fixed* goals, the aforementioned systems cannot reflect on their goals and change them. It's the ability to reflect on and change the goals that I think is related to consciousness.
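To make the 'fixed goal' point concrete, here is a minimal sketch (in Python; the class name, temperatures, and commands are made up purely for illustration) of a thermostat-style controller. Its goal is a constant baked in at construction time: the control loop can pursue that goal, but nothing in the system represents the goal as something it could examine, evaluate, or revise - which is exactly the sense in which I mean it cannot reflect.

    # Minimal fixed-goal controller (illustrative sketch only).
    # The "goal" is hard-coded; the loop pursues it but has no machinery
    # for inspecting or rewriting it - no reflection of any kind.

    class Thermostat:
        def __init__(self, target_temp=21.0, tolerance=0.5):
            self.target_temp = target_temp   # the fixed goal
            self.tolerance = tolerance

        def act(self, current_temp):
            """Return a heater command given the current temperature reading."""
            if current_temp < self.target_temp - self.tolerance:
                return "heat_on"
            if current_temp > self.target_temp + self.tolerance:
                return "heat_off"
            return "hold"

    # The device just pursues its fixed goal, reading after reading.
    stat = Thermostat()
    for reading in [18.0, 20.8, 22.3]:
        print(reading, "->", stat.act(reading))

A 'reflective' system, by contrast, would need some further layer that treats target_temp itself as an object of deliberation - and that extra layer is where I'm claiming consciousness comes in.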
May I suggest avoiding the word 'objective', because it's vague and inaccurate. Notice the phrase I used was 'universal values'.
I think Dave is on the right track: values are closely tied to conscious experience; but which type of conscious experience? Clearly not all conscious experience is concerned with values, and there remains the question of why some values should be universal (common to all minds) rather than just a product of each particular consciousness.
Alan said:
>SIAI is precisely worried about potential super-Asperger AIs
I think you are correct to distinguish intelligence from consciousness. But are intelligence and consciousness truly *independent*? I don't think a super-Asperger AI would function effectively without consciousness, and this I think is what saves us from a world-destroying super-intelligence.
The key here is *reflective consciousness* - I postulate that a system that can totally understand itself and change its own goals, AND represent that understanding in conscious experience, would be nice.
In other words, I'm saying that intelligence (intellectual understanding) of one's own mind is not enough - that understanding of self must ALSO be represented in reflective consciousness. THEN you have a nice AI.