Michael Shermer may be right, but he certainly doesn’t make the case for it. It just looks like an unsupported assertion to me. And he seems to conflate AI with the Singularity.
14 thoughts on “The Singularity Is Not Near”
I have remarked before that we are actually in the middle of a singularity. It’s been going on since humans first used fire, and the speed of change has been accelerating ever since. This takes the long view of history. Those who first used fire could have had no concept of the modern world.
You’d probably have to come up with a different term for your view, Lee… “Singularity” seems to be taken.
At first blush, I thought you were being kind of harsh, Rand, but as usual your statements are on target. But I think the fuzzy thought of the article is also on target. Self-awareness is not understood, but to the extent that it is, we can assert that machines don’t have it and are unlikely to, regardless of how fast they become. We just don’t understand what self-awareness is with regard to electromechanical things (or at all, really).
we are 10 years away … and always will be.
You really can’t weasel better than that 😉
Lee: I’ve been saying the same thing! We did the Singularity already, when we started using tools, fire, and writing. Everything since then has just been refinement and expansion.
Rand: I have to agree with Shermer. I’m married to a neuroscientist, and you could fill every library in existence with what we DON’T know about the brain. We’re nowhere near even knowing what we don’t know yet. Artificial intelligence is like space travel: it seems so easy when it’s just a theoretical concept.
You, too, are confusing the Singularity with AI. And AI may take a form completely unlike the human brain. In fact, it likely will.
Not trying to be snarky here, but how can one have a Singularity without AI? What other change would make such a fundamental shift as to make the world incomprehensible to pre-Singularity minds? I can comprehend a world with cheap space travel or fusion power.
So please, let’s define our terms. What do you mean by a Singularity if it doesn’t include AI?
Bingo. We’re slowly accumulating an ever more diverse set of smart objects, each one capable of doing a very small number of tasks better than humans can. The set of objects will likely subsume most human expertise–and labor–long before anybody (or anything) figures out how to glue them together into something conscious. And even then, that consciousness is unlikely to be anything like human consciousness.
Want a nice, simple definition for the Singularity? It’s the point at which the productivity growth rate permanently exceeds the economic output growth rate. After this point, the jobs available for humans continuously decline. If you want to start a pool, I’ll pick 2025.
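To make that definition concrete: if output is productivity times labor, then labor demand shrinks whenever productivity growth outruns output growth. A minimal sketch in Python, with made-up growth rates:

```python
# Back-of-the-envelope sketch of the definition above, assuming
# output = productivity * labor, so required labor falls whenever
# productivity grows faster than output. Rates are illustrative.
output_growth = 0.02        # assumed 2% annual output growth
productivity_growth = 0.03  # assumed 3% annual productivity growth

labor = 1.0  # labor demand, normalized to 1.0 in year zero
for year in range(1, 6):
    labor *= (1 + output_growth) / (1 + productivity_growth)
    print(f"year {year}: relative labor demand = {labor:.4f}")
```

With these numbers, labor demand erodes by roughly one percent a year, which is the “continuous decline” in the definition.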
I can tell you exactly when the Singularity will occur: at exactly the same moment that we achieve both air-breathing SSTO and sustained, positive nuclear fusion…
1. There are several ideas of just how we might mark “the Singularity”. A common one is that it’s the point where machines actually begin to outthink human beings, after which we might presume that intelligent machines will inform us how even more intelligent machines might be constructed, and so on, leading to an explosion in intellectual capability on this planet, leading quickly to…. IOW, artificial intelligence by itself isn’t the Singularity, but it’s an essential precondition, and barring human recalcitrance, could lead quickly to the Singularity.
2. The question of whether a computer-software-database creature can be said to be aware isn’t exactly new. Those with curiosity and time on their hands might choose to google “Searle Chinese Room” for six or seven hundred thousand references.
I’m not sure I understand what you mean about confusing the singularity with AI. I thought the idea of the singularity is the postulate that machine intelligence will shortly exceed human intelligence. But isn’t AI, broadly defined, simply the attempt to produce machine intelligence? So if AI is really, really hard and we don’t yet know how to produce true machine intelligence, doesn’t that imply that any singularity event is a long way off? I always felt Kurzweil focused too much on rapid hardware advances and not enough on the relatively slow pace of software advance.
The computers of the year 2045 will be asking us the Jeopardy questions.
Compare a Commodore 64 to today’s PCs, and the HP-41C to an iPhone. The differences are much more than just cosmetic.
There are different notions of what constitutes a Singularity. One of them is the idea that computer systems become exponentially more capable without end. But an older notion, going back to Vernor Vinge, is that the Singularity is the point at which scientific knowledge grows exponentially faster than humans can master the new learning; there’s an implication that “the” singularity might actually be a considerable period in which one discipline after another (geology, geometry, geophysics…) progresses to the point that it outstrips human ability.
Different definitions of AI exist as well. Defining “intelligence” unambiguously turns out to be difficult, so for some time AI was defined as giving machinery the capability of performing various human mental tasks. Playing chess, for example, or reading printed text. By this definition, some aspects of AI have already been reached, while others are still open.
What most of us would think of as “intelligence” — or so I suspect — is consciousness. Self-awareness and free will. We can give indeterminacy to most programs easily enough by making use of random number generators or outside sensors to provide unpredictable inputs; alas, this is not free will, however much it may resemble it. OTOH, the link between “intelligence” and “consciousness” is not as simple as one might think, either. Consider “idiot savants,” for example. Consider a master pianist, à la Van Cliburn, who plays complicated music — even unfamiliar complicated music — with minimal apparent attention to a score, because so much of what he has learned has been internalized. Is the pianist “intelligent” because he can transpose a Bach harpsichord concerto on sight in front of a large audience, or because he can grumble about his sore toes until two in the morning?
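A toy sketch of the indeterminacy point above: a random draw (it could just as well be a hardware sensor reading) makes the program’s behavior unpredictable, but the “choice” is still mechanical sampling, nothing anyone would call free will.

```python
import random

# Unpredictable output via a random number generator, per the comment
# above. The options are playful placeholders, not a claim about minds.
def indeterminate_choice(options):
    return random.choice(options)  # unpredictable, yet purely mechanical

print(indeterminate_choice(["transpose the Bach concerto",
                            "grumble about sore toes"]))
```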
The idea of an AI singularity is a *very* old one.
The Primitive Expounder, 1847: “…who knows that such machines, when brought to greater perfection, may not think of a plan to remedy all their own defects and then grind out ideas beyond the ken of mortal mind!”
But in general the (mathematically misnamed) Singularity idea is just: scientific advances beget technological advances which increase the rate of further scientific advances; at some point this rate may become so steep that it is impossible to guess at the future afterward. AI is a popular hypothetical mechanism, but so are nanotech assemblers and biotech (particularly brain modification) advances.
AI is a popular mechanism perhaps because of the simplicity of the explanation. You make something smarter, which in turn figures out how to make something smarter. Repeat. You don’t have to explain how once the process gets going. The loop only stops when physical-law constraints restrict how much smarts you can pack into your piece of the universe.
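That loop is simple enough to write down. A toy sketch, where the improvement factor and the ceiling are arbitrary placeholders standing in for “smarter” and for physical-law constraints:

```python
# Toy model of recursive self-improvement as described above.
# Numbers are arbitrary placeholders, not real quantities.
intelligence = 1.0
PHYSICAL_LIMIT = 1000.0    # stand-in for physical-law constraints

generation = 0
while intelligence < PHYSICAL_LIMIT:
    intelligence *= 1.5    # each design improves on its designer (assumed rate)
    generation += 1

print(f"ceiling reached after {generation} generations")
```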
*Something* has to be driving progress. Technology can’t very well outstrip the human mind’s capacity to understand it if human minds are responsible for creating it! Hence the AI-as-precondition argument.